Intel has pulled open the curtain on a secretly developed mega-chip called Knights Mill, a key component in its artificial-intelligence strategy.
The chip — which belongs to the family of high-performance Xeon Phi processors — gives Intel a legitimate opportunity to tackle machine learning. It is targeted at servers and workstations, and will be available in 2017.
Intel was caught off-guard by the emergence of artificial intelligence as a way to analyze and present data. Knights Mill, introduced on Wednesday at the ongoing Intel Developer Forum, will fill a big hole in the company’s chip lineup.
Knights Mill is a “next generation” Xeon Phi chip, Diane Bryant, executive vice president and general manager of Intel’s Data Center Group, said during a keynote at IDF.
Based on what Intel has said so far, the chip will have stacked memory and fast throughput, but more detailed information won’t be revealed for a while. It’s a new kind of Xeon Phi chip, and it won’t succeed the recent Knights Landing supercomputing chip.
The goal with Knights Mill is to create a chip that can calculate quickly and make decisions based on probabilities and associations, said Jason Waxman, corporate vice president and general manager of Intel’s Data Center Solutions Group, in an interview.
The chips will have many cores, and make approximations using learning models and algorithms, Waxman said.
The new chip is not to be confused with Intel’s next-generation supercomputing chip, Knights Hill, which doesn’t have a release date yet. Knights Mill won’t affect the release of Knights Hill, which was first outlined in late 2014 and will succeed the recent Knights Landing chip, a part with up to 72 cores that was released earlier this year.
With Knights Mill, Intel finally has a chip to challenge the dominance of Nvidia’s GPUs in machine learning, the technique that lets software be trained to do tasks like image recognition and data analysis more efficiently. Google has also developed its own Tensor Processing Unit (TPU), which is used alongside GPUs in machine learning.
But there’s a difference between Knights Mill and competing chips. Knights Mill will be a primary chip, meaning it’ll be able to boot up computers. That will give it an advantage over Google’s TPU and Nvidia’s GPUs, which are co-processors and still need CPUs to work in servers or workstations.
The chip also gives a big boost to Intel’s AI strategy, which is still coming together. Intel last week announced plans to acquire Nervana Systems, which offers deep-learning software and chip technology, for an estimated $350 million.
The Nervana acquisition brings talent, intellectual property and also most importantly, software, which will be optimized to work with Knights Mill, Waxman said.
High-performance chips typically focus on double-precision performance for more accurate calculations, but Knights Mill is designed differently. The chip’s cores focus on “low precision calculations,” which can be strung together into approximations that help the chip make a decision. The low-precision calculations help create powerful, and power-efficient, neural network clusters.
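The trade-off described here can be illustrated with a small sketch. This is not Intel's implementation, just a generic NumPy example (names and sizes are our own choices) showing that the same dot product computed through half-precision (16-bit) inputs lands very close to the double-precision (64-bit) answer while using a quarter of the memory per value, which is why approximate workloads like neural networks can tolerate low precision:

```python
import numpy as np

# Hypothetical illustration of low- vs double-precision arithmetic.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)

# Double precision: the traditional HPC default (64 bits per value).
full = np.dot(x.astype(np.float64), x.astype(np.float64))

# Low precision: round inputs to float16 (16 bits per value), then
# accumulate in float32 to avoid overflow in the running sum.
x16 = x.astype(np.float16).astype(np.float32)
low = np.dot(x16, x16)

rel_err = abs(full - low) / abs(full)
print(f"float64: {full:.2f}  via float16 inputs: {low:.2f}  "
      f"relative error: {rel_err:.1e}")
```

The result is off only in the low decimal places, yet each low-precision value needs 2 bytes instead of 8, so more of them fit in caches and memory bandwidth goes further.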
The Knights Mill design also brings more floating point performance to calculations, which is important in machine learning, Waxman said.
Intel is advancing its AI roadmap at a frantic pace, and Knights Mill is a leap forward, Waxman said.
Many machine learning models are being used in data centers. Beyond its homegrown software stack, Intel could make Xeon Phi compatible with different machine-learning models like the open-source Caffe and Google’s TensorFlow.
Intel has shown a willingness to collaborate. It is working with Baidu to run the company’s “Deep Speech” speech-recognition technology on the Xeon Phi platform, Waxman said.