Machine learning, artificial intelligence: whatever the label, it's fast becoming a way to reinvent enterprise IT mainstays, and a way for the companies on top to stay on top.
Consider four of the most familiar names in technology. Intel, Microsoft, Google, and IBM are all investing heavily in ML/AI, with hardware designs intended to greatly accelerate the next generation of applications. Here's how each of their plans stacks up.
Intel
What it's doing: The world's best-known chipmaker recently introduced a line of CPUs aimed specifically at ML applications: Knights Mill. It has also outlined plans to meld its CPUs with reprogrammable FPGAs (field-programmable gate arrays), a powerful but relatively underexploited technology for Intel.
Why it's doing so: As the PC market continues to melt away like an Arctic glacier, Intel has been hunting for ways to make up the difference. Server products alone won't do it, so Intel has widened its quest to include both main processors and co-processors designed to accelerate ML functions.
However, Intel is unlikely to offer its own GPU for ML work. Its GPU efforts have never matched those of other processor makers, and it has long maintained that improvements on the CPU side can beat GPUs. After all, Intel would like nothing more than to create an environment where its CPUs alone, not mixed in with another company's GPUs, power the future.
Microsoft
What it's doing: After outfitting its Azure cloud with specially designed FPGAs that add machine-learning-accelerated functions to its clusters, Microsoft is now talking about letting customers program those devices directly, enabling more powerful machine learning tools in its cloud.
Why it's doing so: Microsoft already offers ML/AI tools both inside and outside of Azure (not to mention that OpenAI, a major nonprofit doing AI work, uses Azure as its cloud provider). But now Microsoft is considering a new way of providing cloud customers with hardware for machine learning. The hard part will be making it worth customers' while: FPGAs are complex to program and not as well understood for ML as GPUs (yet).
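For a sense of the framework-level tools Microsoft already ships, here's a minimal sketch using CNTK, its open-source deep learning toolkit. This is an illustrative toy model, not a Microsoft sample; it assumes the CNTK 2.x Python API, and the layer sizes are arbitrary.

```python
# Illustrative toy model in CNTK (Microsoft's open-source deep learning
# toolkit); assumes the CNTK 2.x Python API. Layer sizes are arbitrary.
import numpy as np
import cntk as C

features = C.input_variable(784)                            # flattened image input
model = C.layers.Dense(10, activation=C.softmax)(features)  # one dense layer plus softmax

sample = np.zeros((1, 784), dtype=np.float32)
print(model.eval(sample))  # class probabilities for a dummy input
```

The contrast is the point: a few lines of Python on one hand, hardware description languages and specialized toolchains for FPGAs on the other.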
Google
What it's doing: Google has been deeply invested in machine learning on the software side with frameworks like TensorFlow, and now provides a hardware complement, the Tensor Processing Unit (TPU), to accelerate specific machine learning functions.
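To give a flavor of that software side, here's a minimal sketch in the TensorFlow 1.x API of the era, modeled on the classic MNIST softmax example from TensorFlow's tutorials. TensorFlow compiles graphs like this to whatever device backs them, whether CPU, GPU, or TPU.

```python
# Minimal TensorFlow 1.x sketch: a softmax classifier graph built from
# the dense matrix math the TPU is designed to accelerate.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 784])  # batch of flattened images
W = tf.Variable(tf.zeros([784, 10]))               # weight matrix
b = tf.Variable(tf.zeros([10]))                    # bias vector
y = tf.nn.softmax(tf.matmul(x, W) + b)             # class probabilities

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    probs = sess.run(y, feed_dict={x: np.zeros((1, 784), dtype=np.float32)})
    print(probs)  # uniform probabilities from the zero-initialized model
```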
Why it's doing so: Like Microsoft, Google wants its cloud to be a premier destination for ML applications. Google has made it clear that it wants to stand out on ease of use, so it's unlikely to consider the kind of low-level access to ML hardware that Microsoft is contemplating. If people want direct access to machine learning hardware in a familiar context, there are always Google Cloud's brand-new GPU instances. Odds are the two hardware offerings will work in conjunction.
IBM
What it's doing: IBM's new machine learning toolset, PowerAI, runs on a mix of IBM's Power processors and Nvidia GPUs, wired together with new proprietary hardware (Nvidia's NVLink interconnect) designed to tie CPUs and GPUs as closely as possible.
Why it's doing so: IBM already has one household-name ML/AI project: Watson. But Watson was conceived and delivered primarily as a set of black-box services. PowerAI is a hardware suite, not a specific processor or GPU, aimed at high-end customers who want those capabilities in-house and total control over how they're used. That fits IBM's plans for the Power processor line, which revolve around the big data and cloud applications that machine learning workloads are typically applied to.
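To make the CPU-GPU coupling concrete, here's a hedged sketch of explicit device placement in TensorFlow 1.x: the kind of mixed CPU/GPU workload that PowerAI's tightly linked Power-and-Nvidia hardware is built to accelerate. The device names assume a generic single-GPU machine, not any IBM-specific configuration.

```python
# Hedged sketch: explicit CPU/GPU placement in TensorFlow 1.x, the mixed
# workload pattern PowerAI's tightly coupled CPU/GPU hardware targets.
# Device names assume a generic single-GPU box, not an IBM system.
import tensorflow as tf

with tf.device('/cpu:0'):                 # stage the data on the CPU side
    a = tf.random_normal([2048, 2048])
with tf.device('/gpu:0'):                 # run the heavy matrix math on the GPU
    b = tf.matmul(a, a)

# allow_soft_placement falls back to the CPU if no GPU is present
config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(tf.reduce_sum(b)))     # result comes back over the CPU-GPU link
```

The faster that CPU-GPU link, the less time workloads like this spend shuttling tensors between the two devices, which is precisely the bottleneck IBM's interconnect work is meant to attack.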