
Microsoft's AI Gets a Boost from Nvidia, Elon Musk's OpenAI


 Nvidia and OpenAI announce they’re joining Microsoft in its efforts to advance the field of AI.

It’s no secret that Microsoft is banking on artificial intelligence (AI) to deliver next-generation IT services to its corporate customers as they embark on their digital transformation journey. This week during the SC16 supercomputing conference in Salt Lake City, the company joined computer graphics hardware maker Nvidia to announce they would collaborate to bring AI-enabled business processes and workflows to practically any enterprise.
The companies are working together on an AI framework and platform that runs Microsoft Cognitive Toolkit, the open-source deep learning solution that mimics how the human brain processes information. According to Microsoft, the toolkit is used by leading data scientists as well as by the company’s own Bing search engine and Cortana virtual assistant.
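To give a concrete sense of what working with Cognitive Toolkit looks like, here is a minimal sketch of defining and training a small feed-forward classifier with the toolkit’s Python API (CNTK 2.x). The layer sizes, learning rate and random stand-in data are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch of a small feed-forward classifier in Microsoft Cognitive Toolkit
# (CNTK 2.x Python API). Layer sizes, learning rate and the random training data
# are illustrative assumptions only.
import numpy as np
import cntk as C

input_dim, num_classes = 784, 10

# Declare the network inputs and a two-layer fully connected model.
features = C.input_variable(input_dim)
labels = C.input_variable(num_classes)
model = C.layers.Sequential([
    C.layers.Dense(400, activation=C.relu),
    C.layers.Dense(num_classes)
])(features)

# Softmax cross-entropy loss plus a classification-error metric.
loss = C.cross_entropy_with_softmax(model, labels)
error = C.classification_error(model, labels)

# Plain SGD learner with a per-minibatch learning-rate schedule.
lr = C.learning_rate_schedule(0.1, C.UnitType.minibatch)
trainer = C.Trainer(model, (loss, error), [C.sgd(model.parameters, lr)])

# Train on random stand-in data, one minibatch at a time.
for _ in range(100):
    x = np.random.rand(64, input_dim).astype(np.float32)
    y = np.eye(num_classes, dtype=np.float32)[np.random.randint(num_classes, size=64)]
    trainer.train_minibatch({features: x, labels: y})

print("last minibatch loss:", trainer.previous_minibatch_loss_average)
```

The same script runs on a CPU or, when CNTK is installed with GPU support, will typically pick up an available Nvidia GPU automatically, which is where the Tesla hardware discussed below comes in.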

The software is tailored to run to its fullest potential using Nvidia’s Tesla graphics processing units (GPUs), either on-premises or on Microsoft’s cloud. In September, Microsoft announced the impending release of N-series Azure virtual machines that tap the processing power of GPUs provided by Nvidia.

“We’re working hard to empower every organization with AI, so that they can make smarter products and solve some of the world’s most pressing problems,” commented Harry Shum, executive vice president of the Artificial Intelligence and Research Group at Microsoft, in a Nov. 14 announcement. “By working closely with NVIDIA and harnessing the power of GPU-accelerated systems, we’ve made Cognitive Toolkit and Microsoft Azure the fastest, most versatile AI platform. AI is now within reach of any business.”
Microsoft isn’t the only IT heavyweight that’s exploiting GPUs and their knack for parallel workloads, like AI and machine learning, to help usher in an era of smart applications.
Earlier this year, IBM revealed it added Nvidia’s blazingly fast Tesla M60 GPUs to its cloud, allowing it to run high-performance computing, machine learning and other complex workloads faster. In September, Fujitsu announced a new technology that boosts the efficiency of neural networks by simultaneously doubling their capability and reducing the amount of a GPU’s internal memory required by 40 percent. Fujitsu’s technology analyzes neural networks and searches for opportunities in which memory spaces can be reused instead of requesting more resources.
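Fujitsu has not published the implementation behind that announcement, but the general idea of reusing memory rather than requesting more of it is easy to illustrate. The toy NumPy sketch below (a hypothetical illustration, not Fujitsu’s method) ping-pongs between two preallocated buffers so that a network of any depth needs only two activation arrays instead of one per layer.

```python
# Toy illustration of activation-memory reuse (a hypothetical sketch, not Fujitsu's
# actual technique): alternate between two preallocated buffers instead of
# allocating a fresh array for every layer's output.
import numpy as np

def forward_reuse(x, weights, buffers):
    """Run a ReLU MLP forward pass using only the two scratch buffers provided."""
    src = x
    for i, w in enumerate(weights):
        dst = buffers[i % 2]                 # reuse one of the two scratch arrays
        np.matmul(src, w, out=dst)           # write the layer output into reused memory
        np.maximum(dst, 0.0, out=dst)        # in-place ReLU, no extra allocation
        src = dst
    return src

batch, width, depth = 32, 256, 8
weights = [np.random.rand(width, width).astype(np.float32) for _ in range(depth)]
buffers = [np.empty((batch, width), dtype=np.float32) for _ in range(2)]
x = np.random.rand(batch, width).astype(np.float32)

# Eight layers deep, but only two activation buffers ever exist.
print(forward_reuse(x, weights, buffers).shape)   # (32, 256)
```

Production systems apply the same principle to GPU memory during training, which is the resource Fujitsu’s announcement targets.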
Separately, Microsoft and OpenAI, a non-profit AI research organization whose sponsors include Tesla’s high-profile co-founder, Elon Musk, announced Azure will be the primary cloud platform to run OpenAI’s experiments.
OpenAI, launched late last year, aims “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” the group stated in a Dec. 15, 2015 announcement. The new alliance increases the likelihood that those advances will take root in Microsoft’s cloud.
“Azure has impressed us by building hardware configurations optimized for deep learning—they offer K80 GPUs with InfiniBand interconnects at scale. We’re also excited by their roadmap, which should soon bring Pascal GPUs onto their cloud,” stated OpenAI in a Nov. 15 announcement. “In the coming months we will use thousands to tens of thousands of these machines to increase both the number of experiments we run and the size of the models we train.”
