The artificial intelligence arms race continues, as the largest tech companies explore new ways to accelerate AI workloads for cloud platforms. The appetite for more computing horsepower is following several tracks, with major investment in graphics processing units (GPUs) as well as custom ASICs.
Microsoft has been a leader in using FPGAs (Field Programmable Gate Arrays) to accelerate its cloud and AI workloads. This week Microsoft unveiled Project Brainwave, a deep learning acceleration platform based on its collaboration with Intel on FPGA computing.
Microsoft says Project Brainwave represents a “major leap forward” in cloud-based deep learning performance, and it intends to bring the technology to its Azure cloud computing platform.
“We designed the system for real-time AI, which means the system processes requests as fast as it receives them, with ultra-low latency,” writes Doug Burger, a Microsoft Distinguished Engineer, in a blog post. “Real-time AI is becoming increasingly important as cloud infrastructures process live data streams, whether they be search queries, videos, sensor streams, or interactions with users.”
Real-Time Deep Learning
Microsoft’s announcement Tuesday at the Hot Chips 2017 conference fleshed out the details of an approach the company had described in broad terms at its Build developer conference in April. Microsoft says the new approach, which it calls Hardware Microservices, will allow deep neural networks (DNNs) to be served in the cloud with no software in the loop, yielding significant gains in speed and efficiency.
FPGAs are semiconductors that can be reprogrammed to perform specialized computing tasks, allowing users to tailor compute power to specific workloads or applications. FPGAs can serve as coprocessors that accelerate CPU workloads, an approach long used in high-performance computing (HPC). Intel gained its FPGA technology through its $16.7 billion acquisition of Altera, which closed in late 2015.
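For readers unfamiliar with the offload pattern, the Python sketch below shows the general shape of coprocessor acceleration, with a stub class standing in for the FPGA; the names and interface are illustrative assumptions, not any vendor’s real API.

```python
# A minimal sketch of the coprocessor offload pattern, assuming a stub
# class in place of real FPGA hardware; names are illustrative only.
import numpy as np

class FpgaStub:
    """Stands in for an FPGA programmed with a fixed-function kernel."""
    def __init__(self, weights):
        self.weights = weights  # weights "baked into" the device

    def execute(self, x):
        # On a real device this matrix multiply would run in
        # reprogrammable logic rather than on the CPU.
        return x @ self.weights

def infer(cpu_batch, accelerator):
    x = cpu_batch.astype(np.float32)  # CPU handles preprocessing
    y = accelerator.execute(x)        # the hot loop is offloaded
    return y.argmax(axis=1)           # CPU handles postprocessing

accel = FpgaStub(np.random.rand(128, 10).astype(np.float32))
print(infer(np.random.rand(4, 128), accel))
```

The division of labor is the point: the CPU keeps the flexible, low-volume work, while the repetitive dense math goes to logic tailored for exactly that operation.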
“We exploit the flexibility of Intel FPGAs to incorporate new innovations rapidly, while offering performance comparable to, or greater than, many ASIC-based deep learning processing units,” said Burger.
Microsoft is using Intel Stratix 10 FPGAs as the hardware accelerator in its Brainwave platform. Microsoft describes its approach as using a “soft” DNN processing unit (or DPU), synthesized onto commercially available FPGAs. Microsoft says this approach provides flexibility and the ability to rapidly implement changes as AI technology advances.
“By attaching high-performance FPGAs directly to our datacenter network, we can serve DNNs as hardware microservices, where a DNN can be mapped to a pool of remote FPGAs and called by a server with no software in the loop,” Burger explained. “This system architecture both reduces latency, since the CPU does not need to process incoming requests, and allows very high throughput, with the FPGA processing requests as fast as the network can stream them.”
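As a rough illustration of that architecture, here is a minimal Python sketch of a DNN exposed as a hardware microservice backed by a pool of remote FPGAs; the classes, addresses, and wire format are all invented for the example, and the stub simply echoes a tagged response where a real system would cross the datacenter network.

```python
# A rough sketch of the "hardware microservice" call path Burger
# describes: a DNN mapped to a pool of network-attached FPGAs that a
# host calls directly. All names and framing are invented here.
import itertools

class RemoteFpga:
    """Stands in for one network-attached FPGA serving a DNN."""
    def __init__(self, address):
        self.address = address

    def call(self, request: bytes) -> bytes:
        # Placeholder for a network round trip to the device.
        return b"result-from-" + self.address + b":" + request

class DnnMicroservice:
    """A DNN exposed as a service backed by a pool of remote FPGAs."""
    def __init__(self, pool):
        self._next = itertools.cycle(pool)  # simple round-robin dispatch

    def infer(self, request: bytes) -> bytes:
        # The host only forwards the request; no model code runs on
        # the CPU, mirroring the "no software in the loop" claim.
        return next(self._next).call(request)

service = DnnMicroservice([RemoteFpga(b"fpga-0"), RemoteFpga(b"fpga-1")])
print(service.infer(b"query-embedding"))
```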
Running on Intel Stratix 10 FPGAs, Project Brainwave achieved more than 39 teraflops on a single request, according to Microsoft and Intel. Brainwave is currently used inside Microsoft’s Bing search engine, and the company plans to bring it to its Azure cloud service.
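For a sense of scale, a quick back-of-the-envelope calculation converts that throughput figure into single-request latency, under an assumed and purely illustrative per-request compute cost.

```python
# Back-of-the-envelope arithmetic only: what ~39 teraflops of sustained
# single-request throughput implies for latency. The 10 GFLOP cost per
# request is an assumed figure, not a number from the announcement.
SUSTAINED_FLOP_S = 39e12   # reported Stratix 10 figure, flop/s
FLOP_PER_REQUEST = 10e9    # hypothetical model cost: 10 GFLOP/request

latency_ms = FLOP_PER_REQUEST / SUSTAINED_FLOP_S * 1e3
print(f"~{latency_ms:.2f} ms per request")  # ~0.26 ms
```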
“In the near future, we’ll detail when our Azure customers will be able to run their most complex deep learning models at record-setting performance,” said Burger. “With the Project Brainwave system incorporated at scale and available to our customers, Microsoft Azure will have industry-leading capabilities for real-time AI.”
Chinese Tech Firms Adopt AMD EPYC Servers
AMD also had news at Hot Chips, announcing that Chinese tech titans Tencent and JD.com plan to deploy servers based on its EPYC processors in their cloud and e-commerce operations. The wins signal progress for AMD, which recently re-entered the data center market in earnest.
Tencent Cloud said it plans to introduce AMD EPYC-based two-socket (2P) cloud servers with up to 64 processor cores before the end of 2017. JD.com also committed to adopting EPYC servers but did not set a timeline.
“To continue as a leading provider of high-performance and high-value cloud services, Tencent needs to adopt the most advanced infrastructure and the chip industry’s latest achievements,” said Sage Zou, senior director of Tencent Cloud. “Tencent Cloud is continuously seeking more cores, more I/O interfaces, more secure hardware features and improved total cost of ownership for server hardware products.”
“By partnering with these market leaders, AMD is bringing choice and competition to one of the fastest growing technology markets in the world,” said Forrest Norrod, senior vice president and general manager, Enterprise, Embedded and Semi-Custom products, AMD.