Microsoft’s Azure cloud is being upgraded to accelerate both overall datacenter performance and specific computing workloads.

The bulk of new servers added to the platform will be fitted with Field Programmable Gate Arrays (FPGAs) — a type of chip whose core logic can be reconfigured using software.

The flexible nature of FPGAs allows them to be customised to handle specific computing tasks more rapidly than would be possible using software running on a CPU, which is how servers have traditionally handled workloads. As new ways of accelerating tasks are identified, or new tasks emerge that could benefit from acceleration, the FPGAs can simply be reconfigured.

Microsoft has experimented with using FPGAs in its datacenters since 2010 as part of Project Catapult. The FPGAs in these new Azure and Bing production servers, however, are connected together by a fast network architecture that Microsoft calls the Configurable Cloud, effectively creating a pool of hundreds of thousands of FPGAs that can be used on demand.

“Every FPGA in the datacenter can reach every other one (at a scale of hundreds of thousands) in a small number of microseconds, without any intervening software,” according to a new paper released by Microsoft Project Catapult researchers.

Microsoft has used this pool of FPGAs to accelerate Bing searches and network processing, and in future it could also be deployed to aid large-scale machine learning and bioinformatics, according to researchers.

In tests, servers using this pool of Configurable Cloud FPGAs were able to resolve Bing queries more rapidly and were less prone to bogging down in times of high demand.

Microsoft’s earlier FPGA experiments used a different network architecture, which allowed a maximum of only 48 FPGAs to communicate directly, limiting the usable, shared FPGA processing power available.

Microsoft’s new Configurable Cloud architecture not only allows the platform to offer large pools of FPGAs that can be tapped on demand, but also allows FPGAs to be applied to new tasks.

In this new architecture, the FPGAs are tightly coupled with the datacenter network — allowing tasks related to datacenter infrastructure to be accelerated.

To this end, Microsoft has been experimenting with using Azure’s Configurable Cloud FPGA layer to encrypt data in transit at high speed, keeping network traffic secure without slowing it down or burdening the CPU.
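
As a rough illustration of the kind of transformation involved, the Python sketch below performs authenticated encryption on each outbound payload using the open-source cryptography library. The cipher choice (AES-GCM), the key handling and the helper names here are illustrative assumptions rather than details from the paper; on the real hardware, the equivalent logic runs in FPGA gates rather than on the CPU.

```python
# Host-side sketch of inline encryption of data in transit. AES-GCM, the
# key handling and the helper names are assumptions for illustration; on
# Microsoft's hardware the equivalent transform runs in FPGA logic.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # hypothetical per-connection key
aead = AESGCM(key)

def encrypt_outbound(payload: bytes, header: bytes) -> tuple[bytes, bytes]:
    """Encrypt a payload, authenticating (but not hiding) the packet header."""
    nonce = os.urandom(12)  # 96-bit nonce, unique per packet
    return nonce, aead.encrypt(nonce, payload, header)

def decrypt_inbound(nonce: bytes, ciphertext: bytes, header: bytes) -> bytes:
    """Reverse the transform; raises InvalidTag if the data was tampered with."""
    return aead.decrypt(nonce, ciphertext, header)

nonce, wire_data = encrypt_outbound(b"query results", b"hdr")
assert decrypt_inbound(nonce, wire_data, b"hdr") == b"query results"
```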

Microsoft says the majority of new servers in its production datacenters across more than 15 countries and five continents are being fitted with FPGAs using its Configurable Cloud architecture.

Inside each server, Microsoft is using an Intel Altera Stratix V D5 FPGA on its own board with 4GB of DDR3-1600 RAM. The CPU and the FPGA can communicate at 16 GBps in each direction via a PCI Express Gen 3 bus. Each board is wired into the Configurable Cloud network using two independent 40Gb Ethernet interfaces, which connect to the server’s Network Interface Card (NIC) and the Top of Rack (ToR) switch respectively.
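
As a back-of-envelope check on those figures (ignoring protocol overhead), the two 40Gb Ethernet ports together carry at most 10GB/s, comfortably within the quoted 16GB/s-per-direction PCIe link:

```python
# Sanity-check the quoted bandwidth figures (protocol overhead ignored).
ETH_PORTS = 2                # two independent 40GbE interfaces per board
ETH_GBIT_PER_SEC = 40        # per interface
PCIE_GBYTE_PER_SEC = 16      # per direction, CPU <-> FPGA

eth_total_gbyte_per_sec = ETH_PORTS * ETH_GBIT_PER_SEC / 8  # bits -> bytes
print(f"Peak Ethernet traffic: {eth_total_gbyte_per_sec:.0f} GB/s")  # 10 GB/s
print(f"PCIe headroom: {PCIE_GBYTE_PER_SEC - eth_total_gbyte_per_sec:.0f} GB/s")  # 6 GB/s
```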

Because the FPGA board sits between the server and data flowing to and from the network, it can operate on data on the fly as it passes in and out of the server.

This approach allows the FPGA to act as an accelerator both for local compute tasks and for manipulating data in transit as it passes over the network.
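
The minimal sketch below models this "bump-in-the-wire" arrangement: packets stream through a chain of transform stages on their way between the NIC and the ToR switch. The stage names and pipeline structure are hypothetical simplifications; the real datapath is implemented in FPGA logic, not software.

```python
# Conceptual model of the bump-in-the-wire datapath: every packet between
# the NIC and the ToR switch flows through the FPGA, which may transform it
# (e.g. encrypt or compress it) or pass it through untouched. The stages
# here are toy placeholders, not Microsoft's implementation.
from typing import Callable, Iterable, Iterator

Packet = bytes
Stage = Callable[[Packet], Packet]

def bump_in_the_wire(packets: Iterable[Packet], stages: list[Stage]) -> Iterator[Packet]:
    """Apply each transform stage to packets as they stream past."""
    for pkt in packets:
        for stage in stages:
            pkt = stage(pkt)
        yield pkt

# Example: a pass-through stage and a toy "encrypt" stage (XOR placeholder).
passthrough: Stage = lambda pkt: pkt
toy_encrypt: Stage = lambda pkt: bytes(b ^ 0x5A for b in pkt)

outbound = [b"packet-1", b"packet-2"]
for pkt in bump_in_the_wire(outbound, [passthrough, toy_encrypt]):
    print(pkt)
```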

“We demonstrate a reliable communication protocol for inter-FPGA communication that achieves comparable latency to prior state of the art, while scaling to hundreds of thousands of nodes,” the paper states.
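
For readers unfamiliar with the term, a reliable protocol is one that guarantees delivery over a network that can drop packets, typically by combining sequence numbers, acknowledgements and retransmission. The stop-and-wait sketch below illustrates the general idea; it is a textbook scheme, not Microsoft's actual inter-FPGA transport.

```python
# Generic illustration of what a "reliable communication protocol" provides:
# sequence numbers, acknowledgements and retransmission on loss. This is a
# textbook stop-and-wait scheme, NOT the protocol described in the paper.
import random

random.seed(1)

def lossy_send(frame, deliver, loss_rate=0.3):
    """Deliver a frame unless the simulated link drops it."""
    if random.random() > loss_rate:
        deliver(frame)

def reliable_transfer(messages):
    received, acks = [], set()
    expected_seq = 0

    def receiver(frame):
        nonlocal expected_seq
        seq, payload = frame
        if seq == expected_seq:       # accept the in-order frame once
            received.append(payload)
            expected_seq += 1
        lossy_send(seq, acks.add)     # ACK (which may itself be lost)

    for seq, msg in enumerate(messages):
        while seq not in acks:        # retransmit until acknowledged
            lossy_send((seq, msg), receiver)
    return received

assert reliable_transfer([b"a", b"b", b"c"]) == [b"a", b"b", b"c"]
```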

Microsoft is not the only tech giant looking beyond CPUs to speed up datacenters. Google is using its own Application-Specific Integrated Circuit, which it calls a Tensor Processing Unit, to support its machine-learning efforts.

Intel is building Xeon CPUs with built-in FPGAs and a consortium of tech giants – including Dell EMC, Google and IBM – this week announced plans to build servers with a new, faster interface between the CPU and FPGAs.

The reasons for the increased use of hardware accelerators such as FPGAs, GPUs and ASICs vary, but the trend is driven in part by the need to find new ways to boost datacenter performance as building ever-faster CPUs becomes increasingly difficult.
