Google Cloud Platform (GCP) customers can now leverage NVIDIA GPU-based VMs for processing-heavy tasks like deep learning, Google announced in a blog post on Tuesday. Support for the GPUs will launch next week in the us-east1, asia-east1, and europe-west1 GCP regions, the post said.

GPUs, or graphics processing units, work well for deep learning tasks because they are designed for parallel computing and efficiently handle the vector and matrix operations that are prevalent in deep learning. Nvidia, like some other companies, has recently been using its background in graphics processing to build out GPU solutions for deep learning and machine learning.

According to the blog post, the Tesla K80 GPUs will be accessible through the gcloud command-line tool, and users will be able to attach up to eight GPUs to a custom VM in the Google Compute Engine. Some of the types of computing that could be improved with the GPUs are “video and image transcoding, seismic analysis, molecular modeling, genomics, computational finance, simulations, high performance data analysis,” and more, the blog post noted.

SEE: Google DeepMind: The smart person’s guide

“GPUs on Google Compute Engine are attached directly to the VM, providing bare-metal performance. Each NVIDIA GPU in a K80 has 2,496 stream processors with 12 GB of GDDR5 memory. You can shape your instances for optimal performance by flexibly attaching 1, 2, 4 or 8 NVIDIA GPUs to custom machine shapes,” the blog post said.

Using the Nvidia GPUs, GCP customers can leverage frameworks such as TensorFlow, Theano, Torch, MXNet and Caffe, the post said. NVIDIA’s CUDA software, which is often used for “building GPU-accelerated applications,” will also be supported.

While they support all of the above-mentioned frameworks, the new GPUs are integrated with Google Cloud Machine Learning (Cloud ML) and pair especially well with TensorFlow to help cut the time it takes to train machine learning models, the post said. Google’s post recommends starting TensorFlow training with a small dataset, then working up to full-size datasets to fully utilize the Nvidia GPUs.
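The "start small, then scale up" workflow the post recommends can be sketched in plain Python. This is a hypothetical illustration, not Google's or TensorFlow's code: the `train` stub and all names are invented stand-ins for a real training loop.

```python
# Hypothetical sketch of the "start small, then scale up" workflow the
# post recommends; train() is an invented stand-in for a real training
# loop (e.g. TensorFlow running on a GPU-backed VM).

def train(dataset, epochs):
    """Stand-in for a real training loop; returns simple run statistics."""
    return {"examples_seen": len(dataset) * epochs}

full_dataset = list(range(100_000))  # pretend full-size training set

# 1. Smoke-test the pipeline on a small slice to catch bugs cheaply
#    before paying for long GPU hours.
small = full_dataset[:1_000]
assert train(small, epochs=1)["examples_seen"] == 1_000

# 2. Once the pipeline works end to end, run the real job on the
#    full dataset to make full use of the attached GPUs.
stats = train(full_dataset, epochs=10)
print(stats["examples_seen"])  # 1000000
```

The point of the cheap first pass is simply to surface data-loading or model bugs before committing to per-minute GPU billing on the full run.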

The new GPUs are billed per minute, with a 10 minute minimum, the post said. For the US market, every K80 GPU on a VM will cost $0.700 per hour, per GPU. The same structure in Asia and Europe costs $0.770 per hour, per GPU.
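The per-minute billing with a 10-minute minimum can be illustrated with a short Python sketch using the hourly rates quoted above. The helper function and rate table are hypothetical, not an official GCP pricing API.

```python
# Hypothetical cost estimator based on the per-GPU hourly rates quoted
# above ($0.700/hr in the US, $0.770/hr in Asia and Europe); billing is
# per minute with a 10-minute minimum. Not an official GCP pricing API.

HOURLY_RATE = {"us": 0.700, "asia": 0.770, "europe": 0.770}
MINIMUM_MINUTES = 10

def gpu_cost(region: str, gpus: int, minutes: float) -> float:
    """Estimated cost of attaching `gpus` K80 GPUs for `minutes` minutes."""
    billed = max(minutes, MINIMUM_MINUTES)   # 10-minute billing minimum
    per_minute = HOURLY_RATE[region] / 60    # convert hourly to per-minute
    return round(billed * per_minute * gpus, 4)

# One US GPU for a full hour costs the quoted $0.700:
print(gpu_cost("us", gpus=1, minutes=60))   # 0.7
# A 3-minute job is still billed for the 10-minute minimum:
print(gpu_cost("us", gpus=1, minutes=3))    # 0.1167
```

Note how the minimum dominates short jobs: anything under 10 minutes costs the same as a 10-minute run.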

In late 2016, AWS CEO Andy Jassy announced Elastic GPUs for EC2, allowing users to attach a GPU to any of the existing compute instances in AWS. Before that, Microsoft Azure unveiled Azure N-Series Virtual Machines, which are also powered by Nvidia Tesla K80 GPUs, to up the ante on deep learning. It’s becoming increasingly clear that AI and deep learning will be defining much of the cloud wars in the coming years.

The 3 big takeaways for TechRepublic readers

  1. Google Cloud Platform added support for NVIDIA Tesla K80 GPUs, bringing new deep learning processing capabilities to users.
  2. The Nvidia GPUs are integrated with Google Cloud Machine Learning and TensorFlow to help reduce the time it takes to train machine learning models at scale.
  3. Both AWS and Microsoft Azure, the two leaders in the cloud IaaS space, have been working on GPU integrations as well.