Google Cloud Preemptible GPUs: Dirt cheap machine learning with a serious catch

Preemptible GPUs are about 50% cheaper than GPUs attached to on-demand instances, but there are some major limitations.

Building a slide deck, pitch, or presentation? Here are the big takeaways:
  • Google Cloud has announced the beta of GPUs attached to Preemptible VMs, allowing for large-scale machine learning at a lower cost.
  • While the GPUs are less expensive, Google Compute Engine may shut them down with a short, 30-second warning if it needs the resources elsewhere.

On Thursday, Google Cloud announced the beta of GPUs attached to Preemptible VMs, providing a cheaper option for companies looking to expand their use of such resources. In a company blog post, Google revealed that the Preemptible GPUs would be about 50% less expensive than GPUs attached to on-demand instances.

While originally designed to handle graphics processing, GPUs have quickly risen to prominence in artificial intelligence (AI) circles for their ability to perform the kind of parallel processing necessary for machine learning workloads. As such, a cheaper preemptible option, like this one offered by Google, could lower the barrier for companies looking to experiment with, or expand their use of, machine learning technologies.

Google initially brought Preemptible VM instances to Google Cloud back in May 2015, the release noted. The firm later lowered prices for attachable SSDs in an effort to bring the preemptible strategy to storage as well.

SEE: IT leader's guide to deep learning (Tech Pro Research)

The preemptible VMs were designed for "high-throughput batch computing, machine learning, scientific and technical workloads," according to the press release. So, it stands to reason that the attachable GPUs could help accelerate these efforts.

Users will be able to attach either NVIDIA K80 or NVIDIA P100 GPUs to Preemptible VMs for $0.22 and $0.73 per GPU hour, respectively, the post said. Pricing is fixed, and Google bills on a per-hour basis. This will give users a GPU boost for computational batch workloads and large-scale machine learning, but there's a catch: Google Compute Engine can shut them down whenever it needs to free up the resources.
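
Attaching the GPU and marking the instance preemptible both happen when the VM is created. As a rough illustration only (a minimal sketch, not code from Google's announcement), a request through the Compute Engine v1 API's Python client might look like the following, where the project ID, zone, machine type, and boot image are placeholder values:

```python
# Minimal sketch: request a preemptible VM with one NVIDIA K80 attached,
# using the Compute Engine v1 API via google-api-python-client.
# Assumes application-default credentials; project/zone/image are placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

project = "my-project"   # placeholder project ID
zone = "us-central1-a"   # pick a zone that offers the desired GPU type

config = {
    "name": "preemptible-gpu-worker",
    "machineType": f"zones/{zone}/machineTypes/n1-standard-8",
    # One K80; the GPU is billed at the preemptible rate because the
    # instance itself is marked preemptible below.
    "guestAccelerators": [{
        "acceleratorType": f"zones/{zone}/acceleratorTypes/nvidia-tesla-k80",
        "acceleratorCount": 1,
    }],
    "scheduling": {
        "preemptible": True,               # Compute Engine may reclaim it
        "onHostMaintenance": "TERMINATE",  # required for GPU instances
        "automaticRestart": False,
    },
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-9",
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}

operation = compute.instances().insert(project=project, zone=zone, body=config).execute()
print("Started instance insert operation:", operation["name"])
```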

Don't worry: users will still get a warning of the impending shutdown, but only 30 seconds beforehand, the release said. An additional limitation is that the instances are only available for a maximum of 24 hours at a time before they are shut down.

"This makes them a great choice for distributed, fault-tolerant workloads that don't continuously require any single instance, and allows us to offer them at a substantial discount," the post said.

To better understand how to get started with preemptible GPUs, and to get a clearer picture of other GPU options in Google Cloud, read the blog post.

Image: iStockphoto/Rick_Jo

About Conner Forrest

Conner Forrest is a Senior Editor for TechRepublic. He covers enterprise technology and is interested in the convergence of tech and culture.
