Analyst White Paper: Accelerate performance for production AI

Learn about the HPC storage requirements for accelerating performance in production AI scenarios with distributed AI servers. This paper presents test results from a variety of benchmarks run on 1 to 32 GPUs across up to four server nodes using flash-based WekaIO storage. See how GPU performance within a single server compares to a clustered configuration with the same number of GPUs, and how performance scales from 1 to 32 GPUs. Discover the storage bandwidth and throughput requirements for common benchmarks such as ResNet-50, VGG16, and Inception-v4. The information in this paper can help you plan and optimize your resources for production AI.
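
As a rough illustration of how training throughput translates into storage bandwidth requirements, the sketch below estimates the sustained read rate needed to keep GPUs fed during image-classification training. The per-GPU throughput figures and average image size are assumptions for illustration only, not measurements from the paper.

```python
# Hypothetical back-of-the-envelope estimate: storage read bandwidth needed to
# keep GPUs fed during ImageNet-style training. All figures below are assumed
# values for illustration, not results from the HPE/WekaIO benchmarks.

AVG_JPEG_SIZE_MB = 0.115  # assumed mean compressed image size (~115 KB)

# Assumed per-GPU training throughput (images/sec) for each benchmark model.
images_per_sec_per_gpu = {
    "ResNet-50": 1000,
    "VGG16": 400,
    "Inception-v4": 350,
}

def required_bandwidth_mb_s(model: str, num_gpus: int) -> float:
    """Estimate sustained read bandwidth (MB/s), assuming linear GPU scaling."""
    return images_per_sec_per_gpu[model] * num_gpus * AVG_JPEG_SIZE_MB

for model in images_per_sec_per_gpu:
    for gpus in (1, 8, 32):
        bw = required_bandwidth_mb_s(model, gpus)
        print(f"{model:12s} {gpus:3d} GPUs -> ~{bw:,.0f} MB/s sustained read")
```

Under these assumptions, a 32-GPU ResNet-50 run would demand several GB/s of sustained read bandwidth, which is the class of workload the flash-based storage testing in the paper addresses.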

Resource Details

Provided by: Hewlett Packard Enterprise
Topic: Cloud
Format: PDF