AWS looms large among public cloud service providers, with double the share of its next nine largest competitors in the important infrastructure-as-a-service market.

At the AWS re:Invent 2018 event, the cloud giant revealed upcoming changes to its platform aimed at lowering costs, speeding up machine-learning training, and moving high-performance computing to the cloud.

Here are the most important announcements coming out of this year’s event.

1. Cheaper cloud workloads as Arm-based chips make their AWS debut

For the first time, AWS's EC2 platform will offer virtual machines (VMs) running on Arm-based processors, via the new A1 instances.

These instances promise to be a good match for computing tasks such as running containerized microservices or web servers.

For such workloads, Amazon promises the new A1 instances will offer up to a 45% cost reduction relative to other EC2 general-purpose instances, because they handle workloads that scale out over multiple processors more efficiently. AWS customer and photo-sharing site SmugMug predicts a 40% cost saving from moving its PHP-based technology stack to the new instances.

That said, not all workloads will be portable to the A1 instances, as the vast majority of servers, and therefore software, runs on x86-based chips, such as those produced by Intel. While a growing amount of software is compatible with Arm-based processors, such as the 64-bit Graviton chips that power the new A1 instances, not every application will work.
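In practice, portability often comes down to whether an application's binaries and container base images have been built for the 64-bit Arm (arm64/aarch64) architecture. A trivial runtime check, offered purely as an illustration:

```python
import platform

# Report the CPU architecture the current host exposes to the OS.
# A1 instances report "aarch64"; x86-based EC2 instances report "x86_64".
arch = platform.machine()
if arch in ("aarch64", "arm64"):
    print(f"{arch}: Arm host -- dependencies need arm64 builds or wheels")
else:
    print(f"{arch}: not an Arm host -- Arm compatibility must be tested separately")
```

A check like this only confirms the host architecture; whether every dependency in a stack has an arm64 build still has to be verified package by package.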

SEE: FAQ: What Arm servers on AWS mean for your cloud and data center strategy (TechRepublic)

And despite AWS's claims of reduced running costs, there are already complaints about the complexity of A1 instance pricing, because storage and network costs are not factored in.

None of the other major cloud platforms offer Arm-based VMs to the public, and AWS's release of the A1 instances could be a turning point for Arm-based servers after years of struggling to make an impact.

“The A1 instance is interesting in that it is AWS’s first foray into offering ARM-based compute to customers and the likelihood that customers will save money by switching,” said Lydia Leong, distinguished analyst with Gartner for Technical Professionals.

“Where the balance goes is largely up to customers, and will be worth watching, given that customers can’t ordinarily easily experiment with Arm at scale in a cost-effective way (because they’d have to buy hardware).”

The A1 instances are available now in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) regions in on-demand, reserved, spot, dedicated instance, and dedicated host form. They are supported by Amazon Linux 2, Red Hat Enterprise Linux, and Ubuntu, with AWS noting that additional operating system support is on the way.

SEE: AWS re:Invent 2018: A guide for tech and business pros (free PDF) (TechRepublic)

2. Accelerating machine learning when training on huge datasets in the cloud

Demand for machine learning infrastructure is exploding, and AWS has announced new heavy-duty instances designed to reduce training time for machine-learning models.

The new P3dn instances promise to reduce training time to less than an hour in some circumstances, according to AWS.

An upgrade on the existing P3 instances, P3dn boosts the rate at which data can be shuttled from storage, such as Amazon S3 or Amazon EFS, to the GPUs used for training.

This 4x boost increases throughput to 100Gbps, potentially speeding up training times by reducing the data bottleneck.
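A rough back-of-the-envelope calculation shows what that bottleneck means in practice. The sketch below assumes a perfectly sustained line rate, which real transfers will not achieve, and borrows the P3dn's 1.8TB local NVMe capacity as a stand-in dataset size:

```python
def transfer_time_seconds(dataset_bytes, link_gbps):
    """Idealized time to move a dataset over a link, ignoring all protocol overhead."""
    return dataset_bytes * 8 / (link_gbps * 1e9)

DATASET = 1.8e12  # 1.8 TB -- the P3dn's local NVMe capacity, used here as a stand-in

p3_minutes = transfer_time_seconds(DATASET, 25) / 60     # P3 baseline: 25 Gbps
p3dn_minutes = transfer_time_seconds(DATASET, 100) / 60  # P3dn: 100 Gbps
print(f"25 Gbps: {p3_minutes:.1f} min, 100 Gbps: {p3dn_minutes:.1f} min")
# 25 Gbps: 9.6 min, 100 Gbps: 2.4 min
```

Even under these idealized assumptions, the gap compounds over repeated epochs of training, which is where the reduced wall-clock training times come from.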

SEE: FAQ: What Amazon’s blockchain services mean for your business (TechRepublic)

The P3dn instances also upgrade the underlying hardware available in P3 instances, offering 8 Nvidia Tesla V100 GPUs, 96 Intel Xeon Scalable (Skylake) vCPUs, and 1.8TB of NVMe-based SSD storage.

Data throughput is increasingly important as machine-learning training datasets continue to grow in size; Facebook, for example, recently trained a computer-vision model on 3.5 billion publicly available images.

Gartner’s Leong described the P3dn instances as an “evolution” of AWS’ existing machine learning-focused offerings.

A similar boost in networking throughput is available to CPU-based virtual machines via the new C5n instances for compute-intensive workloads, which increase the speed of connections between storage and the underlying CPUs to 100Gbps.

AWS also announced Amazon Elastic Inference (AEI), which accelerates the rate at which trained machine-learning models can run on its EC2 cloud instances.

The service uses GPUs to accelerate machine-learning inference, where trained models make predictions from data.

AWS says AEI will allow inference to be carried out more rapidly, with cost savings of up to 75% compared to an unaided AWS EC2 instance. Up to 32 teraflops of GPU processing power can now be provisioned when setting up an EC2 instance. AEI detects when a major machine-learning software framework is running on an EC2 instance and automatically accelerates that workload.

The new service was revealed alongside a custom processor called AWS Inferentia, whose design is optimized for machine-learning inference. The option of using Inferentia will be available on all EC2 instance types.

3. New hybrid cloud and high-performance computing services

Companies will be able to pay to host the AWS cloud platform on-premises, as AWS moves deeper into the hybrid cloud market.

The AWS Outposts service provides fully managed but still configurable compute and storage that is connected to the rest of AWS’s cloud. This in-house infrastructure will be available in VMware Cloud on AWS and AWS native configurations — with customers using the same tools to manage both on-prem and public cloud AWS infrastructure.

“Customers want to work on-premises and in the cloud the exact same way,” said AWS CEO Andy Jassy, who added that the on-premises version would offer core AWS services rather than recreate its public cloud offerings in their entirety.

AWS also broadened the scope of its platform by announcing a new service designed to make the public cloud more attractive for high-performance computing (HPC).

The new Elastic Fabric Adapter (EFA) is a service that allows AWS virtual machines to share data over low-latency interconnects. EFA is integrated with the Message Passing Interface (MPI), which AWS says allows HPC applications to scale to tens of thousands of CPU cores without modification.

The service is aimed at persuading organizations running high-performance computing workloads such as computational fluid dynamics, weather modelling, and reservoir simulation to move to the cloud.

SEE: AWS RoboMaker: A cheat sheet (TechRepublic)

It is available now in preview on AWS EC2 P3dn and C5n instances, with support for more instances due to be added in 2019.

Hyperion Research says the availability of specialist hardware, such as heavyweight GPU accelerators, on public cloud platforms like AWS and Google Cloud Platform is driving a growing share of HPC workloads to the cloud.

According to analyst firm Intersect360, cloud spending by HPC customers grew by 44% from 2016 to 2017, which it called a “breakout year” for cloud-based HPC.

4. Simpler ways to create and run IoT applications from the cloud

AWS also revealed a suite of new tools designed to make it easier to build and run IoT applications.

AWS IoT SiteWise is a managed service that gathers, organizes, and structures data from IoT devices in industrial facilities, so it can be used to analyze equipment and performance data.

Another managed service, AWS IoT Events, monitors IoT sensors and applications to help detect problems such as malfunctioning equipment, and automatically triggers actions and alerts. Meanwhile, AWS IoT Things Graph provides a drag-and-drop tool for linking devices such as sensors to services, and AWS IoT Greengrass Connectors lets developers connect to third-party services such as ServiceNow or Splunk via common APIs.

The new services appear to be part of an attempt by the major cloud platform providers to offer all of the infrastructure and services needed to support IoT and edge computing deployments.

Another example is Microsoft’s Azure Sphere, which aims to secure connected microcontrollers — of which there are 9 billion shipping every year — at both the board level and network level.

5. AWS to offer fully managed satellite ground stations

AWS also announced the world’s first fully managed ground station-as-a-service.

CEO Andy Jassy said the service is designed to provide all the infrastructure needed to relay data to and from satellites, as well as to store, process, and analyze that data.

AWS has built two AWS Ground Stations, with 10 more expected worldwide by mid-2019.

“We think it dramatically changes the ease with which you can and the cost with which you can analyze data coming from satellites,” Jassy said.

Each ground station will be associated with a specific AWS Region, with AWS currently having 19 regions around the world. AWS customers previewing the service include Lockheed Martin, Capella Space, and Open Cosmos.

The service is priced per-minute of downlink time, with an option to pre-pay.

In general, businesses running global services need to rely on a series of infrastructure endpoints situated in multiple locations across the world.

The newly announced AWS Global Accelerator is a service that will automatically route users of an AWS-based service to the best endpoint for them, based on their location, application health, and customer-specific configurations.

AWS Global Accelerator also allocates a set of static Anycast IP addresses that are unique per application and do not change, removing the need to update clients as the application scales.
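The routing decision the service automates can be pictured with a toy sketch. All region names, health flags, and latency figures below are invented for illustration, and the real service also weighs customer-specific configurations:

```python
# Toy sketch of location-aware endpoint selection -- the kind of routing
# decision Global Accelerator automates. All values here are invented.
endpoints = {
    "us-east-1": {"healthy": True, "latency_ms": 120},
    "eu-west-1": {"healthy": True, "latency_ms": 35},
    "ap-southeast-1": {"healthy": False, "latency_ms": 20},  # unhealthy: skipped
}

def best_endpoint(endpoints):
    """Pick the lowest-latency endpoint among those passing health checks."""
    healthy = {region: e for region, e in endpoints.items() if e["healthy"]}
    return min(healthy, key=lambda region: healthy[region]["latency_ms"])

print(best_endpoint(endpoints))  # eu-west-1
```

Note that the nearest endpoint loses here because it fails its health check, which is why combining health and proximity in one routing layer matters.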
