Amazon Web Services (AWS) has done more than inch its way into hybrid infrastructure with its re:Invent 2016 product announcements. The elephant in the room during Andy Jassy's keynote was the Snowmobile, a 45-foot-long data migration system hauled by semi truck. A 100PB rolling data center isn't something you typically see outside of an action movie.
More surprising to me was the announcement that immediately preceded the Snowmobile: AWS is testing the waters of customer-premises hybrid infrastructure with a hyperconverged infrastructure (HCI) platform called Snowball Edge.
Call for hybrid infrastructure
One of the customer complaints about AWS is its lack of support for on-premises use cases. Some potential AWS customers face limits posed by geography and lack of bandwidth to AWS data centers. Jassy spoke of a turbine farm as a strong example of the challenge that the Internet of Things (IoT) presents to customers wanting to embrace the public cloud.
Industrial IoT devices have the potential to generate a massive amount of data. The desired end state for most customers is to analyze the massive data sets using the substantial capacity of AWS. However, public cloud data connection options present a serious challenge for real-time data analysis.
In industrial control systems, site managers want fast alerting on faults and major events. The required detection goes beyond single errors on a set of sensors: plant managers want to react quickly to a combination of readings across several systems. As IoT data loads grow, customers need the ability to run high-level but compute-intensive analytics against the data set. High latency and low bandwidth make this computing model impractical under a pure public cloud strategy.
Snowball Edge is a new take on HCI. It deploys as a 3-node cluster that provides a platform with 100TB of space, and it adds data resiliency features over the previous Snowball: by forming a cluster, AWS delivers a higher level of data protection. As with other HCI solutions, the cluster is elastic; nodes are added or removed to scale capacity up or down. Jassy indicated that removing nodes is also how data is shipped back to AWS for ingestion into a customer's VPC.
Similar to other HCI solutions, data storage isn't the only capability of Snowball Edge. AWS has never focused on replicating traditional enterprise IT services. As such, Snowball Edge isn't designed to run massive numbers of virtual machines. AWS seems focused on extending just enough processing functionality to customer facilities. Each node has 100TB of storage and the compute capacity of an EC2 m4.4xlarge instance. An m4.4xlarge instance includes 16 vCPU and 64GB of RAM.
Snowball Edge supports running Lambda functions on the cluster. In the IoT use case, customers write Lambda functions that execute locally on the Snowball cluster. An example function examines data as it's ingested into S3-based storage on the cluster, identifies exceptions, and may notify a human or kick off a remediation workflow. Individual nodes are then removed and shipped back to AWS for ingestion into permanent S3 storage in an AWS data center.
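To make the turbine-farm scenario concrete, here is a minimal sketch of what such a Lambda-style function might look like. This is an illustration, not AWS's actual implementation: the event shape follows the standard S3 event notification format, but `fetch_object`, the bucket and key names, and the `TEMP_LIMIT_C` threshold are all hypothetical stand-ins.

```python
import json

TEMP_LIMIT_C = 95.0  # hypothetical turbine fault threshold

def fetch_object(bucket, key):
    # Placeholder: a real function would read the object from the
    # cluster's local S3-compatible endpoint.
    raise NotImplementedError

def handler(event, context=None, fetch=fetch_object):
    """Invoked once per S3 Put; flags readings that exceed the threshold."""
    alerts = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        reading = json.loads(fetch(bucket, key))
        if reading.get("temperature_c", 0) > TEMP_LIMIT_C:
            # In practice this is where the function would notify an
            # operator or trigger a remediation workflow.
            alerts.append({"bucket": bucket, "key": key,
                           "temperature_c": reading["temperature_c"]})
    return {"alerts": alerts}
```

The `fetch` parameter is injected only so the sketch can be exercised without a live S3 endpoint; the decision logic itself is the point.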
The solution isn't designed to replace products such as Nutanix or VMware vSAN directly; the initial capability is limited to a single Lambda invocation per S3 Put event. Rather, AWS is removing the barriers to the public cloud for industries whose physical constraints formerly prevented adoption. I'm watching for the new disruption that results as industries with a high technological barrier to entry are challenged by adept startups using edge cloud computing.
- Amazon goes all-in on AI and big data at AWS re:Invent 2016 (TechRepublic)
- AWS' Snowmobile data transport truck highlights why cloud giant is so damn disruptive (ZDNet)
- Amazon Web Services meets the hybrid world (ZDNet)
- Six re:Invent questions: AWS' Adam Selipsky (ZDNet)
- 5 steps for a successful large-scale cloud migration to AWS (TechRepublic)
Keith Townsend is a technology management consultant with more than 15 years of experience designing, implementing, and managing data center technologies. His areas of expertise include virtualization, networking, and storage solutions for Fortune 500 organizations. He holds a BA in computing and an MS in information technology from DePaul University.