Edge computing needs an edge to succeed, which means attempts to go serverless are going to require physical infrastructure, such as cell towers.
In the cloud, hardware no longer matters, especially as the world goes gaga for serverless. At least, that would be the case but for one inconvenient truth: Serverless is powered by... servers.
Even if one accepts that the cloud increasingly allows developers to focus on writing code rather than worrying about how it runs, the hardware that powers the cloud looks set to matter for a long, long time, something Google's Kelsey Hightower humorously points out. In fact, in areas like edge computing, hardware has never mattered more, as Goldman Sachs' former top technologist Don Duet told me in an interview. By Duet's reckoning, "The 'land grab' for the next generation of computing literally involves physical assets like land and fiber."
Go west, young man
Duet believes this so strongly that he dumped his impressive Wall Street position to join an edge computing startup, Vapor IO, in Texas. Despite having architected the technology strategy for the world's preeminent investment bank, Duet felt he needed to break away to solve a serious problem: the speed of light:
Edge computing has sparked a lot of interest in the past year as the Internet undergoes a massive transformation to support new, low-latency applications, including real-time IoT, distributed machine learning, and autonomous vehicles. These applications require cloud capabilities that can respond within milliseconds, and centralized data centers are too far away. The speed of light is too slow. The only solution is to push cloud capabilities out to the edge of the network.
In such a world, the cloud becomes an n-tier fabric that stretches from the centralized data center to the edge of the wireless network. The most interesting aspects of edge computing will emerge as the full capabilities of cloud computing get pushed to the edge.
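To make the speed-of-light argument concrete, here is a rough back-of-the-envelope sketch. The distances and the two-thirds-of-c figure for light in optical fiber are illustrative assumptions, and the calculation ignores routing, queuing, and processing delays, which add substantially on top:

```python
# Back-of-the-envelope fiber propagation delay.
# Assumption: light in optical fiber travels at roughly 2/3 of c.
SPEED_OF_LIGHT_KM_S = 299_792  # km/s in a vacuum
FIBER_FACTOR = 2 / 3           # typical refractive-index penalty for fiber

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds over fiber,
    ignoring routing, queuing, and processing delays."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

# A centralized data center 1,500 km away costs ~15 ms in the fiber
# alone; an edge site 50 km away costs ~0.5 ms.
print(round(round_trip_ms(1500), 1))  # ~15.0
print(round(round_trip_ms(50), 2))    # ~0.5
```

Even before any compute happens, a distant data center can eat most of a single-digit-millisecond latency budget on propagation alone, which is the physics behind pushing the cloud to the edge.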
Getting to the edge, however, means getting into the muck of physical infrastructure. To wit, Vapor IO signed a partnership with shared wireless infrastructure player Crown Castle to get access to over 40,000 cell towers and more than 60,000 miles of metro fiber, not to mention a growing small cell footprint. "A great deal of [edge computing success] depends on real estate and urban infrastructure," Duet says, and in this case that means a network of fully managed, programmable data centers across a nationwide footprint of edge locations.
SEE: Special report: The cloud v. data center decision (TechRepublic PDF)
Of course, Vapor IO is hardly unique in trying to deliver on edge computing. AWS announced Greengrass, and Microsoft has its Azure IoT. In Duet's mind, however, these don't go far enough, because they're still too centralized, primarily focused on edge gateways and devices, and bringing only a small subset of cloud functionality to the edge.
For a true edge cloud, he argues, "[W]orkloads must run on cloud servers at the physical edge, adjacent to the devices, and directly cross-connected to the wireless network. IP addresses are presented at the edge nodes and handed off seamlessly at the edge, not resolved back in a central office location."
A true edge cloud, in other words, provides all the key attributes of a centralized cloud, only in the edge location, including elastic scalability on automatically provisioned equipment. It also delivers a direct connection to regional and centralized data centers, as well as the internet at large, providing fast and seamless tiers of service that matter most for mission-critical computing. The key component is infrastructure: while much of the attention goes to use cases like autonomous cars and virtual reality, the underlying platform must be reliable, secure, and highly distributed.
SEE: The 2 biggest problems with serverless computing (TechRepublic)
Not surprisingly, this "full cloud at the edge" requires a reconceptualization of data centers. Instead of billion-dollar behemoths stacked with servers, Vapor IO builds so-called "Vapor Chambers": self-contained, remotely operable enclosures nine feet in diameter that house 130 to 160 kW of compute. They're micro data centers, if you will, and a tangible reminder that as much as we may want to stop worrying about hardware in the cloud, it is the nuts and bolts of physical servers that ultimately deliver this "serverless" reality, particularly at the edge.
Has your organization made progress toward implementing serverless computing? Share your experiences and advice with fellow TechRepublic members.