I was invited to speak at a small expo in Seattle when I ran into an unexpected opportunity to learn more about serverless applications. The idea had intrigued me since I listened to a podcast on the topic a few weeks earlier.
After a random Twitter mention, a friend who works at IBM’s Bluebox cloud group invited me to a microservices meetup just a few blocks from my hotel. It was there that I got a deeper, practical look at the AWS Lambda service.
There are servers in serverless
“Serverless applications” is more of a marketing term than a technical concept. Lambda isn’t some breakthrough in computing where you run applications without computers. I believe it’s more accurate to call Lambda a platform for building non-server-centric applications. The concept is to pivot away from building applications that are tied to operating systems.
Lambda allows developers to create event-based applications using other Amazon services such as S3. The example given at the meetup was a media sharing app. Users of the application can upload video to an S3 storage bucket. In a traditional OS-centric approach, some service running on an AWS instance would monitor the S3 object store for new files.
Once a new file is detected, a video encoding process is initiated. In a traditional OS-centric environment, the developer runs the encoding process within a virtual machine. If the encoder virtual machine fails, encoding fails for all future video posts.
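To make the event-driven model concrete, here is a minimal sketch of what such an encoder function might look like in Python. The event field names follow the standard shape S3 uses when it notifies Lambda; the `transcode_video` helper is hypothetical and stands in for whatever encoding library or service a real application would call.

```python
# Hypothetical stand-in for the real encoding work; a production
# function would invoke an encoder here and write the result back to S3.
def transcode_video(bucket: str, key: str) -> str:
    return f"s3://{bucket}/encoded/{key}"


def handler(event, context):
    # S3 delivers one or more records per invocation; each record
    # identifies the bucket and object key that triggered the event.
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append(transcode_video(bucket, key))
    return {"encoded": results}
```

Note that the function holds no state of its own: everything it needs arrives in the event, which is what lets AWS run as many copies of it in parallel as the workload demands.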
In Lambda, developers create independent functions that act as microservices. These microservices are developed against the Lambda framework rather than an OS-centric framework. Developers select the amount of RAM needed for each Lambda function, and AWS sizes the compute and storage resources required to run the function based on the memory selected.
In the example of our media encoding function, S3 sends an event to the Lambda service. The event triggers the execution of the encoder function built on Lambda. AWS will instantiate as many parallel Lambda calls as are needed to process the S3 events. The developer selects the maximum number of parallel functions that run at any one time. The limit provides a lever to avoid overwhelming an underlying system such as a MySQL backend.
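The concurrency limit behaves much like a bounded worker pool: no matter how many events arrive, only a fixed number of function instances run at once, which protects a downstream resource. A rough analogy in plain Python, not the actual Lambda mechanism:

```python
from concurrent.futures import ThreadPoolExecutor

# Rough analogy for Lambda's parallelism cap: a bounded pool ensures no
# more than max_parallel "function instances" run at any one time, so a
# backend such as MySQL never sees more than max_parallel concurrent callers.
def process_events(events, handler, max_parallel=5):
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        # map preserves input order while limiting concurrency.
        return list(pool.map(handler, events))
```

In Lambda itself the cap is set as an account- or function-level concurrency setting rather than in code, but the effect on a downstream system is the same.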
The result is that the simple encoder process eliminates the OS as a point of scale and resiliency. As long as AWS’ Lambda and S3 services are available in the selected region, the application will continue to function regardless of the state of any single OS.
Microsoft Azure Service Fabric
Microsoft recently announced its new Azure Service Fabric at the 2015 Build conference in San Francisco. Azure Service Fabric isn’t marketed as a serverless platform, but rather as a cluster service that runs on either Windows Server or Linux. Lambda could be viewed more as a revolution in computing, while Azure Service Fabric is an evolution of microservice-based architecture for a cloud-based approach.
Microsoft Azure Service Fabric runs existing code or scalable microservices. Applications written natively for Service Fabric treat VMs as a pool of resources; the service is decoupled from individual virtual machines. Generically, Service Fabric clusters consist of any Windows- or Linux-based VMs. In practice, an AWS instance could provide compute for a Service Fabric cluster. Compared to Lambda, Service Fabric is much closer to container orchestration and clustering.
Microsoft’s selling point for Service Fabric running in Azure is the availability of additional services. Similar to Amazon’s integration of S3 with Lambda, Azure offers integrations with various Azure public cloud services.
What do you think?
Share your thoughts and questions on serverless applications in the comments section below.