Software application developers are often thought of as loners, mavericks and reclusive types, fond of working in solitary environments. If there were a parallel for life out on the edge, programmers would already embody it, which gives software engineers a natural affinity for coding in edge computing environments.
All developers are the same. It’s a contentious statement, but it holds at least if we agree that all software application developers are creatures of habit: They all take an exhaustive approach to detail, they share a naturally analytical brain type adept at problem solving, and most have a slightly unnerving predilection for soda and pizza.
But of course, all developers are not the same. Some are focused on upper “presentation layer” interfaces, some are networking specialists, some are core platform gurus focused on functionality and performance, while still others are systems architects, penetration testers, data science champions and so on.
But developers are not only classified by the nature of their engineering specialism; they are also distinguished by the implementation surface their work is applied to. Some will focus on smartphones, some on desktops, some on embedded systems and some on cloud-native applications that could potentially span every domain.
Embedding the evolving edge
Among those cadres of coders, we find the oft-unloved embedded computing stalwarts. Close in mindset to edge developers, embedded developers are not quite the same, as their role has historically been defined as focusing on everything from home heating systems to fitness trackers and perhaps even your television remote control.
But as we know, those embedded devices themselves are now straddling the Internet of Things. Airport kiosk computers are processing in real-time, in-car systems are now hooked up to a web connection, transit and fare collection machines have an intermittent link to the cloud, and factory robots and medical systems now operate in an always-on, always-connected data plane.
Some embedded computing will still be known as embedded computing, but many of these devices — and, specifically, the sensors and gauges within them — will now form the structure of what we refer to as the computing edge. This is how edge is different to embedded and different to distributed computing. These are not just machines moving out of the data center; these are machines doing a specific job.
As TechRepublic has already taught us: “Edge computing is different [to distributed computing] because of the way edge computing is tethered to IoT data that is collected from remote sensors, smartphones, tablets and machines. This data must be analyzed and reported on in real-time so its outcomes are immediately actionable for personnel at the site.”
With all of that explanatory exposition out of the way, if this is a new operational substrate for software code to exist in, what makes an edge developer?
What makes an edge developer?
“When we talk about edge devices, we are of course talking about bringing data processing and analytics closer to the device,” said Alessandro Chimera, director of digitalisation strategy at enterprise data platform company Tibco. “This naturally gives us a chance to reduce latency and increase edge software’s response times.”
Reminding us that so many edge devices are installed in connected cars, Chimera points out that there would be too much latency to connect to any kind of cloud service in these environments — and anyway, that action would be rendered impossible if the car is in a tunnel. So in many ways, we are talking about reversing our modern notion of continuously connected computing.
“What all that means is that edge developers need to think about how much data they are sending back to a cloud data center at any given point in time; they need to think more carefully about what data can be processed locally,” Chimera added. “Where edge devices are constrained in power, edge developers need to think about battery usage as they architect and build their edge applications. Transmission to the cloud saps energy, so if an application can defer a connection from once every 15 minutes to once every hour — especially when it is not time-critical or mission-critical in terms of the application’s functionality — then that makes good engineering sense.”
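To make that concrete, here is a minimal Python sketch of the deferral pattern Chimera describes, with an immediate path for time-critical readings and an hourly batch for everything else. The send_to_cloud() function, the threshold and the interval are purely illustrative assumptions, not any particular vendor’s API.

```python
import time
from collections import deque

# Hypothetical uplink call; in practice this would be an HTTPS or MQTT publish.
def send_to_cloud(batch):
    print(f"uploading {len(batch)} readings")

UPLOAD_INTERVAL_S = 3600      # defer routine data to an hourly upload
CRITICAL_THRESHOLD = 90.0     # illustrative alarm level for a sensor reading

buffer = deque()
last_upload = time.monotonic()

def handle_reading(value):
    """Process one sensor reading locally, deciding what goes to the cloud and when."""
    global last_upload
    if value >= CRITICAL_THRESHOLD:
        send_to_cloud([value])            # time-critical: transmit immediately
        return
    buffer.append(value)                  # non-critical: keep it on the device
    if time.monotonic() - last_upload >= UPLOAD_INTERVAL_S:
        send_to_cloud(list(buffer))       # flush the deferred batch once an hour
        buffer.clear()
        last_upload = time.monotonic()
```

The trade-off is exactly the one Chimera points to: every deferred transmission saves radio power, at the cost of the cloud seeing routine data an hour later.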
Joe Drumgoole, senior director of developer relations at MongoDB, largely concurs with Chimera’s sentiments but thinks that we’ve been here before.
“Edge developers: What a quaint, quixotic notion,” Drumgoole said. “I grew up with the Ada programming language, which was purpose-built for this idea of edge developers. In those days, we called them real-time engineers or embedded systems developers. The Ada language is largely consigned to history, but edge development still has many of the same constraints. However, now the tools are better, the languages are more diverse and the world is building edge developer toolkits.”
But Drumgoole is philosophical about the state of software evolution and progress overall and believes that some things don’t change. As we know, edge sensors are often simple units of hardware that run a comparatively simple single program. The memory constraints are real — who running a Python program today worries about memory consumption? — but the difficulties of working on hardware with no screen, keyboard or mouse remain.
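As a small illustration of that memory discipline, the sketch below uses a fixed-size ring buffer so an edge program’s footprint stays bounded however long it runs. The buffer size is an illustrative assumption for a constrained device, not a recommendation.

```python
from collections import deque

# A fixed-size ring buffer: once full, the oldest reading is silently dropped,
# so memory use stays bounded no matter how long the sensor runs.
MAX_READINGS = 256                     # illustrative cap for a constrained device
readings = deque(maxlen=MAX_READINGS)

def record(sample):
    readings.append(sample)

def rolling_average():
    """A cheap local statistic computed without growing memory."""
    return sum(readings) / len(readings) if readings else 0.0
```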
“This is the source of the torture and the joy of edge programming,” Drumgoole said. “There is something very honest about burning programmable read-only memory devices. It either works or it doesn’t, and the cost of it not working can be dramatic. The best edge programmers understand hardware and software, are not averse to a bit of soldering, can read an oscilloscope and still remember Ohm’s law.”
How to manage massive micro-clusters
A lot is changing in this space. Today, an edge developer needs an innate understanding of cloud-native development best practices. Perhaps even more fundamentally, they need to have an awareness of the adaptations they will need to make to deliver to the uniqueness of edge.
Keith Basil, general manager of edge at SUSE, thinks that can be quite challenging, because it combines skill sets from typically disparate industries, all of which are needed on a single team. With its dedicated edge team, SUSE has specific experience in this space; there is even a suggestion here that well-populated edge zones perhaps deserve a new term of their own.
“In a DevOps sense, the idea [of edge as] a set of large scale ‘micro-clusters’ is quite impactful,” Basil said. “Here we see developers needing to work with new constraints when building software in containers, all of which has to take into account different CPU architectures, limited memory and smaller disk sizes found on hardware at the edge.”
With edge, Kubernetes clusters run at remote locations, and Industrial IoT (IIoT) devices are peers on the same network.
“As an edge developer, one of the first questions that comes to mind is that of connectivity to those IIoT devices,” Basil said. “Does one build IIoT protocol support into the application, or does one extend an existing open source project to handle the management of device access instead?”
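If a team takes Basil’s first option and builds IIoT protocol support into the application itself, a common route is to speak MQTT directly. Below is a minimal sketch using the classic callback API of the open source paho-mqtt client; the broker address and topic are placeholder assumptions.

```python
import paho.mqtt.client as mqtt  # assumes the open source paho-mqtt package (1.x callback API)

BROKER = "gateway.local"   # placeholder address of a broker on the edge network
TOPIC = "factory/line1/#"  # placeholder topic covering a group of IIoT sensors

def on_connect(client, userdata, flags, rc):
    # Subscribe inside the connect callback so a reconnect re-subscribes too.
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # Process each device reading locally rather than shipping it to the cloud.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, 60)
client.loop_forever()
```

The alternative Basil raises, extending an existing open source project to manage device access, trades this per-application plumbing for a shared management layer that the whole cluster can use.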
That’s the tough part of edge computing: Some of this discussion is still open to debate, such is the still-nascent nature of so many deployments.
Edge: Speed, resources and understandability
Because edge sensors are designed and indeed required to deliver data extremely quickly, in real or near-real time, within highly constrained compute spaces, it’s pretty clear that every microwatt of power counts. According to Alan Jacobson, chief data and analytics officer at self-service analytics and data science company Alteryx, the key focus areas of an edge developer are threefold: speed, constrained compute resources and understandability.
In the case of Alteryx’s technology partner McLaren Racing, each Formula 1 race car has some 300 telemetry sensors which deliver over 100,000 trackside parameters across the course of a race weekend. Huge datasets are sent around the world in seconds for certain types of analyses, while in other instances, the data is used either trackside or analyzed on the vehicle.
With such large volumes of edge sensor data generated, it’s not always viable to bring the data to the analytics. Having analytic platforms that allow the user to move compute power to where it’s needed is essential in a world defined by large data, fast compute and the need for resilient systems. Analyzing these large datasets is what Alteryx does.
“This requirement for speed necessitates technology that’s understandable by developers and domain experts alike,” Jacobson explained. “Any insights generated are useless if the engineer trackside isn’t able to easily comprehend them. For edge computing, where speed-to-insight is key, there’s little time for QA on the results — the insight is either immediately useful or it is discarded. It takes flexible developers who know when to push compute out or bring data in.”
Edge development is still development
If we have painted a picture of some new dystopian computing zone where edge developers do what they do in some far more clinical, granular way, only occasionally crawling upwards out of the primordial swamp to feed on pizza and soda, then that would be wrong.
Edge development is still actually software application development, says Chris Darvill, VP for EMEA solutions at cloud-native API company Kong.
“The requirements for edge developers are in some ways not dissimilar from your more traditional developer, and in the future, it won’t be an either-or, but in fact both will be required to deliver solutions out to the market,” Darvill said. “Edge developers will be required to utilize modern development practices characterized by well-defined APIs to accelerate the building of new applications and services through reuse of existing edge and centralized APIs. They will also need to embrace consistent tooling that can be used to ensure the proper standards are applied to the applications being developed, encompassing a strong focus on security, governance and observability.”
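As a loose illustration of the reuse Darvill describes, the sketch below shows an edge service exposing its own small, well-defined API while delegating one call to an existing centralized API and degrading gracefully when the uplink is unavailable. It uses Flask and requests, and the endpoints, URL and payloads are hypothetical.

```python
import requests
from flask import Flask, jsonify

app = Flask(__name__)
CENTRAL_API = "https://api.example.com/v1"   # hypothetical centralized API

@app.get("/site/status")
def site_status():
    """A small edge API: local data first, centralized data only when reachable."""
    status = {"site": "plant-7", "local_sensors": 42}   # placeholder local reading
    try:
        resp = requests.get(f"{CENTRAL_API}/maintenance-schedule", timeout=2)
        status["schedule"] = resp.json()
    except requests.RequestException:
        status["schedule"] = None   # degrade gracefully when the uplink is down
    return jsonify(status)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```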
Yes, there will likely be increased use of microservices, containers and Kubernetes in edge software programming. And, undeniably, practically every software engineering job in this space should be pushed towards a cloud-native status, even where on-premises compute happens.
But edge development is still code, and perhaps in 10 years’ time, we will accept it as part of the spectrum of coding competencies that every developer is expected to be able to showcase and execute, much like mobile development is part of the establishment today.
Code on the edge is sharpening, as are the tools and toolsets, so be sure to pack some all-weather gloves.