Organisations have been undergoing transformation by digital technology for decades, but in recent years 'digital transformation' has come to mean the retooling of business processes for a world dominated by cloud, mobile, big data and social technologies. Developments such as the Internet of Things (IoT), machine learning and artificial intelligence (AI), automation, 3D printing and virtual/augmented reality are queuing up to bring about further disruptive change.
The task facing organisations today is to negotiate an optimal path through a complex and fast-changing technological landscape, avoiding blind alleys such as — to take just one example — ‘lock-in’ to a particular public cloud provider.
The technology stack offered by open-source pioneer Red Hat is at the heart of many enterprise digital transformation projects, and we caught up with Tim Yeaton, senior vice-president of the company’s infrastructure business, at the Red Hat Forum in London earlier this month to discuss this and other matters.
We began by asking how far the average CIO has travelled down the digital transformation road.
“Most companies know they need to — or believe they should — go there,” said Yeaton, “but most traditional IT organisations are struggling with how to get their current environments to be far more cost-effective to even think about scraping off enough investment to go pursue cloud-native applications.”
“I have more CIO-level conversations about their strategic direction than I’ve ever had before — maybe three a week,” he continued. “What’s really fascinating is, I’ll talk to a CIO in a particular industry — the financial services sector, for example — and they’ll say ‘we’re worried that all our competitors are miles ahead of us’, and the first thing I say is ‘don’t believe the hype — the reality is, there’s one or two in the industry that went all-in first, but most companies are in the same situation’. CIOs are almost embarrassed to think they’re the only ones behind, but this is a long evolution that’s going to play out.”
As far as the drivers of digital transformation are concerned, are these different for different industries?
“I think the destination ultimately is the same — highly fluid, scalable infrastructure — but the journeys are definitely different,” said Yeaton. “For example, the telecommunications industry, as it moves to software-defined, is completely reinventing itself. Relationships between network equipment providers and the carriers and service providers are all changing: some of the carriers don’t think they need their traditional suppliers, so they’ll do it themselves or work with vendors like us. That’s an industry where it’s been thrown completely up in the air. In other industries it’s far more linear — financial services, for example, where they’re always looking for the next wave of innovation, so they’ve jumped on this. Then there are industries where some digital entrant has created an existential threat — retail, for example, where a high percentage is now online. And in some cases it’s just improving competitiveness — like in oil and gas, or some of the manufacturing sectors.”
Red Hat’s business model: Curating upstream innovation
If you’ve ever wondered how a business can make money from free open-source software, just take a look at Red Hat’s business model and recent financial results.
In a nutshell, Red Hat tracks the multitude of open-source projects out there, selects and curates a (small) proportion of them, and delivers ‘mission-critical IT solutions’ with essential additions such as quality assurance, user experience, lifecycle management and enterprise-grade support. It’s a model that currently delivers over $2 billion in revenue a year (guidance for FY2017, ending next February, is around $2.4bn).
“In the early days, the open-source motivation was to create low-cost replicas of commercial software — operating systems, middleware — and it actually did a pretty good job,” said Yeaton. “But I think both the licensing models and the pace of innovation in those collaborative communities caught on, so it was a self-feeding process. Now, the fount of most innovation in software is coming out of these ‘upstream’ open-source communities. But they’re communities of innovation, not communities of completion: what they don’t do well — and this hasn’t changed, and I don’t think will ever change — is, they’re not focused on ‘fit and finish’…they’re not trying to build complete, perfect products.”
Which, of course, is where Red Hat comes in.
“What’s different now is that the range of technologies that’s being innovated in the upstream is so broad that you have to have direct and active participation in every single project that you’re going to productise,” said Yeaton. “The reason is, you’re going to have to make improvements in installation, you’re going to have to make improvements in user experience and user interface, integration — and then there’s all the hard work around providing lifecycle.”
“Support really comes from, how do we fix bugs in current releases, how do we fix bugs that appear, as well as introduce new features? We do that via something called ‘backporting’. The way we give a three-year lifecycle to OpenStack, for example, even though OpenStack’s release cadence is every six months, is: let’s say we release the ‘M’ version of OpenStack, which we’ve recently done; we’re going to support that in production for three years, so we have developers on every single OpenStack module that understand the technology deeply and can make judgements around a bug that’s found, say, two or three releases hence…how that might be backported to the original, or interesting new features that are required to sustain something in production. There have been a number of new entrants in open-source who haven’t quite gotten this model.”
“We have the discipline to say: ‘we’ll only modify existing projects in special ways because we know those changes will be accepted upstream’ — everything we do is in the upstream, there’s no proprietary bits that we maintain on our own. By keeping the model pure, we’re able to avoid putting customers in jeopardy of being on a fork.”
So, even given Red Hat’s strategic focus on IT infrastructure and application development, all this curation presumably limits the number of upstream projects that the company can be involved in?
“Without a doubt,” Yeaton confirmed. “Of the million or so open-source projects that exist in the world, we participate in under two thousand — and what’s interesting is, we participate in more than we productise. So for example, in projects that are setting an industry direction like OpenDaylight in software-defined networking, we work almost exclusively with our software-defined networking partners — but it gives us enough insight to know how to integrate it effectively and performantly into OpenStack.”
And how do the upstream communities view Red Hat? How often do they get paid for the work that ultimately fills the company’s coffers?
“As far as the infrastructure products that I’m responsible for are concerned, we probably author 25 percent of the code that we ship, yet if you look at the upstream communities, most developers are paid by someone. OpenStack is a great example: here, today [at the London Red Hat Forum] we have RackSpace, who are very strong contributors, along with Intel and other software providers — even people we compete with.”
As noted earlier, Red Hat’s model certainly delivers the goods. The company’s most recent quarterly results (Q2 FY2017) included $600 million in revenue (19% year-on-year growth), of which the majority — $531m, or 88.5 percent — was recurring subscription revenue. Red Hat has now delivered 57 consecutive quarters of revenue growth, which goes a long way towards explaining the company’s healthy share-price trajectory in recent years.
Concepts, buzzwords and guidance
For CIOs, keeping up with trends in enterprise IT is a never-ending process of absorbing new concepts and buzzwords, working out their potential relevance to their particular organisations, and then implementing them as required. So, ‘waterfall’ development cadences have become ‘agile’ and are heading towards a continuous ‘DevOps’ model; traditional ‘monolithic’ apps have become ‘N-tier’, and will be superseded by ‘microservices and open APIs’; and physical on-premises servers have given way to virtualised infrastructure, while ‘containerised’ apps running on public and private clouds are the way of the future. The broad goals here are to modernise legacy apps (to create cost savings) and to develop new agile workloads (to implement innovative business initiatives).
We asked Tim Yeaton how the latest buzzword, ‘serverless architecture’, or perhaps more accurately Function as a Service (FaaS), fits into Red Hat’s enterprise IT picture. Amazon started this ball rolling in November 2014 with its Lambda service, and both Google (with Google Cloud Functions) and Microsoft (with Azure Functions) have now followed suit. So what’s Red Hat’s take on this?
“I think it’s just a point on the spectrum of virtualising and service-enabling,” Yeaton said. “The funny thing about serverless environments is, it’s a bit of a misnomer — it’s really about how much you can abstract away. We’ve thought a lot about that in our strategy, and you can see these concepts play out, but I think in practice it’s going to take many years for them to become widely adopted.”
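The abstraction Yeaton describes is easiest to see in code. The sketch below is a minimal FaaS-style handler in the spirit of AWS Lambda's Python programming model: the provider owns the servers, runtime and scaling, and the developer supplies only a function. The event fields used here (`name`) are illustrative assumptions, not part of any real platform's API.

```python
import json

def handler(event, context=None):
    """Entry point a FaaS platform would invoke for each request.

    The platform passes the request payload as `event`; everything
    below the function — servers, OS, scaling — is abstracted away,
    which is the 'serverless' point Yeaton makes.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}"}),
    }
```

Invoking `handler({"name": "Red Hat"})` locally shows the whole developer-visible surface of such a function: everything else belongs to the platform.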
And how about Red Hat’s customers — are they often champing at the bit for new, trendy technologies like serverless/FaaS to be productised, or does the company have to explain to them why it’s important?
“The customers that we have the best relationships with often are the ones that come to us and say: ‘this is kind of overwhelming…guide us a little bit’. They know they’ve got to react, but they’re not entirely sure what to change first. In most companies, the hardest part is, you’ve got to get your development organisation to agile, and that typically means cultural shifts in the whole IT organisation — policy and process changes — before you start investing in the technology.”
Security and the new IT architecture
Red Hat is in the business of creating mission-critical enterprise software, so clearly security is a central priority. But does the new generation of DevOps-created, microservices/open API-driven, containerised cloud-native workloads create qualitatively different security challenges for IT departments?
“It’s sort of a two-edged sword,” said Yeaton. “You can make containers very secure — containers are a Linux operating system technology, and we have techniques to secure those and wall them off — so if you apply best practices for building containers, you can make containers very, very secure. The flip side is, there’s a lot of pre-built containers in the wild — you can get them from Docker Hub, lots of other places. And like any other code, if you’re consuming code from the outside, what’s in that container matters — and containers are a binary format, so understanding what’s in there is not easy. So here again, it’s best practices: containers you’re going to use in production you should probably build from source…you should know the provenance of that source. Sloppy microservices-based applications can introduce a lot of vulnerabilities, and you won’t know them until it’s too late.”
Does that mean, sometime in the future, that the ‘dark side’ of containers will manifest itself as security breaches of modern cloud-native apps?
“I’m sure you’ll see companies who ingest code they didn’t understand having problems, no question,” said Yeaton. “That’s why it’s as much about good development practices and policies — having a root registry that you manage and you understand how components get put in there, all those kinds of things.”
Returning to the upstream open-source communities that are the wellspring of Red Hat’s product line: does their focus on innovation mean that security is less of a priority, resulting in the company having to put in a lot of work here?
“Most security issues are ones where best practices were not applied during development, or something was introduced here that has an effect over there. We think of security as a component-level issue and a system-level issue, and we pay a lot of attention to both. We have teams inside the company that look across products as they get put together, which no upstream community is going to do.”
Technology choices and vendor lock-in
Another potential trap for unwary CIOs is making technology choices that back them into a corner as far as public cloud providers, for example, are concerned. So how does Red Hat’s open-source-based stack play here — can it be seen as a future-proofing layer?
“The way we’ve built our software elements, it’s really designed to create abstractions across multiple public clouds. Let’s say you’re building a microservices application, and you don’t want to necessarily be dependent on a single public cloud; you can build that in such a way that it can run on our infrastructure across public clouds or on-premises identically, using the same management tools and provisioning tools. That’s unique to us, and that really comes from fifteen years of data centre experience and how to make those things first-class citizens in this new world.”
“Secondly, I’ve seen people talk about the ‘Red Hat stack’ as if it’s a monolithic thing, but in our case each element of the stack, because they’re community-first projects, has to add value in its own right. Now we do a lot of work to make sure that when you put them together you get the best experience possible, but we have customers all the time who use, say, OpenStack or OpenShift but someone else’s management product, or they’ll use somebody else’s PaaS on top of our infrastructure. We’re designed to be modular in that way — we support, and only support, the native interfaces that are defined in the community, be it OpenStack, CloudForms, whatever. We’re eminently pluggable.”
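The cloud-abstraction idea in Yeaton's answers can be sketched as a thin portability layer: application code targets one small interface, and swapping the backend does not touch the application. The interface and backend names below are invented for illustration and are not Red Hat APIs; a real backend would wrap a particular cloud provider's SDK.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """The only storage surface the application is allowed to see."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend; a per-cloud backend would implement the same two methods."""
    def __init__(self):
        self._blobs = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def save_report(store: ObjectStore, name: str, body: bytes) -> None:
    # Application code depends on the interface, never on a provider —
    # the portability Yeaton describes across public clouds and on-premises.
    store.put(f"reports/{name}", body)
```

The policy some companies enforce — limiting which native services developers may use on any given public cloud — is the organisational counterpart of this pattern: keep the provider-specific surface small so workloads stay portable.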
Does that mean, then, that Red Hat has nothing to fear from a widespread migration of enterprise workloads to the public cloud?
“If it moves the way customers say it will, companies expect to be bimodal, with traditional and cloud-native apps in the mix; they’re going to be hybrid, with on-prem and public cloud — and on the public side they want to use multiple clouds because they’re afraid of another generation of lock-in otherwise. There are companies who already have policies around what native services developers are allowed to use on any given public cloud — they want to minimise that, so they can allow for portability, bursting and automated management. That’s where this is all going.”
Wrapping up our time with Tim Yeaton, we asked for his vision of enterprise IT beyond the 3-5-year timeframe of the foregoing discussion.
“In five to ten years, I think it’s about how much can be abstracted, how much can be automated, how much can be driven through machine-to-machine and artificial intelligence, with developers adding value to what’s come before. That’s where you’re going to see the pace accelerate.”
In this new world, the CIO will become much more of a curator and enabler of IT initiatives and developments — a “broker of the resources,” as Yeaton put it.