
Shifting to DevOps? Put your ducks in a row first

With DevOps implementations, you need to figure out why and how and then streamline. A DevOps expert weighs in on barriers to adoption, software delivery trends, and more.

If you are pursuing DevOps and Continuous Delivery, make sure that your whole organization knows why, and has linked the effort to goals relevant to your business.

This is the first piece of advice that Andrew Phillips, VP of Products at XebiaLabs, gave for companies pursuing DevOps in a recent Q&A with TechRepublic. Second, he said, firms should agree on metrics and take an incremental approach; lastly, they need to standardize tooling and processes without too much rigidity.

Phillips also shared that in his experience, the biggest problem in implementations is often the lack of experience in a team regarding what an effective DevOps environment looks like. Making the transition can be a real challenge.

Located in Boston, XebiaLabs is a provider of automation software for DevOps and Continuous Delivery, working to speed up clients' delivery of new software. Phillips is also an evangelist and thought leader in the DevOps, Cloud, and Delivery Automation space.

DevOps is a software development approach that emphasizes communication, collaboration, integration, automation, and measurement of cooperation between developers and other IT professionals. Continuous Delivery is a development method in which teams produce software in short cycles and seek to ensure that a viable product can be released at any time.

TechRepublic: What advice would you give to a company that has decided to pursue DevOps?

Andrew Phillips: First, ask yourself why you are pursuing DevOps, Continuous Delivery, Agile, or whatever you want to call it. Ensure you have identified clear, business-relevant goals and that everybody involved in the process has a shared understanding of them.

[Photo: Andrew Phillips. Image: XebiaLabs]
Then, agree on metrics that you will use to track progress against your goals, and take an incremental "find biggest bottleneck and improve" approach to reaching your goals. Avoid making process changes or introducing tools "just because." Require any change to be justified by the data.
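This "find biggest bottleneck and improve" loop can be sketched with a small script. The stage names, timestamps, and `stage_durations` helper below are hypothetical illustrations, not part of any XebiaLabs product; in practice the timing data would come from your CI/CD tool's logs or API.

```python
from datetime import datetime

# Hypothetical pipeline-stage timing records: (stage name, start, end).
# Real data would be exported from your build/deployment tooling.
runs = [
    ("build",  "2015-08-01T10:00", "2015-08-01T10:12"),
    ("test",   "2015-08-01T10:12", "2015-08-01T11:40"),
    ("deploy", "2015-08-01T11:40", "2015-08-01T11:55"),
]

def stage_durations(records):
    """Return (stage, total minutes) pairs, slowest stage first."""
    fmt = "%Y-%m-%dT%H:%M"
    durations = {}
    for stage, start, end in records:
        minutes = (datetime.strptime(end, fmt)
                   - datetime.strptime(start, fmt)).total_seconds() / 60
        durations[stage] = durations.get(stage, 0) + minutes
    return sorted(durations.items(), key=lambda kv: kv[1], reverse=True)

bottleneck, minutes = stage_durations(runs)[0]
print(bottleneck, minutes)  # the stage to improve first
```

The point of keeping this data-driven is exactly the one Phillips makes: any process or tooling change should be justified by numbers like these, not adopted "just because."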

Thirdly, aim for standardization of tooling and processes, but leave room for exceptions where necessary. Avoid forcing teams onto a standard platform. Instead, present them with a tradeoff in which more freedom on their part is paired with greater responsibility (e.g., for actually running the systems in production and ensuring all operational requirements are met).

TechRepublic: In your experience, what are the main cultural and organizational barriers that complicate a firm's move to DevOps?

Andrew Phillips: In terms of its impact on the chances of a firm's successful implementation of DevOps, lack of experience and/or knowledge of what an effective "DevOps environment" looks like is probably the biggest problem we see. Even if you define clear goals and metrics, it's pretty tricky to actually improve your process if nobody on the team has experience making such a transition.

We also frequently run into the other "obvious" barriers: roles and responsibilities that make it difficult for the people building and then running services to pursue common goals, an entrenched belief that "it can't be done here because our organization is too set in its ways," or a starting mandate to "DevOps-ize our existing process" rather than accepting that adopting DevOps may well change your process quite a bit.

This last point is an example of a broader category of problems that more established organizations especially need to bear in mind: the tendency to want to "do DevOps" inside a very small box, i.e., to institute it for just one or a couple of high-profile projects while somehow respecting all the existing release management, change management, service management, and other processes.

Yes, some types of tooling can help provide a kind of "compatibility layer" to help integrate the tools and processes in the "DevOps world" with the rest of the organization (we have a number of users doing this quite successfully with XL Release), but it's much easier to scale up successful DevOps initiatives if you're able to accept some changes to these higher-level processes as well. And pushing these kinds of changes through is exactly where executive sponsorship becomes essential.

TechRepublic: Let's say I am a decision maker at a prospective company. How would XebiaLabs support my company's efforts toward DevOps and Continuous Delivery?

Andrew Phillips: As mentioned above, we find that our tooling is especially well suited to help organizations provide a "bridge" between the "DevOps world" and the rest of their organization, which will inevitably ask for visibility and some level of compatibility with the broader business process, especially once the DevOps initiative starts to scale beyond one or two teams.

We also find that our tools' focus on making the data inside the DevOps and Continuous Delivery processes highly accessible becomes essential for organizations that need to decide where to invest next to improve their overall process. Whether the information relates to which features and fixes are running through their pipeline, what level of quality they have, or where the bottlenecks and inefficiencies in the process are, our users often use the data behind our reports and analytics in ways that are very specific to their environment, which allows them to make decisions tailored to their scenario.

Ultimately, our users tell us that they benefit beyond the tooling itself. We find that our ability to advise and guide our users — based on our experience doing this with companies across all verticals — is just as important to them as a set of easy-to-use, enterprise-friendly software products.

TechRepublic: What are the big trends right now in software delivery?

Andrew Phillips: I'd say that the biggest trends right now are "automation for the sake of automation" and "what can I do with microservices, Docker, and these container things?" Asking the latter question makes a lot of sense, as long as organizations focus on researching containers rather than rushing headlong into production implementations without first doing any cost/benefit analysis of this architectural principle and set of technologies, which is very promising but also very young.

As you can probably guess from my phrasing, I do not believe that "automation for the sake of automation" in the software delivery process is a good idea. It's a message pushed by the "tech heroes" and encouraged by vendors that completely ignores the fact that automation should be a means, rather than a goal, of any initiative to improve software delivery. It's very important that we regain a focus on ensuring we start with business-relevant goals — which will likely include time-to-market, but should definitely also take quality into account, and could also touch on totally different areas such as auditability — and then work back to the means needed to achieve them. That may mean more automation, but could also simply involve better coordination, or a bit of process optimization.

TechRepublic: How can a development team best measure and demonstrate that they are producing quality software?

Andrew Phillips: Historically, trying to measure "software quality" has been tricky because we've tried to measure attributes of the code, and the team delivering the code was not actually responsible for providing the ultimate customer-facing service. Personally, I think the only metrics that really matter are those related to the "consumer experience" of the system: percentage of successful API calls responded to in a reasonable amount of time, number of customer purchase transactions, number of applications successfully processed, etc.
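A minimal sketch of one such consumer-experience metric: the percentage of API calls that succeeded and were answered within a latency threshold. The request log and the 500 ms SLA below are invented for illustration; real numbers would come from your monitoring or access logs.

```python
# Hypothetical request log: (HTTP status code, response time in ms).
requests = [(200, 120), (200, 340), (500, 80), (200, 1500), (404, 60), (200, 210)]

def success_within_sla(log, max_ms=500):
    """Fraction of calls that succeeded (2xx) AND responded within max_ms."""
    ok = sum(1 for status, ms in log if 200 <= status < 300 and ms <= max_ms)
    return ok / len(log)

print(round(success_within_sla(requests) * 100, 1))  # prints 50.0
```

Note that this measures what the consumer actually experienced, not an attribute of the code itself, which is exactly the shift Phillips is arguing for.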

Of course, it's only fair to start measuring a team on these metrics if the team has a reasonable degree of influence on them. So, to some extent, this approach implies "DevOps" or "product teams" or whatever we want to call them.

TechRepublic: I understand that you attended the Jenkins User Conference in D.C. and gave a talk on automated testing. Based on your interactions, what questions do enterprises have about DevOps and Continuous Delivery?

Andrew Phillips: Apart from the now-classic question of "what does 'DevOps' actually mean?", the main thing we are currently asked is, "OK, OK, I get that I should be trying to do some of this DevOps/Continuous Delivery thing...but how do I get started if I'm a large organization with existing tech stacks, teams, and processes?" At this point, DevOps and Continuous Delivery are still sorely lacking an accepted "implementation blueprint," and the fact that we're seeing quite a few books in the pipeline about this speaks to the need for guidance.

How to manage a new set of challenges around the increased volume and frequency of automated testing is not something people tend to ask us (yet), but we certainly see a lot of recognition of the problem when we talk about it — which is what my Jenkins User Conference talk was essentially about. Put simply, testing in a DevOps/Continuous Delivery environment means running many tests, using many tools, throughout your delivery pipeline, and doing that on an increasingly frequent basis.

This means that you also have to try to make sense of all of the test results that are generated more and more often, and there simply aren't any tools that will allow you to visualize and analyze all those test results in flexible and effective ways — not to mention more "advanced" real-life problems such as automatically highlighting "flaky" tests and ignoring them in your go/no-go analysis.
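As a rough illustration of the "flaky test" idea, one simple heuristic is to flag any test that has both passed and failed against the same commit, then exclude those tests from the go/no-go decision. Everything below (the history format, the test names, the heuristic itself) is a hypothetical sketch, not a description of any particular product.

```python
from collections import defaultdict

# Hypothetical test history: (test name, commit, passed?) over recent runs.
history = [
    ("test_login",    "abc1", True),  ("test_login",    "abc1", False),
    ("test_checkout", "abc1", True),  ("test_checkout", "abc1", True),
    ("test_search",   "abc1", False), ("test_search",   "abc1", False),
]

def flaky_tests(records):
    """A test that both passed and failed against the same commit is flaky."""
    outcomes = defaultdict(set)
    for name, commit, passed in records:
        outcomes[(name, commit)].add(passed)
    return {name for (name, _), seen in outcomes.items() if len(seen) == 2}

def go_no_go(records):
    """Go only if every non-flaky test's latest result passed."""
    flaky = flaky_tests(records)
    latest = {}
    for name, commit, passed in records:
        if name not in flaky:
            latest[name] = passed
    return all(latest.values())

print(flaky_tests(history))  # {'test_login'}
print(go_no_go(history))     # False: test_search fails consistently
```

Real implementations would also weight how often a test flip-flops and over how many commits, but even this crude version shows why the analysis needs all test results in one place.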

TechRepublic: XebiaLabs has launched XL TestView, which provides a centralized view of software testing. What does it add to your company's range of services and benefits?

Andrew Phillips: For many reasons, I'm really happy that we've been able to release XL TestView. On the technical side, I think that testing has been a dangerously neglected component of many DevOps and Continuous Delivery implementations, especially for those that were driven by the "tech heroes," because improving testing simply isn't very glamorous, cool, or CV-building. But what is the point of having a hyper-efficient software delivery pipeline if the software that is being delivered using that pipeline is junk? XL TestView allows teams to make all their tests a "first class citizen" in the overall pipeline, and gives them the platform needed to make sense of the rapidly growing volume of test results quickly and effectively, on a recurring basis.

There is also a more business-focused angle to XL TestView. After all, the whole point of the increased level of (automated) testing is not to have lots of test results, but to obtain sufficient information about the quality and risk of whatever is being delivered in order to decide whether it makes sense to go live.

In order to make that decision, organizations need the "cost" (i.e., risk) information that XL TestView provides. If they also have well-defined information about the "benefit" associated with a new feature — something we are also working on by allowing the business to associate the features and fixes flowing through the delivery pipeline with measurable anticipated improvements — the software delivery can really become a "normal" cost/benefit business decision. And XL TestView is a critical part of that.



About Brian Taylor

Brian Taylor is a contributing writer for TechRepublic. He covers the tech trends, solutions, risks, and research that IT leaders need to know about, from startups to the enterprise. Technology is creating a new world, and he loves to report on it.
