We've been admonished to emulate technology startups and "fail fast." While it's a cute saying, it's not the right way to run IT.
For the last couple of years, you may have been hearing the suggestion that your company or organization needs to learn to "fail fast," mirroring Silicon Valley startups in their extreme bias toward action. Rather than carefully considering an activity, planning contingencies, and taking a measured approach to execution, the "fail fast" school of thought suggests that you act first and ask questions later. The assumption is that a few failures will ultimately get you to the right answer more quickly than more measured action, and the rapid progress and cash lavished on "fail fast" startups seem to prove the case.
For more traditional companies, there's some appeal to this notion. If your organization usually needs a half dozen meetings to decide what to order for lunch, the idea of launching into action with only a vague notion of a plan, and a "license to fail," is appealing. In fact, some companies have instituted all manner of policies to encourage "fast failure," even to the extent of requiring teams to stand up and applaud when a colleague announces they've failed.
The failure of failing fast
Like any notion that's taken to an extreme, there are some positive aspects to the mantra of failing fast. I work primarily with larger companies, and it's extremely rare that I come across one that doesn't err on the side of too much caution. In many cases, tilting the balance toward action can be beneficial.
However, the magic of successful startups is not necessarily that they reward failure, and it's rare that a successful company got there merely by lurching from failure to failure. Rather than throwing initiatives at the proverbial wall to see what sticks, the best companies seek to evolve and progress through testing and learning. It may seem like a semantic difference, but testing and learning is significantly different from failing fast. With the former, when companies face a difficult challenge, from developing a new product to implementing a new technology, they break the problem down into smaller challenges that can be tested, with the outcome of each test used to inform the rest of the process.
An obvious manifestation of this process that you're likely already familiar with is prototyping. For example, a company designing a complex product like an airplane or car will build a smaller model to test in a wind tunnel before building a full-scale version. This allows for valuable information to be gathered that informs the design of the final product, at a lower cost and faster timeframe.
The analogy to prototyping may seem like a blinding flash of the obvious, yet many IT leaders fail to leverage prototypes to test and learn on a regular basis. We tend to think of "prototypes" as complex, time-consuming events, and even organizations that regularly build quick prototypes often fail to consider how to design them so that they generate the maximum value.
Rather than thinking of prototypes as an event in themselves, consider them as part of a broader experiment. In the sciences, a well-designed experiment will answer multiple questions and provide a path forward, regardless of whether it had the outcome that the experimenter desired.
The same philosophy applies to technology. A poorly designed experiment might consist of giving a small group of users a new software product and asking them whether they liked it. At the end of this experiment, you'll have only a vague notion of whether the software could be helpful to the broader organization.
A more effective experiment might start with a problem statement that allows multiple hypotheses to be tested. Rather than testing a software package, we might try to validate that we can improve collaboration in our sales teams. We could test whether new software, better training, or various permutations of the two changed a key sales metric versus a similar control group. If we found that software worked better than training, or vice versa, we could then launch further experiments with different types of software, or mobile apps versus web tools, etc., ultimately using a series of small, quick experiments to test our assumptions, learn how to improve upon them, and arrive at an answer that's backed by real-world results.
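The comparison described above can be sketched in a few lines of code. This is a minimal illustration, not a real analysis: the group names, metric values, and the simple mean-uplift calculation are all hypothetical stand-ins for whatever sales metric and experiment arms your organization actually chooses.

```python
import statistics

# Hypothetical weekly sales-metric samples (e.g., deals closed per rep)
# for each experiment arm; in practice these would come from your CRM
# over the test window.
control = [12, 14, 11, 13, 12, 15, 13, 14]
software = [15, 17, 14, 16, 18, 15, 17, 16]
training = [13, 15, 12, 14, 13, 16, 14, 15]

def uplift(arm, baseline):
    """Percentage change in the arm's mean metric versus the control group."""
    base = statistics.mean(baseline)
    return 100 * (statistics.mean(arm) - base) / base

results = {name: uplift(arm, control)
           for name, arm in [("software", software), ("training", training)]}

# The arm with the largest uplift seeds the next round of experiments
# (e.g., different software packages, or mobile apps versus web tools).
best = max(results, key=results.get)
print(best, round(results[best], 1))
```

With these made-up numbers the software arm shows the larger uplift, so it would be the one carried into the next experiment; a real analysis would also check sample sizes and statistical significance before acting on the difference.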
Contrast that with the typical approach most companies take to a similar problem. You might hire some outside advisors or consultants, ask them to do some benchmarking or provide a list of "best practices," followed by a detailed analysis of various software packages complete with some fabulous diagrams that purport to show the best tools for the problem you're trying to solve. While this will leave you with dozens of beautiful PowerPoint slides, you'll ultimately have no idea how this guidance will translate from theory to the real world. Furthermore, a savvier competitor may be on its third experiment, or ready to roll something out that will provide a concrete competitive advantage, while you're still digesting a 172-page deck. The organization that learned through experimentation will also have hands-on experience in how its solution operated within its four walls. While not a guarantee of success, it's like fielding soldiers who have seen a few battles versus fresh-faced recruits.
Experiment with testing and learning
Unlike some advice, testing and learning is fairly easy to try within your own organization. Find a small problem for which your normal reaction would be to convene a team to perform an analysis over a series of weeks and dozens of meetings. Instead, appoint one or two people to design an experiment that will answer some critical questions about the problem, provide a concrete direction toward the next experiment, and validate any assumptions.
See if you have a higher-quality decision, or if information was uncovered that normally wouldn't be revealed merely through analysis. Rather than admonishing staff to fail, ask them to design a process that provides valuable information regardless of the outcome, keeping in mind that "this will never work, and here are the factors that led to this conclusion" is still a valuable, successful outcome. Rather than bumbling between failures in search of success, you'll become adept at breaking problems into small chunks that can be tested, with each test making your organization smarter, more adept, and able to move more quickly toward successful outcomes.