I can already hear the collective groans as readers consider yet another article about the BP oil spill in the Gulf of Mexico. While many such pieces have only a tenuous connection to the disaster, the parallels to an IT "disaster" and the relevance of the associated lessons are striking.
Having worked with a client in oil field services, the industry that provides the drill bits, tools, mud, and know-how to companies like BP, I continue to be amazed at how technically complex the process is. Most of us take for granted that gasoline will flow when we pull our car into a fuel station, just as the average worker thinks nothing of the complexity required to deliver everything from e-mail to ERP to their desktop. In many cases, the average person suspects the oil industry is up to all manner of nefarious deeds, just as many suspect corporate IT exists solely to spy on them, restrict access, and generally make their lives difficult.
The most dramatic parallel is that the Gulf oil spill is at its core a technical problem, requiring a technical solution. While safety, procedural, and managerial failings all led up to the disaster, no amount of political bombast, expectation management, PR, or "ass kicking" can change the fact that, until very recently, a hole a mile below the ocean's surface was spewing oil unabated, and BP's technicians and their vendors were the only ones who could do anything about it. So, what lessons can IT learn from this incident?
Accept culpability now, play the "blame game" later
Immediately after the spill, BP CEO Tony Hayward accepted full responsibility for the accident and pledged to pay for damages. Almost as quickly, the political class in the United States began pointing fingers and making accusations, putting the two parties best equipped to solve the problem, the U.S. government and BP, into an adversarial relationship just when they needed each other most. When an IT disaster strikes, acknowledge it, take responsibility as CIO, and worry about whom to blame after the mess is fixed.
Sometimes "plan B" isn't good enough
The long-term solution to the Gulf spill was drilling a relief well, a process that was begun immediately but would take months to bear fruit. BP offered various alternative plans, but when the first one failed, the company seemed to be caught flat-footed and unprepared. In a disaster of this magnitude, having a "plan B" is not enough. Weeks of time and thousands of barrels of spilled oil might have been saved had BP pursued additional backup plans simultaneously, rather than trying one, watching it fail, and then taking the time to plan and execute the next alternative.
Provide regular updates through a competent spokesperson
BP provided abundant "status reporting," going so far as to include live streaming video from underwater robots on its website, but it lacked a spokesperson who could articulate the technical challenges and the plan in place in layman's terms. Like IT, oil exploration has a language of its own and mind-bending technical complexity. As people wondered why BP could not just "plug the hole," the company did a poor job of keeping the public apprised of the difficulties involved and the plan for doing exactly that, in language everyone could easily understand.
While no one wants to hear about bits and bytes when the company can't enter customer orders, a layman's explanation of the technical problem and of the plan and actions in place to remediate it inspires far more confidence than minutely detailed, disjointed status reports.
Be visibly present throughout the disaster
I rarely advocate physical presence for appearance's sake, preferring a team that generates results working odd hours from home to one that accomplishes little but shows up in the office every day from 7 to 7, but a disaster changes the rules. BP's CEO and U.S. President Barack Obama took legitimate hits to their image when both chose to go on vacation at various points during the response. While these two men's presence obviously would not contribute to the technical solution to the problem, as a leader you are expected to be on the scene throughout the disaster, not fleeing to a yacht race or the golf course.
During a technical disaster, the leader's job is to make the technicians' lives easier
The "doers" in any organization rarely rule the roost, but in response to a technical problem like the BP oil spill, leaders need to get out of the way and facilitate the solution lest they become part of the problem. President Obama appeared inept as he suggested further study and "expert" commissions all while publicly lambasting the only people in a position to actually cap the oil well. Much has been made of local communities' inability to get appropriate permits or permission to build seawalls or take preventative measures and of U.S. federal rules preventing foreign ships from aiding the cleanup -- all areas where the government could have provided assistance or made life easier for those on the ground responding to the leak.
Peering over the technicians' shoulders, demanding flowery status updates, and publicly questioning their ability, rather than getting out of the way, removing administrative hurdles, and expediting their requests, is a sure way to prolong any disaster.
There will surely be years of fallout from this disaster, and it is worth studying for any corporate leader, whether you sit inside or outside the technical portion of your organization. For the IT leader, the spill provides the perfect example of what an outsider sees as you attempt to fix a technical problem. Just as you likely do not understand the nuances of directional drilling or blowout prevention, your constituents care little about firewalls and failed SANs. How you as a leader work with your technicians and CEO and inform the rest of the organization about your progress will determine whether you have the IT equivalent of the BP spill or a well-managed and effective remediation.