Regardless of past success, regardless of how much time and effort goes into a project, and no matter the talents of the team, some projects don’t go as planned. And then, there are the projects that are even worse; the ones that simply go insane, plunging teams into chaos and leaving a line department that’s unable to operate effectively.

I have the misfortune right now of being caught in the latter situation, and it’s not a pleasant place to be for me or for any of the involved parties. Let me start by making it clear that I’ve been the project manager on this project and, although a multitude of reasons got us to where we are, I’ve accepted responsibility for the situation and am working with all stakeholders to recover the project and get back on track. I’ve managed a lot of projects; this one happened to be the largest and has proven to be the one that got the best of me.

The beginning

As is the case with many projects, this particular project — which is a conversion from one data system to another — began with great intentions and a laundry list of business problems that had to be solved. A project team was formed, and I was appointed as the project manager by default. I can’t go into all the reasons that I assumed the overall responsibility for the project except to say that it was felt that staff in the affected department did not have the knowledge necessary to accomplish the goals that were set out.

As a group, our project team put together a project plan — which I approved — that took into account the perceived quality of the data in the legacy system, the amount of time and support we expected to receive from our vendor, and the amount of time that each team member could devote to the project.

To say that I underestimated two key factors would be, in itself, a massive understatement. First, the quality of the data in the legacy system was far worse than we could have ever imagined. Second, the level of support and knowledge we received from the vendor was quite poor, at least at the beginning of the project. After we requested a new resource, we ended up with really good people, but by then the project was already doomed; I just didn’t know it yet.

From bad to worse

Because of internal delays — the stakeholder department added projects that were not originally scheduled, which disrupted the schedule for this project — we, along with the stakeholders, made the decision to postpone the original launch date. However, the initial damage had already been done, and, again, I just didn’t realize it yet.

Fast forward to what we had intended to be our launch day. As planned, we launched with a subset of the data that had to be migrated, with the intention of migrating the remaining data afterward.

It didn’t happen. I can’t go into the confluence of events that led to a failure to meet the project goals, but we spent the next three months working night and day to resolve the data-quality issues and to correct for the delays introduced by the original vendor resourcing issues and some internal scheduling challenges in the stakeholder department that were not known at the outset. Those were just two of the issues that reared their heads during this project. There were a whole lot of other things, too.

Now, months later, the stakeholder department is in limbo, and although the data-quality issues are close to being worked out, it will take too long to get everything corrected to a point where I feel comfortable. In this, I am likely erring on the side of caution when I look at the potential for data-quality problems, but, well, that’s my job (I’m not a full-time project manager; this was just one duty among too many on my plate at the time).

It’s really hard to go into someone’s office — a peer, a superior, a subordinate — and admit defeat. However, that’s exactly what I did. The project in its current form is a failure; again, this is not an easy admission to make, and it is potentially career threatening. The stakeholder department needs a system that allows it to carry out basic operations. In its current form, the new system wasn’t doing that, and it was going to be too long before we got to that point.

So, we’re going back. We’re moving back to the original application, but only temporarily. The upside to this challenge is that many people have learned a lot, including:

  • The original data quality was far worse than ever expected. We need to spend more time on data quality this time around to make certain that we don’t end up spinning our wheels again. The next time, during status update sessions, I will not hear “Well, we discovered another data issue that we missed during validation.”
  • We cannot rely on tightly constrained vendor resources to help us when issues arise.
  • The team needs to be able to focus less on the project and more on their day-to-day duties. We will extend the project time frame so that we don’t overburden our staff.

I’m fortunate that I’m working with people who, while unhappy with the state of things, understand why we are where we are and continue to agree that a successful migration is in our long-term best interests. So, although we’re moving back to the original system, we’re going to once again work to migrate to our new system. This time, since we have a framework from the original migration that is actually quite valuable, we will focus our efforts on data validation in order to avoid the data issues that plagued the first attempt. Over the past couple of months, the team has learned a lot that will assist them in this process. And, although I got to a point where I no longer had confidence in the outcome, I am highly confident in the new process we’ve put together to get us to ultimate success.
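Since so much of the recovery plan hinges on that data-validation focus, here is a minimal sketch of the kind of pre-migration checks a team might script against a legacy extract before launch day. It is purely illustrative: the field names and rules are assumptions for the example, not details from our actual project or tooling.

# Hypothetical pre-migration data-quality checks for a legacy extract.
# Field names and validation rules are illustrative assumptions only.
from datetime import datetime

REQUIRED_FIELDS = ("record_id", "name", "created_on")

def validate_records(records):
    """Return a list of (record_id, issue) tuples found in the extract."""
    issues = []
    seen_ids = set()
    for rec in records:
        rec_id = rec.get("record_id", "<missing id>")
        # Flag missing or empty required fields.
        for field in REQUIRED_FIELDS:
            if not rec.get(field):
                issues.append((rec_id, f"missing required field: {field}"))
        # Flag duplicate primary keys.
        if rec_id in seen_ids:
            issues.append((rec_id, "duplicate record_id"))
        seen_ids.add(rec_id)
        # Flag dates the new system would not be able to parse.
        created = rec.get("created_on")
        if created:
            try:
                datetime.strptime(created, "%Y-%m-%d")
            except ValueError:
                issues.append((rec_id, f"unparseable created_on: {created!r}"))
    return issues

if __name__ == "__main__":
    sample = [
        {"record_id": "A1", "name": "Acme", "created_on": "2011-04-02"},
        {"record_id": "A1", "name": "", "created_on": "13/40/2010"},  # duplicate, blank name, bad date
    ]
    for rec_id, issue in validate_records(sample):
        print(f"{rec_id}: {issue}")

The point isn’t these particular rules; it’s that each of these failures shows up in a report before launch rather than in a status meeting afterward.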

When I spoke with another VP just today — the VP in charge of the impacted division — he said, “It sucks that we are where we are, but nobody’s died, so there’s that.” The tone of his comment said it all. He hates this as much as I do (probably more), but he realizes that things could be a whole lot worse. We actually have a fallback that, although it’s not the prettiest, will get us operational, and he continues to believe that we’re on to something good.

Perhaps the biggest lesson learned — and I’ve shared just a few in this article; there are so, so many more — is that there comes a time when, in the best interests of the organization, you have to admit defeat, failure, whatever you want to call it, no matter the potential personal price, and take a step backward so the organization can ultimately move ahead.