This article was originally published on our sister site, TechRepublic.

It’s tough enough to manage the technical, logistical, procedural, and political issues that face any large-scale deployment. All of these risks become more acute when you’re managing teams at several remote sites from a central location. Adding to your headaches as the project lead is the fact that you may not be getting completely candid feedback from project stakeholders, as I learned several years ago on a particularly difficult rollout.

I was working as an architect and troubleshooter on the deployment of a new network infrastructure for a geographically dispersed client. The client operated several hundred installations throughout the country, each with between 100 and 2,000 employees.

After months of planning, we assembled a team of 40 engineers from various subcontractors to do the work. This large team ran through a weeklong installation training exercise. After the week ended, we divided the engineers into pairs, ran them through another week of training, then sent them out into the world to install two sites a week. The core team of architects remained at the primary client site to help with troubleshooting and quality assurance.

Statistics of success
During the next few months, the core team carefully monitored the quality assurance data. In nearly every case, the servers and new network hardware went up within a reasonable margin of error. Some teams showed consistently high performance. Others regularly demonstrated a low but still acceptable degree of accuracy. A few teams failed the quality assurance check. We rewarded the high-performing teams with choice assignments, while marginal performers moved on to more out-of-the-way locations. Those that failed were brought back in for more training.

In short, the project seemed to be moving along according to plan. That was the case until the day I called one of the deployed sites about a network problem. After some initial wrangling with the on-site support desk, I got through to the primary network engineer. The engineer and I chatted about the problem, how well things were going, and how happy his end users were with the new functionality. After about 40 minutes on the phone, we rooted out the cause of his issues. We made a few quick changes on our routers and service stabilized.

What statistics don’t show
As we were finishing up, the engineer told me it really was a shame that I had not been on the initial deployment crew. He went on to say that his boss, the site IT manager, had such a bad taste in his mouth from working with that team that he would not welcome any crew from central IT. I scrambled to check the QA documents from the site. Everything looked reasonable. I had trouble imagining what could have gone wrong with my project implementation team. As I shuffled papers and spreadsheets, I could almost hear the engineer shake his head.

He said, “Man, it wasn’t the work. They did well. But those two bickered on the floor like a divorced couple.”

My heart fell into my shoes.

The hidden dynamic
After we hung up, I left a voice mail on my project manager’s pager. While waiting for him to call back, I pulled up the list of all of the sites that pair had worked on in the last three months. Two sites a week for three months adds up fast. When the PM finally got back in touch with me, we divided the list in half and started making follow-up calls.

A week later, a pattern emerged. It seemed that a month into the project, the pair had a personal falling-out. They were good enough at their jobs that the QA numbers did not drop, but the two engaged in bitter verbal battles on the data center floors. When asked directly about it, the site managers admitted that the pair’s behavior bordered on the unprofessional.

The PM and I decided that we really had three problems:

  1. What were we going to do about the broken team?
  2. Why did our clients hide the issue?
  3. How were we going to stop this from happening in the future?

We solved the first problem by splitting the pair. We partnered each of them with one of the engineers who had formerly made up our highest-performing pair. Over the next few months, the two newly formed pairs turned in consistently high QA numbers. Although we lost our star team, we gained two solidly performing teams that were well within acceptable parameters.

The second problem posed a more difficult challenge. Our QA process included a phone call from the PM to the IT manager after the deployment. In theory, the IT manager should have expressed his opinion about the deployment process. However, during the creation of the script, we made what turned out to be a false assumption: we thought the IT managers would be forthcoming with no prompting on our part. But this project was a multimillion-dollar, high-profile activity with backing from the highest levels of the organization. No one wanted to report a problem. So when we asked generic questions, we got generic “everything is fine” answers.

After some thought, we revised the script to include more probing questions about the social and political aspects of the deployment. We also instituted a “team touch” policy between the deployment teams and the system architects. Each architect took a handful of teams as his personal responsibility. We contacted these teams at least twice a week to discuss how the deployment was going. During these contacts we also discussed travel, life as a consultant, and other personal factors that helped to form a rapport between the traveling and central teams.

Fast forward
A few years later, I joined a project already under way, with 10 teams traveling around the world. The central team worked hard to resolve technical issues with the deployment before they could trip up our traveling colleagues. With teams scattered across the globe, we manned a 24×7 help desk in addition to our design and analysis responsibilities.

After a few weeks of work, I suggested that perhaps we should have a scripted after-implementation interview for both our teams and our client sites. Although the idea met with some initial resistance from the architects, my story about the bickering deployment team convinced them to at least try it. The first round of calls revealed a host of minor social problems that we were able to quickly correct.

In the first case, we failed to push past the political weight of our own project to discover that something was wrong. In the second, we used a scripted follow-up with probing questions to identify and correct problems before they became major issues.