People should manage technology, not vice versa

Development teams are increasingly clashing with business units over what should be the focus: infrastructure or users. Columnist Tim Landgrave says there's one golden rule to live by: Technology doesn't manage people; people manage technology.

Through my teaching and consulting engagements over the past six months, I’ve met with several development shops, and I’ve seen a common theme taking shape in most large companies that spells trouble for CIOs: Developers are just downright frustrated.

The main source of consternation is that corporations are driven by the security and engineering groups at the expense of the business units. They resist adopting any new technology (an affliction I call “Service Pack 1 syndrome”: refusing to touch a product until its first service pack ships), and they’re more focused on how to control what end users do to “their network” than they are on how departments get their jobs done.

Sure, the security and engineering staff show the CIO all the charts and metrics that demonstrate how productive network users will be after their upgrades and rollouts, but they show no concern for building an infrastructure that fosters new systems in support of the corporation’s business objectives.

How did we get here?

When engineering drives infrastructure
During times of rapid expansion—as in the past five years—developers and business units created new services and bought or developed the applications with little regard for how they affected the infrastructure. Now that the engineers have time to breathe, they’re doing the opposite: Instituting infrastructure “upgrades” with little regard for future applications and focusing instead on what it takes to manage existing ones.

As these infrastructure plans are mapped out right now, the company’s development architects and project leads aren’t part of the process. In fact, these “system architects”—who should be looking at how future applications will be developed and deployed—have mainframe-biased backgrounds that prevent them from evaluating any new technology that’s not server-based at its core.

When these dinosaurs are left in charge, their single goal is to lock down the desktop and applications infrastructure as tightly as possible. In other words, make the PC network like the mainframe network that most CIOs in big organizations understand.

A case in point
One organization I recently spoke to is rolling out a Windows 2000 Professional client, Windows 2000 Server, and Advanced Server back-end infrastructure, along with a new Active Directory implementation.

Its use of Active Directory, and standardization of the desktop, should allow the company to save hundreds of thousands of dollars annually in administrative and support costs.

Unfortunately, it appears that the company has gone too far: It has created Group Policy Objects (GPOs) that dictate exactly what can and can’t run on each PC.

For example, its default GPOs for the workstation disallow access to the CD-ROM drives, disable the Run option from the Start menu, and lock down Internet Explorer so that it can access only sites that communicate on Port 80. This heavy-handed approach to securing the workstation makes the system all but unusable for anything except the specific applications on the Start menu. It may sound good in theory, but it’s terrible in implementation.
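To give a sense of how little it takes to impose this kind of lockdown, here is a sketch of the sort of registry policy values a GPO like this pushes to each workstation. NoRun and NoDrives are real Windows Explorer policy values; the specific settings shown are my own illustration, not the company’s actual configuration, and the IE port restriction is typically enforced through proxy or firewall rules rather than a single registry value.

```reg
Windows Registry Editor Version 5.00

; Hypothetical sketch of workstation lockdown policies of the kind
; described above -- not this company's actual GPO contents.
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
; Remove the Run option from the Start menu
"NoRun"=dword:00000001
; Hide drives via bitmask; bit 4 (0x10) hides drive E:, often the CD-ROM
"NoDrives"=dword:00000010
```

A few DWORD values like these, applied domain-wide through a GPO, are all it takes to shut down whole categories of end-user activity, which is exactly why the defaults deserve more scrutiny than they got here.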

One of the company’s major departments communicates with suppliers using an extranet that makes extensive use of nonstandard HTTP ports for security reasons. Suppliers can’t access the site anymore and have been forced to fall back on the telephone and fax machine.

Another group uses AOL Instant Messenger (AIM) to communicate with the field sales force, most of whom work from home and keep the phone line tied up when they’re logged on (making telephone communication impossible). Not only does the company’s GPO configuration block AIM, but supporting these users will also force the IT group to create and maintain a list of “GPO exceptions” that will reduce the overall savings the company expected from this rollout.

Meanwhile, the development staff is angry because they can’t take advantage of new technology that lets them optimize applications by installing parts of the application on the desktop, either in the browser cache or as part of an Active Directory-controlled published application.

The infrastructure group has effectively hijacked the entire computing infrastructure and made the business units and the development team subservient to them under the guise of “secure and managed computing.”

Solving the problem
To be sure, I’m not suggesting that companies revert to the days when every PC was installed with a unique set of applications and there was no central management. But the team that designs and tests the rollout of new infrastructure should certainly include development leads and development architects who are responsible for looking at operating systems and tools across the three-to-five-year planning horizon.

The team must also extend application testing beyond the “approved applications list” and into the common Internet systems that departments use to get their work done (Yahoo Groups, AOL Instant Messenger, Hotmail, extranet systems, etc.).

I try to live by a simple rule when deciding how to implement technology, and in my experience it has always held true: Don’t install technology that manages people; install only technology that people can manage.
