I assume it’s the same situation everywhere, but more and more vendors seem to play by their own set of rules rather than the customer’s. They attempt to, anyway. It’s not enough to review the basic system requirements posted on a website; you have to really ferret out the details of what installing the vendor’s system means. Almost like a reporter researching a good story, you need to know what questions to ask and who to ask, then analyze the data and piece together the puzzle. Vendors may not always be forthcoming with the information, especially if they think it could be a deal breaker.

Will users need to be local administrators to run the client? Will the application need its own VLAN or a completely isolated network? Do the vendor’s requirements conflict with your Group Policy settings for items such as password length or other security settings? Are there special requirements for running antivirus or backup software? Are certain components still using 16-bit code or defunct, unsupported third-party plug-ins? Does the vendor insist that their server not be a member server of the company domain?
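
To make that puzzle-piecing concrete, here is a rough sketch of how a vendor’s answers might be captured and compared against internal policy before anything is signed. The answers, policy values, and names below (VENDOR_ANSWERS, COMPANY_POLICY, find_conflicts) are hypothetical placeholders for illustration, not drawn from any real product or environment.

    # A minimal, hypothetical pre-implementation check: compare a vendor's stated
    # requirements against internal policy and list the conflicts. All values and
    # field names are made-up placeholders.

    VENDOR_ANSWERS = {
        "requires_local_admin": True,        # users must be local administrators
        "requires_dedicated_vlan": False,    # wants its own VLAN or isolated network
        "max_supported_password_length": 8,  # longest password the client accepts
        "uses_16_bit_components": False,     # legacy 16-bit code or dead plug-ins
        "refuses_domain_membership": True,   # server may not join the company domain
    }

    COMPANY_POLICY = {
        "allow_local_admin_users": False,
        "dedicated_vlan_available": True,
        "minimum_password_length": 12,       # enforced through Group Policy
        "allow_16_bit_components": False,
        "servers_must_join_domain": True,
    }

    def find_conflicts(vendor, policy):
        """Return human-readable conflicts between vendor answers and policy."""
        conflicts = []
        if vendor["requires_local_admin"] and not policy["allow_local_admin_users"]:
            conflicts.append("Client needs local administrator rights; policy forbids them.")
        if vendor["requires_dedicated_vlan"] and not policy["dedicated_vlan_available"]:
            conflicts.append("Vendor wants an isolated VLAN that we cannot provide.")
        if vendor["max_supported_password_length"] < policy["minimum_password_length"]:
            conflicts.append("Vendor cannot handle our Group Policy password length.")
        if vendor["uses_16_bit_components"] and not policy["allow_16_bit_components"]:
            conflicts.append("Vendor still ships 16-bit or unsupported components.")
        if vendor["refuses_domain_membership"] and policy["servers_must_join_domain"]:
            conflicts.append("Vendor insists its server stay out of the company domain.")
        return conflicts

    if __name__ == "__main__":
        for issue in find_conflicts(VENDOR_ANSWERS, COMPANY_POLICY):
            print("CONFLICT:", issue)
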

The list of questions can go on nearly forever, but it’s important to expose the facts before signing a contract or giving your stamp of approval for all things technical. The trend I see is vendors wanting to isolate their application as much as possible from the customer’s environment, attempting to protect themselves from support calls and blame when a change or outage occurs on the customer’s side. This is somewhat understandable given the growing complexity of IT infrastructures and the support nightmares that can arise; finger pointing often ensues. But isolating every installed system simply is not feasible or practical.

As an example, a vendor I am currently working with requires that their servers sit in a separate domain with a two-way trust relationship established to our domain. If we wish instead to make them member servers, as is our policy, they require that we sign a waiver release form absolving them of responsibility should a change to our environment cause a system outage. This all seems ridiculous to me, but the company reports that “95%” of their clients go along with the separate-domain scenario. I have a difficult time believing that figure unless most of their other customers are relatively small and have unstructured IT departments. In this case, the vendor is attempting to impose its will on the customer and effectively disregard the customer’s established IT implementation best practices. Allowing it would set a dangerous precedent, leave us susceptible to numerous security threats, and cost us the ability to manage devices on our own network.

While segregating systems to improve network performance and stability is nothing new, making customers sign responsibility waiver forms is, at least to me. Obviously there are a couple of points of contention here. The trend toward application isolation wouldn’t be as prevalent if software makers wrote sounder software that adhered to generally accepted coding standards, and if companies always followed best practices and proactively maintained their IT infrastructure. Short of a perfect world, though, vendors and their customers should jointly research the environment and the software before implementation begins. Doing so would surface potential pitfalls and produce action plans agreeable to both parties.

Allowing each vendor that walks through your door to set the rules of engagement will never work in the long run. Each environment is different from the next, and each company must create reliable IT policies that protect the stability of all its supported systems collectively, not just the one going in today.