A recent study by Cisco Systems, NetApp, and VMware surveyed IT managers, directors, and CIOs in the small to medium-sized business market about virtualization adoption. (Small businesses are those with 50 to 100 employees, and medium-sized businesses are those with 100 to 500 employees.) The study found that 65% of small businesses and 79% of medium-sized businesses had adopted some form of virtualization.

For those considering virtualization, the greatest hindrance is cost. Interestingly, it’s middle managers who are most skeptical about costs, with 61% expressing concerns. In contrast, just 33% of upper management listed cost as the main hindrance to adopting virtualization.

Also, 91% of survey respondents who implemented some form of virtualization believed their companies had competitive advantages over those without virtualization. Even 71% of respondents from companies without any virtualization in place thought it could give them an edge over the competition.

The following images from the Cisco FlexPod Express Study provide more information about the findings.

Dealing with funding

Mark Oliver, owner of Group Oliver, a full-service IT firm, said part of the reason for financing woes with IT purchases in general is that traditional funding sources have typically shied away from them.

“Because of that, some larger vendors such as Microsoft and Cisco now offer competitive financing plans and will finance the entire package, including labor and software,” he said. “Currently, Cisco has a three-year 3% financing plan and a 90-day delayed-payment program.”

Brock Jamison, vice president of sales at Orion Networks, suggests that companies consider their financing options well before they start down the virtualization road.

“Right now, small businesses are in luck, as rates for financing technology upgrades are at historic lows,” Jamison said. “However, this isn’t going to last forever. As most businesses will adopt some form of virtualization in the near future, it’s smart for businesses to explore their financing options sooner rather than later.”

Using policies to address server sprawl

Virtualization has its own list of potential problems, and chief among those most often cited is server sprawl. However, Oliver said that issue can be addressed with policies.

“Of course there is software that can help, but server sprawl is a symptom of more fundamental problems in a virtualized environment,” Oliver said. “Core issues are handled by management policies. Policies or procedures need to include who can deploy servers, how they are maintained, who maintains them, and who decommissions servers. Additionally, policies for naming servers, where they reside, how resources are allocated, and the like also need to be addressed. Without some basic policies, things can get out of control and create the same kinds of management issues that were part of the reason virtualization was adopted.”
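Policies like the naming convention Oliver describes can be partially enforced with tooling. As a minimal sketch in Python, assuming a hypothetical site-role-number convention (the pattern and function names here are illustrative, not from the article):

```python
import re

# Hypothetical convention: <site>-<role>-<NN>, e.g. "nyc-web-01".
# The pattern is an assumption for illustration; adapt it to your own policy.
NAME_PATTERN = re.compile(r"^[a-z]{3}-[a-z]+-\d{2}$")

def policy_violations(server_names):
    """Return the server names that do not match the naming policy."""
    return [name for name in server_names if not NAME_PATTERN.fullmatch(name)]
```

A check like this can run as part of the deployment procedure, so a server that skips the policy never reaches production unnoticed.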

Jamison, too, said planning was key to keeping sprawl at bay.

“While virtualization offers small businesses loads of benefits, it can also be difficult to manage,” Jamison said. “This, coupled with the relative ease of deployment and the tendency to ignore and mismanage a server once it’s deployed, makes server sprawl a common problem. However, by following these key steps, ones that are often overlooked during the planning stage, businesses can further prevent becoming a victim of server sprawl.”

Those steps include carefully assessing organizational needs, determining how long the solution will be required, and properly documenting the deployed systems to enable proper future management.
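The documentation step can be sketched as a simple inventory record that captures who owns each virtual machine and how long it is meant to live. A minimal Python illustration (the field names and the review logic are assumptions, not prescribed by Jamison):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VMRecord:
    name: str
    owner: str             # who maintains and eventually decommissions it
    purpose: str           # the organizational need it serves
    decommission_by: date  # how long the solution is required

def overdue(inventory, today):
    """Flag VMs past their planned decommission date -- likely sprawl."""
    return [vm.name for vm in inventory if vm.decommission_by < today]
```

Reviewing the overdue list on a schedule turns "documenting the deployed systems" into an active defense against sprawl rather than paperwork.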

Keeping the NICs in order

Another challenge associated with virtualization is network interface cards (NICs) becoming saturated and then reporting network errors.

“There are a few things to keep in mind, or ways to help,” Oliver advised. “Initially, start by making sure to get enough physical connections on the host machine. When configuring, give the high-volume activities a dedicated physical connection through the switch infrastructure. When things are operational, use bandwidth monitoring to keep tabs on traffic. The long and short of monitoring is that your two main options are SNMP and promiscuous-mode monitoring. One can use a bandwidth shaper like NetLimiter as well.”

It’s still about the hardware

Many organizations look to virtualization to solve network downtime, but Oliver said there still needs to be solid hardware in the system.

“Disruptions can be alleviated or minimized through the use of redundant fans, power supplies, and the like,” he said. “Additionally, use a drive configuration that provides for backup or redundant physical drives, such as RAID 1, 5, or 10, depending on the system demand. Having a good support contract, typically from the manufacturer, also helps. Following the solid hardware guideline, a good backup and restore solution that allows for fast replication or recovery is needed. Also, establish host server and SAN hardware consolidation. Then, as storage space allows, use the host server storage instead of full SAN, which can expedite the speed of recovery.”
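The RAID levels Oliver lists trade capacity for redundancy in well-known ratios: RAID 1 and RAID 10 mirror every byte, so half the raw capacity is usable, while RAID 5 spends one drive’s worth of space on distributed parity. A quick sketch of that arithmetic (the function is illustrative, not a tool from the article):

```python
def usable_capacity(level, drives, drive_size_tb):
    """Usable capacity in TB for common redundant RAID levels."""
    if level in ("RAID1", "RAID10"):
        # Mirroring: every byte is stored twice.
        return drives * drive_size_tb / 2
    if level == "RAID5":
        # Striping with one drive's worth of distributed parity.
        return (drives - 1) * drive_size_tb
    raise ValueError(f"unsupported level: {level}")
```

Four 2 TB drives yield 6 TB usable in RAID 5 but only 4 TB in RAID 10, which is part of the "depending on the system demand" judgment Oliver describes.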

Doing advanced testing of existing applications is a key aspect of adopting virtualization, and Oliver offered advice based upon the type of application. When doing a typical virtualization by migrating existing services or applications from a physical server to a virtual instance, he said the level of testing depends largely on how critical the service is to the organization. For less critical applications, the migration is typically done during off-hours, with testing in production and the knowledge that one can go back to the original server if things are not working as expected. For more complicated or critical services, the existing system typically remains online while the virtual machine is established offline with the service, connected to a test database as required, and then tested for both functionality and performance. Functionality and performance testing can be done manually or with tools if available.
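For the side-by-side case, where the original server stays online while the virtual instance is tested offline, the functional half of the test boils down to feeding both systems identical inputs and comparing the answers. A hedged sketch (the callables stand in for whatever client code queries the physical and virtual services; nothing here names a specific tool from the article):

```python
def compare_services(query_old, query_new, test_inputs):
    """Run identical inputs against both instances; return the mismatches.

    query_old / query_new are placeholder functions that call the existing
    physical server and the new virtual machine, respectively.
    """
    return [(x, query_old(x), query_new(x))
            for x in test_inputs
            if query_old(x) != query_new(x)]
```

An empty result means the virtual instance answered every probe the same way the physical one did; performance can then be compared separately by timing the same calls against each instance.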

Expert help for licensing

Software licensing issues can also be problematic in virtualized environments, and Oliver stresses the need for expert help.

“Software licensing rules are a challenge under the best circumstances,” he said. “Virtualizing licensing is difficult. It is best to talk to a licensing specialist who is associated with your software vendor, as well as have a solid understanding of your infrastructure and how users and devices are communicating with the applications or services. For example, with Microsoft Virtual Desktops, it makes a difference if the user or device connecting to the virtual desktop is a thin client or a desktop, and if the desktop has an operating system covered under Software Assurance. Conversely, some databases can be, or are, licensed by the type of processor they use. Further complicating things, there are some things you can’t do with OEM or retail licensing, but one can get it covered with additional service.”

The other instance that can make things a challenge, he said, is how the remote, backup, or disaster recovery software is licensed. Some manufacturers consider a remote or disaster recovery instance to be “warm” and running; therefore, it requires a full license. Other manufacturers allow for this with only the production copy, and yet others have a special and typically discounted version for the warm copy. The short story is that licensing is complicated, and frequently a specialist is needed to help.