Shared capacity, abstracted resources,
automated deployments — these terms may currently be
restricted to the domain of virtual machines, but over the next few
years, expect the hype cycle to charge up as vendors push
virtual storage arrays and virtual networking.

Over the past couple of days at the VMworld 2013 conference, a new back-end path for datacentre computing
has been laid out: a world where each and every resource is
defined and controlled by software, where all the physical resources
are pooled, and no segment avoids a layer of abstraction.

The introduction of a networking
abstraction layer into the hypervisor may on one level look like a
shot at Cisco, but its deeper implications at an IT employment level
echo those expressed last week: that many systems administrators will
have to branch out beyond their comfort zone.

In a panel of storage vendors and
administrators today, one piece of advice shone through: that
application knowledge is going all the way down through the IT stack.
Storage administrators are hardly the most eager embracers of change, so when they are projecting a change in
their own roles, you know that the world is shifting.

Technologies like
NSX, which allow for the movement of some switching and routing onto
the hypervisor itself, change the way that networks are treated.

Whereas previously, the network
administrators could always retreat to a world full of physical wires
and router consoles, should NSX be deployed on their network, they
will have to take an interest in what the hypervisor is up to. At the
same time, with the management tools for such functionality being
very simple to use — almost dangerously simple, my inner
sysadmin says — there is every chance that developers and even users
will start creating virtual networks of their own.

If your role is concerned primarily
with the network, then you will want to know what those virtual
networks are up to, and how they are configured.

That will involve a trip up the stack,
as the hitherto mere users of systems attempt to make their way down
the stack under the auspices of automation and application wrapping.

Much of the talk at this event has
been around changing the old 80:20 ratio of workload for IT people.
That’s the magic ratio where 80 percent of a worker’s time is taken
up by maintenance, and 20 percent is dedicated to “innovation”.

Changing such workloads always sounds
possible in theory, but even as network and storage admins acquire
the skills and knowledge to work with the rest of the stack
efficiently, the old 80:20 ratio will return.

Instead of dealing with dozens of
systems, a worker may be able to deal with hundreds, but they will
no longer be pure administrators: the delineation between roles starts to become blurred.

As virtualisation reaches deeper into
the datacentre, and makes over everything it touches with automation
and one-touch configurations, IT workers are going to have to
become multi-disciplined to deal with it.

With hypervisors growing ever fatter as
more functionality is poured into them, why not create the role of
“hypervisor engineer” and be done with it?

The saving grace at the moment is that
the push into virtualising the entire datacentre is one that will
occur over a decade or so, not in a couple of years.

The software-defined datacentre is
coming, and it's unlikely to be stopped, but it will only inch its way
forward rather than appearing in your ops centre overnight.

You’ve been warned.

Chris Duckett travelled to VMworld
2013 as a guest of VMware.