Last week, VMware announced vSphere 4.1. This incremental update is a collection of features that gives administrators more granular control and management options for a virtualized infrastructure. One of the many features released with 4.1 is Network I/O Control for use with the vNetwork Distributed Switch.

Network I/O Control (NetIOC) extends the familiar concepts of shares and limits from VMware’s Distributed Resource Scheduler to manage network traffic. This is manifested in the following ways:

  • Isolation: keeps one type of traffic from overrunning the others when an uplink is saturated.
  • Shares: A mechanism to arbitrate contention for network resources.
  • Limits: A cap on the bandwidth a traffic type may consume on the uplink ports within the vNetwork Distributed Switch.
  • Load-Based Teaming: Logic that redistributes traffic across a set of uplinks within the vNetwork Distributed Switch to optimize capacity.

These terms are not traditional Cisco or other networking vendor lingo; they extend VMware's Distributed Resource Scheduler (DRS) model of coordinated access to contended resources, which has historically been a boon to infrastructure administrators managing CPU and memory. NetIOC applies these controls to traffic classes. A traffic class here is defined somewhat differently than in traditional networking tools, and in the case of NetIOC the following classes are available:

  • vMotion: This is the ability to migrate a running virtual machine from one host to another.
  • iSCSI: This is for network block storage protocol traffic, typically to a SAN.
  • FT logging: This is used to coordinate a fault tolerant (FT) virtual machine across hosts.
  • Management: This is the primary communication channel between the ESXi host and the vCenter Server.
  • NFS: This is for network file system storage protocol traffic, typically to a NAS device.
  • Virtual Machine traffic: This is the guest virtual machine’s traffic over the vNetwork Distributed Switch.
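
The shares-and-limits model above can be sketched in a few lines of Python. This is only an illustration of the arithmetic, not VMware's actual scheduler; the share values, the 2 Gbps vMotion limit, and the assumption of a single 10 GigE uplink are all hypothetical numbers chosen for the example.

```python
LINK_MBPS = 10_000  # one hypothetical 10 GigE uplink

# (shares, limit in Mbps or None) per traffic class -- illustrative values only
CLASSES = {
    "vMotion":         (50, 2_000),   # hard cap applies regardless of shares
    "iSCSI":           (100, None),
    "FT logging":      (50, None),
    "Management":      (10, None),
    "NFS":             (100, None),
    "Virtual Machine": (100, None),
}

def allocate(active):
    """Split link bandwidth among the active classes in proportion to
    their shares, then apply each class's hard limit."""
    total_shares = sum(CLASSES[c][0] for c in active)
    alloc = {}
    for c in active:
        shares, limit = CLASSES[c]
        mbps = LINK_MBPS * shares / total_shares
        alloc[c] = min(mbps, limit) if limit is not None else mbps
    return alloc

# Under full contention, every class gets its proportional slice.
full = allocate(list(CLASSES))

# When only two classes are active, the idle classes' bandwidth is
# redistributed -- shares set ratios, not fixed reservations.
quiet = allocate(["iSCSI", "Virtual Machine"])
```

With all six classes contending, iSCSI's 100 shares out of 410 total yield roughly 2.4 Gbps; with only iSCSI and virtual machine traffic active, each gets 5 Gbps. A limit, by contrast, is absolute: it caps the class even when the rest of the link sits idle.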

These classes cover every primary category of vSphere traffic, and each can be governed by the NetIOC feature. Figure A below shows how NetIOC is configured by the infrastructure administrator and applied to a vNetwork Distributed Switch:
Figure A


While NetIOC sounds like a great solution for virtualized infrastructure, it isn't quite complete, according to Gartner analyst Chris Wolf. Chris states on his blog: “To be clear, both storage and network I/O control are a good first step, but are not yet complete. Both technologies rely on a shares algorithm, meaning that access is a percentage of overall resource availability.” I will take this one step further and say that virtualized network traffic is rarely the bottleneck for infrastructure administrators.

I’ve found memory and storage to be my pain points more frequently. While NetIOC may fall a little short in its applicable use cases for many of today’s administrators, it will be a welcome feature as 10 Gigabit Ethernet (10 GigE) becomes more popular. This granular level of control can allow administrators to reduce the number of ports assigned to a host, and therefore lower switching costs. Consider also that a storage protocol, whether iSCSI or Fibre Channel over Ethernet (FCoE), could run on the same 10 GigE infrastructure and introduce more contention for network resources.

As the infrastructure administrator, do you see NetIOC as a useful tool or simply a way to fine-tune configurations for minimal gain? Share your comments below.