VMware's vSphere virtualization suite allows ESXi hosts to support 802.1Q VLAN tagging, which presents multiple networks to the virtualized infrastructure. The basic premise is that a virtual switch (either the standard virtual switch or the vNetwork Distributed Switch) receives tagged traffic, identified by a VLAN ID, on the ESXi host. VLANs in the virtual environment work much the same way that network switches carry multiple networks around the datacenter between other switches.
With a VLAN configured, the ESXi host can present multiple networks for all types of communication. For example, the ESXi management interface can have an IP address on one network, while the VMkernel vMotion interface can be configured with a separate IP address on a separate network; both networks can travel over the same physical cable with the use of VLANs. You would need to configure the port on the physical switch as an 802.1Q trunk port; this is different from the standard access port configuration that usually connects a server to the physical switch.

With a trunk port in use, vSphere supports up to 512 port groups on a virtual switch. The port groups can be used to configure the management and vMotion interfaces as described above; they can also be used to attach guest virtual machines to the same virtual switch, which can put all of these communication types on the same physical media (cable). Depending on the separation rules for each virtual environment, this may not be permitted, but it is possible. Figure A shows all of these roles stacked on one interface for a sample ESXi host configuration, using VLANs for each port group.

Figure A
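On the physical switch side, the trunk-port requirement might look like the following on a Cisco IOS switch. This is a sketch only: the interface name and the VLAN IDs (10 for management, 20 for vMotion, 30 for virtual machines) are hypothetical examples, and the exact syntax varies by switch platform.

```shell
! Example uplink from the physical switch to an ESXi host NIC (vmnic0).
! Interface name and VLAN IDs are illustrative, not prescriptive.
interface GigabitEthernet0/1
 description Trunk to ESXi host vmnic0
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
```

Limiting the allowed VLAN list to only the networks the host actually needs is a common way to keep the trunk tidy and reduce exposure.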
As a general rule of thumb, my virtualization practice puts vSphere-centric communication on the same physical media -- this includes the ESXi management and vMotion interfaces -- with each interface on its own VLAN. If Ethernet-based storage such as iSCSI or NFS is used, it is a good candidate for its own media. While you could stack the ESXi management, vMotion, and storage interfaces on the same physical media, there may be limitations that impact the storage networking. In my practice, I carve iSCSI and NFS connectivity onto their own media, and I may also use VLANs on those connections. The last category is virtual machine guest networking, which I usually keep separate via its own connections to the switching environment, again with VLANs.
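On the ESXi side, assigning a VLAN ID to a port group can be done in the vSphere Client or from the command line. As a sketch using esxcli (the port group name, vSwitch name, and VLAN ID below are hypothetical examples):

```shell
# Create a port group for guest VM traffic on vSwitch0 and tag it
# with VLAN 30. Names and IDs are examples only.
esxcli network vswitch standard portgroup add \
    --portgroup-name="VM Network 30" --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set \
    --portgroup-name="VM Network 30" --vlan-id=30

# List port groups and their VLAN IDs to confirm the change.
esxcli network vswitch standard portgroup list
```

A VLAN ID of 0 disables tagging on a port group, and 4095 enables virtual guest tagging, where the guest operating system handles the 802.1Q tags itself.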
This primer is an overview of how to use VLANs with vSphere virtualization. For more information about getting the right configuration for this pillar of infrastructure, read the VMware Virtual Networking Concepts white paper and the What's New in vSphere Networking white paper.
Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.