When Windows Server 2012 and its new version of Hyper-V were first shown to me (as Windows Server 8, at the time), one of the features that caught my eye was virtual Fibre Channel. I provided an overview of it earlier in the year, but I now have a Fibre Channel infrastructure where I can go through the motions with these new features. Virtual Fibre Channel is a Hyper-V feature that leverages N_Port ID Virtualization (NPIV).

The Hyper-V host I’m using in my lab has an Emulex LP11002 adapter, which provides two ports. When this adapter is present on the host, a new virtual Fibre Channel SAN can be created in the Virtual SAN Manager applet of Hyper-V Manager. This is shown in Figure A:
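For those who prefer scripting, the same virtual SAN can be created with the Hyper-V PowerShell module that ships with Windows Server 2012. A minimal sketch; the SAN name matches this example, but the WWNN/WWPN values are placeholders — substitute the addresses of your own HBA ports, which `Get-InitiatorPort` will list:

```powershell
# List the Fibre Channel initiator ports (WWNN/WWPN pairs) on the host
Get-InitiatorPort | Where-Object { $_.ConnectionType -eq "Fibre Channel" }

# Create a virtual Fibre Channel SAN backed by a physical HBA port
# (the node/port addresses below are placeholders)
New-VMSan -Name "Emulex-LP11002" `
    -WorldWideNodeName "20000000C9000001" `
    -WorldWidePortName "10000000C9000001"
```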


The World Wide Node Names (WWNNs) and World Wide Port Names (WWPNs) of the LP11002 are shown; these identify the physical interfaces associated with the two ports on the card. The next step is to configure a virtual machine in Hyper-V Manager with a virtual Fibre Channel HBA that will be presented to the VM. Once that is enabled on the VM, the virtual SAN set up in Hyper-V Manager is listed as an option, and virtualized WWNNs and WWPNs (via NPIV) become available to the Hyper-V VM. This step is shown in Figure B:
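The same virtual HBA can also be added from PowerShell: `Add-VMFibreChannelHba` attaches it, and `Get-VMFibreChannelHba` shows the NPIV addresses Hyper-V generated for the VM. A sketch, assuming a VM named "SQL01" (a placeholder) and the virtual SAN from this example:

```powershell
# Attach a virtual Fibre Channel HBA to the VM, connected to the virtual SAN
# created earlier ("SQL01" is a placeholder; the VM must be powered off)
Add-VMFibreChannelHba -VMName "SQL01" -SanName "Emulex-LP11002"

# Show the virtualized (NPIV) WWNN/WWPN pairs assigned to the VM
Get-VMFibreChannelHba -VMName "SQL01"
```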

The benefit of giving a VM a direct HBA is that storage can be provisioned to the VM directly, and zoning rules can then be applied to it on the storage fabric, if applicable, using the NPIV WWNNs and WWPNs. For virtual machine migration, the virtual SAN assigned to the host (Emulex-LP11002, in this example) must be configured on all hosts; as the VM moves between hosts, the NPIV WWNNs and WWPNs follow the VM.
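Worth noting when building those zoning rules: each virtual HBA carries two NPIV address sets (Set A and Set B), and during a live migration Hyper-V logs in with the alternate set so storage connectivity can move without interruption, so both sets should be zoned. A sketch for pulling them out, again assuming the placeholder VM name "SQL01":

```powershell
# Collect both NPIV port names for fabric zoning; Hyper-V alternates
# between Set A and Set B across live migrations
Get-VMFibreChannelHba -VMName "SQL01" |
    Select-Object WorldWidePortNameSetA, WorldWidePortNameSetB
```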

Do NPIV and Hyper-V virtual Fibre Channel appeal to you? In my virtualization practice, I tend to keep all virtual machine data on VHDs (or VMDKs for vSphere), but I have seen this requirement come up. One example is application clusters, such as a SQL Server or Exchange cluster. How will you use NPIV for Hyper-V? Share your comments below.