
When is it okay to allow multiple hosts to connect to a single iSCSI array?

Scott Lowe answers a TR member's question about the wisdom of connecting multiple hosts to a single iSCSI array.

A few weeks ago, I wrote an article explaining different kinds of storage, and I included a diagram that showed multiple hosts connected to a single iSCSI array. An astute TechRepublic reader raised the following question in the comments section:

"From your diagram, it appears that multiple hosts are connecting to a single iSCSI target -- is this correct? Are there any complications with connecting more than one machine to a single target? If so, what is the ideal setup for having multiple hosts connect to a single target?"

The short answer to the first part of the question is yes. In the diagram, multiple hosts are, indeed, connecting to a single iSCSI array. The hosts may be connecting to different individual LUNs/targets or multiple hosts may be connecting to the same target. Both scenarios are supported in a shared storage environment.

The second part of your question is excellent. Under the wrong conditions, allowing multiple hosts access to a single target is an exceptionally bad idea. Doing so can lead to all manner of problems, including data corruption. For this reason, some iSCSI devices include a configuration option on new targets that prevents multiple hosts from connecting simultaneously.

The reason: iSCSI itself doesn't do anything in the way of file locking. It's simply a block-level protocol that enables storage data transmission over the network. iSCSI itself doesn't know what a file is. Think of it like Ethernet. Ethernet doesn't know what HTTP is... that's the job of higher-level protocols; Ethernet's job is just to carry whatever is encapsulated inside Ethernet frames and then hand that data off to connected devices for further processing. If a real-world scenario is more to your taste, think of iSCSI as your mailman. Your mailman doesn't know what's in your mail; he just places items into your mailbox... unless you have an unscrupulous mailman. Asking iSCSI to handle file locking would be like asking your mailman to pay your bills.
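To make the hazard concrete, here's a minimal Python sketch -- a toy simulation of the race, not real iSCSI I/O -- in which two "hosts" append records to the same shared block with nothing coordinating the read and the write:

    # Toy simulation: two "hosts" do unsynchronized read-modify-write
    # against the same 512-byte block of a shared "LUN". Nothing at the
    # block/transport layer arbitrates, so updates get lost.
    import sys
    import threading

    sys.setswitchinterval(1e-6)  # encourage thread interleaving for the demo

    BLOCK_SIZE = 512
    lun = bytearray(BLOCK_SIZE)  # stand-in for one block on a shared LUN

    def host_appends_records(host_id: int, records: int) -> None:
        for _ in range(records):
            # Read: find where the data currently ends...
            used = lun.index(0)
            # ...then write a one-byte record there. With no lock between
            # the read and the write, both hosts can pick the same offset
            # and clobber each other -- a classic lost update.
            lun[used] = host_id

    threads = [threading.Thread(target=host_appends_records, args=(h, 200))
               for h in (1, 2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    surviving = lun.index(0)  # first still-empty byte == records that survived
    print(f"records written: 400, records surviving: {surviving}")
    # Usually fewer than 400 survive: the kind of "corruption" that a
    # cluster-aware file system's locking exists to prevent.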

By the way, this doesn't pertain just to iSCSI. It's true for any storage transport protocol, including Fibre Channel, AoE (ATA over Ethernet), and more. The transport protocol itself is not responsible for locking.

But, in some cases, you want multiple hosts to access the target. For example, if you're deploying VMware or a Microsoft cluster, shared storage is part of the design. You still need a way, though, for multiple hosts to safely access the data stored on the volume. This is the job of the layer you choose to deploy on top of this raw space. Two examples that support safe multi-host access are VMFS, the clustered file system used by VMware, and NFS, a network file-sharing protocol used by a variety of operating systems. (Strictly speaking, NFS isn't a file system laid down on the raw device; the NFS server owns the underlying file system and arbitrates access on its clients' behalf.) Both are designed to support multi-host access to the stored data.
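For a concrete picture of locking living above the block layer, here's a short Python sketch using POSIX advisory locks (fcntl.lockf), which an NFS client forwards to the server's lock manager so that writers on different hosts serialize. The path and counter format are invented for illustration, and VMFS's on-disk locking works differently; this only demonstrates the principle:

    # Minimal sketch of cooperative locking at the file-system layer,
    # which is where multi-host coordination actually lives.
    import fcntl
    import os

    SHARED_FILE = "/mnt/shared/counter.txt"  # hypothetical file on shared storage

    def increment_shared_counter() -> int:
        fd = os.open(SHARED_FILE, os.O_RDWR | os.O_CREAT, 0o644)
        try:
            fcntl.lockf(fd, fcntl.LOCK_EX)   # blocks until we own the lock
            raw = os.read(fd, 64)
            value = int(raw) if raw else 0
            os.lseek(fd, 0, os.SEEK_SET)     # rewind before rewriting
            os.ftruncate(fd, 0)
            os.write(fd, str(value + 1).encode())
            return value + 1
        finally:
            fcntl.lockf(fd, fcntl.LOCK_UN)   # release before closing
            os.close(fd)

    if __name__ == "__main__":
        print(increment_shared_counter())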

The last part of your question is tougher to answer because it needs additional context. Honestly, the "best" solution depends on what you're trying to do. If you're deploying vSphere, for example, use VMFS or NFS. If, however, you're running a Microsoft workload and need cluster services, VMFS would be a very poor choice. Here, your best bet is to ask the vendor of whatever solution you're implementing for guidance, and make sure you're using a file system suited to the task.

Summary

I hope that this answers your remaining questions about iSCSI and how it works!

About

Since 1994, Scott Lowe has been providing technology solutions to a variety of organizations. After spending 10 years in multiple CIO roles, Scott is now an independent consultant, blogger, author, owner of The 1610 Group, and a Senior IT Executive w...

Comments
ricardoegss

Hi

Is it possible to connect Linux and Windows Server to the same LUN? On Linux, it could be read-only.

infotek

Comparing VMFS to NFS is like comparing apples to oranges. One is a file system like NTFS/EXT3, while the other is a network file-sharing protocol like SMB/CIFS.

jprac

Tom.Marsh, please help me understand why you feel NFS should never be used in a production VMware environment. What is the alternative, and why is it better?

tom.marsh

Whatever you do, don't use NFS unless you hate yourself and want to die. In all seriousness, NFS should never be used for a production VMware installation -- at best it is a party trick, too slow and unreliable for anything other than testing.

tom.marsh

NFS is a file-level protocol for sharing individual files; iSCSI is a means of provisioning raw disk storage to a host across the network. Raw access is faster (and more flexible) for a number of reasons. Where basic NFS really shines is in archiving applications... backups, old email archives, etc. The stuff that has to be there, and has to be available for access, but isn't expected to be "high-performance." Performance of NFS for these applications is more than acceptable... but for production VMs in anything but the smallest environments, the performance just isn't there.

Also, somebody mentioned NetApp. It is true, they do have the best NFS implementation out there. ...But it's still only 95% of a gigabit iSCSI implementation -- the performance doesn't scale to 10 gig (where all storage is going, eventually). The problem is still that the host consuming the storage has an extra layer of abstraction to fight through, which adds cycles to each read/write transaction. Certainly, there are implementations where a NetApp can give you what you need -- but NetApp isn't really the same as the off-the-shelf NFS that comes with Linux/Unix; it's important to differentiate between the two.

The last thing anybody should ever believe is that they can just cobble together a server full of disks, set up NFS on a private LAN, and get 95% of iSCSI -- because they won't. They have to buy NetApp's implementation, and implement it per their white paper, to get 95% of gigabit iSCSI. Again, NFS isn't totally useless; it's ideal for some applications, but not for production VMs where you care about read/write performance.

jcbronson

NetApp has NFS speeds that closely rival iSCSI (internal tests have shown I/O near 95% of iSCSI; YMMV). If you accept the difference in speed, NFS is fine for vSphere storage. Another factor is the need to mount the NFS volume on every host in your cluster (unlike iSCSI, which is a one-shot deal). If you use Enterprise Plus licensing, you can use host profiles to make up for the hassle of mounting the volume more than once. What prompted our move to NFS was the ease of increasing the volume size and no longer having to deal with the limitation (and hassle) of volume extents. We also gained the benefit of NetApp snapshots on the NFS volume. While not the perfect backup solution, it has saved us in some quick-n-dirty restores.

NonBreaker

My company went down the path of Hyper-V instead, so I never got my hands on VMware. What's the difference between VMFS and NFS? What makes NFS unsuitable for production enviros? I'd go look elsewhere, but I find some of the best information comes from people with strong opinions one way or another, lol.

tom.marsh

There is no "hassle" with iSCSI in vSphere... When you have hosts in a cluster and create a new VMFS volume on one host, it will initiate a rescan of HBAs for VMFS volumes on all hosts in the cluster. Once complete, if you've revealed the LUN to all your cluster members (and those are all correctly defined in your storage solution), you should see the VMFS volume on all your hosts in the cluster after the rescan finishes. If you aren't using vCenter, you do need to manually click "Rescan" on each host, but if you have more than one host, why wouldn't you buy vSphere? The tools it provides make administration of multiple VMware hosts tons easier because you get a standard vCenter license with vSphere.

tom.marsh

I have no experience with Hyper-V in production... But I'm curious about what's coming in Windows Server 8. It looks compelling... still not as good as vSphere 5.0, but it closes about 75% of the feature gap. And vSphere 5's price tag is definitely sending companies flocking to test Hyper-V. So far, most aren't switching, but who knows what pricing will come down the pike for VMware 6? Charging per gigabyte of accessible storage?

tom.marsh

Chiefly, the issues relate to NFS being a file-level sharing protocol, as opposed to iSCSI, which presents raw, block-level disk access to hosts. The difference is that NFS is about sharing files across a network, whereas iSCSI is about presenting raw disk storage to the host, which controls and accesses it as if it were built physically into the server. This is not a trivial difference -- NFS requires an additional layer of abstraction to get the data to/from the disk. iSCSI (or Fibre Channel) permits the OS to treat the storage target as if it were local -- for example, dictating block sizes and the file system rather than having to just accept whatever block size and file system is in use on the NFS server.
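To see the distinction tom.marsh describes in code, here's a small Python sketch. The device and mount paths are hypothetical (and reading a raw device requires root): block storage is addressed by raw offset on a device the host fully controls, while a file-level protocol only exposes files whose on-disk layout belongs to the server.

    import os

    # Block-level: an iSCSI LUN shows up as a local disk (e.g. /dev/sdb).
    # The host picks the partitioning, block size, and file system, and
    # can address the device directly by byte offset.
    fd = os.open("/dev/sdb", os.O_RDONLY)
    os.lseek(fd, 4096, os.SEEK_SET)   # jump straight to byte offset 4096
    block = os.read(fd, 512)          # read one 512-byte sector
    os.close(fd)

    # File-level: an NFS mount only exposes files; block layout, allocation,
    # and on-disk format are the server's business, not the client's.
    with open("/mnt/nfs_share/vm-disk.img", "rb") as f:
        f.seek(4096)
        data = f.read(512)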

VBJackson

If you are deploying vSphere and using the iSCSI protocol for the cluster target, then you want to use VMFS as the file system for vSphere. Even though vSphere states that other options are possible as the base storage for the cluster, the product is optimized to use VMFS. This says nothing about using NFS from WITHIN the virtual host environment. NFS is a popular protocol for use with most OSes, and you can certainly use it to set up your shared network storage.

NonBreaker

We're actually redesigning our backup architecture, and we are waiting to see exactly what features Hyper-V 3 will bring. With System Center 2012 Datacenter licenses priced around $3,600 each, it would be extremely pricey to protect our meager two hypervisors. With something like vMotion, we could probably not even protect those hypervisors, and focus on protecting the VMs instead. By the way, the price of vSphere was definitely what drove us to Hyper-V instead. Would rather have gone with VMware, but that's just business, isn't it?
