In most situations, the default VMware vSphere configuration is fine, but some scenarios call for tweaks to advanced options under the hood of the hypervisor. In my experience, hosts running virtual machines with large amounts of provisioned storage can run into limits imposed by the default configuration.

One such value is the heap size for the VMFS-3 driver. The heap size effectively caps the amount of VMDK storage that can be open, across all virtual machines, on a given host. The default is usually fine until you support enough virtual machines that the amount of VMDK storage actively open on a single host (or across a cluster) approaches that cap.

The default heap size is 80, specified in the VMFS3.MaxHeapSizeMB advanced setting of an individual ESX(i) host (version 4.1). The maximum open storage is calculated by taking the VMFS3.MaxHeapSizeMB value and multiplying by 256 × 1024, which yields the limit in MB. An 80 MB heap therefore extends to 20 TB of open VMDK storage on a host. The maximum value is 128, which allows 32 TB of open VMDK storage. This setting is shown in Figure A.
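The arithmetic above can be sketched briefly (a minimal illustration, not a VMware tool; the function name is my own):

```python
def max_open_vmdk_tb(heap_size_mb: int) -> float:
    """Maximum open VMDK storage (in TB) for a given VMFS3.MaxHeapSizeMB value.

    Multiplying the heap size by 256 * 1024 gives the limit in MB;
    dividing by 1024 * 1024 converts MB to TB.
    """
    storage_mb = heap_size_mb * 256 * 1024
    return storage_mb / (1024 * 1024)

print(max_open_vmdk_tb(80))   # default heap: 20.0 TB
print(max_open_vmdk_tb(128))  # maximum heap: 32.0 TB
```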
Figure A


It’s unclear to me whether VMFS block size affects the effective maximum amount of open storage for this value. Kenneth van Ditmarsch has an explanation of how datastore block sizes impact the effective maximums of the VMFS3.MaxHeapSizeMB value.

You should change this value only if you need to. A host running a number of large file server virtual machines, for example, could easily hit the 20 TB limit: just 10 virtual machines each provisioned with 2 TB of storage in VMDKs would reach it. File servers generally have low utilization, so DRS may tend to consolidate them onto fewer hosts, making this scenario more likely.
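A quick way to reason about the scenario above is to compare the VMDK storage that could land on one host against the heap-imposed limit. This is a hypothetical sizing check of my own, not a VMware utility:

```python
def heap_limit_tb(heap_size_mb: int) -> float:
    """Open-VMDK storage limit (TB) implied by a VMFS3.MaxHeapSizeMB value."""
    return heap_size_mb * 256 * 1024 / (1024 * 1024)

def at_heap_limit(vm_sizes_tb, heap_size_mb=80):
    """True if the combined provisioned VMDK storage of the given VMs
    meets or exceeds the heap limit for this host."""
    return sum(vm_sizes_tb) >= heap_limit_tb(heap_size_mb)

# Ten file servers at 2 TB each hit the default 20 TB limit exactly,
# but fit comfortably under the 32 TB limit of the maximum heap size.
print(at_heap_limit([2.0] * 10))       # True
print(at_heap_limit([2.0] * 10, 128))  # False
```

If DRS consolidates such virtual machines onto one host, a check like this flags when the default heap size becomes the constraint.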

VMware KB article 1004424 has more information on the VMFS3.MaxHeapSizeMB value.