VSICM51 - Slide 06-76 - VSA Cluster Configuration Requirements



1. Wrong:

When describing the VSA maximum hard disk config, slide notes state:

The maximum hard disk capacity per ESXi host is four 3TB hard disks, six 2TB hard disks, or eight 2TB hard disks.
1. Correct:

In VSA 5.1, the maximum storage supported is as follows:

  • 3TB drives:
    • 8 disks of up to 3TB capacity in a RAID 6 configuration (no hot spare).
    • 18TB usable by the VMFS-5 file system per host.
  • Across three hosts, total business-usable storage of 27TB.
  • 2TB drives:
    • 12 local disks of up to 2TB in a RAID 6 configuration (no hot spare).
    • 16 external disks of up to 2TB in a RAID 6 configuration (with hot spare).
    • VMware supports maximum VMFS-5 size of 24TB per host in VSA 5.1.
    • Across three hosts, total business-usable storage of 36TB.

The VSA installer automatically adjusts the VMFS heap size on each ESXi host to handle the larger file system sizes.
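The capacity figures above can be reproduced with a little arithmetic. A minimal Python sketch, assuming RAID 6 reserves two disks for parity (plus one more where a hot spare is configured) and that VSA mirrors each host's datastore across the cluster, halving the aggregate; the function names are illustrative, not part of any VMware tool:

```python
def raid6_usable_tb(n_disks, disk_tb, hot_spare=False):
    """Usable capacity of a RAID 6 set: two disks go to parity,
    and one more disk is set aside if a hot spare is configured."""
    data_disks = n_disks - 2 - (1 if hot_spare else 0)
    return data_disks * disk_tb

def vsa_cluster_usable_tb(per_host_tb, hosts=3, vmfs5_cap_tb=None):
    """Business-usable storage across the VSA cluster: each host's
    VMFS-5 datastore (optionally capped at the supported maximum)
    is mirrored across the cluster, so half the aggregate remains."""
    if vmfs5_cap_tb is not None:
        per_host_tb = min(per_host_tb, vmfs5_cap_tb)
    return hosts * per_host_tb / 2

# 3TB drives: 8 x 3TB in RAID 6, no hot spare -> 18TB per host
per_host_3tb = raid6_usable_tb(8, 3)                        # 18
cluster_3tb = vsa_cluster_usable_tb(per_host_3tb)           # 27.0

# 2TB drives: 16 external 2TB disks in RAID 6 with a hot spare,
# capped at the 24TB VMFS-5 limit -> 36TB across three hosts
per_host_2tb = raid6_usable_tb(16, 2, hot_spare=True)       # 26
cluster_2tb = vsa_cluster_usable_tb(per_host_2tb, vmfs5_cap_tb=24)  # 36.0
```

This reproduces the 27TB and 36TB cluster totals quoted from the whitepaper.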

1. Source:

"What’s New in VMware vSphere® Storage Appliance 5.1" whitepaper.



2. Wrong:

When describing VSA installation requirements, slide notes state:

Each ESXi host must not have additional vSphere standard switches or port groups configured except for the default switches and port groups that are created during the ESXi installation.
Before the VSA cluster is configured, ensure that each ESXi host does not have any deployed virtual machines. After the VSA cluster is created, you can add virtual switches to your ESXi hosts to support the virtual machines that will be part of the VSA cluster.
2. Correct:

As described in the chapter "Brownfield Installation of the VSA Cluster" of the VMware Technical Marketing document "What’s New in VMware vSphere® Storage Appliance 5.1", these limitations have been removed.
The whitepaper also explains:

For brownfield installations, the user might already have created vSwitches. If this is the case, the wizard audits the configuration and fails it if it is not set up correctly. The user is responsible for fixing the configuration. For greenfield installations, or brownfield deployments that do not have preconfigured vSwitches, the wizard configures the vSwitches appropriately.
The VSA 5.1 installer uses the free space remaining on the local VMFS-5 file system and hands this space off to the VSA appliances. After the shared storage has been configured, the virtual machines running on local storage can be migrated to the shared storage, using either cold migration (virtual machines powered off) or VMware vSphere® Storage vMotion® if the user has a license for this feature.
2. Source:

"What’s New in VMware vSphere® Storage Appliance 5.1" whitepaper.



3. Wrong:

The last paragraph of this slide's notes states:

The vCenter Server instance that manages the VSA cluster cannot be a virtual machine that is running inside the VSA cluster. But the vCenter Server instance can be a virtual machine located in the same datacenter as the VSA cluster.
3. Correct:

As described in the chapter "vCenter Server Running on the VSA Cluster" of the VMware Technical Marketing document "What’s New in VMware vSphere® Storage Appliance 5.1", this limitation has been removed:

In VSA 5.1, one supported option is to install a vCenter Server instance in a virtual machine on a local datastore on one of the nodes in a VSA storage cluster. The vCenter Server instance then can be used to install VSA by allocating a subset of local storage, excluding the amount allocated for vCenter Server (on all hosts) for VSA.
3. Source:

"What’s New in VMware vSphere® Storage Appliance 5.1" whitepaper.


Last modified on Thursday, 12 December 2013 19:25