
VSICM55 - Slide 13-07 - ESXi Hardware Prerequisites



Wrong:

The second sentence of the first paragraph in the slide notes states:

The server can have up to 160 logical CPUs (cores or hyperthreads) and can support up to 2048 virtual CPUs per host.
Correct:

The actual configuration maximums in vSphere 5.5 are different. Hence, the above sentence must be corrected as follows:

The server can have up to 320 logical CPUs (cores or hyperthreads) and can support up to 4096 virtual CPUs per host.
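
For a quick check of these values against a live environment, here is a minimal sketch using the pyVmomi Python bindings (my own illustration, not part of the course material; the vCenter address and credentials are placeholders) that prints the core and logical CPU counts reported by each host:

  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim
  import ssl

  # Placeholder connection details - replace with your own vCenter and credentials.
  ctx = ssl._create_unverified_context()  # lab use only; do not skip certificate checks in production
  si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                    pwd="password", sslContext=ctx)

  content = si.RetrieveContent()
  view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
  for host in view.view:
      hw = host.summary.hardware
      # numCpuThreads is the number of logical CPUs (cores, or threads with hyperthreading enabled).
      print("%s: %d cores, %d logical CPUs" % (host.name, hw.numCpuCores, hw.numCpuThreads))
  Disconnect(si)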
Source:

Configuration Maximums for VMware vSphere 5.5.



VSICM55 - Slide 07-31 - Migrating Virtual Machines



1. Wrong:

When describing the various virtual machine migration types, the name of the last one is completely wrong. The slide notes state:

Enhanced vMotion Compatibility (EVC): Migrate a powered-on virtual machine to a new datastore and a new host.
1. Correct:

Enhanced vMotion Compatibility (EVC) is something completely different. EVC is a cluster feature that prevents vSphere vMotion migrations from failing because of incompatible CPUs.

The vMotion migration without shared storage, introduced in vSphere 5.1, was originally named Enhanced vMotion; the similarity between the two names is most likely what caused the confusion above.

As of vSphere 5.5, VMware has renamed the feature Cross-Host vMotion Migration in the course slides and vMotion Without Shared Storage in the VMware vSphere 5.5 Documentation Center. Whichever name you prefer, it is certainly not EVC.
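
Since EVC is a cluster-level setting, a minimal pyVmomi sketch (my own illustration, with placeholder vCenter details) can show which EVC baseline, if any, each cluster is currently running:

  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim
  import ssl

  ctx = ssl._create_unverified_context()  # lab use only
  si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                    pwd="password", sslContext=ctx)

  content = si.RetrieveContent()
  view = content.viewManager.CreateContainerView(content.rootFolder,
                                                 [vim.ClusterComputeResource], True)
  for cluster in view.view:
      # currentEVCModeKey is empty when EVC is not enabled on the cluster.
      print("%s: EVC baseline = %s" % (cluster.name, cluster.summary.currentEVCModeKey))
  Disconnect(si)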

1. Source:

VMware vSphere 5.5 Documentation Center.



2. Wrong:

In the slide graphic, the last bullet states:

A maximum of eight simultaneous vMotion, cloning, deployment, or Storage vMotion accesses to a single VMware vSphere® VMFS-5 datastore is supported.
2. Correct:

Starting with vSphere 4.1, the maximum number of simultaneous vMotion operations per datastore has been increased to 128. Additionally, there is no difference, in terms of maximum concurrent operations, between a VMFS-3, a VMFS-5, or an NFS datastore. The other limits listed in the slide - cloning, deployment, and Storage vMotion - are still valid as of vSphere 5.5.

2. Info:

More details about why the above limits apply to datastores and hosts can be found in the "vCenter Server and Host Management" guide, "Migrating Virtual Machines in the vSphere Web Client" chapter - "Limits on Simultaneous Migrations in the vSphere Web Client" paragraph, available in the vSphere 5.5 Documentation Center.
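
The reason behind the different numbers is the cost-based accounting described in that paragraph: each datastore has a budget of migration "units", and each operation type consumes a different number of them. The toy model below only illustrates the mechanism; the specific values (a 128-unit datastore limit, a cost of 1 for vMotion and 16 for Storage vMotion) reflect my reading of the vSphere 5.x documentation and should be verified against the guide for your exact release.

  # Illustrative only: a toy model of the cost-based migration limits.
  DATASTORE_LIMIT = 128
  DATASTORE_COST = {"vmotion": 1, "storage_vmotion": 16}

  def can_start(operation, in_flight):
      """Return True if 'operation' fits within the remaining datastore budget.

      in_flight is a list of operation names currently running against the
      same datastore."""
      used = sum(DATASTORE_COST[op] for op in in_flight)
      return used + DATASTORE_COST[operation] <= DATASTORE_LIMIT

  # 128 concurrent vMotion migrations fit (128 x 1 = 128 units) ...
  print(can_start("vmotion", ["vmotion"] * 127))                 # True
  # ... but only 8 concurrent Storage vMotion operations do (8 x 16 = 128 units).
  print(can_start("storage_vmotion", ["storage_vmotion"] * 7))   # True
  print(can_start("storage_vmotion", ["storage_vmotion"] * 8))   # False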

2. Source:

Configuration Maximums for VMware vSphere 4.1.

Configuration Maximums for VMware vSphere 5.0.

Configuration Maximums for VMware vSphere 5.1.

Configuration Maximums for VMware vSphere 5.5.



3. Wrong:

At the end of the slide notes you can read the following:

With file relocation, the contents of the raw LUN mapped by the RDM are copied into a new .vmdk file at the destination. The copy operation is effectively converting a raw LUN into a virtual disk.
If you must cold-migrate a virtual machine without cloning or converting its RDMs, remove them from the configuration of the virtual machine before migrating and recreate them when migration has completed.
3. Correct:

The above statements are either incomplete or incorrect. The described behavior was a limitation when migrating RDMs in vSphere 4.x, but it has been improved in vSphere 5.x.
Cormac Hogan - a Senior Storage Architect in the Integration Engineering team, part of VMware R&D - has further clarified RDM behavior during Storage vMotion and cold migration operations (summarized in the short lookup table after the list below).

  • VM with Physical (Pass-Thru) RDMs (Powered On – Storage vMotion):
    • If I try to change the format to thin or thick, no Storage vMotion is allowed.
    • If I choose not to do any conversion, only the pRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN.
  • VM with Virtual (non Pass-Thru) RDMs (Powered On – Storage vMotion):
    • On a migrate, if I choose to convert the format in the advanced view, the vRDM is converted to a VMDK on the destination VMFS datastore.
    • If I choose not to do any conversion, only the vRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN (same behavior as pRDM).
  • VM with Physical (Pass-Thru) RDMs (Powered Off – Cold Migration):
    • On a migrate, if I choose to change the format (via the advanced view), the pRDM is converted to a VMDK on the destination VMFS datastore.
    • If I choose not to do any conversion, only the pRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN.
  • VM with Virtual (non Pass-Thru) RDMs (Powered Off – Cold Migration):
    • On a migrate, if I choose to convert the format in the advanced view, the vRDM is converted to a VMDK on the destination VMFS datastore.
    • If I choose not to do any conversion, only the vRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN (same behavior as pRDM).
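
To make the four cases above easier to compare, here is a small lookup table (plain Python, purely illustrative, with key and value names of my own choosing) that encodes the behavior described in the list:

  # Encodes the RDM migration behavior summarized above.
  # Key: (rdm_type, vm_power_state, convert_requested) -> outcome
  RDM_MIGRATION_BEHAVIOR = {
      # Physical (pass-through) RDM, powered-on VM, Storage vMotion
      ("physical", "on",  True):  "not allowed (cannot convert a pRDM online)",
      ("physical", "on",  False): "only the mapping file moves; data stays on the LUN",
      # Virtual RDM, powered-on VM, Storage vMotion
      ("virtual",  "on",  True):  "converted to a VMDK on the destination datastore",
      ("virtual",  "on",  False): "only the mapping file moves; data stays on the LUN",
      # Physical RDM, powered-off VM, cold migration
      ("physical", "off", True):  "converted to a VMDK on the destination datastore",
      ("physical", "off", False): "only the mapping file moves; data stays on the LUN",
      # Virtual RDM, powered-off VM, cold migration
      ("virtual",  "off", True):  "converted to a VMDK on the destination datastore",
      ("virtual",  "off", False): "only the mapping file moves; data stays on the LUN",
  }

  print(RDM_MIGRATION_BEHAVIOR[("physical", "off", True)])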
3. Source:

VMware vSphere Blog.



VSICM55 - Slide 06-53 - Using a VMFS Datastore with ESXi



Wrong:

The last sentence in the first paragraph of the slide notes states:

The maximum size of a VMFS datastore is 62TB.
Correct:

The actual maximum size of a VMFS-5 datastore is 64TB.
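
If you want to check the sizes actually in use in your environment, a minimal pyVmomi sketch along the same lines as the earlier ones (again with placeholder vCenter address and credentials) can list each datastore's type and capacity:

  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim
  import ssl

  ctx = ssl._create_unverified_context()  # lab use only
  si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                    pwd="password", sslContext=ctx)

  content = si.RetrieveContent()
  view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
  for ds in view.view:
      # summary.capacity is reported in bytes; a VMFS-5 datastore tops out at 64TB.
      print("%s (%s): %.1f TB" % (ds.name, ds.summary.type, ds.summary.capacity / 1024.0 ** 4))
  Disconnect(si)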

Source:

Configuration Maximums for VMware vSphere 5.5.



Room for improvement:

The second paragraph in the slide notes states the following:

You can use an NFS datastore to store your virtual machines. But not all functions are supported. For example, you cannot store an RDM on an NFS datastore. A VMFS datastore is required for an RDM to store the RDM mapping file (*-rdm.vmdk).
Correct:

Although the above is formally correct, it makes little sense to detail NFS datastore limitations within a lesson dedicated to VMFS datastores. I would personally move the statement to the NFS datastores lesson (lesson 3 of the same course module, starting at slide 29).



VSICM51 - Slide 14-07 - ESXi Hardware Prerequisites



1. Wrong:

The slide notes list the CPU generations required for running VMware vSphere® ESXi™:

VMware vSphere® ESXi™ requires a 64-bit server (AMD Opteron, Intel Xeon, or Intel Nehalem).
1. Correct:

Besides being out of date, this description is logically inconsistent: "Nehalem" is only one of several Intel microarchitecture names, and Nehalem-based server CPUs are Xeon processors anyway, "Xeon" being the Intel brand name for multi-socket-capable processors. Mentioning "Intel Nehalem" as an alternative to "Intel Xeon" therefore makes no sense.

1. Info:

For up-to-date information about supported hardware, always check the online VMware Compatibility Guide.

1. Source:

VMware Compatibility Guide.



2. Wrong:

When listing some of the configuration maximums for a host, the slide notes state:

The server can have up to 160 logical CPUs (cores or hyperthreads) and can support up to 512 virtual CPUs per host.
2. Correct:

The new configuration maximum for virtual CPUs per host is 2048. The number shown in the slide notes is related to vSphere 4.x.

2. Info:

For up-to-date information about configuration maximums, always check the latest online version of the Configuration Maximums guide (PDF).

2. Source:

Configuration Maximums for VMware vSphere 5.1.



3. Wrong:

The slide notes show a list of hardware components a host must have:

The ESXi host must have:
  • One or more Ethernet controllers
  • A basic SCSI controller
  • An internal RAID controller
  • A SCSI disk or a local RAID logical unit number (LUN)
3. Correct:

The above list cannot be considered a "must"; it is, rather, a list of hardware components an ESXi host can have.
Just think, for example, of diskless hosts booting via PXE and deployed with VMware vSphere® Auto Deploy™.

3. Source:

VMware Knowledge Base articles.



VSICM51 - Slide 11-21 - Admission Control Policy Choices



Wrong:

While describing the "Host failures the cluster tolerates" Admission Control Policy, the slide notes state:

The maximum number of host failures that can be tolerated is unlimited.
Room for improvement:

Even if this is technically true, it would be wiser to note that the obvious logical limit equals N - 1, where "N" is the number of hosts in the cluster. This was an important improvement introduced with vSphere 5.0 compared to vSphere 4.x, where the limit was set to 4 failures (4 out of the maximum 5 available primary hosts in pre-vSphere 5.x clusters).
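
Expressed as a formula (the value 32 below is my own illustrative figure, being the maximum number of hosts per cluster in vSphere 5.x):

  F_{\max} = N - 1, \qquad \text{e.g. } N = 32 \;\Rightarrow\; F_{\max} = 31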



VSICM51 - Slide 07-31 - Migrating Virtual Machines



1. Wrong:

In the slide graphic, the last bullet states:

A maximum of eight simultaneous vMotion, cloning, deployment, or Storage vMotion accesses to a single VMware vSphere® VMFS-5 datastore is supported.
1. Correct:

Starting with vSphere 4.1, the maximum number of simultaneous vMotion operations per datastore has been increased to 128. Additionally, there is no difference, in terms of maximum concurrent operations, between a VMFS-3, a VMFS-5, or an NFS datastore. The other limits listed in the slide - cloning, deployment, and Storage vMotion - are still valid as of vSphere 5.1.

1. Info:

More details about why the above limits apply to datastores and hosts can be found in the "vCenter Server and Host Management" guide, "Migrating Virtual Machines in the vSphere Client" chapter - "Limits on Simultaneous Migrations" paragraph, available in the vSphere 5.1 Documentation Center.

1. Source:

Configuration Maximums for VMware vSphere 4.1.

Configuration Maximums for VMware vSphere 5.0.

Configuration Maximums for VMware vSphere 5.1.



2. Wrong:

At the end of the slide notes you can read the following:

With file relocation, the contents of the raw LUN mapped by the RDM are copied into a new .vmdk file at the destination. The copy operation is effectively converting a raw LUN into a virtual disk.
If you must cold-migrate a virtual machine without cloning or converting its RDMs, remove them from the configuration of the virtual machine before migrating and re-create them when migration has completed.
2. Correct:

This was the typical behavior for RDMs in vSphere 4.x, but it has been improved in vSphere 5.x.
Cormac Hogan - a senior technical marketing architect within the Cloud Infrastructure Product Marketing group at VMware - has further clarified RDM behavior during Storage vMotion and cold migration operations.

  • VM with Physical (Pass-Thru) RDMs (Powered On – Storage vMotion):
    • If I try to change the format to thin or thick, no Storage vMotion is allowed.
    • If I choose not to do any conversion, only the pRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN.
  • VM with Virtual (non Pass-Thru) RDMs (Powered On – Storage vMotion):
    • On a migrate, if I choose to convert the format in the advanced view, the vRDM is converted to a VMDK on the destination VMFS datastore.
    • If I choose not to do any conversion, only the vRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN (same behavior as pRDM).
  • VM with Physical (Pass-Thru) RDMs (Powered Off – Cold Migration):
    • On a migrate, if I choose to change the format (via the advanced view), the pRDM is converted to a VMDK on the destination VMFS datastore.
    • If I choose not to do any conversion, only the pRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN.
  • VM with Virtual (non Pass-Thru) RDMs (Powered Off – Cold Migration):
    • On a migrate, if I choose to convert the format in the advanced view, the vRDM is converted to a VMDK on the destination VMFS datastore.
    • If I choose not to do any conversion, only the vRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN (same behavior as pRDM).
2. Source:

VMware vSphere Blog.



VSICM51 - Slide 06-76 - VSA Cluster Configuration Requirements



1. Wrong:

When describing the VSA maximum hard disk configuration, the slide notes state:

The maximum hard disk capacity per ESXi host is four 3TB hard disks, six 2TB hard disks, or eight 2TB hard disks.
1. Correct:

In VSA 5.1, the maximum storage supported is as follows:

  • 3TB drives:
    • 8 disks of up to 3TB capacity in a RAID 6 configuration (no hot spare).
    • 18TB usable by the VMFS-5 file system per host.
    • Across three hosts, a total business-usable storage of 27TB.
  • 2TB drives:
    • 12 local disks of up to 2TB in a RAID 6 configuration (no hot spare).
    • 16 external disks of up to 2TB in a RAID 6 configuration (with hot spare).
    • VMware supports a maximum VMFS-5 size of 24TB per host in VSA 5.1.
    • Across three hosts, total business-usable storage of 36TB.

The VSA installer automatically adjusts the VMFS heap size on each ESXi host to handle the larger file system sizes.
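
As a back-of-the-envelope check of the totals above (a sketch under my own assumption that VSA's network RAID 1 mirroring halves the summed per-host VMFS capacity at the cluster level):

  # VSA 5.1 usable-capacity arithmetic for a three-node cluster.
  # Assumption (mine, not from the slide): every VSA datastore is mirrored on a
  # second node (network RAID 1), so cluster-wide usable space is half of the
  # summed per-host VMFS capacity.
  HOSTS = 3

  def cluster_usable_tb(vmfs_per_host_tb, hosts=HOSTS, mirror_factor=2):
      return hosts * vmfs_per_host_tb / float(mirror_factor)

  # 3TB drives: RAID 6 over 8 disks leaves (8 - 2) * 3TB = 18TB of VMFS per host.
  print(cluster_usable_tb(18))  # 27.0 TB, matching the figure above
  # 2TB drives: up to 24TB of VMFS-5 per host is supported.
  print(cluster_usable_tb(24))  # 36.0 TB, matching the figure above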

1. Source:

"What’s New in VMware vSphere® Storage Appliance 5.1" whitepaper.



2. Wrong:

When describing the VSA installation requirements, the slide notes state:

Each ESXi host must not have additional vSphere standard switches or port groups configured except for the default switches and port groups that are created during the ESXi installation.
Before the VSA cluster is configured, ensure that each ESXi host does not have any deployed virtual machines. After the VSA cluster is created, you can add virtual switches to your ESXi hosts to support the virtual machines that will be part of the VSA cluster.
2. Correct:

As described in the chapter "Brownfield Installation of the VSA Cluster" of the VMware Technical Marketing document "What’s New in VMware vSphere® Storage Appliance 5.1", these limitations have been removed.
The guide also explains:

For brownfield installations, the user might already have created vSwitches. If this is the case, the wizard audits the configuration and fails it if it is not set up correctly. The user is responsible for fixing the configuration. For greenfield installations, or brownfield deployments that do not have preconfigured vSwitches, the wizard configures the vSwitches appropriately.
The VSA 5.1 installer uses the free space remaining on the local VMFS-5 file system and hands this space off to the VSA appliances. After the shared storage has been configured, the virtual machines running on local storage can be migrated to the shared storage, using either cold migration (virtual machines powered off) or VMware vSphere® Storage vMotion® if the user has a license for this feature.
2. Source:

"What’s New in VMware vSphere® Storage Appliance 5.1" whitepaper.



3. Wrong:

The last paragraph in the slide notes states:

The vCenter Server instance that manages the VSA cluster cannot be a virtual machine that is running inside the VSA cluster. But the vCenter Server instance can be a virtual machine located in the same datacenter as the VSA cluster.
3. Correct:

As described in the chapter "vCenter Server Running on the VSA Cluster" of the VMware Technical Marketing document "What’s New in VMware vSphere® Storage Appliance 5.1", this limitation has been removed:

In VSA 5.1, one supported option is to install a vCenter Server instance in a virtual machine on a local datastore on one of the nodes in a VSA storage cluster. The vCenter Server instance then can be used to install VSA by allocating a subset of local storage, excluding the amount allocated for vCenter Server (on all hosts) for VSA.
3. Source:

"What’s New in VMware vSphere® Storage Appliance 5.1" whitepaper.

