Configuring Storage (FC, iSCSI, NFS)

From VMUG Wiki


Originating Author

Mike Laverick


Video Content

In these two videos, the "Show Me How" is a demo of how to manage iSCSI and NFS volumes – together with how to provision new storage using a Synology DiskStation as an example – it also covers the steps to format and grow a VMFS volume. In the second "Discuss The Options" video, Mike Laverick and Tom Howarth discuss the typical challenges and troubleshooting required to enable FC, iSCSI and NFS storage.

Show Me How with @mike_laverick - Set to 720p, Full Screen

Discuss The Options with Mike Laverick & Tom Howarth

Introduction to Storage on vSphere

Version: vSphere 5.5

vSphere supports a wide variety of storage configurations and optimisations. Virtual Machines can be stored on local, fibre-channel, iSCSI and NFS storage. There is also support for a distributed storage model called vSAN, which uses combinations of HDD and SSD drives within servers to create a datastore for virtual machines. Additionally, there are a number of technologies which allow for the caching of VMs and their files to SSD or flash-cards to reduce the IO penalty. VMware's offering is called Flash Read Cache (vFRC), but options also exist from the third-party market.

Storage is critical to vSphere: not only do you need to be able to store the virtual machines somewhere, but shared storage is also a requirement for advanced features such as vMotion, High Availability, Fault Tolerance, the Distributed Resource Scheduler and Distributed Power Management.

This chapter focuses on the core storage technologies of the fibre-channel, iSCSI and NFS protocols. Once storage is presented to the ESX host, VMware uses the term "datastore" to describe it. In some respects, as you use VMware ESX you begin to care less about the underlying protocols backing the storage. Ever since multiple storage protocols have been supported there has been an endless debate about which is the "best" storage protocol to use. The reality is there are too many factors at play (cache, seek times, spindles, RAID levels, controller performance) to make such a comparison meaningful. It's perhaps best said that all storage protocols have their own unique advantages and disadvantages, and the fact that multiple protocols are still supported is indicative of the fact that no one dominant protocol has laid the others to rest.

NFS is perhaps the easiest to configure – all you need is a valid IP address to mount an NFS share across the network. With that said, by definition NFS does not support VMware's VMFS File System, which might be preferred by some customers – which means iSCSI and FC are the only protocols to offer this. Most enterprise-class storage systems now support all three (if not more!) protocols, so what's critical is understanding the VMs'/applications' use-case together with an appreciation of their features and functionality. The decision to use one storage protocol or array over another is often a decision which is outside the remit of the virtualization admin, and is a reflection of the organization's history of storage relationships through the years.

Configuring Storage (Web-Client)

Configuring Fibre-Channel Storage

Fibre-Channel connectivity is usually provided by a pair of Host Bus Adapters (HBAs), typically from QLogic or Emulex. As with Ethernet networking, multiple HBAs are normally used to allow for redundancy and load-balancing. The HBAs are connected to the underlying "fabric" by the use of Fibre-Channel switches such as Brocade SilkWorm devices - in what is commonly referred to as a "fully redundant fabric". This provides multiple paths to the LUN or Volume being accessed on the storage array. FC-based connections generally offer the lowest latencies to the underlying storage, but this does not necessarily mean they are faster - for instance a 2Gbps FC system may be out-performed by a 10Gbps Ethernet backplane. Whilst there is a great focus on bandwidth in storage, typically in most virtualized environments these pipes to the storage are not remotely saturated, and bottlenecks, if they exist, are to be found within the storage array itself.

In a new environment of VMware ESX hosts one requirement is to discover their WWN or World-Wide Name address, which is imprinted on the HBA itself. The value is akin to a MAC address on Ethernet network cards. The WWN value is used by the storage array to "mask" or "present" LUNs to the hosts. Typically, when a LUN or Volume is presented to ESX this is done to all the hosts in a cluster. VMware's file system - VMFS - inherently supports a clustered mode where more than one host can see the same LUN/Volume. In fact this is a requirement for many of the advanced features such as vMotion and High Availability. Most storage systems allow for the registration of the HBA's WWN backed by a friendly name - and the grouping of these - so the LUN/Volume assignment can be done to the group rather than remembering the WWN values themselves.

Locating the HBA WWPN on VMware ESX host

VMware ESX shows both the WWNN and WWPN which stand for the Node WWN and Port WWN. The "device" is the node (Node WWN) and the connections on the device are referred to as ports (Port WWN). You can locate the WWPN by navigating to:

1. >Hosts and Clusters >Select your vCenter >Select the DataCenter >Select the ESX host

2. Click the Manage tab

3. Select the Storage column

4. Select Storage Adapters

5. Select the vmhba device that represents your FC HBA

Screen Shot 2013-11-27 at 13.44.57.png

Note: In this case vmhba0 is an onboard IDE controller; vmhba1 is the local RAID controller card in an HP server; vmhba2/3 are FC HBA cards of different generations; and vmhba33 is the virtual adapter used with the Software iSCSI Adapter.
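If you prefer the command-line, the same information can be read from the ESXi Shell. A brief sketch using esxcli follows; the adapter numbering in the note above is from the example host and will differ on your hardware.

```shell
# List all storage adapters with their driver, link state and identifiers.
# FC HBAs show their WWNN and WWPN in the UID column, in the form
# fc.<WWNN>:<WWPN>
esxcli storage core adapter list
```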

Registering the FC HBA with the Storage Array

The process for registering the HBA with the storage array varies massively from one vendor to another. Some auto-discover the HBA during boot time, and pre-populate their interfaces. Typically, you still need to know the WWPN in order to complete the process. In some cases you might merely copy the WWPN value from the web-client into the storage management system. In this example a NetApp array has been enabled for FC support, and the WWPNs have been added there - in what's called NetApp OnCommand System Manager.

Here a group called "ESX_Hosts_NYC_Cluster01" was created, and the WWPNs added.

Screen Shot 2013-11-27 at 13.58.27.png

This group is then used to control access when creating a LUN

Screen Shot 2013-11-27 at 14.04.25.png

Rescanning the FC HBA in vCenter

Once a LUN/Volume has been presented, all that is required on the VMware ESX host is a "rescan" of the storage. There are a number of ways to do this, in different locations. A right-click of the Datacenter or Cluster object in vCenter should present the option to rescan every ESX host. It is probably best to do this on the properties of a cluster, if you have one - since every host within a cluster will need to see the same storage. It could be regarded as "dangerous" to do a full datacenter-wide scan if your environment contains many ESX hosts and clusters.

Screen Shot 2013-11-27 at 14.12.43.png

Screen Shot 2013-11-27 at 14.24.03.png

Alternatively, refreshes and rescans of storage can be done on a per-host basis from the Storage Adapters location. Three different icons control the rescan process.

The first icon merely carries out a refresh of the overall storage system; The second icon rescans all storage adapters; and the third icon rescans just the selected storage adapter.

Screen Shot 2013-11-27 at 14.26.44.png

Once the rescan has completed you should see the new LUN/Volume mounted to the ESX host under the "Devices" tab associated with an HBA, and under the "Storage Devices" category.

Screen Shot 2013-11-27 at 15.25.36.png

Screen Shot 2013-11-27 at 15.26.49.png
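The same rescan can be triggered from the ESXi Shell. This is a sketch; vmhba2 is an example adapter name taken from the host above.

```shell
# Rescan every storage adapter on this host for newly presented LUNs/Volumes
esxcli storage core adapter rescan --all

# Or rescan just one adapter (vmhba2 is an example name)
esxcli storage core adapter rescan --adapter vmhba2

# Verify the new device is visible
esxcli storage core device list
```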

Configuring iSCSI Storage


iSCSI storage is another block-level protocol that presents raw LUNs/Volumes to the ESX host using a combination of Ethernet/TCP/IP technologies. iSCSI communications follow the client/server model, with the client often being referred to as the "Initiator" and the iSCSI storage array being the "Target". iSCSI support was added to vSphere in the 3.5 edition, and has since grown exponentially in popularity.

VMware ESX ships with its own built-in iSCSI Initiator, which uses a combination of a vSwitch and VMkernel ports. From a network perspective, basic iSCSI communications could be enabled using just a vSwitch backed with two NICs. However, a load-balanced configuration is supported which has a more convoluted configuration but pays dividends from a performance perspective. For more details consult this part of the VMUG Wiki - Creating a Teamed Standard vSwitch for Load-Balanced iSCSI communications.

As with Fibre-Channel communications, iSCSI uses a unique value to represent the inbound requests of the initiator, referred to as the IQN or iSCSI Qualified Name. This is a convention which allows a globally unique string to be configured for the iSCSI Initiator. The convention takes the form iqn.yyyy-mm.inverted-domain:alias - for example, iqn.2013-11.com.corp:esx01nyc.

Note: Notice the domain is inverted from the way it is expressed in the FQDN format. No DNS lookups are used or required by iSCSI. It's merely a convention designed to ensure uniqueness. The IQN for VMware ESX is configured when you enable the Software iSCSI Initiator. However, the first step in an iSCSI deployment is deciding upon a meaningful IQN convention that guarantees uniqueness.

Creating an iSCSI Volume in Dell EqualLogic

As with most storage vendors, the Dell EqualLogic comes with a graphical user interface called "Group Manager" - so called because it allows you to manage many arrays in a group mode. Creating a new iSCSI Volume involves giving it a friendly name, a size in GB or TB - and granting access to the LUN for at least one iSCSI Initiator. Additionally, you must enable "Allow simultaneous connections from initiators with different IQNs" to allow other VMware ESX hosts access to it.

Screen Shot 2013-11-27 at 15.52.51.png

Once the Volume has been defined, other IQNs may be added to the Access list:

Screen Shot 2013-11-27 at 15.54.28.png

Enabling VMware ESX Software iSCSI Initiator

Once a volume has been created and the access granted, we can next enable the Software iSCSI Initiator on the host itself.

1. >Hosts and Clusters >Select your vCenter >Select the DataCenter >Select the ESX host

2. Click the Manage tab

3. Select the Storage column

4. Select Storage Adapters

5. Click the green + symbol to add the Software iSCSI Adapter, and click OK at the prompt

Screen Shot 2013-11-27 at 16.25.49.png

Screen Shot 2013-11-27 at 16.27.13.png

This should add the iSCSI Software Adapter, and provide an Edit button that allows you to see the IQN parameter. By default VMware auto-generates an IQN for you, but many administrators prefer to configure their own.

Screen Shot 2013-11-27 at 16.28.55.png

6. Next set the IQN by clicking the Edit button for the iSCSI Adapter - and type in the IQN, or cut and paste it if you prefer

Screen Shot 2013-11-27 at 16.31.00.png

7. Next, under Network Port Binding, we can add the two portgroups defined for load-balancing to the list

Screen Shot 2013-11-27 at 16.33.59.png

Screen Shot 2013-11-27 at 16.35.35.png

Finally, we can configure the ESX host with the IP address of the iSCSI Target to connect to - in this case the group IP address of the EqualLogic array.

8. Select the Target tab, and ensure Dynamic Discovery is selected - and click the Add... button.

Screen Shot 2013-11-27 at 16.38.58.png
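For reference, the whole of the above can also be done from the ESXi Shell with esxcli. This is a hedged sketch: the adapter name (vmhba33), VMkernel ports (vmk1/vmk2), IQN and target IP are examples standing in for the values in your environment.

```shell
# Enable the Software iSCSI Adapter
esxcli iscsi software set --enabled=true

# Confirm the adapter name that was created (typically vmhba3x)
esxcli iscsi adapter list

# Set a custom IQN rather than the auto-generated one (example IQN)
esxcli iscsi adapter set --adapter=vmhba33 --name=iqn.2013-11.com.corp:esx01nyc

# Bind the two iSCSI VMkernel ports used for load-balancing (vmk1/vmk2 assumed)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Add the iSCSI Target IP for Dynamic Discovery (example address)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.3.69:3260
```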

Rescanning the iSCSI Software Adapter in vCenter

Once a LUN/Volume has been presented, all that is required on the VMware ESX host is a "rescan" of the storage. Indeed, during the configuration of the iSCSI Initiator you may receive prompts to this effect. These can be safely ignored until your configuration is complete.

There are a number of ways to do this, in different locations. A right-click of the Datacenter or Cluster object in vCenter should present the option to rescan every ESX host. It is probably best to do this on the properties of a cluster, if you have one - since every host within a cluster will need to see the same storage. It could be regarded as "dangerous" to do a full datacenter-wide scan if your environment contains many ESX hosts and clusters.

Screen Shot 2013-11-27 at 14.12.43.png

Screen Shot 2013-11-27 at 14.24.03.png

Alternatively, refreshes and rescans of storage can be done on a per-host basis from the Storage Adapters location. Three different icons control the rescan process.

The first icon merely carries out a refresh of the overall storage system; The second icon rescans all storage adapters; and the third icon rescans just the selected storage adapter.

Screen Shot 2013-11-27 at 16.42.51.png

Once the rescan has completed you should see the new LUN/Volume mounted to the ESX host under the "Devices" tab associated with an HBA, and under the "Storage Devices" category.

Screen Shot 2013-11-27 at 16.43.45.png Screen Shot 2013-11-27 at 16.45.50.png

Managing CHAP Settings in vCenter


Once an iSCSI TCP session has been established to the storage array, it persists. Even if the permissions in the ACL on the LUN/Volume are changed, it appears these are not checked until the next time the LUN/Volume is mounted. Merely rescanning the storage is insufficient to validate that the settings for CHAP have been correctly applied. Many would recommend a full reboot of the VMware ESX host to verify CHAP has been correctly configured for the first time. Once correctly configured, so long as all the hosts are configured in the same way, you can rest assured that security is in place as expected. This occurs because once an iSCSI session is established (i.e. a login occurs), it remains in place until someone logs out (the target or the initiator). Once you create a connection to the volume on the array, if you then change the access controls, nothing will happen until a logout and an attempted login occur, at which point the access controls will be checked. From ESXi, under the iSCSI initiator, it is possible to delete these Static Paths - and then attempt a rescan, which triggers a full login and discovery of the iSCSI connections. You can look at these Static Paths as a "cached session" that removes the necessity to authenticate every time an iSCSI transaction takes place.

In addition to the iSCSI IQN, it's possible to enable another level of security for accessing iSCSI volumes using the Challenge-Handshake Authentication Protocol (CHAP). It is possible to have both Target CHAP and Client CHAP, where both the Storage Array and the Client (in this case the ESX host) each provides a shared password or "secret" to allow them to mutually authenticate to each other. Generally, CHAP is not widely used in vSphere environments, as many people consider a private dedicated network sufficiently secure. However, IQN conventions are quite easy to guess, and once a system has a valid IP address to reach the storage there is really nothing stopping a rogue administrator from potentially mounting that storage on any client with a suitable iSCSI stack installed.

CHAP authentication settings do allow for "negotiated" authentication, by which authentication is attempted but not made mandatory. This can be useful in a range of environments where different levels of CHAP authentication are supported. Typically the Target-side CHAP username and password are configured on the array. Once the CHAP username/password has been established on the array, the CHAP username is specified as a requirement on a LUN/Volume alongside a valid iSCSI IQN.

Screen Shot 2013-12-11 at 09.11.42.png

Screen Shot 2013-12-11 at 09.13.41.png

On the VMware ESX host there are two places where authentication may be handled - on the iSCSI Adapter itself or in the Target IP configuration. The iSCSI Adapter is the default location. This assumes the same CHAP password is used to access all the iSCSI Storage Arrays in your environment. Such a configuration may be regarded by some as inherently insecure - once the CHAP password is known for a specific volume, it is known for all the storage arrays.


1. Navigate to the ESX host, Select the Manage Tab and Storage Column

2. Select Storage Adapters, and select the iSCSI Software Adapter

3. Select the Properties Tab and Click the Edit button, under Authentication

Screen Shot 2013-12-10 at 14.59.33.png

4. In the Authentication Method pull-down list select the authentication model desired

In this case the dialog box presents four options. Three are unidirectional (from the ESX host to the iSCSI Target) and the fourth is bidirectional (a CHAP password exists on the ESX host and on the iSCSI Target - such as VMw@re1! and ESXho$t). The notes below are taken from the online VMware documentation.

Screen Shot 2013-12-10 at 15.34.41.png

  • Use unidirectional CHAP if required by target - in this case the ESX host prefers a non-CHAP connection, but can use a CHAP connection if required by the target
  • Use unidirectional CHAP unless prohibited by target - in this case the ESX host prefers CHAP, but can use non-CHAP connections if the target does not support CHAP
  • Use Unidirectional CHAP - in this case ESX requires successful CHAP authentication. The connection fails if CHAP negotiation fails
  • Use bidirectional CHAP - in this case the ESX host sends the Target password, and the Storage Array sends the Client password - if both match then the iSCSI volume is mounted.

Use Unidirectional CHAP is the most secure method. It means any LUN/volume on the storage array that does not have both an IQN and a CHAP name configured will not be presented to VMware ESX. The other options allow ESX to negotiate a lower level of authentication, and potentially gain access to a LUN/Volume merely by IQN alone. This does require the storage administrator to consistently set a CHAP name for every LUN/volume created for ESX. You may find these different ways of handling CHAP confusing, but they are a reflection of the many different ways that CHAP can be configured on various storage arrays.

5. Clear the tick next to Use Initiator Name, and input the CHAP Username configured on the storage array - in this case vmware

6. In the Secret field type the password configured at the Storage Array - in this case VM@re1!

Screen Shot 2013-12-11 at 09.31.32.png
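Adapter-level CHAP can also be set from the ESXi Shell. A hedged sketch follows - the adapter name, username and secret are examples and must match what the array expects.

```shell
# Set unidirectional CHAP at the adapter level, required for all targets
# (vmhba33, "vmware" and the secret are example values)
esxcli iscsi adapter auth chap set --adapter=vmhba33 --direction=uni \
    --level=required --authname=vmware --secret=VMw@re1!

# Review the resulting CHAP configuration
esxcli iscsi adapter auth chap get --adapter=vmhba33 --direction=uni
```

The --level option also accepts prohibited, discouraged and preferred, mirroring the four choices in the web-client dialog.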

To Set CHAP on a Per-Target Basis

1. Navigate to the ESX host, Select the Manage Tab and Storage Column

2. Select Storage Adapters, and select the iSCSI Software Adapter

3. Select the Targets tab and click Authentication

Screen Shot 2013-12-10 at 15.06.23.png

4. Remove the tick next to the option to Inherit settings from parent - vmhbaNN

5. Clear the tick next to Use Initiator Name, and input the CHAP Username configured on the storage array - in this case vmware

6. In the Secret field type the password configured at the Storage Array - in this case VM@re1!

Screen Shot 2013-12-11 at 09.36.11.png

Managing VMFS Volumes

Formatting VMFS volumes

VMFS is VMware's file system, and it is used primarily with devices that present themselves as block SCSI devices, such as disks/LUNs/Volumes attached to a local RAID controller, or FC or iSCSI storage arrays. It has been in existence for many years, and comes in two main flavours: VMFS3 and VMFS5. The current edition of vSphere supports both versions for backwards compatibility; this allows different generations of ESX host to reside in the same vCenter, and allows for ease of movement between the older VMFS3 format found in versions 3 and 4 of vSphere and the VMFS5 format found in vSphere 5.x and later. If you require a VMFS datastore greater than 2TB in size you must use VMFS5.

VMFS is by design a clustered filesystem that allows many ESX hosts within the same cluster to access the same shared storage. This is often a requirement for advanced features such as vMotion, High Availability (HA) and the Distributed Resource Scheduler (DRS).

1. >Hosts and Clusters >Select your vCenter >Right-Click the DataCenter or Cluster

2. Select New DataStore and select VMFS

Screen Shot 2013-11-29 at 13.04.45.png

3. Type in a friendly name for the datastore, and select an ESX host to view the disks/LUNs. Next select the LUN you wish to format

Screen Shot 2013-12-06 at 16.12.57.png

4. Next select the VMFS version

Screen Shot 2013-12-06 at 16.30.06.png

5. Next select whether you wish to use all of the available partition space - in most cases you just accept this default to use all the capacity

Screen Shot 2013-12-06 at 16.31.31.png

6. Click Next and Finish

Note: One way of validating that all the hosts have access to the datastore, is using the Connectivity and Multipathing view that appears on the properties of a datastore once it has been mounted by the hosts. This will allow you to confirm that the datastore is available to all the hosts in a given cluster for example.

Screen Shot 2013-12-10 at 12.33.41.png
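From the ESXi Shell, the newly formatted datastore can be verified with esxcli. A short sketch; run it on each host in the cluster to confirm they all see the datastore.

```shell
# List mounted filesystems - a freshly formatted VMFS5 datastore
# should appear with its label, UUID, version and capacity
esxcli storage filesystem list

# Show which device extent(s) back each VMFS datastore
esxcli storage vmfs extent list
```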

Increasing the size of a VMFS Volume

Ever since VMware introduced vSphere 4.1 it has been possible to increase the size of a VMFS datastore once the underlying LUN/Volume has been increased. This can be useful in circumstances where the LUN/Volume is becoming full and/or was undersized at its initial creation. The process begins by using the storage vendor's management tools to increase the size of the LUN/Volume.

Screen Shot 2013-12-10 at 11.33.54.png

Screen Shot 2013-12-10 at 12.08.03.png

Once the LUN/Volume has been re-sized, perform a rescan of the affected hosts so they are aware of the increase in space, then use the "Increase" button to grow the VMFS datastore.

1. Right-click a Datacenter/Host/Cluster, Select All vCenter Actions and choose the Rescan Storage

Screen Shot 2013-12-10 at 12.12.32.png

2. Next locate the datastore in the inventory - and in the Manage Tab and Settings Column - click the Increase button

Screen Shot 2013-12-10 at 12.16.53.png

3. In the subsequent window you should see that the LUN/Volume displays the new size, and Expandable is marked as having a state of Yes. This indicates that it is possible to grow the VMFS volume.

Screen Shot 2013-12-10 at 12.19.35.png

4. In the Specify Configuration dialog box, from the pull-down for Partition Configuration switch to Use Free Space to expand this datastore

Screen Shot 2013-12-10 at 12.21.39.png

5. Clicking Next and Finish should trigger the process by which the VMFS partition grows to use the new free space that was allocated earlier
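The same grow operation can be sketched from the ESXi Shell with vmkfstools, which takes the extent partition twice (head extent and the partition to grow into). The naa ID below is an example; find yours with esxcli storage vmfs extent list. This assumes the partition itself already spans the enlarged LUN.

```shell
# Rescan first so the host sees the larger LUN
esxcli storage core adapter rescan --all

# Grow the VMFS datastore into the new free space on its extent
# (the naa ID and partition number are examples)
vmkfstools --growfs /vmfs/devices/disks/naa.60060000000000000000000000000001:1 \
           /vmfs/devices/disks/naa.60060000000000000000000000000001:1
```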

Upgrading VMFS3 to VMFS5 Volumes

VMware ESX 5.x supports VMFS3 to allow for backwards compatibility with the older versions of ESX that support this legacy version of the file system. The file system can be upgraded from VMFS3 to VMFS5 in place, without any need to shut down VMs. This isn't the only method to decommission a VMFS3 datastore. Upgrade paths also allow the SysAdmin to use VMware's Storage vMotion feature to relocate the files of a VM from a VMFS3 to a VMFS5 datastore. Once the old VMFS3 volume is empty, the datastore can be deleted and the storage handed back to the storage pool on the array in question, or the old VMFS3 datastore can be destroyed and a new VMFS5 volume created. The critical consideration is ensuring there are no legacy hosts left in the environment which may try, and fail, to connect to a VMFS5 volume. VMware supports backwards compatibility, not forwards compatibility.

1. Select the datastore in the Inventory

2. Select the Manage Tab, Choose the Settings Column and the General options

3. Click the button to Upgrade to VMFS 5...

Screen Shot 2013-12-10 at 13.55.08.png

Note: This button only appears on VMFS3 datastores. Notice the version of VMFS is 3.60.

4. In the Upgrade to VMFS 5... dialog box select the datastore to be upgraded and click OK

Screen Shot 2013-12-10 at 13.56.54.png
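The in-place upgrade is also exposed through esxcli. A hedged sketch - the volume label is an example:

```shell
# Check the current VMFS version of each mounted volume
esxcli storage filesystem list

# Upgrade a VMFS3 volume to VMFS5 in place (label is an example)
esxcli storage vmfs upgrade --volume-label=NYC_Datastore01
```

As with the web-client method, the upgrade is one-way; there is no downgrade back to VMFS3.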

Datastore Folders and Datastore Names

Datastore Folders: Once you have a significant number of datastores it is possible to create a folder structure for holding them. This can be useful if you want to exclude access to certain types of storage based on permissions, as datastore folders, like any object in vCenter, can hold a permissions ACL. You can see datastore folders in the same context as folders in the VM and Template views. It merely allows you to lay out your inventory in a logical manner. They can also be useful when running PowerCLI reports, as they give you an object type to filter on. You could create folders that represent your different storage types (local, remote, FC, iSCSI, NFS) or the systems that have access (NYC-Gold01 Cluster, NYC-Silver01 Cluster and so on) - the choice is yours.

1. In the web-client click the Storage icon

Screen Shot 2013-12-18 at 19.20.52.png

2. Right-Click the Datacenter object, and under All vCenter Actions, select New Storage Folder

Screen Shot 2013-12-18 at 19.27.19.png

3. Specify a friendly name for the Storage Folder

Screen Shot 2013-12-18 at 19.28.13.png

4. In the Related Objects tab and Top Level Objects column, select the datastores and drag and drop them to the folder. It is also possible to use the Datastores column to filter objects to show datastores only.

Screen Shot 2013-12-18 at 19.31.05.png

Datastore Names: When a VMFS volume or NFS datastore is mounted it must have a name that is unique to the host and also to the wider cluster/vCenter environment. vSphere does a good job of validating the uniqueness of datastore names. However, with local storage you might observe a serialization process that looks like this:

Screen Shot 2013-12-18 at 19.41.02.png

This is caused by the default installation to local storage creating a VMFS volume called "datastore". When multiple VMware ESX hosts are added to vCenter, this naming clash is detected and vCenter changes the datastore name, adding (1), (2), (3) for each host added into the environment. Many administrators find this unsightly and find it difficult to identify which "datastore" belongs uniquely to which VMware ESX host. The "Related Objects" view can be useful in assisting with properly identifying the datastore, and to which VMware ESX host it is mounted.

Screen Shot 2013-12-18 at 19.45.20.png

To rename, right-click the datastore and select Rename.

Screen Shot 2013-12-18 at 21.10.05.png

Screen Shot 2013-12-18 at 21.14.57.png

Managing Multi-Pathing Settings in vCenter

Types of Policy and Modifying the Policy


You should not arbitrarily change the PSP unless directed to do so by your storage vendor or without at least consulting with the vendors documentation. Changing the policy to one that is inappropriate or unsupported can cause outages as the VMware ESX host could potentially not detect a failed path. Use the right policy for the right type of array.

Once an ESX host has more than one connection to a datastore, be that via fibre-channel or iSCSI, the LUN/Volume that backs that datastore will present multi-pathing options. VMware ESX ships with its own multi-pathing policies, although additional policies are configurable by adding third-party load-management plug-ins such as EMC's PowerPath. When a LUN/Volume is presented to VMware ESX, an attempt is made to identify the storage vendor and assign a Storage Array Type Plug-in (SATP); from that, the correct Path Selection Policy (PSP) is assigned.

VMware currently supports three different Path Selection Policies:

  • Fixed

Typically, Fixed is used with arrays that have two or more controllers that can each actively take load. Historically, this has been used to create a manually load-balanced environment where each LUN/volume has its own dedicated fixed (and preferred) path to the storage. The default behaviour of this PSP is just to use the first path found when scanning the PCI bus. This can mean one path is being used all the time if the configuration has not been reviewed. By setting a "Preferred" path, if the current active path fails for whatever reason then failover will occur to one of the alternative paths. When the preferred path becomes available again, traffic will return to the configured or preferred path. This can cause problems when the preferred path is suffering an intermittent failure.

  • Most Recently Used

MRU is typically used with arrays that possess active/passive controllers, where the passive node is merely standing by to take over should the active controller fail. Generally, LUNs/Volumes are assigned a controller in an attempt to balance the IO load across two or more controllers in the array. If a failure does occur then the path will fail over. However, when the failed path becomes available again this policy does not automatically return the path to its original location. This is how MRU gets its name - it always uses the most recently used path to the storage. This can be helpful if a path has become unreliable. However, there are some cases where one controller ends up owning all the LUNs/Volumes and can become over-worked as a consequence.

  • Round Robin

This is perhaps the best method of distributing load across all the interfaces in the server, and all the controllers in the array. However, it may require additional configuration including:

  • Correct firmware on the Storage Array, with the feature enabled
  • Additional software to be installed to the VMware ESX host
  • A high-level of licensing of the vSphere platform
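The SATP and PSP assignments described above can also be inspected and changed from the ESXi Shell. A short sketch; the device ID is an example, and (per the warning earlier) only change the PSP when your storage vendor supports it.

```shell
# Show the SATP and PSP currently assigned to each device
esxcli storage nmp device list

# Change the PSP for one device to Round Robin (device ID is an example)
esxcli storage nmp device set --device naa.60060000000000000000000000000001 \
    --psp VMW_PSP_RR

# Or change the default PSP that a given SATP assigns to new devices
esxcli storage nmp satp set --satp VMW_SATP_EQL --default-psp VMW_PSP_RR
```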

The current multi-pathing policy on a given datastore can be viewed and modified by:

1. Selecting the ESX host >Manage Tab >Storage Devices >Select LUN/Volume

2. Click the Edit Multipathing button

Screen Shot 2013-12-06 at 15.17.53.png

Note: In this case the policy is Fixed (and preferred path) on an FC-connected system using the VMW_SATP_DEFAULT_AA SATP policy. The * indicates the preferred path to the storage, and Active (I/O) indicates the path currently in use. Two HBAs plugged into two different FC switches, in turn plugged into two different controllers on the storage array, present four paths to the storage. Here vmhba3 is speaking to the array via the second controller (T2) to LUN number 10.

This preferred path can be modified by merely selecting a different path to the storage - so be careful in your clicking, or choose Cancel if you are merely viewing information. In this way it is possible to manually distribute load to make sure that one path is not over-utilized. This is referred to as setting the "Logical Unit Policy".

Screen Shot 2013-12-06 at 15.35.12.png

In this second example VMware ESX is using a Dell EqualLogic array for iSCSI with the VMW_SATP_EQL SATP policy, and has been enabled for the Fixed policy:

Screen Shot 2013-12-06 at 15.27.57.png

Note: In this case the HBA is the VMware ESX Software Initiator using the vmhba34 alias. Two paths are enabled using two separate VMkernel ports. The active and preferred path is using controller number one, or C1.

Finally, paths can be disabled from the Paths Tab. This can be useful if you have a storage array controller or switch that is displaying an intermittent fault, and causing unwanted failovers.

Screen Shot 2013-12-06 at 15.48.36.png

Vendor Support on Path Selection Policies

Below are some example vendor statements based on the configuration of their various storage arrays. In this case they come from NetApp and Dell, and they refer to specific setups. These are intended as examples of how different vendors implement support for different methods of load-balancing.


NetApp

In recent versions of Data ONTAP, NetApp has been recommending an access method called "Asymmetric Logical Unit Access", or ALUA. Duncan Epping describes ALUA thus:

ALUA means that you can see any given LUN via both storage processors as active, but only one of these storage processors "owns" the LUN, and because of that there will be optimized and unoptimized paths. The optimized paths are the ones with a direct path to the storage processor that owns the LUN. The unoptimized paths have a connection with the storage processor that does not own the LUN, and have an indirect path to the storage processor that does own it via an interconnect bus.

In fact, in clustered Data ONTAP, ALUA is all that's supported across the three main storage protocols: FC, FCoE and iSCSI. If the customer has configured ALUA support for the protocol in use, the Round Robin path selection policy should be chosen. Without ALUA support NetApp recommends the Fixed path selection policy - and still does for 7-Mode with iSCSI. ALUA is typically enabled in the properties of the Initiator Group.
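You can confirm which SATP and PSP a host will actually apply with ESXCLI. Both commands below are read-only and safe to run; the output depends entirely on the array and the claim rules in place:

```shell
# Show each installed SATP and its current default PSP
esxcli storage nmp satp list

# Show the SATP and PSP actually claimed for each device
esxcli storage nmp device list
```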

Screen Shot 2013-12-10 at 11.16.03.png

NetApp does not require its own PSP or SATP, since vSphere 5.0 and higher ships with a specific NetApp SATP rule and a corresponding PSP setting. In Data ONTAP, FC target queues are immediately moved to NVRAM, so it doesn't make a whole lot of difference which path you come in on. Having said that, people have seen some performance improvements by lowering the IOPS-before-path-switch value from the default of 1000 to 100 (or even lower in some cases).
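As a sketch, the IOPS-before-path-switch value for a Round Robin device can be lowered with ESXCLI; the naa. identifier below is a placeholder for one of your own devices:

```shell
# Switch paths after every 100 IOs instead of the default 1000
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.60a98000486e2f65686f647876573362 \
    --type iops --iops 100
```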

For further information consult this blogpost by Peter Learmonth

There is one caveat: virtual machines running Microsoft Cluster Service (MSCS) present a special case. Prior to vSphere 5.5, VMware didn't support ALUA for MSCS RDMs. Some early documentation (including NetApp best practices) recommended MRU, but Fixed is actually preferred pre-5.5. In addition, only FC was supported. With 5.5, VMware announced support for iSCSI and FCoE as well as ALUA, so the same SATP and PSP can now be used for VMFS, general RDMs and MSCS RDMs.

Dell EqualLogic

Dell has two models for managing load-balanced connections to their EqualLogic storage arrays. Customers with the right level of vSphere license can use the Multi-Path Extension Module (MEM). Customers on lower-level SKUs can configure load-balancing using the built-in Path Selection Policies in VMware ESX.

Dell Multi-Path Extension Module (MEM)

The MEM download comes as a zip file with a PDF admin guide included. There is also a separate document, TR1074 - Configuring and Installing the EqualLogic Multi-pathing Extension Module for VMware vSphere and PS Series SANs, which covers the MEM as well as explaining in detail some of the advanced iSCSI settings that should be modified as part of its installation.

The current recommendation is to use Dell's freely provided multi-pathing modules, available for ESXi 4.1 and above. Dell EqualLogic places no licensing restrictions on their use (as with all Dell EqualLogic software, as long as the customer has a current support contract they get access to firmware updates and server-side software); however, VMware does tie the APIs used to Enterprise and Enterprise Plus. In short, what these multi-pathing modules do is direct the IO down the least-queue-depth path to the controller holding that data block - a Dell EqualLogic pool can contain up to eight active controllers. The MEM can be deployed in a number of different ways. It ships as a vSphere Installation Bundle (VIB) and can be pushed out using:

  • Command-line tools, using Dell's setup script with the VMware vCLI or the vSphere Management Assistant
  • VMware Update Manager
  • Incorporating the VIB into a VMware ESX Host image/ISO for use at install time or AutoDeploy

This part of the vmWIKI focuses on the command-line methods:

The setup script can be used to install the MEM bundle and also complete the network and iSCSI configuration. In this case the bundle is uploaded to a storage location accessible to the VMware ESX hosts, and the script is then invoked remotely. Using the following sample syntax, for instance, the esx02nyc host installs the bundle located on the datastore called "datastore1". In this case the VMware CLI (vCLI) is used to make a remote connection to the VMware ESX host to initiate the installation. --install --username=root --password=Password1 --datastore=datastore1

Screen Shot 2013-12-12 at 16.39.40.png

Note: The script supports a reboot flag (--reboot), but a reboot should not be required for a clean installation.

If you prefer, you can develop your own script for installing and configuring the MEM. This may be because the networking and iSCSI initiator are already enabled and configured, and you merely wish to install the bundle with the correct parameters.

Note: These commands are executed directly at the ESX host or invoked during a scripted installation of VMware ESX.

# Enter maintenance mode before installing the bundle
esxcli system maintenanceMode set --enable true
# Raise the iSCSI login timeout on the software initiator
esxcli iscsi adapter param set --adapter=vmhba34 --key=LoginTimeout --value=60
# Disable delayed ACK on the initiator (legacy vmkiscsi-tool syntax)
vmkiscsi-tool -W -a delayed_ack=0 -j vmhba34
# Install the MEM bundle from the datastore
esxcli software vib install --depot /vmfs/volumes/datastore1/
# Exit maintenance mode
esxcli system maintenanceMode set --enable false

Dell has configured the script so it handles both the networking and some additional iSCSI parameters (the LoginTimeout and DelayedAck iSCSI settings). The script can be run non-interactively (where the administrator supplies all the parameters on the command line) or interactively (where the administrator is asked questions during the configuration itself). If the VMware ESX hosts are already part of vCenter they should be placed into maintenance mode before running the script. Additionally, if lockdown mode has been enabled on the host, it should be temporarily disabled during the installation itself. For a clean first-time install of the MEM a reboot of the VMware ESX host will not be needed; however, when upgrading from one version to another a reboot is required.

The following command line can be used to perform the network configuration. If you are not using CHAP authentication in your iSCSI implementation you can simply omit those flags from the script.

esxcli --username=root --password=Password1 system maintenanceMode set --enable true
--configure --bestpractices --username=root --password=Password1 --vswitch=vSwitch0 --mtu=9000 --nics=vmnic0,vmnic1 --ips=, --netmask= --vmkernel=IP-Storage --nohwiscsi --enableswiscsi --groupip= --chapuser=vmware --chapsecret=VM@re1!
Do you wish to proceed with configuration? [yes]:

Screen Shot 2013-12-12 at 16.29.38.png

Note: The script does support a --heartbeat= parameter, but this is no longer required in vSphere 5.1 and later. Whilst the script will enable the VMware Software iSCSI Initiator, it does not currently support configuration of the IQN. That can be configured using this ESXCLI command:

esxcli --username=root --password=Password1 iscsi adapter set -A vmhba34 -n

After the installation you should find that all devices presenting the name "EQLOGIC iSCSI Disk" have been reconfigured to use the Dell EqualLogic MEM, like so:

Screen Shot 2013-12-12 at 13.00.48.png

Round Robin with the Default SATP/PSP

For customers without the required VMware licenses, Dell recommends using the built-in Round Robin path selection policy that ships with VMware ESX, and lowering the IOs per path. The best value for the IOs-per-path setting seems to be open to debate, but most partners recommend values much lower than 1000. As the MPIO vSwitch configuration is the same for both policies, the configuration script Dell provides with the EqualLogic module can be used for a host being set up for Round Robin. In this scenario merely setting up the vSwitches to support load-balancing is not enough. You will find that VMware defaults to the "Fixed" PSP, which means only one connection will be used, with the second merely taking over in the event of a failure. Dell has a specific document, TR1091 Best Practises with Equallogic and VMware, which covers what is required to make vSwitch/Round Robin load-balancing work effectively. It outlines the use of the ESXCLI command to reconfigure the PSP parameters.

This script run on each ESX host reconfigures existing Dell Equallogic Volumes with the correct parameters:

# Set Round Robin as the default PSP for the EqualLogic SATP, then
# reconfigure each existing EQLOGIC device for Round Robin with a
# path switch every 3 IOs
esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_EQL
for i in `esxcli storage nmp device list | grep EQLOGIC | awk '{print $7}' | sed 's/(//g' | sed 's/)//g'`
do
    esxcli storage nmp device set -d $i --psp=VMW_PSP_RR
    esxcli storage nmp psp roundrobin deviceconfig set -d $i -I 3 -t iops
done

The script above should output the result "Default PSP for VMW_SATP_EQL is now VMW_PSP_RR".

This second command makes the configuration the default for new Dell EqualLogic volumes:

esxcli storage nmp satp rule add -s "VMW_SATP_EQL" -V "EQLOGIC" -M "100E-00" -P "VMW_PSP_RR" -O "iops=3"

Note: A reboot of the ESX host is required to make this effective

To confirm the settings are in place you can run:

esxcli storage nmp device list

Before: This graphic shows the state of one of the Dell Equallogic volumes called "naa.6090a078c0f5fedd6329d5399e012057". Notice that the PSP is "VMW_PSP_FIXED"

Screen Shot 2013-12-11 at 11.02.26.png

After: This graphic shows the state of one of the Dell Equallogic volumes called "naa.6090a078c0f5fedd6329d5399e012057". Notice that the PSP is "VMW_PSP_RR" and the PSP Device Configuration presents the settings provided earlier

Screen Shot 2013-12-11 at 11.04.24.png

Mounting NFS Volumes


NFS is a popular protocol on Linux and is commonly found on multi-protocol arrays. It is very simple to set up on the ESX host. All that's required is a valid IP address on the host and the IP address of the array, together with the export or share path. Despite the existence of FC and iSCSI storage, many virtualization admins like using NFS because it is sometimes easier to present storage. Generally, each cluster is allocated its own chunk of LUNs/volumes, and these are not visible to other clusters at the site. This separation/segmentation or siloing reduces the chance of conflicts and the chance of one cluster affecting another - though it can also make it harder to move VMs from one cluster to another, because they lack a common shared storage location. Additionally, NFS can be useful as ancillary shared storage for such items as templates or a store of .ISO images or software.

NFS exports need to be set up with the "no root squash" property, which allows server-to-server connections with full access to the NFS file system. At no stage does the ESX host need to know the "root" account password on the storage array.
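On a generic Linux/Unix NFS server the export would look something like the line below; the path and subnet are illustrative placeholders, and array vendors such as NetApp expose the same options through their own management tools rather than /etc/exports:

```shell
# /etc/exports - grant the ESX hosts' subnet read/write access without root squashing
/vol/templates 192.168.3.0/24(rw,no_root_squash,sync)
```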

Creating an NFS volume on NetApp

NetApp is an extremely popular NFS vendor, and arguably has pioneered the mainstream adoption of the protocol in modern datacenters. NFS Volumes and Exports can be created through the OnCommand System Manager. When a NetApp Volume is defined in System Manager, the storage administrator has the ability to indicate how it will be accessed.

Screen Shot 2013-11-29 at 14.01.42.png

Once the volume has been defined, permissions need to be adjusted to allow the VMware ESX hosts access. In the example below, every host with a matching address has been granted read/write/root access:

Screen Shot 2013-11-29 at 14.25.50.png

Mounting an NFS volume to VMware ESX Hosts

1. >Hosts and Clusters >Select your vCenter >Right-Click the DataCenter or Cluster

2. Select New DataStore

Screen Shot 2013-11-29 at 13.04.45.png

3. Click Next to accept the location

4. Select NFS as the type

Screen Shot 2013-11-29 at 13.06.35.png

5. Next specify the details of the NFS mount process by typing a friendly name for the datastore; the IP address of the NFS service and the path to the export, in this case /vol/templates.

Screen Shot 2013-11-29 at 13.07.21.png

Note: Different NFS providers use different path syntax. For instance, NetApp defaults to /vol/<exportname>, whereas an Iomega NAS device would commonly use /nfs/<exportname>, and FreeNAS typically uses /mnt/<exportname>. NFS datastores can be mounted in read-only mode. This can be helpful where other methods are used to populate the share with content. You may prefer that template and .ISO datastores be made read-only to prevent virtualization administrators from accidentally using them to store production virtual machines.
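The same mount can also be performed from the command line. A sketch assuming a hypothetical NFS server at 172.16.1.100 exporting /vol/templates:

```shell
# Mount the export as a datastore named "templates"
esxcli storage nfs add --host 172.16.1.100 --share /vol/templates --volume-name templates

# Confirm the mount
esxcli storage nfs list
```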

6. Next, select which ESX hosts will mount the NFS datastore. Typically every host in the same VMware HA/DRS cluster would have access to the same storage. It is less common for datastores to be made available across clusters.

Screen Shot 2013-11-29 at 13.07.40.png

NFS volume Advanced Settings

By default the maximum number of mounted NFS volumes is set to 8. If you wish to mount more than 8 NFS volumes this is possible, but an advanced setting called NFS.MaxVolumes needs to be modified for this to work. The maximum supported number of mount points is 256.

1. Select the ESX host, and click the Manage Tab and Settings column

2. Next select Advanced System Settings, and use the Filter option to show only NFS settings

3. Locate the parameter NFS.MaxVolumes, click the pencil icon to Edit the value

Screen Shot 2013-12-19 at 13.29.32.png
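The same setting can be changed from the command line; a sketch using ESXCLI:

```shell
# Read the current value
esxcli system settings advanced list --option /NFS/MaxVolumes

# Raise the limit to 64 mounted NFS volumes (maximum supported is 256)
esxcli system settings advanced set --option /NFS/MaxVolumes --int-value 64
```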

Storage Vendor Plug-ins: EMC VSI (TBA Placeholder)

Storage Vendor Plug-ins: Dell EqualLogic Virtual Storage Manager (VSM)

Dell EqualLogic provides a plugin that integrates deeply with VMware's vCenter management system, providing not just status and monitoring information about the storage, but enabling the vSphere administrator to leverage the capabilities of the EqualLogic array to provide local and remote data protection. Dell's VSM provides over fifty role-based access controls (RBAC), enabling the creation of very granular permission levels. In some environments the storage team may prefer to limit who on the virtualization team can create new datastores, and in other environments a junior VMware admin may be permitted to create snapshots while the task of recovering a VM is limited to a senior admin.

  • Creating and mounting of new datastores
  • Resizing (enlarge only) and deleting of existing datastores
  • Creating hypervisor-consistent hardware-based snapshots to protect virtual machines
  • Recovery of groups of VMs, or selective restore of individual VMs
  • Enabling and managing replication of datastores for recovery of VMs at a DR site
  • Granular role based access controls

Dell's technology is called Dell VSM, which stands for Dell Virtual Storage Manager. Dell also provides a Site Recovery Adapter (SRA) for Site Recovery Manager (SRM) and a Multipathing Extension Module (MEM) for ESXi (covered earlier on this page) for increased performance. Dell also has a number of integration toolkits for other OS vendors; follow this link for more info

For those of you who have not yet made the change to the vSphere Web Client, Dell continues to support the previous version, VSM 3.5.x, which integrates with the legacy vSphere Client and provides much of the same functionality.

Installing and configuring VSM

Configure the vCenter Runtime IP Address

Before you import the Dell VSM appliance, the vCenter "Runtime" IP address should first be configured, if it has not yet been set. This can be configured after importing the appliance, but the appliance will not power on until it is set. It is this Runtime IP address that allows the appliance to register itself with vCenter when it first powers on.

1. Select >vCenter in the Home page, Select >vCenter Servers in the Inventory list, Select your >vCenter server

2. Select the Manage Tab, and the Settings column

3. Select General, and click the Edit button

4. In the subsequent dialog box, select the Runtime Settings option

5. In the vCenter Server Managed Address field, type the IP Address of the vCenter Server

IMPORTANT: Note that in this case the vCenter service (VPXD) needs to be restarted for the change to take effect.

Screen Shot 2013-11-29 at 16.23.03.png

Importing the Dell VSM Appliance

Now that the vCenter Runtime IP address has been properly configured the Dell VSM Virtual Appliance can be imported.

1. >Hosts and Clusters >Select your vCenter >Select the DataCenter >Select the ESX host or Cluster

2. Right-Click and Select the option to Deploy OVF Template

Screen Shot 2013-11-29 at 15.13.30.png

3. Browse to the location where you downloaded the Dell VSM .ova file.

Screen Shot 2013-11-29 at 15.14.56.png

4. Review the details associated with the .OVA file, and check the Accept the extra configuration options checkbox. The Dell VSM virtual appliance uses NTP for time synchronization; NTP can also be enabled on the array so that the array and the VSM utilize the same NTP time source.

Screen Shot 2013-11-29 at 15.15.38.png

5. Next, accept the EULA. This comes in two parts - the first acknowledges the need for the Runtime IP setting, and the second is the EULA itself.

Screen Shot 2013-11-29 at 16.44.26.png

6. Assign the virtual appliance a name, and location (Datacenter/Folder) in the vCenter Inventory

Screen Shot 2013-12-02 at 12.50.33.png

7. Select a datastore to locate the virtual appliance.

Screen Shot 2013-11-29 at 16.45.51.png

8. Next we need to configure the VSM network - the VSM needs to be configured on the same network as the vCenter Server, or a network routable to the vCenter Server. In some environments the vCenter Server and EqualLogic array management interfaces may not be accessible or routable from a single network; in such environments the VSM virtual appliance can be configured with a second network port from the VSM vCenter interface - see the product documentation or TR1101

Screen Shot 2013-12-02 at 12.45.13.png

9. Next we configure the TCP/IP settings for the VSM appliance. Leaving these fields blank defaults the appliance to using a DHCP address

Screen Shot 2013-12-02 at 12.47.59.png

10. Clicking Next and Finish triggers the import process. Notice the warning about needing to configure the vCenter Runtime IP address.

Screen Shot 2013-12-02 at 12.48.36.png

First Power On and Post-Configuration

1. After powering on the Dell VSM you should find it registers an icon in the Home page of the Web Client.

Note: On first boot it can take VSM a few minutes to become active. From here you can carry out the few post-configuration tasks required.

Screen Shot 2013-12-01 at 17.18.03.png

2. Under VSM Home and Manage, the Management Network options allow you to adjust the configuration options specified during the import process:

Screen Shot 2013-12-01 at 17.44.38.png

3. The first step in enabling the functionality of the VSM is to make it aware of your Dell EqualLogic arrays. This is done using the >Home >Getting Started >Add Group option. In the dialog box you merely need to type the IP address, Group Admin and Password values - and click Add.

Screen Shot 2013-12-01 at 17.55.10.png

4. This should register the group under the >Dell Storage >PS Groups

Screen Shot 2013-12-01 at 18.00.27.png

Manage Dell VSM Settings

The Dell VSM has its own settings, which can be configured from the VSM Home page. This allows you to modify the settings configured during the import of the appliance under Settings and Management Network, including:

  • Hostname
  • Timezone
  • NTP Servers
  • Management Network

The Storage Network options can be used to enable access to the Dell EqualLogic array if the management network does not have a route to the storage network. From the Storage Network option, click the Edit button, select the appropriate VM Network from the drop-down, and provide the IP address information. Every environment is different, so what is an appropriate VM Network for one environment may not meet the security requirements of another. By default an EqualLogic array's management traffic is on the same ports that carry iSCSI traffic; in such an environment, access to the management network from a VM just requires the addition of an "iSCSI Guest" VM Network off the iSCSI vSwitch. If management has been split from the iSCSI network on the array, then often this will be on the same management network as vCenter - though in some large, siloed organizations there may exist multiple non-interconnected management networks.

Screen Shot 2013-12-06 at 11.09.36.png

The System Schedule controls the frequency of backups, the monitoring of disk usage, and the verification of Smart Copies and Replicas. VSM Peers configurations are used as part of the configuration of replication partners, and are covered elsewhere on this page. Replication Partners shows the relationship between the various Dell EqualLogic Groups that have been "paired" together for replication purposes. vCenter Servers shows which vCenter the specific Dell VSM has been registered with. Finally, Advanced Settings controls the timeout, sleep, polling and retry values used by the Dell VSM when communicating with the Dell EqualLogic Groups and vCenter - as well as additional tasks included in various workflows.

Dell EqualLogic VASA Provider

vStorage APIs for Storage Awareness (VASA) is a collection of APIs that allow storage vendors to advertise storage capabilities from the array to vCenter. The primary purpose of this is to provide useful information to the vCenter administrator about the capabilities of the underlying storage. VASA providers can tell the administrator what RAID level is in use, the disk type, replication, thin provisioning and de-duplication. The VASA Provider data can be leveraged by the Storage Profiles feature, which allows the classification of storage into different types and tiers. Storage Profiles can leverage this VASA information, but can also use user-defined categories.

The process of importing the VSM virtual appliance into vCenter requests the majority of the configuration information; however, the VASA Provider cannot be configured as part of the import process.

Configuring the VASA Provider

1. From the Web Client right-click on the VSM and select Open Console from the context menu.

2. Log into the VSM console using the default credentials: username: root and password: eql.

3. From the Setup menu enter [1] to select Configuration.

4. From the Configuration menu enter [3] to select Configure VASA.

5. Provide a username for the account to be created for the vCenter VASA Service and the EqualLogic VASA Provider to communicate with, then press Enter.

Note: This is a unique account that is solely used by VASA Provider and VASA Service

6. Provide a password for this account, press Enter, and then re-enter the password for verification. Enter [Y] to proceed with these settings.


7. The VSM VASA provider will then communicate with the VASA service on the vCenter server, and the VASA service will register with the EqualLogic VASA Provider on the VSM. This process will take approximately 2 minutes. Once complete the EqualLogic VASA Provider will be listed under Storage Providers.


Changing the root account password

The VSM virtual appliance, like most virtual appliances, is configured with a default password; this can be changed from the VM's console.

1. From the Configuration menu enter [4] to select Change root password.

2. At the prompt enter the new password, and then press Enter.

3. Re-enter the password to verify, and then press Enter.

4. The password will now be changed, press Enter to return to the main setup menu.

Using VSM for datastore management

Creating and Assigning a Datastore

Once the Dell EqualLogic array has been registered with vCenter, you can use it to create new datastores - this simultaneously builds the iSCSI volume on the array and assigns the correct IQN and access privileges.

1. In the VSM >VSM Inventory >Datastores - click the Datastore icon with green plus symbol

Screen Shot 2013-12-01 at 19.27.30.png

2. Next type a friendly name for the datastore, together with a VMFS Version/Type - and select the datacenter where it will be located.

Screen Shot 2013-12-01 at 19.32.06.png

3. Select the VMware cluster or ESX host that the storage will be mounted to:

Screen Shot 2013-12-01 at 19.38.05.png

4. Next you can define the Datastore Location and Size:

20140319 194755.png

Note: This form allows you to select which Group and Storage Pool to create the volume from. Thin Provisioning allows you to define a volume that may be presented as 1TB, but actually grows on demand. The Thin Provision Stun feature allows vSphere to detect that a VM is running out of disk space on a thinly provisioned volume, and halt it before it runs out of free space. From this interface it is possible to define multiple datastores and their size in MB, GB and TB. The Snapshot Reserve is a quantity of disk space that is held back for the volume's snapshots. By default this is set at 100% - so a 1TB volume would have a 1TB snapshot reserve, consuming 2TB of disk space in total. This assumes the possibility that every block on the disk could change, and therefore a snapshot could contain the complete contents of the volume. This is unlikely, but theoretically possible. It's also possible that a volume could experience a very low rate of disk churn, in which case a smaller reserve would be suitable. Snapshot Reserve Warnings alert the storage administrator to the possibility of running out of snapshot reserve space. If insufficient reserve is allocated, and a large number of blocks change, there may be insufficient space to snapshot a volume. Finally, Enable Snapshot Space Borrowing allows one volume to borrow from the snapshot reserve of another volume, or from the array's free space. For instance, two volumes may have large reserves (100% each), but one of them could be using only 1% of its allocation while the other has already consumed its reserve. If enabled, and if this volume exhausts its snapshot reserve, it can borrow from another volume's snapshot reserve, or from the free pool space, instead of deleting its oldest snapshot.

5. Next select the Access Control Policy. By default VSM can enumerate the IQNs configured on the ESX hosts, and use them to auto-generate access rights. This saves a great deal of time and effort in administration.

Screen Shot 2013-12-01 at 20.01.07.png

6. Next you can optionally assign a Snapshot Schedule, which controls how frequently snapshots are taken and how long they are retained for:

Screen Shot 2013-12-02 at 12.13.21.png

Note: Snapshot Schedules are assigned a name per volume, and can be configured to snapshot the volume at specified frequencies - covering many variables (day, time, time of day) - and these can in turn be saved as "templates" to reduce further configuration and ensure consistency. Snapshots can be aged out using the Keep Count value. Other advanced options include the ability to dump the contents of virtual machine memory to ensure a consistent snapshot. You should be careful with this parameter as it can affect the performance of the VM.

7. Optionally, you can also enable replication for the volume - this is covered later in this vmWIKI page.

8. Clicking Next and Finish starts the process of provisioning the volume on the array and presenting it to the selected ESX hosts.

Note: You can monitor the progress of the datastore creation via a small pop-up window in the vSphere Web Client, and from the "Jobs" node in the Dell VSM itself.

Screen Shot 2013-12-02 at 12.42.36.png

Resizing a Datastore using Dell VSM

Once a datastore has been created, it can be re-sized using the tools provided by VSM.

1. Under >VSM Inventory >Datastores select the datastore

2. Click the Resize Datastore icon

Screen Shot 2013-12-02 at 22.19.12.png

3. Increase the size of the datastore by increasing the increment for the volume size.

Screen Shot 2013-12-02 at 22.24.33.png

Note: Resizing a datastore (along with other common management tasks) can also be carried out from the properties of the datastore itself. Navigate in the Dell VSM to >Datastores, >Select your datastore, >Manage Tab and >Dell VSM column

Screen Shot 2013-12-06 at 10.01.55.png

Manage Volume Settings

There are a number of settings and options on a volume which you may wish or need to configure depending on your circumstances. For instance, you may have added three new hosts to an existing VMware cluster. These new ESX hosts, added for increased compute capacity, will require access to the same datastores. Other options include being able to modify the existing schedule for snapshots and replication, as well as being able to view active iSCSI connections from the ESX hosts to the storage.

Add additional IQNs to the Access Control List

Occasionally there is a need to grant access to a volume from an initiator that is not in the ACL Template used at volume creation time - such as access for a temporary server. VSM enables this task to be completed without leaving vCenter.

1. In the Dell VSM navigate to >Datastores >Select the Datastore >Manage Tab >Dell VSM Column

2. Select the Access Control List entry

3. Click the small green plus icon to add an IQN to the ACL for the datastore. The other icons allow the administrator to grant a new host or a new cluster access to the datastore.

Screen Shot 2013-12-06 at 10.10.45.png

View iSCSI Connections

It is possible to view the open iSCSI TCP sessions to the Dell EqualLogic array. This can be helpful in determining whether an ESX host has a valid connection to the storage:
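The equivalent check can be made at the ESX host command line with ESXCLI; vmhba34 is the software iSCSI initiator alias used earlier in this page:

```shell
# List active iSCSI sessions on the software initiator
esxcli iscsi session list --adapter vmhba34

# List the underlying TCP connections for each session
esxcli iscsi session connection list --adapter vmhba34
```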


VSM and data protection

Managing Replication using Dell VSM

In a multi-site, multi-vCenter environment a Dell VSM can be registered with each respective vCenter environment - and the Dell EqualLogic Groups at each site can be registered to the system.

If multiple Dell EqualLogic Groups are already "paired" together, this will be displayed in the interface. The same interface can be used to create the pairing, if it has not already been carried out, using the Create button. The arrows pointing from New York to New Jersey and from New Jersey to New York indicate that replication is configured in both directions.

Screen Shot 2013-12-02 at 22.37.13.png

If your environment contains multiple sites and multiple vCenter Servers, the model is to deploy and register a Dell VSM with each vCenter. In this example there are two locations - New York and New Jersey - with a vCenter at each location.

Screen Shot 2013-12-05 at 10.53.24.png

1. Once this configuration is in place the respective VSM are made aware of each "peer". This configuration is held under >VSM Home >Manage Tab >VSM Peers

2. Click Change Password to set a password for the peer process. This needs to be carried out on BOTH VSM environments, so PeerA can pair up with PeerB using PeerB's password. Although a password may appear to be set - indicated by asterisks (******) - in a clean installation there is no password set.

Screen Shot 2013-12-05 at 10.58.40.png

3. Next use the Add button to reference the VSM at the opposite site. Repeat this configuration on the second vCenter to enable the management of replication from SiteA to SiteB, and from SiteB to SiteA. Type the name or IP address of the other site's VSM appliance, together with its Access Password and a friendly description.

Screen Shot 2013-12-05 at 11.02.18.png

4. When you create a new iSCSI volume, replication can be enabled at the same time - or at any time afterwards, by right-clicking an existing volume in Dell VSM and enabling replication from there

Screen Shot 2013-12-05 at 11.23.27.png

Note: Replication requires the configuration of a Local Reserve. This reservation of disk space ensures that enough temporary space exists to hold all the changes made to the volume since the last replica was taken. The reserve guarantees that even if every block in a volume were modified, there would be enough reserved storage to hold all of those changes ready to be replicated elsewhere. Typically this much change does not occur between replicas, so once you understand the rate of change for a particular volume you can adjust the reserve to a less conservative value.

The Allow temporary use of free pool space option allows free space outside of the volume's designated reservation to be used, in addition to the reservation, to complete the replication process. Occasionally a volume experiences a short-term increase in its rate of change; for example, at the end of a business quarter a database might see a higher number of transactions, and therefore more data change, than during the rest of the quarter.

The Keep Failback Snapshot option keeps a copy of the most recent replica in the volume's Local Reserve. This allows for faster failback from DR: in that case only the changes made at the DR site need to be replicated back to the original site. The Local Reserve must be sufficiently sized for this, which is why it can be sized up to 200%.

The Replication Partner option allows the administrator to set the target/destination for replication. In a multi-site/multi-vCenter environment it is possible to pair up multiple Dell EqualLogic Groups and Dell VSMs to support replication to many different locations; however, an individual volume can only be replicated to one array.

The Remote Reserve option allows you to control how much disk capacity is guaranteed or reserved on the target array for storing replicas of that volume. A value of 200% allows for the storage of the initial replica (assuming the volume is full of data) plus 100% of the data changing, with sufficient space to store an incoming replica. In practice 100% data change between replicas is rare, but customers typically store multiple replicas in case data corruption at their Production site has been replicated to their DR site. Finally, the Replication Schedule allows you to define a replication frequency by minute or hour, and the time of day at which replication takes place.
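The sizing arithmetic behind these reserve percentages can be illustrated with a short sketch. The function names and the 500 GB example volume are invented for illustration; only the 100%/200% figures come from the text above:

```python
def local_reserve_gb(volume_gb, reserve_pct=100):
    """Local Reserve: space to hold every block changed since the last replica.
    200% additionally leaves room for a Keep Failback Snapshot copy."""
    return volume_gb * reserve_pct / 100.0

def remote_reserve_gb(volume_gb, reserve_pct=200):
    """Remote Reserve: the initial full replica plus room for a
    100%-changed incoming replica."""
    return volume_gb * reserve_pct / 100.0

vol = 500  # hypothetical 500 GB volume
print(local_reserve_gb(vol, 200))   # local reserve sized at 200%: 1000.0
print(remote_reserve_gb(vol))       # default 200% remote reserve: 1000.0
```

Once the observed rate of change is known, the `reserve_pct` value is what you would wind down from the conservative default.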

5. Synchronous Replication is configurable, but your Dell EqualLogic environment must be configured to support it. Synchronous Replication with EqualLogic arrays is pool to pool, and requires a high bandwidth low latency link between the sites. Note: A volume can be configured for asynchronous Replication or Synchronous Replication, but not both.

Screen Shot 2013-12-05 at 11.13.39.png

Data Recovery with Snapshots & Replication

The state of both snapshots and replication can be monitored directly from the Dell VSM by using the Data Recovery node. This allows the administrator to search for inbound and outbound replicas, as well as snapshots that reside on datastores within vSphere. For instance, the administrator can search for snapshots on a datastore and then open the target datastore; from there the following tasks are possible:

  • Create a manual snapshot (typically snapshots are created by a schedule configured by the administrator)
  • Delete a snapshot
  • Restore an individual VM
  • Rollback the entire volume

Restoring a VM from Dell Snapshot

The option to restore individual VMs may be of interest to organizations that have had a catastrophic loss of a VM caused by accidental deletion by an administrator, or by some other process such as a badly designed PowerCLI script. In the scenario below a simple 3-tier application was created containing a database, application and web server in a vCenter construct called a vApp. vApps are a way of gathering VMs together to reflect their overall relationships with each other, and to manage their performance. This was stored on a datastore called "platiniumstorage01" which is both snapshotted and replicated to another site once every hour.

Screen Shot 2013-12-06 at 08.59.35.png

To simulate an error, the App01 VM was deleted. Oops...

1. In the Dell VSM select Data Recovery

2. From the Object Type select Snapshots, and Search

Screen Shot 2013-12-06 at 09.27.57.png

3. Next double-click the object whose snapshot recovery properties you wish to see. (Dell VSM permits the snapshotting of individual VMs, folders of VMs, vApps and datastores; however, keep in mind that ultimately it is a volume that is snapshotted on the array, so plan your data protection strategies accordingly.)

Screen Shot 2013-12-06 at 09.29.59.png

Note: Another route to a similar interface is to select Datastores in Dell VSM, and open the required datastore - under the Manage Tab, and the Dell VSM column you should be able to see all the settings and options associated with the volume.

Screen Shot 2013-12-06 at 09.32.58.png

4. From the list select a snapshot with the appropriate time/date/state that you wish to restore, and click the Selective Restore icon. Optionally you can Rollback the entire datastore, which will restore all VMs on the datastore to the state they were in when the snapshot was taken – typically this is not desired.

Screen Shot 2013-12-06 at 09.40.50.png

5. Next select the VM you wish to restore, and click Next and Finish

Screen Shot 2013-12-06 at 09.42.43.png

Note: You can monitor the progress of the restore from the "jobs" tab in the Dell VSM

Screen Shot 2013-12-06 at 09.45.29.png

Data Recovery with Replication

In the event of catastrophic failure - such as loss of a site or loss of the storage array - a large amount of data can be retrieved by failing over to the storage array where the VMs have been replicated. By its nature this task is carried out at the destination/target location to which the VMs are being replicated. VSM can simplify the task of promoting replicas to on-line volumes and registering virtual machines at the DR site. Of course, recovering the VMs to another array or location is only the first part of the process. If the VMs now reside at a different site, depending on the environment's network configuration there is a chance that both their network and IP configuration will need to be modified. For true end-to-end DR automation you may well need to consider licensing a technology such as VMware's Site Recovery Manager.

1. In the Dell VSM select Data Recovery

2. From the Object Type select Inbound Replicas, and Search

3. This should find all the volumes that are being replicated to the destination/target array. Double-click the datastore you wish to fail over to open its options.

4. This should display the relationship between the vCenters, Dell VSMs and Dell EqualLogic arrays; in this example it reflects a multi-site configuration where New York is replicating its VMs to New Jersey. The highlighted icon triggers the Failover...

Screen Shot 2013-12-06 at 11.16.45.png

5. Select a Datacenter and Folder to hold the VMs to be registered to vCenter when the datastore is recovered

Screen Shot 2013-12-06 at 11.22.50.png

6. Select a Cluster or ESX host as target for running the VMs

Screen Shot 2013-12-06 at 11.24.00.png

7. Select how the Dell VSM will grant rights to the volume once it has been mounted

8. Click Next and Finish

Note: You can monitor the progress of the failover from the Jobs window

Once the process has completed you should find that the Dell VSM has mounted a replica as a datastore called "VSM-failover-<VSM_hostname>-<OriginalVolumeName>" and has created a VM folder structure within the datacenter location as well.

Screen Shot 2013-12-06 at 11.35.58.png

Screen Shot 2013-12-06 at 11.33.52.png
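The naming convention above can be sketched as a simple string template. The VSM hostname used here ("vsm-nj01") is hypothetical; the volume name is the one used earlier in this chapter:

```python
def failover_datastore_name(vsm_hostname, original_volume):
    # Pattern described above: VSM-failover-<VSM_hostname>-<OriginalVolumeName>
    return "VSM-failover-{0}-{1}".format(vsm_hostname, original_volume)

print(failover_datastore_name("vsm-nj01", "platiniumstorage01"))
# prints "VSM-failover-vsm-nj01-platiniumstorage01"
```

Knowing the pattern makes it easy to spot (or script against) recovered datastores at the DR site.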

IMPORTANT: At this point the VMs are running on a replica of the volume. For this to be a permanent configuration the replica would need to be promoted to a volume using the Dell EqualLogic Group Manager.

Modify Snapshot and Replication Schedules

It is possible to modify the existing replication and snapshot schedules of a selected volume.


Caveats and unexpected limitations

IMPORTANT: Currently the Dell EqualLogic VSM isn't aware of Linked Mode in vSphere 5.5. This can lead to configuration issues in multi-site, multi-vCenter environments. The Dell plug-in expects one VSM appliance per vCenter instance/site, so complications can occur when registering the right Dell EqualLogic group to the right vCenter environment.

Storage Vendor Plug-ins: NetApp VSC [TBA Placeholder]

Configuring Storage (PowerCLI)

Rescanning Storage

Screen Shot 2013-10-21 at 14.01.16.png - Rescanning Storage:

A simple one-liner PowerCLI command is all that is required to rescan the storage configuration of many hosts. Care must be taken not to rescan every single host in vCenter; limit rescans to clusters of ESX hosts:

Get-Cluster -Name "NYC-Gold01" | Get-VMHost | Get-VMHostStorage -RescanAllHba

Configuring iSCSI Storage

Screen Shot 2013-10-21 at 14.01.16.png - Configuring iSCSI Storage:

This script uses a .CSV file to retrieve the FQDN/Hostname values.

Screen Shot 2013-12-18 at 17.59.12.png

The first block of PowerCLI enables the iSCSI Software Adapter together with a valid IQN, Target IP, CHAP name and password. The second block uses the ESXCLI namespace within PowerShell to set the VMkernel port bindings for iSCSI load-balancing. Finally, a rescan is undertaken to find new storage and VMFS volumes. This script assumes the appropriate networking is already in place; the networking samples elsewhere in the vmWIKI illustrate that configuration.

Import-CSV vmhosts.csv | ForEach-Object {
  $hostname = $_.vmhost
  $shortname = $_.shortname

  $target0 = ""   # IP address of the iSCSI target

  Get-VMHostStorage -VMHost $hostname | Set-VMHostStorage -SoftwareIScsiEnabled $true
  $iscsihba = Get-VMHostHba -VMHost $hostname -Type iScsi
  $iscsihba | Set-VMHostHba -IScsiName $shortname
  New-IScsiHbaTarget -IScsiHba $iscsihba -Address $target0 -ChapType Required -ChapName vmware -ChapPassword VM@re1

  $esxcli = Get-EsxCli -VMHost $hostname
  $esxcli.iscsi.networkportal.add($iscsihba, $true, "vmk1")
  $esxcli.iscsi.networkportal.add($iscsihba, $true, "vmk2")

  Get-VMHostStorage -VMHost $hostname -RescanAllHba
  Get-VMHostStorage -VMHost $hostname -RescanVmfs
}

If your storage vendor supplies load-balancing software, this can also be installed using PowerCLI. In this example the Dell MEM is being installed:

Foreach ($vmhost in (Get-VMHost)) {
  Set-VMHost -VMHost $vmhost -State "Maintenance"
  Install-VMHostPatch -VMHost $vmhost -LocalPath "C:\MEM1.2\dell-eql-mem-esx5-\" -HostUsername root -HostPassword Password1
  Set-VMHost -VMHost $vmhost -State "Connected"
}

Creating VMFS Volumes

Screen Shot 2013-10-21 at 14.01.16.png - Creating VMFS Volumes:

The creation of VMFS volumes using PowerCLI is a perfectly viable operation. Clearly, care must be taken to enumerate the right LUN/volume to avoid a situation where data loss could occur. This starts at the fabric layer, by ensuring only LUNs/volumes intended for VMware ESX are presented.

The New-Datastore cmdlet requires the "Canonical Name" parameter in order to format a volume; the Canonical Name is a unique ID for block storage. By using the Canonical Name you can guarantee the identity of a given block device. Get-ScsiLun can retrieve storage from a specified ESX host; when combined with filters that show the Vendor, Canonical Name, Capacity and RuntimeName, this can assist in locating the correct SCSI disk:

Get-ScsiLun -VMHost <hostname> -LunType Disk | ?{$_.Vendor -eq "NETAPP"} | Select CanonicalName,CapacityGB,RuntimeName
Get-ScsiLun -VMHost <hostname> -LunType Disk | ?{$_.Vendor -eq "EQLOGIC"} | Select CanonicalName,CapacityGB,RuntimeName

In the first example the command returns a single 1.5TB disk with a Host ID of 10. In many environments it's possible for the LUN ID on the storage array to be one value (say 23) but the ID as it is known to the host to be a different value altogether (say 10). For simplicity some administrators make the Host ID and LUN ID the same value to ease identification of the disk.

Screen Shot 2013-12-19 at 10.51.28.png
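The host-side LUN ID can be read straight out of the runtime name (for example vmhba2:C0:T0:L10, where the trailing L-value is the LUN ID as the host sees it). A small parsing sketch, purely for illustration - the example runtime name is invented:

```python
import re

def host_lun_id(runtime_name):
    """Extract the host-side LUN ID from a vSphere runtime name
    such as vmhba2:C0:T0:L10 (adapter:channel:target:LUN)."""
    m = re.search(r":L(\d+)$", runtime_name)
    return int(m.group(1)) if m else None

print(host_lun_id("vmhba2:C0:T0:L10"))  # prints 10
```

Comparing this host-side value against the array's own LUN ID is how you spot the mismatch described above.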

To return a list of disks that are already formatted with VMFS, we could use this PowerCLI one-liner:

Get-Datastore | Where {$_.Type -eq 'VMFS'} | Get-View | %{$_.Info.Vmfs.Extent} | Select -ExpandProperty DiskName

To format this disk with VMFS we would use the Canonical Name of naa.60a980005034484d444a7632584c4e39. The format only needs to be carried out by one VMware ESX host, and then the other hosts in the same cluster can be re-scanned too.

New-Datastore -VMHost <hostname> -Name NETAPPLUN10 -Path naa.60a980005034484d444a7632584c4e39 -Vmfs
Get-Cluster -Name "NYC-Gold01" | Get-VMHost | Get-VMHostStorage -RescanAllHba

If you do attempt to re-format an existing VMFS volume, you should receive a message from PowerCLI that it is "in use".

Screen Shot 2013-12-19 at 11.02.50.png

Mounting NFS Volumes

Screen Shot 2013-10-21 at 14.01.16.png - Mounting NFS Volumes:

In this case, variables are specified for the NFS target and the volumes to be mounted. By default a maximum of 8 NFS mount points is enabled. This can be increased using the Set-VMHostAdvancedConfiguration cmdlet. The maximum number of supported NFS mounts in ESX 5.5 is 256.

Set-VMHostAdvancedConfiguration -VMHost $vmhost -Name NFS.MaxVolumes -Value 64 -Confirm:$false

$nfshost = ""
$mount1 = "software"
$mount2 = "templates"
$mount3 = "infrastructureNYC"

Foreach ($vmhost in (Get-VMHost)) {
   New-Datastore -VMHost $vmhost -Name $mount1 -Nfs -NfsHost $nfshost -Path /vol/$mount1
   New-Datastore -VMHost $vmhost -Name $mount2 -Nfs -NfsHost $nfshost -Path /vol/$mount2
   New-Datastore -VMHost $vmhost -Name $mount3 -Nfs -NfsHost $nfshost -Path /vol/$mount3
}

Modifying Path Selection Policies

Screen Shot 2013-10-21 at 14.01.16.png - Modifying Path Selection Policies with Set-SCSILun:

As with formatting VMFS volumes, we need the Canonical Name to change the default PSP used. Fortunately, it's possible to identify different disk/vendor types by the first few bytes of the name itself:

Get-ScsiLun -VMHost <hostname> -LunType Disk | Select CanonicalName,CapacityGB,RuntimeName,MultiPathPolicy

Screen Shot 2013-12-19 at 16.20.21.png

So in this case all the NetApp FC LUNs are identified by naa.60a98, whereas the Dell EqualLogic volumes are identified by naa.6090a. We can use the Set-ScsiLun cmdlet to change the policy to an appropriate setting. For example, if ALUA has been enabled and is supported on the NetApp array, we could switch the "Fixed" paths to "RoundRobin" like so:

Foreach ($vmhost in (Get-VMHost)) {
  Get-VMHost $vmhost | Get-ScsiLun -CanonicalName "naa.60a98*" | Set-ScsiLun -MultipathPolicy "RoundRobin"
}

Screen Shot 2013-10-21 at 14.01.16.png - Modifying Path Selection Policies with ESXCLI:

In most cases this simple script can be used to modify the PSP for all the hosts in a datacenter or cluster, depending on the filter type you use. However, there are some cases where Set-ScsiLun -MultipathPolicy will not work. This is because it appears to only recognise the built-in PSPs provided by VMware. If a new PSP is added by the installation of third-party vendor software, the above PowerCLI will in most cases not work. Additionally, you may find that Get-ScsiLun can misreport the MultipathPolicy.

In this situation you can resort to calling ESXCLI from within PowerCLI to achieve the same result. In this case the Dell MEM PSP is used as an example.

Here $esxcli.storage.nmp.device.list() is used to locate all the Dell EqualLogic disks beginning with naa.6090a. This is pipelined to $esxcli.storage.nmp.device.set() to change the PSP used. Finally, $esxcli.storage.nmp.satp.set() can be used to set the default PSP for future volumes.

WARNING: The namespace around esxcli has changed. If you are reading other sources you might find that code examples use the older $esxcli.nmp namespace rather than $esxcli.storage.nmp. Additionally, ESX 5.5 introduced new labels for carrying out tasks. For example, in ESX 5.5 setting a default PSP is done with satp.set(), whereas in previous releases it was done with satp.setdefaultpsp().

Acknowledgement: Arnim van Lieshout's post "ESXCLI the PowerCLI way" provided an invaluable starting point for this section.

Foreach ($vmhost in (Get-VMHost)) {

  $esxcli = Get-EsxCli -VMHost $vmhost

  $esxcli.storage.nmp.device.list() | Where {$_.Device -like "naa.6090a*"} | Select -Property Device,PathSelectionPolicy | %{
    $esxcli.storage.nmp.device.set($null, $_.Device, "DELL_PSP_EQL_ROUTED")
  }
}

To return a list of the volumes and their PSP, you can use a script (perhaps called StorageESXCLIpaths.ps1 <hostname/fqdn>) to return the working paths and the PSP in use...

$esxcli = Get-EsxCli -VMHost $vmhost
$esxcli.storage.nmp.device.list() | Where {$_.Device -like "naa.6090a*"} | Select DeviceDisplayName,WorkingPaths,PathSelectionPolicy

Renaming Datastores

Screen Shot 2013-10-21 at 14.01.16.png - Renaming Datastores:

A good example of wanting to rename datastores en masse is with local storage. As we saw earlier, because all local storage is formatted with the label "datastore1", when more than one ESX host is added to vCenter this can lead to an unsightly and confusing serialization process where volume labels are automatically renamed to datastore1, datastore1 (1), datastore1 (2) and so on. Some SysAdmins prefer to handle this issue with a post-installation script that executes in the context of the ESX host installation; others prefer to use PowerCLI.

We would like to acknowledge Nicolas Farmer's blog, from which this script was used as the template.

$hostnames = Get-VMHost | %{$_.Name}
foreach ($vmhost in $hostnames) {
  $one1 = $vmhost.split(".")
  $two2 = $one1[0]
  $thename = $two2 + "-local"
  Get-VMHost $vmhost | Get-Datastore | Where {$_.Name -match "datastore1"} | Set-Datastore -Name $thename
}

Creating Datastore Folders

Screen Shot 2013-10-21 at 14.01.16.png - Creating Datastore Folders:

Datastore Folders can be created to separate local storage from remote storage (FC, iSCSI, NFS) or alternatively a folder structure could be created for each cluster created in vCenter.


$root = get-folder -type Datastore
new-folder -Location $root "Local Storage"
new-folder -Location $root "Remote Storage"

By Cluster:

$root = get-folder -type Datastore

foreach ($cluster in (Get-Cluster)) {
  New-Folder -Location $root $cluster
}

new-folder -Location $root "Local Storage"

Moving Datastores:

Moving datastores to a different folder is easy once you have a meaningful naming convention. It's somewhat harder if your datastore names are unique and don't follow a naming convention. In this simple script all the datastores are dumped into a folder called "Remote Storage", and then the VMFS volumes which have "local" in their name are relocated to a folder called "Local Storage".

Move-Datastore * -Destination "Remote Storage"
Move-Datastore local* -Destination "Local Storage"