Chapter 4 Storage in vSphere

2V0-21.19 EXAM OBJECTIVES COVERED IN THIS CHAPTER:

  • Section 1 - VMware vSphere Architectures and Technologies

    • Objective 1.3 - Describe storage types for vSphere
    • Objective 1.4 - Differentiate between NIOC and SIOC
    • Objective 1.6 - Describe and differentiate among vSphere, HA, DRS, and SDRS functionality
    • Objective 1.10 - Describe virtual machine (VM) file structure
    • Objective 1.11 - Describe vMotion and Storage vMotion technology
  • Section 4 - Installing, Configuring, and Setting Up a VMware vSphere Solution

    • Objective 4.2 - Create and configure vSphere objects
  • Section 7 - Administrative and Operational Tasks in a VMware vSphere Solution

    • Objective 7.2 - Manage datastores
    • Objective 7.3 - Configure a storage policy
    • Objective 7.6 - Configure and use vSphere Compute and Storage cluster options
    • Objective 7.8 - Manage resources of a vSphere environment
    • Objective 7.11 - Manage different VMware vCenter Server objects
    • Objective 7.13 - Identify and interpret affinity/anti affinity rules

Storage is one of the basic requirements for ESXi, along with compute and memory. You can have a complete, air-gapped virtual datacenter with no physical networking, but there will be no persistent virtual machines without somewhere to store them. VMware has steadily added features and capabilities to the storage stack over the years, and today a host can connect to local disks and to remote file or block storage with physical and virtual adapters, plus share out its local disks as file- or block-level storage. It can even host storage for physical servers.

Note that this chapter focuses on connecting to shared storage arrays and network-attached storage and configuring local storage to become shared storage. There is no coverage of local disks for booting or virtual machine storage.

Managing vSphere Integration with Physical Storage

We will start with the basic ways to connect ESXi hosts to shared storage. Shared storage takes two basic forms, block-level and file-level, which essentially come down to where the file table is maintained.

NOTE

Absent using host profiles, all physical storage access is configured separately on each host.

With block-level storage, an array presents unformatted raw storage identified by a unique logical unit number (LUN) to an ESXi host using either iSCSI or Fibre Channel protocols. The host then formats the storage for use with the VMware File System (VMFS) and creates a file table to track folders, filenames, and storage blocks in use by files. With file-level storage, a network-attached storage (NAS) server (or any server with the capability) formats its local storage and shares out folders using the Network File System (NFS) protocol.

The question of which datastore type to use has a great many variables: What storage technologies are currently in use, and with which technologies is the staff knowledgeable and comfortable? What are the future storage needs for the datacenter? What vendors are preferred? While the current trend is toward a Software-Defined Data Center (SDDC) using software-defined storage such as vSAN, there are still plenty of reasons to purchase physical arrays, including supporting separate physical equipment or features such as datastore-level snapshots.

One of the traditional standards was using block storage for virtual machines and a low-cost NFS server for templates and ISOs. The idea of multiple levels of storage, usually differentiated by cost and performance, is still very valid and can be easily implemented, as we will see later, by storage profiles. While the Keep It Simple philosophy would suggest only maintaining one type of storage, you may find cost, performance, features, or availability reasons to maintain multiple arrays.

Adding an NFS Datastore

ESXi hosts support connecting to file-level storage using either NFS version 3 or version 4.1. NFS version 4.1 adds Kerberos for authentication and data integrity assurance, some multipathing capabilities via session trunking, and support for server-side locking. However, NFS 4.1 is not currently supported for Storage DRS, Storage I/O Control, or Site Recovery Manager.

For setting up an ESXi host to connect to NFS, version 3 has a very simple add wizard, while NFS v4.1 involves several additional steps. For NFS v3, you will provide a datastore name, the share name as it is configured on the server, and the IP address or DNS name of the NAS. For version 4.1, you can specify multiple DNS names or IP addresses if your NFS server supports multipathing (see your vendor's documentation for best practices) and then configure Kerberos for security and data integrity.
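NFS datastores can also be mounted from the host command line with esxcli. The following is a minimal sketch; the server addresses, share paths, and datastore names are hypothetical examples, and the --sec option should be checked against your NAS documentation.

esxcli storage nfs add --host=nas01 --share=/vol/isos --volume-name=ISOs
# NFS v4.1 accepts multiple server addresses for multipathing and an optional Kerberos setting
esxcli storage nfs41 add --hosts=192.168.1.21,192.168.1.22 --share=/datastores/ds1 --volume-name=NFS41-DS --sec=SEC_KRB5

Remember that the command must be repeated on every host that needs the datastore, using the same datastore name on each.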

As Figure 4.1 shows, you need to have the host added to Active Directory and have a Kerberos user added to the host configuration before you can enable Kerberos for the NFS 4.1 datastore.

NOTE

Kerberos credentials are set once per host, so if you have multiple NFS 4.1 servers, they all need to use the same credentials because only one set of credentials can be set on a host.

As shown in Figure 4.1, the Kerberos credentials are set in the Authentication Services menu on the host, the same menu page where the Active Directory settings are located.

FIGURE 4.1 Configuring Kerberos credentials
Screenshot_198

Once the datastore is created on one host, you can use the Mount Datastore to Additional Hosts wizard to ensure that the configuration matches on your hosts. This is important as advanced features such as vMotion and HA require the datastore to be named identically on each host.

FIGURE 4.2 Unmounting an NFS datastore
Screenshot_199

You can remove an NFS datastore from an individual host or from all hosts using the Storage view. Right-click the datastore and choose Unmount Datastore (Figure 4.2). You will see a list of hosts with the datastore currently mounted.

If there are virtual machines running on the datastore, you will not be able to unmount it from the hosts running those VMs. Figure 4.3 shows an error message resulting from trying to unmount an NFS datastore on a host with virtual machines present on the datastore.

FIGURE 4.3 Error when removing a datastore that's in use
Screenshot_200

EXERCISE 4.1 Add an NFS v3 datastore.

Requires a NAS providing an NFS v3 share.

  1. Connect to vCenter using the vSphere web client, open the Host and Cluster view, and click an ESXi host.

  2. Open the Datastores tab, click the Create a new datastore icon, and choose NFS from the first screen of the wizard.

    Screenshot_201
    Screenshot_202
  3. On the second screen, pick NFS v3

    Screenshot_203
  4. For NFS version 3, provide a datastore name, which is how the NFS share will appear in the GUI. Enter the share name as it is configured on the server, along with the IP address or DNS name of the server.

    Screenshot_204
  5. Click Next and then Finish to complete the Add Datastore wizard.

  6. Once the datastore is created on one host, you can use the Mount Datastore to Additional Hosts wizard to ensure that the configuration matches on your hosts.

    Screenshot_205

EXERCISE 4.2 Add an NFS v4.1 datastore

Requires a NAS with an NFS v4.1 share and Kerberos credentials set.

  1. Connect to vCenter using the vSphere web client, open the Host and Cluster view, and click an ESXi host.

  2. Open the Authentication Services menu on the host and set the Kerberos credentials.

    Screenshot_206
  3. Open the Datastores tab, click Add Datastore, and choose NFS from the first screen of the wizard.

  4. On the second screen, pick 4.1 for the version of NFS your server is using to present the share.

    Screenshot_207
  5. With NFS version 4.1, you can specify multiple DNS names or IP addresses if your NFS server supports multipathing (see your vendor's documentation for best practices). Here you will enter your NAS IP address(es).

    Screenshot_208
  6. You can then configure Kerberos for security and data integrity. You will see an error if you skipped step 2.

    Screenshot_209
    Screenshot_210
  7. Once the datastore is created on one host, you can use the Mount Datastore to Additional Hosts wizard to ensure that the configuration matches on your hosts:

    Screenshot_211

Using Block Storage

Block-level shared storage accessed using the iSCSI and Fibre Channel protocols can be reached through either physical adapters (host bus adapters [HBAs] or converged network adapters [CNAs], where the physical card handles most of the I/O and communication with the storage server) or network interface cards (NICs), which may or may not provide hardware assistance for some of the I/O- and storage-related communication tasks.

For Fibre Channel arrays, the communication path between the ESXi hosts and the storage arrays can be either a dedicated Fibre Channel fabric, with HBAs in the host connected to Fibre Channel switches using special cables, or Fibre Channel over Ethernet (FCoE), where the cables and switches are regular network equipment. With FCoE, the Fibre Channel protocol is encapsulated in Ethernet frames (not TCP/IP packets), which requires an extra step when sending or receiving data from the array. There are a variety of hardware choices available for the hosts, ranging from CNAs that will handle the I/O and network encapsulation duties to NICs that support FCoE but require the host to do most of the work. If you want to boot your host using FCoE, you will need either an HBA or a network card that supports the FCoE Boot Firmware Table (FBFT) or FCoE Boot Parameter Table (FBPT). The trade-off is generally that the more expensive the adapter, the lower the CPU load on the host. Which adapter to choose is really a conversation to explore with your storage vendor if you are considering FCoE.

Fibre Channel fabrics use the concept of zoning to determine what hosts have access to what storage arrays. Usually created and maintained at the Fibre Channel switch, zones are created between host ports and array ports to ensure that hosts have access to only what they need. Normally, groups are created to reduce the number of objects maintained-for example, all host ports needing to connect to a specific set of array ports are grouped together and then a zone is created to allow access to the correct group of array ports. Once a zone is created to allow communication from a host port to an array port, a LUN map is created to allow access from the host to the specific LUNs created on the array. For shared storage (as required for vMotion, HA, DRS), the same LUNs need to appear identically to all required hosts.

However, when booting a host from an array, the boot LUN should only be mapped to the host booting from it.

With iSCSI storage, the communication path is over regular network cables and switches. However, you still have a choice of adapters ranging from iSCSI HBA to normal network cards. The HBAs available for iSCSI can be either Independent, where the adapter has a more sophisticated chipset and has a BIOS-level configuration, or Dependent, where the adapter is configured from ESXi. Network card options available to iSCSI implementations include the ability to take some of the load off the CPU by handling TCP packet processing using a TCP/IP offload engine, or TOE. You can also use a network card that offers an iSCSI Boot Firmware Table (iBFT), which will let you boot from the SAN without the expense of an iSCSI HBA.

To boot a host from a storage array, you need a LUN created and presented to the host (usually using zoning and mapping for Fibre Channel and sharing for iSCSI, but see your vendor's documentation), and then you need to configure the storage adapter on the host. Each of the options (Fibre Channel HBA, FCoE HBA, network card with FBFT or FBPT, network card with iBFT) requires you to access the adapter during the boot process of the host. The parameters you set might vary depending on the vendor, but generally a Fibre Channel HBA will provide a list of arrays and LUNs that it finds and you select the LUN to boot from, while an FCoE adapter requires VLAN and IP settings for the adapter and the IP or DNS name of the array before providing a list of available LUNs. Booting from an iSCSI array is similar to FCoE, where you need to specify network parameters for the adapter and array, but you may also need to add credentials if your array has Challenge Handshake Authentication Protocol (CHAP) configured for security.

Configuring the Software iSCSI Initiator

If you are accessing iSCSI storage using a NIC instead of an HBA, you will need to enable and configure the software iSCSI initiator (sometimes also called the iSCSI adapter) on each host. As shown in Figure 4.4, the software iSCSI adapter can be added from the Storage Adapters option of the host's Configure tab. Only one software iSCSI adapter can be added per host.
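If you prefer the command line, the software iSCSI adapter can also be enabled per host with esxcli. This is a minimal sketch; the vmhba number shown in the comment is just an example of what your host might report.

esxcli iscsi software set --enabled=true    # enable the software iSCSI initiator on this host
esxcli iscsi adapter list                   # confirm the new adapter (for example, vmhba65) appears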

FIGURE 4.4 Adding an iSCSI initiator
Screenshot_212

Once the iSCSI software initiator appears in the adapters list, you can start configuring it. iSCSI hosts are identified by an iSCSI qualified name (IQN), which in the GUI is referred to as the iSCSI Name. This is used by the iSCSI array to identify hosts and map them to allowed LUNs. By default, the IQN for a host takes the form iqn.1998-01.com.vmware:<hostname>-<random characters>, which can be modified if needed.
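The IQN can also be viewed and, if necessary, reset from the host command line. A minimal sketch, assuming the software adapter came up as vmhba65 and using a hypothetical IQN; adjust both for your host:

esxcli iscsi adapter get --adapter=vmhba65                                        # shows the adapter details, including the current IQN
esxcli iscsi adapter set --adapter=vmhba65 --name=iqn.1998-01.com.vmware:esxi01   # set the IQN back to a previously recorded value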

NOTE

If a host is rebuilt, the random part of the IQN will change. You can either make a note of the IQN before rebuilding and reset it after the update (Figure 4.5) or update the array with the new IQN.

If your array has CHAP configured for security, you can set the credentials for the host either for the adapter (so all arrays receive the same credentials) or per array. When setting the credentials, you have four options (Figure 4.6).

FIGURE 4.5 Changing the IQN for a host
Screenshot_213
FIGURE 4.6 CHAP options
Screenshot_214

Unidirectional means the array checks the credentials the host sends (Outgoing). Bidirectional means the array will check the credentials of the host (Outgoing) and reply with credentials for the host to verify (Incoming). Unidirectional has three versions: use “if required,” use “unless prohibited,” and simply “use.” Refer to your vendor's documentation for which options it supports and which it recommends.

To add an array, open the Targets tab of the iSCSI adapter's details and click Add with Dynamic Discovery selected (Figure 4.7).

If your array requires CHAP and you did not set the credentials at the adapter level as described earlier, you can add them here by unchecking Inherit Settings from Parent (Figure 4.8).

The difference between Dynamic and Static discovery is whether the array dynamically populates the Static page with all the LUNs the host is mapped to or the LUN information needs to be entered manually. Refer to your array's documentation to see which method is supported.
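Discovery targets can also be added from the host command line. A minimal sketch, assuming the software adapter is vmhba65 and using a hypothetical array address:

esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.50.10:3260   # add a dynamic (Send Targets) discovery address
esxcli iscsi adapter discovery sendtarget list --adapter=vmhba65                               # verify the target was added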

FIGURE 4.7 Adding an iSCSI target
Screenshot_215
FIGURE 4.8 Setting CHAP on a target server
Screenshot_216

Binding VMkernels to the Software iSCSI Initiator

The final step for software iSCSI configuration is to ensure that there are multiple paths to the storage array. You need to ensure that there are at least two VMkernel adapters and that the adapters are “bound,” or configured to use distinct physical NICs. This ensures discrete paths for the storage traffic. Preferably the NICs will connect to different physical switches, but that is not a requirement.

You will need to create two new VMkernel adapters that can be dedicated to iSCSI storage. They can be created on two switches, which guarantees different network cards, or both can be created on a single switch with two NICs. The example shown in Figure 4.9 assumes one switch with two NICs and two VMkernel ports.

The steps to dedicate one NIC per VMkernel adapter will vary depending on whether you are using a distributed switch (vDS) or a standard switch (vSS), as the setting is made at the port group level. Port group settings are configured on each host for a vSS, as opposed to just once at the switch for a vDS.

For a vSS, click the name of the port group (VMkernel x), then click the pencil icon to edit the port group, and pick Teaming and Failover. Leave one of the adapters as Active and move the others to Unused (Figure 4.9).

FIGURE 4.9 Getting a standard switch ready for iSCSI binding
Screenshot_217

For a vDS, use the Networking view to find and select the port group the VMkernel port is connected to. Also, with vDS you need two different port groups, one for each VMkernel port. In the Configure tab of the port group, pick the Policies menu and then click Edit. On the Teaming and Failover section, leave one Uplink or Link Aggregation Group (LAG) in the Active group and move all others to Unused (Figure 4.10). Link aggregation groups are discussed in further detail in Chapter 3, "Networking in vSphere."

With either vSS or vDS, you need to repeat the process for each VMkernel (vSS) or port group (vDS), picking a different NIC/Uplink/LAG to be active for each.

FIGURE 4.10 Setting up iSCSI binding on a vDS
Screenshot_218

After configuring the VMkernel ports to have distinct network adapters, you are ready to “bind” them to the iSCSI initiator. On each host, select the iSCSI software adapter from the Storage Adapters menu. In the Adapters Details pane, select the Network Port Binding menu and click the green plus sign.

In the Bind wizard, select the appropriate network adapters (Figure 4.11). You will only be able to select adapters with unique NIC/Uplink/LAGs, so if the appropriate adapters don't show up, double-check your work.
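Port binding can also be performed per host with esxcli, which can be handy when configuring many hosts. A minimal sketch, assuming vmhba65 is the software iSCSI adapter and vmk1/vmk2 are the dedicated VMkernel ports:

esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk1   # bind the first VMkernel port to the software adapter
esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk2   # bind the second VMkernel port
esxcli iscsi networkportal list --adapter=vmhba65             # confirm both bindings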

FIGURE 4.11 Binding VMkernel adapters
Screenshot_219

You will then see the bound ports. If there are no datastores created yet, then Path Status will be set to Not Used (Figure 4.12).

FIGURE 4.12 Path status is Not Used when no datastores are in use.
Screenshot_220

Scanning for Changes

After configuring your iSCSI software adapter or making any other changes in the storage configuration, you can use the Scan function to look for new storage devices or new datastores. You can scan either all of the storage adapters (Figure 4.13) or just the adapter you have selected (Figure 4.14). If you have several adapters, just scanning one may save you some time.
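A rescan can also be triggered from the host command line; this is a minimal sketch (the adapter name is an example):

esxcli storage core adapter rescan --all               # rescan every storage adapter on the host
esxcli storage core adapter rescan --adapter=vmhba65   # rescan only a single adapter
esxcli storage filesystem rescan                       # look for new VMFS datastores on known devices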

FIGURE 4.13 Scan all
Screenshot_221
FIGURE 4.14 Scan one
Screenshot_222

If you decide to scan all adapters, you can also choose to scan for new storage devices, new datastores, or both (Figure 4.15).

FIGURE 4.15 Rescan storage options
223

Storage Filters

When using storage arrays, not all possible LUNs are presented to your hosts at all times. The vCenter server uses four filters to restrict certain operations to help prevent corruption:

  • The VMFS filter prevents adding an existing datastore as an RDM disk for a virtual machine.
  • The RDM filter prevents formatting a virtual machine's RDM for VMFS.
  • The Same Host and Transports filter screens out incompatible choices, such as preventing iSCSI LUNs from being displayed when you want to add an extent to a local VMFS volume.
  • The Host Rescan filter turns off the automatic rescan when you're performing certain storage operations, such as presenting a new LUN to a host or cluster.

These filters are set at the vCenter server level and do not appear in the settings list by default. If you want to disable them, use the Advanced Settings menu of the Configure tab on the vCenter server and add the filter by name with a value of false (Figure 4.16).

FIGURE 4.16 Modifying storage filters
Screenshot_224

The full names of the filters are as follows:

  • config.vpxd.filter.vmfsFilter
  • config.vpxd.filter.rdmFilter
  • config.vpxd.filter.SameHostAndTransportsFilter
  • config.vpxd.filter.hostRescanFilter

It is suggested that you only change these filters after consulting with VMware support.

Thin Provisioning

Many storage arrays offer thin provisioning functionality. In Chapter 11 , "Administer and Manage vSphere Virtual Machines," we will also cover thin provisioning for virtual machines. In both cases, the full storage quota specified is not allocated up front. The primary benefit to thin provisioning is cost savings through increased efficiency; effectively, your storage is now pay-as-you-go. An environment that typically uses 50 percent of the allocated storage would cut the storage cost in half.

For storage arrays, this means that while a host might see a 500 GB iSCSI LUN and format it as a 500 GB VMFS volume for virtual machine storage, the LUN on the actual array might not take up any space at all on the physical disks of the array until virtual machines start being created on it or copied to it. With thin-provisioned LUNs, you can then overallocate the storage on your array. If you are planning on 500 GB LUNs and project them to only be 50 percent used, you could create three of them on a 1 TB array. While ESXi would be seeing 1.5 TB of total VMFS datastores, your array would only have 750 GB of space actually in use.

With virtual machine thin provisioning, the VMDK (virtual machine disk) “disks” start very small and grow as data is written to them. A thin-provisioned VMDK for the 60 GB C: drive of a new Windows 2016 virtual machine will be 0 KB before the OS is installed and 9 GB after the install (which would include any Windows swap files stored locally). With your VMs taking up less space than they are allocated, you can then overallocate the space in your VMFS volume. On a 100 GB VMFS datastore, you could create eight Windows 10 virtual machines with thin-provisioned 60 GB C: drives. While the operating systems would report 480 GB total (eight 60 GB C: drives), there would only be 72 GB (eight 9 GB VMDK files) used on the datastore for those virtual machines. Keep in mind that this is a simplified example that ignores Virtual Machine Swap (.vswp) files.
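You can see the thin-versus-thick difference directly from the host command line. The sketch below clones an existing VMDK into the thin format with vmkfstools and then compares provisioned size to space actually consumed; the datastore and file names are made up for the example.

vmkfstools -i /vmfs/volumes/DS01/Win16/Win16.vmdk /vmfs/volumes/DS01/Win16/Win16-thin.vmdk -d thin   # clone the disk as thin
ls -lh /vmfs/volumes/DS01/Win16/Win16-thin-flat.vmdk   # provisioned size (what the guest sees)
du -h  /vmfs/volumes/DS01/Win16/Win16-thin-flat.vmdk   # space actually consumed on the datastore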

If you are thin-provisioning virtual machines and decide to thick-provision critical machines to ensure that they will not run out of space, make sure those machines are monitored for snapshots. Creating a snapshot on a thick-provisioned VMDK effectively makes it thin-provisioned because the snapshot VMDK files are written to as blocks change. If the VMFS volume runs out of space, that thick-provisioned-with-a-snapshot virtual machine will halt along with the thin-provisioned VMs.

The primary detriment to thin provisioning is the increased administrative overhead of monitoring the environment. A virtual machine with thin-provisioned disks on a VMFS datastore that runs out of room will not be able to write changed blocks to its disks. This might crash the OS or stop applications and databases, and it will certainly halt local logging for that VM. Any thin-provisioned virtual machine on that VMFS volume would encounter problems. A thick-provisioned VM would continue to run with no problems, but a thick-provisioned machine that was powered off could not be started if its swap file could not be created. Fortunately, only thin-provisioned VMDKs of running virtual machines on the VMFS volume that ran out of space would be affected.

For an overallocated storage array that runs out of space, all virtual machines on all thin-provisioned LUNs would be affected. However, running out of space isn't the only concern as some arrays will see significant performance issues when they start to run out of space.

In either scenario, the solution is careful monitoring, usage projections, and a set plan to remediate the issue. For thin-provisioned VMFS volumes, you can ensure that there is sufficient space to expand on the array and expand the VMFS on the fly. For storage arrays, you can either ensure that there is extra space or plan ahead to purchase the extra space when needed. This is where accurate projections come into play, so you can have plenty of lead time to obtain the storage. Part of monitoring will include tracking anomalies: changes that are outside the norm that could affect your projections.

As with many storage topics, check with your particular vendor for its best practices on thin provisioning, monitoring, and remediating.

Storage Multipathing and Failover

A constant mantra in datacenter reliability is “no single point of failure.” The idea is to ensure there is no one “thing,” one Achilles' heel, that can take components down. To that end, servers have redundant power supplies, databases have clusters, and storage has multipathing. Multiple paths ensure that there is no single point of failure that can prevent access to the storage. Multipathing can also provide a performance boost when more than one of the paths is used.

Multipathing requires software and hardware components. Regardless of the technology, you need at least two physical components on the server (HBA, NIC, etc.), at least two cables leading to a distribution component (network or Fibre Channel switch), two cables to the storage array, and two discrete components on the storage array, often called a head or node or processor, to receive and process the I/O. While the hardware is out of the scope of this guide, make sure you work with your vendor to ensure that there is no single point of failure for the hardware.

Redundancy in Duplicate

In my time as a consultant, I have seen more than one company with a dual-head Fibre Channel array and two FC HBAs in each host, with only one switch connecting the hosts to the array. Two of the times I was in a datacenter with a setup like this, they actually owned a second FC switch and had it either in a box or mounted next to the working switch but powered off. Staff at both companies said the second switch was there in case the first switch failed-but they had no idea they could have both switches configured, connected, and running at the same time.

You need to work with your various vendors to ensure that you have redundant paths for all storage and no single point of failure for any mission-critical function. Different vendors will take different paths, so you might need two discrete IP networks for iSCSI or just one with multiple IPs and all redundant hardware. Be sure you understand the different options your vendor has and the pros/cons of all requirements.

Also make sure you have redundant power, sufficient cooling, and physical security, and that your critical functions are monitored and logged.

The software component on an ESXi host that manages multiple hardware paths is a collection of APIs called the Pluggable Storage Architecture (PSA). The PSA manages multipathing plug-ins such as the Native Multipathing Plug-in (NMP) that is included with ESXi or multipathing plug-ins (MPPs) obtained from a storage vendor. VMware's NMP comprises two parts, the Path Selection Plug-in (PSP) and the Storage Array Type Plug-in (SATP). Your storage vendor might provide a replacement for one of those plug-ins or provide its own MPP to use instead of the NMP.

The Storage Array Type Plug-in provides array-specific commands and management. ESXi ships with a variety of SATPs for specific vendors and models as well as generic SATPs for Asymmetric Logical Unit Access (ALUA) and active-active storage arrays. An active-active array will provide multiple paths to its LUNs at all times, while an ALUA array will report paths through each head but only one head will be active for the LUN at a time.

You must use an SATP that is compatible with your array, and you should check your array documentation for the preferred SATP, whether default or available separately from your vendor. ESXi should choose a working SATP for any array on the Hardware Compatibility List using the claim rules (which we'll discuss later in this section), but changes to your array (such as enabling ALUA) could result in unexpected behavior.

The Path Selection Plug-in chooses which path for storage I/O to take. ESXi ships with PSPs for three path algorithms. The PSPs can be differentiated by what happens before and after a path failure.

  • Most Recently Used (VMW_PSP_MRU) The most recently used (MRU) plug-in selects one path and uses it until it fails and then chooses another. If the first path comes back up, the PSP will not revert to the old path but will continue to use the most recently used path. The MRU plug-in is the default for active-passive arrays.

  • Fixed (VMW_PSP_FIXED) The fixed PSP uses one path and will choose another only if the first fails. With Fixed, you can manually set a preferred path, and if that path fails, the PSP will pick another working path. If the preferred path comes back up, the PSP will switch back to using the preferred path. This PSP is usually the default for active-active arrays.

  • Round Robin (VMW_PSP_RR) The round robin PSP is the only load-balancing PSP that ships with ESXi. It will rotate between active paths on an active-passive array and between available paths on an active-active array.

You should work with your array vendor to choose the correct PSP for your array. You can set the PSP per LUN or per SATP; however, setting it per SATP will set it for any array using that SATP.

NOTE

Changing the PSP for the SATP will affect only new datastores created-existing datastores will not receive the policy.

When choosing paths, some SATPs, like the included ALUA SATP, will report paths as active or inactive and optimized or unoptimized, as determined by the array and path. While the PSPs will always use an active path, they will default to using an optimized path if available. This includes the MRU PSP; while it will continue to use its most recently used path, it will switch from unoptimized to optimized when it can.

The PSP for a datastore can be viewed and changed using the Storage view in the web client. Look for the Connectivity and Multipathing menu under the Configure menu for the datastore you would like to change. Select the host to view and edit the settings for how that datastore is accessed on that host (Figure 4.17).

FIGURE 4.17 Viewing a datastore's PSP for a specific host
Screenshot_225

Here you can make a change to the PSP in use for that datastore on that host or set the preferred path if available (Figure 4.18).

You can change the default PSP for a given SATP, which affects all future datastores created on arrays using that SATP; however, that must be done from the vSphere CLI and is outside the scope of the VCP-DCV 6.5 exam.
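For reference, a minimal sketch of what those CLI changes look like with esxcli follows; the device ID is a made-up example, and you should follow your vendor's guidance before changing defaults.

esxcli storage nmp device list                                                   # show the SATP and PSP in use for each device
esxcli storage nmp device set --device=naa.600a0b80005ad1d7 --psp=VMW_PSP_RR     # change the PSP for a single device
esxcli storage nmp satp set --satp=VMW_SATP_ALUA --default-psp=VMW_PSP_RR        # change the default PSP for an SATP (affects new datastores only)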

FIGURE 4.18 Setting the path selection policy
Screenshot_226

ESXi uses claim rules to determine which MPP and/or which SATP is used for a given device. Claim rules are a series of requirements for an array to meet. When new physical storage is detected, the host will run through MPP and SATP claim rules from the lowest-numbered rule to the highest and assign the MPP, SATP, and PSP for the first rules that match. The order of rules is driver rules, vendor/model, then transport. If no match occurs, the NMP will be used with a default SATP and a PSP will be assigned. For iSCSI and FC arrays, the default SATP is VMW_SATP_DEFAULT_AA with the VMW_PSP_FIXED PSP. The default PSP for devices using the VMW_SATP_ALUA SATP is VMW_PSP_MRU.

Best practice is to work with your array vendor to install its preferred MPP, SATP, and PSP on each host and ensure that the correct claim rules are created. You can see the default claim rules in Figure 4.19.
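The claim rules can also be listed from the host command line; a minimal sketch:

esxcli storage core claimrule list    # MPP claim rules showing which plug-in claims which paths
esxcli storage nmp satp rule list     # SATP rules the NMP uses to pick an SATP and default PSP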

FIGURE 4.19 Default claim rules
Screenshot_227

The following types of claim rules can be used:

  • Vendor/Model The device driver returns a string identifying the vendor and model of the array.

  • Transport The type of array or data access, including USB, SATA, and BLOCK.

  • Device ID You can set a device ID returned by the array as a rule requirement. This is most useful when using the MASK_PATH plug-in to prevent a path or device from being used.

  • SATP The SATP to assign if the rule is met.

  • PSP The PSP to assign if the rule is met.

Take care when creating claim rules to avoid unintended effects. A lower-numbered rule created with just a vendor name will take precedence over a higher-numbered rule with vendor and model. A device rule will take precedence over a transport rule. Also, a rule assigning a specific PSP will only take effect for datastores created or discovered after the rule is created. Existing datastores will need to have the PSP manually changed on each host.

When a host loses connectivity to a particular storage device, it will try to determine if the device will become available again. A device that should become available again will be flagged as All Paths Down (APD), meaning the host has no access currently. If an array responds on a path but rejects the host or sends SCSI sense codes indicating that the device requested is no longer available, the host will flag the storage device as Permanent Device Loss (PDL).

NOTE

For a device to be flagged as PDL, the host must receive the codes on all paths for the device.

A host will continue to try to communicate with a datastore flagged as APD for 140 seconds by default. After this period, if the device is not responding, the host will stop non-virtual machine I/O but will not stop VM I/O. VMs can be migrated to a different host that is not experiencing the problem. If a datastore is flagged as PDL, the host will stop all virtual machine I/O and power off the VMs. vSphere HA (if configured) will try to migrate and start the VMs on a host that is not showing PDL for that device, if a host is available.

Storage Policies and VASA

Storage policies can be created for virtual machines to provide a set of rules to govern the storage that virtual machines are placed on. Storage policies are created and managed under the Policies and Profiles view from the Home menu. These policies can leverage host-based "common rules" or datastore-based rule sets. Common rules pertain to services offered by the host, such as encryption and Storage I/O Control, while datastore rules can include datastore tags, vSAN, and VVol settings. Storage policies will be covered in more detail in the section "Configuring Software-Defined Storage" later in this chapter.

Storage arrays can communicate with vSphere using vSphere APIs for Storage Awareness, or VASA. While vSphere includes some APIs, many storage vendors have additional APIs available. Using VASA, arrays can report on their performance characteristics and health and pass events back to vCenter. In return, vCenter can use VASA to determine if the storage array meets the requirements of a particular storage policy. Storage arrays using VASA are registered as storage providers on the Configure tab of each vCenter Server (Figure 4.20).

FIGURE 4.20 Storage providers
Screenshot_228

If a path to a device is lost, it will be shown as dead on the host and on the datastore (Figure 4.21 and Figure 4.22).

FIGURE 4.21 Dead path as seen on the host adapter
Screenshot_229
FIGURE 4.22 Dead path as seen from the datastore
Screenshot_230

Relevant events will also be shown in the host and datastore event logs (Figure 4.23 and Figure 4.24).

FIGURE 4.23 Path messages in the host events
Screenshot_231
FIGURE 4.24 Path messages in the datacenter events
Screenshot_232

Configuring and Upgrading VMFS and NFS

Starting with version 4.1, VMware vSphere offers a set of instructions to offload some operations to the storage array. The Storage APIs - Array Integration (VAAI) depend heavily on support from the storage vendors. While vSphere ships with many VAAI "primitives," or commands, some arrays may provide limited or no support for the included primitives and might require software installation from the vendor. Some arrays might also require configuration on the array side before they will respond to any primitives.

The VAAI operations for block storage include Atomic Test & Set (ATS), which is called when a VMFS volume is created and when files need to be locked on the VMFS volume; Clone Blocks/Full Copy/XCOPY, which is called to copy or move data; Thin Provisioning, which instructs hosts to reclaim space on thin-provisioned LUNs; and Block Delete, which allows the SCSI command UNMAP to reclaim space. When VAAI is available and working, the hosts will demonstrate slightly lower CPU load and less storage traffic. For example, without VAAI, copying a VMDK requires each block to be read by the host and then written back to the array. The Full Copy primitive instead instructs the array to manage the copy process, and no I/O for the copy is sent to the host.
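You can check per-device VAAI support and manually reclaim space from the host command line. A minimal sketch; the device ID and datastore name are examples:

esxcli storage core device vaai status get --device=naa.600a0b80005ad1d7   # shows ATS, Clone, Zero, and Delete support for the device
esxcli storage vmfs unmap --volume-label=ISCSI-DS01                        # manually issue UNMAP to reclaim space on a VMFS datastore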

The VAAI primitives are enabled by default, but they can be disabled (Figure 4.25) from the client using the Advanced System Settings menu on each host.

FIGURE 4.25 Disabling VAAI primitives
Screenshot_233

In the Advanced System Settings, you need to set a value of 0 for three options:

  • VMFS3.HardwareAcceleratedLocking

  • DataMover.HardwareAcceleratedMove

  • DataMover.HardwareAcceleratedInit

This should only be performed after talking with VMware and your storage vendor's support teams. There may also be updated primitives available from your storage vendor to be installed on each host.
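For reference, the same settings can be changed per host with esxcli; again, this is a sketch to use only after consulting support:

esxcli system settings advanced set --option=/VMFS3/HardwareAcceleratedLocking --int-value=0    # disable ATS
esxcli system settings advanced set --option=/DataMover/HardwareAcceleratedMove --int-value=0   # disable Full Copy
esxcli system settings advanced set --option=/DataMover/HardwareAcceleratedInit --int-value=0   # disable Block Zeroing
esxcli system settings advanced list --option=/VMFS3/HardwareAcceleratedLocking                 # verify the current value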

There are no included primitives for NAS servers, but there is a framework in place to support vendor-supplied primitives, including Full File Clone (similar to the block Full Copy, except the NFS server copies the whole file rather than copying block by block), Reserve Space, and a few others. Reserve Space is useful because NFS servers typically store VMs as thin-provisioned on the actual array, regardless of the VMDK settings. When the Reserve Space primitive is used, the array will allocate the space for the VMDK at creation.

The NAS primitives need to be obtained from the storage vendor. You should follow the vendor's instructions for installing, but typically there will be a vSphere Installation Bundle (VIB) to install on each host using the following command:


esxcli --server=server_name software vib install -v|--viburl=URL

You cannot typically disable the VAAI VIB, but you can remove it if needed by using this command on the host command line:


esxcli --server=server_name software vib remove --vibname=name

Once you are connected to an array and have an MPP, SATP, and PSP selected and VAAI in use, you are ready to create a VMFS datastore on one of the LUNs presented by the array. VMFS is VMware's file system, created to allow multiple hosts to have access to the same block storage. Once you format the LUN on one host, rescanning the storage on the other hosts in the cluster will allow them to see and access the new datastore.

Once a VMFS datastore is created, it can be resized in one of two ways: either by adding an additional LUN as an "extent" or by changing the size of the underlying LUN and using an "extend" operation to resize the VMFS datastore. An extend operation is preferred, as keeping a 1:1 relationship between LUNs and datastores is simpler. However, if your storage array does not support changing the size of a LUN on the fly or has a LUN size limitation, then extents might make more sense. VMFS supports datastores of up to 64 TB. If your storage array only supports 32 TB LUNs, you would need two LUNs to make a maximum-size datastore. If you do use extents, it is important to ensure that the LUNs used are as identical as possible for performance and reliability; in addition, all LUNs for a datastore must have the same sector format, either 512e or 512n. Note that losing one of the LUNs that is an extent of a datastore will prevent access to the datastore.

NOTE

VMware has periodically updated VMFS over the years. New with vSphere 6.5 is the most recent update, VMFS6. Regardless of the naming convention, vSphere 6.0 is not compatible with VMFS6; only vSphere 6.5 and later can use VMFS6. The two previous versions, VMFS5 and VMFS3, are also compatible with vSphere 6.5, but vSphere 6.5 can only create VMFS5 or VMFS6 datastores. VMFS3 can be accessed but a new VMFS3 datastore cannot be created.

The new VMFS6 datastore format offers several improvements over VMFS5, including automatic space reclamation (using the VAAI UNMAP primitive), SEsparse snapshots for all VMDK files, and support for 4K sector drives, but only in 512e mode. Upgrading a VMFS5 datastore to VMFS6 consists of creating a new VMFS6 datastore and copying the virtual machines over. There is no in-place upgrade to VMFS6.

If your environment is only vSphere 6.5, create VMFS6 datastores. If you are still using previous versions of vSphere and might have 6.0 or 5.5 ESXi servers accessing the datastore, then create VMFS5 datastores. If you still have VMFS3 datastores in your environment, you should replace them with VMFS6 datastores unless there are 6.0 or 5.5 ESXi servers accessing the datastore; then you should upgrade the VMFS3 datastore to VMFS5.
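You can quickly check which VMFS version each datastore is using from the host command line; a minimal sketch:

esxcli storage filesystem list    # lists each mounted volume with its type (VMFS-5, VMFS-6, NFS, and so on) and capacity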

To upgrade a VMFS3 datastore to VMFS5, take the following steps:

  1. Open the upgrade wizard from the Datastore view or by right-clicking the datastore and choosing Upgrade to VMFS-5 (Figure 4.26).

    FIGURE 4.26 The Configure tab of a VMFS3 datastore
    Screenshot_234
  2. Select the datastore to upgrade and click OK (Figure 4.27).

    FIGURE 4.27 Upgrading to VMFS5
    Screenshot_235
  3. Verify that the update was successful on the Summary tab of the datastore (Figure 4.28).

    FIGURE 4.28 Verifying VMFS5 conversion
    Screenshot_236

Configuring VMFS Datastores

VMware vSphere allows for the creation of datastore clusters to provide pooling of storage resources. Storage DRS, which allows for VMDK files to be migrated between datastores automatically to balance space and storage I/O, requires the use of datastore clusters. When creating a datastore cluster, you should ensure that the datastores' characteristics are similar. All hosts should have access to all datastores in the cluster, and the performance (latency, spindle speed) and reliability characteristics (RAID, multipathing) should match. While both VMFS and NFS are supported for datastore clusters, you cannot mix and match VMFS and NFS datastores in one cluster.

You can create a datastore cluster from the Datastore view by right-clicking the data center object and choosing Storage ➢ New Datastore Cluster. Please note that Storage DRS is enabled by default, and on the Automation screen you will see that the automation level is set to Manual by default.

As with compute DRS, this means recommendations will be made but no VMDKs will move without user intervention. You can change this to Fully Automated if you would like the VMDKs to be migrated according to the settings and rules you create. You can also set individual automation levels for space balance, I/O balance, rule enforcement, policy enforcement, and VM evacuation. So with a cluster default of Manual, you could set changes to happen automatically when an I/O imbalance is detected on the datastores but only recommend or alert for any other trigger.

The third screen of the New Datastore Cluster wizard lets you set the I/O and space triggers for the automation, which defaults to 80 percent for space used (Figure 4.29). When a datastore hits 80 percent utilization, Storage DRS will either recommend or start moving VMDKs to free up space on the datastore.

The default I/O trigger is 15 ms latency before moves are triggered. This figure is taken from the VMObservedLatency performance value, which measures round-trip I/O. Storage DRS I/O has no direct relationship with Storage I/O Control (SIOC), which will be covered in more detail later. While SIOC is responsible for the individual performance of VMs and how the available performance is distributed between VMs on a datastore, Storage DRS I/O is responsible for maintaining performance balance between datastores in a datastore cluster. Storage DRS latency settings are intended to provide a balance over time; SIOC is there to adjust real-time performance storage I/O of virtual machines.

FIGURE 4.29 Storage DRS default settings

Once the datastore cluster is created, you can edit these settings, including virtual machine rules, from the Storage DRS menu of the Configure tab for the datastore cluster. If a virtual machine has multiple VMDKs, Storage DRS will attempt to keep them together. However, you can use VM Overrides to keep a VM's VMDKs together or apart. You can also disable Storage DRS for a VM.

Under the Rules menu, you can create VMDK anti-affinity rules to keep two VMDKs from being stored on the same datastore (such as the data drives for two clustered database servers) or VM anti-affinity rules to prevent virtual machines from being placed on the same datastore.

Datastore clusters also enable the ability for datastores to be placed in maintenance mode, which can be performed from the Actions menu of the datastore (Figure 4.30). Maintenance mode can be used when replacing a datastore, changing the multipathing, or performing any other potentially disruptive maintenance. When a datastore is placed in maintenance mode, Storage DRS can automatically move affected VMs and VMDKs off the datastore using the same rules, or the files can be manually moved. All registered VMs and VMDKs must be moved off the datastore before it will enter maintenance mode.

FIGURE 4.30 Placing a datastore in maintenance mode
Screenshot_238

EXERCISE 4.3 Create and configure a new datastore cluster.

Requires two identical LUNs and a VM on one of them.

  1. Connect to the host using the vSphere web client and open the Datastore view.

  2. Create a datastore cluster from the Datastore view by right-clicking the data center object and choosing Storage ➢ New Datastore Cluster.

    Screenshot_239
  3. Set a name for the new datastore cluster and leave Storage DRS On.

    Screenshot_240
  4. Leave Storage DRS set for manual mode.

    Screenshot_241
  5. The third screen of the New Datastore Cluster wizard lets you set the I/O and space triggers for the automation. Set these to 90% and 10 ms.

    Screenshot_242
  6. Select the cluster your hosts are in.

    Screenshot_243
  7. Pick your two identical datastores to add to the cluster. Click Next and then Finish.

    Screenshot_244
  8. Edit the new cluster and set the rule enforcement automation level to Fully Automated. This will automatically move VMs when a rule is triggered.

    Screenshot_245
    Screenshot_246
  9. Add a VM override for one of the virtual machines, setting the Keep VMDKs Together option to No. This will allow Storage DRS to move one VMDK at a time for that virtual machine.

    Screenshot_247
    Screenshot_248
  10. Set one of the datastores of the cluster to Maintenance Mode.

    Screenshot_249
  11. Apply the recommendations.

    Screenshot_250

Raw Device Mapping and Bus Sharing

Host storage is not the only use for LUNs; virtual machines can also be given direct access to LUNs using raw device mapping (RDM). RDMs are used for some clustering solutions such as MSCS, where one of the cluster nodes is either a physical server or a virtual machine on a separate host. RDMs are also used where tools or applications in a virtual machine require direct access to the underlying storage, such as taking advantage of array-based snapshots.

When an RDM is attached to a virtual machine, the vCenter storage filters prevent that LUN from being available to hosts for a VMFS datastore. The RDM is dedicated to the virtual machine it is assigned to, but all hosts the virtual machine can run on still need access to the LUN so that the virtual machine can migrate to (vMotion) or restart on (HA) different hosts. There are two modes for RDMs: physical mode, where SCSI commands from the VM are passed directly to the storage, and virtual mode, which provides functionality such as vSphere snapshots and advanced file locking. Either mode requires a VMFS volume to hold a mapping file for the RDM.
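The mapping file the two modes rely on can also be created manually with vmkfstools. This is a minimal sketch with example device and file names, not the exact procedure the GUI wizard follows:

vmkfstools -r /vmfs/devices/disks/naa.600a0b80005ad1d7 /vmfs/volumes/DS01/ClusterVM/quorum-rdm.vmdk    # virtual compatibility mode RDM
vmkfstools -z /vmfs/devices/disks/naa.600a0b80005ad1d7 /vmfs/volumes/DS01/ClusterVM/quorum-rdmp.vmdk   # physical compatibility mode RDM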

When adding an RDM, you can change the location of the mapping file, which is useful if the virtual machine is on an NFS datastore. The sharing options (Figure 4.31) will enable or disable multi-writer, or simultaneous write protection. This protects against data corruption by default by blocking virtual machines from opening and editing the same file. Change this setting if directed to by your application vendor.

FIGURE 4.31 VMDK sharing options
Screenshot_251

Virtual machines can also have access to the same VMDK files using SCSI Bus Sharing. You can enable or prohibit multiple virtual machines writing to the same VMDK by combining VMDK sharing and SCSI Bus Sharing options.

SCSI Bus Sharing has two options (Figure 4.32): Virtual, where VMs must reside on the same host to share VMDKs, and Physical, where VMs on different hosts can share VMDKs. Each virtual machine needing to access the same VMDK needs the same Bus Sharing and VMDK sharing options. Make sure the shared VMDK is Thick Provisioned Eager Zeroed, and please note that snapshots are not supported for VMs configured for Bus Sharing. If you use Bus Sharing with RDMs for Windows Server Failover Clustering solutions, you will be able to vMotion clustered VMs.

FIGURE 4.32 Setting the SCSI Bus Sharing property
Screenshot_252

EXERCISE 4.4 Add an RDM to a virtual machine.

Requires a virtual machine and an unused LUN.

  1. Connect to vCenter using the vSphere web client and open the VM and Templates view.

  2. Open the settings of the virtual machine, select RDM Disk from “New Device,” and click Add.

    Screenshot_253
  3. Select the correct LUN from the list.

    Screenshot_254
  4. Set the RDM to use virtual mode.

    Screenshot_255
  5. Click OK to complete.

Configuring Software-Defined Storage

A key component of VMware's Software Defined Data Center (SDDC) vision is virtualized storage, which for vSphere takes the forms of virtual storage area network (vSAN) and Virtual Volumes (VVols). VMware's vSAN technology uses local SSD and (optionally) HDD storage to create a distributed storage pool available to all the hosts in a cluster. VVols is a software layer to abstract existing physical arrays and create a framework for ESXi hosts that is optimized for virtual workloads. While vSAN is VMware software to aggregate local host storage, VVols is a framework that requires support from the array vendor to implement. Both solutions provide storage profiles to allow virtual machines a set of rules to determine where they should be placed.

Virtual Storage Area Network

For a working vSAN cluster, you need at least three ESXi hosts managed by vCenter with dedicated local storage consisting of at least one SSD and one HDD, or two SSD drives. Each host needs to have a VMkernel adapter enabled for Virtual SAN traffic, and vSAN needs to be licensed. The licenses for vSAN are purchased separately from vSphere, and there are currently five versions (Standard, Advanced, Enterprise, ROBO Standard, and ROBO Advanced). The important differences between the license levels are that while all levels offer all-flash storage capabilities, Advanced adds deduplication and compression and RAID-5/6 erasure coding, and Enterprise adds stretched clusters and data-at-rest encryption.

Each host in the vSAN cluster will have a storage provider created for it and added to vCenter. This allows vCenter to communicate with the vSAN components, receive the capabilities of the datastore, and report virtual machine requirements. However, vCenter will only use one storage provider/host at a time for a given vSAN cluster. If something happens to the active host, another host will be selected to be the active storage provider.

Creating and Configuring a vSAN Cluster

You create a vSAN cluster by configuring it under the vSAN ➢ General menu from the Configure tab of the cluster (Figure 4.33).

FIGURE 4.33 Creating a vSAN datastore on a cluster
Screenshot_256

The first screen of the Configure vSAN wizard (Figure 4.34) has you configure the basic capabilities of the vSAN, including Services (Deduplication and Compression, and Encryption) and Fault Domains and Stretched Cluster. You can also configure a two-host vSAN cluster, which is intended for remote offices; however, it actually still requires three hosts-two hosts contributing storage at the remote office and a witness host at the main site.

The second screen of the wizard (Figure 4.35) lets you check the hosts in the cluster to verify that they have VMkernel ports configured for vSAN traffic.

Once the network settings have been verified, you can set which local host disks to use for vSAN. The Claim Disks screen (the third one of the wizard) will try to allocate disks by speed (SSD/HDD) and size. Hosts will need one SSD claimed for cache and at least one disk (SSD or HDD) claimed for capacity. Once claimed, the drives will not be available for anything else, which means you can't boot from a disk used for vSAN. Disks need to be identified by the host as SSD to be used for cache and as local to be used by vSAN at all.

FIGURE 4.34 Configuring vSAN capabilities
Screenshot_257
FIGURE 4.35 vSAN network validation
Screenshot_258

If a disk is not being properly identified as local or SSD (or both), you can use the All Actions menu for that disk in the Storage Devices menu of the host to flag it as local and/or SSD (Figure 4.36).

FIGURE 4.36 Changing the local and/or SSD flags for a disk

Host disks claimed by vSAN are formed into groups. Each host can have up to five disk groups, and each group must have one SSD disk for cache and at least one and up to seven capacity disks. You have the option of allowing vSAN to create disk groups automatically, or you can manually associate specific cache disks with specific capacity disks (Figure 4.37).
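Disk group membership can be reviewed per host from the command line; a minimal sketch:

esxcli vsan cluster get    # shows whether the host is participating in a vSAN cluster and its role
esxcli vsan storage list   # lists each claimed disk, whether it is a cache or capacity tier device, and its disk group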

FIGURE 4.37 Claiming disks for the vSAN cluster
Screenshot_260

After claiming disks, make sure all settings are correct (Figure 4.38) before clicking Finish.

FIGURE 4.38 Ready to complete the vSAN configuration
Screenshot_261

After your vSAN cluster is configured, you can use the Configuration Assist tab (Figure 4.39), which is also available from the Configure tab of the cluster, to check the configuration, display warnings, and troubleshoot issues. Configuration Assist even includes the ability to change host networking and claim disks from within the assistant.

FIGURE 4.39 Configuration Assist
Screenshot_262

You can configure fault domains (Figure 4.40) to define blade chassis, rack boundaries, or any other grouping of hosts. vSAN will not place redundant copies of data within the same fault domain, so the loss of an entire fault domain will not affect data integrity.

FIGURE 4.40 VSAN fault domain setup
Screenshot_263

Creating an iSCSI Target from vSAN

A new feature of vSAN is the ability to present part of the vSAN datastore as an iSCSI target to physical servers. Similar to other iSCSI target services, you set aside part of the vSAN storage to serve as a LUN, then provide access to the LUN via one of the vSAN cluster hosts. The LUN receives a storage policy just like a VM and can use all of the vSAN capabilities.

During configuration (Figure 4.41), you choose the VMkernel port that will be responding to iSCSI requests. You can use an existing VMkernel adapter, but a better practice would be creating a new, dedicated VMkernel with a VLAN dedicated to iSCSI.

FIGURE 4.41 Enable vSAN iSCSI target
Screenshot_264

To provide multipath access, use the IP address of multiple vSAN hosts when configuring the initiator. CHAP support is provided for authentication if needed. You will need to create an initiator group containing the IQN of the iSCSI initiators and associate the initiator group with the LUN it will be accessing.

Important iSCSI Target Terminology

iSCSI target service Provides the ability for a vSAN cluster to serve part of its data as a LUN to iSCSI initiators on physical servers

iSCSI initiator Client of an iSCSI target

initiator group List of server IQNs that can access the iSCSI target service

iSCSI target Part of vSAN datastore presented as an iSCSI LUN

Monitoring vSAN

Monitoring of vSAN can be accomplished from the Monitor tab of the vSAN cluster. Monitoring options include Health, Capacity, Resynching Components, Virtual Objects, and Physical Disks. There are also proactive tests you can run on your vSAN cluster.

  • Health Similar to Configuration Assist, this reports the results of the Health Service, which periodically runs tests on the vSAN cluster.

  • Capacity The capacity overview (Figure 4.42) shows the total amount of space used on the drive and the overhead consumed. You can also see deduplication and compression statistics, including overhead, saved percentage, and ratio.

FIGURE 4.42 vSAN capacity overview
Screenshot_265

The capacity screen will also display object information about the datastore by object type or data type, including total storage used and percentage of the total by type (Figure 4.43).

FIGURE 4.43 vSAN capacity by object
Screenshot_266
  • Resynching Components Displays the progress of cluster changes, including changing the storage policy used by a virtual machine, hosts going into maintenance mode, and recovering a host from a failure.

  • Virtual Objects Lets you view virtual machines on the vSAN and details of their vSAN usage.

  • Physical Disks Lets you view statistics and properties of the physical disks in use by the vSAN.

The Performance tab of the vSAN cluster reports the results of the Virtual SAN performance service. This service is disabled by default and can be enabled using the Health and Performance menu of the Configure tab (Figure 4.44). With the service enabled, you can report on IOPS (input/output operations per second), throughput, and latency for the cluster, virtual machines, and hosts.

FIGURE 4.44 Enabling the vSAN performance service

For advanced performance analysis and monitoring of vSAN, you can use the Ruby vSphere Console (RVC) on your vCenter server, which lets you run the VMware Virtual SAN Observer. The Observer provides advanced details about disk groups, CPU, and memory usage. When working with VMware technical support, you can export a statistics bundle from vSAN Observer using the following command, entered on a single line.


vsan.observer <cluster> --run-webserver --force --generate-html-bundle /tmp --interval 30 --max-runtime 1

Or you can create a full raw statistics bundle using


vsan.observer <cluster> --filename /tmp/vsan_observer_out.json

Virtual Volumes

VMware's Virtual Volumes implementation requires one host managed by vCenter, a storage array or NAS that is compatible with Virtual Volumes, and the storage vendor's VASA storage provider added to vCenter. After the array is configured to present a storage container to the host, the ESXi host can then create a Virtual Volumes datastore for that storage container (Figure 4.45).

FIGURE 4.45 Creating a Virtual Volumes datastore

NOTE

The storage presented is not formatted with VMFS; rather, data is stored directly on the storage array.

The storage container can be configured to present a variety of features and performance tiers that can be grouped by Virtual Volumes into different storage profiles. Using Storage Policy-Based Management (SPBM), virtual machines can be configured to consume the storage resources they need.

When a virtual machine is created on or migrated to a VVol datastore, there are multiple VVols created:

  • A Config VVol will hold the VMX (.vmx) file, log files, and virtual disk descriptor file.

  • A Data VVol will be created for each virtual disk (the -flat file).

  • A SWAP VVol will be created for the VM swap file at power on.

  • If the VM has snapshots, there will be a Data VVol for each snapshot VMDK.

  • If the VM has snapshots, there will be a Mem-VVol for each memory snapshot.

Snapshots for virtual machines stored on a VVol datastore are still created by the vSphere client, but the snapshots are managed by the storage provider, not by the host running the virtual machine.
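
As a study aid, this short Python sketch (hypothetical, not anything vSphere exposes) enumerates the VVols you would expect for a virtual machine based on the list above:

# Illustrative only: list the VVols expected for a VM on a VVol datastore.
def expected_vvols(num_disks, powered_on, num_snapshots, num_memory_snapshots):
    vvols = ["Config-VVol (.vmx, logs, virtual disk descriptors)"]
    vvols += [f"Data-VVol for virtual disk {i + 1}" for i in range(num_disks)]
    if powered_on:
        vvols.append("Swap-VVol (created at power on)")
    vvols += [f"Data-VVol for snapshot {i + 1} VMDKs" for i in range(num_snapshots)]
    vvols += [f"Mem-VVol for memory snapshot {i + 1}" for i in range(num_memory_snapshots)]
    return vvols

# A powered-on VM with two disks and one snapshot that includes memory:
for vvol in expected_vvols(num_disks=2, powered_on=True, num_snapshots=1, num_memory_snapshots=1):
    print(vvol)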

These VVols are just constructs for the storage server or array to use when storing those file types; a VVol is not an object you can view in the vSphere GUI. Virtual machines still see the VMDK as a local SCSI disk; there is no change from the virtual machine's point of view. Similarly, the vSphere UI, host CLI, and PowerCLI all see the virtual machine and its files the same way as a virtual machine stored on any other datastore type. The key differences of how the files are stored are all handled by the storage array or storage server.

Key VVols Terminology

VASA storage provider The API mechanism for vSphere and the storage array to manage the storage consumption for Virtual Volumes. vSphere 6.5 introduced support for VASA 3.0, which includes data protection and disaster recovery capabilities.

storage container Storage presented by the storage array for Virtual Volumes usage.

Virtual Volumes datastore Datastore framework used by vSphere to access the storage container. There is a 1:1 ratio of storage container to Virtual Volumes datastore.

protocol endpoint The Virtual Volumes equivalent of mount points and LUNs. Created on a storage array, protocol endpoints provide an access point to administer paths and policies from the host to the storage system.

storage profile Collection of storage capabilities such as performance and redundancy used to allocate virtual machines the storage they need.

Virtual Volumes are not compatible with RDMs, and vSAN cannot provide a storage container for Virtual Volumes. While a single array can provide both block and NFS storage containers, one storage container cannot span array types; for instance, you cannot have NFS and iSCSI storage in the same storage container.

To create a VVol datastore, you need to be sure your storage array has presented a storage container and has protocol endpoints configured. You also need to deploy the vendor's VASA storage provider in vCenter. See the storage vendor's documentation for more information.

EXERCISE 4.5 Configure VVols provider.

Requires an array or server that supports Virtual Volumes. If you're using an array, the LUNs must be available on the hosts.

  1. Connect to vCenter using the vSphere web client, open the Host and Cluster view, and click the vCenter server.

  2. Click the Configure tab, select Storage Providers, and click the green plus sign. Enter the information for your new storage provider.

  3. Once the provider is added, you can create VVol datastores for the storage containers presented. Use the cluster's Actions menu to add a datastore.

  4. Choose VVol.

  5. Name the new datastore and choose the correct container.

  6. Choose the hosts that will access the datastore. You should include all the hosts in a cluster to ensure the most flexible virtual machine placement.

  7. The newly created VVol datastore will appear in any of the relevant datastore lists with a type of VVol.


Storage Policy-Based Management

Storage Policy-Based Management (SPBM) is a method of providing storage capabilities to virtual machines. By creating storage policies that link to different capabilities of the underlying storage, administrators can then attach the storage policies to virtual machines, ensuring that those VMs have the capabilities they need.

Storage policies can include tags applied to datastores, vSAN capabilities, configuration settings, VVol storage capabilities, and other capabilities as passed along by VASA storage providers. Datastore tags are useful for traditional storage that does not interact deeply with vSphere. As datastores are created, they can be tagged with RAID level, relative performance level, or replication level. Storage policies can then be created to leverage those features.

If you create a tagging system with performance levels of Gold for SSD-backed LUNs, Silver for 10K SCSI-backed LUNs, and Bronze for 5400 RPM HDDs, and you further tag any LUNs replicated offsite as Replicated, you can then create storage policies to reflect those capabilities. You might have a Fast Replicated storage policy that requires datastores tagged with both Gold and Replicated. If you then attach that storage policy to a virtual machine, vSphere will ensure that the virtual machine's disks always reside on datastores with the appropriate tags and will alert you if the VM is moved to a noncompliant datastore.
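
A minimal Python sketch of that tag-matching idea (hypothetical datastore names; this is not the SPBM engine) returns only the datastores that carry every tag a policy requires:

# Illustrative only: a datastore satisfies a tag-based policy if it has every required tag.
datastore_tags = {
    "iSCSI-Gold-01": {"Gold", "Replicated"},
    "iSCSI-Silver-01": {"Silver"},
    "NFS-Bronze-01": {"Bronze", "Replicated"},
}

def compliant_datastores(required_tags, tags_by_datastore):
    return [name for name, tags in tags_by_datastore.items() if required_tags <= tags]

# A Fast Replicated policy requires both the Gold and Replicated tags.
print(compliant_datastores({"Gold", "Replicated"}, datastore_tags))  # ['iSCSI-Gold-01']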

Storage policies used with vSAN and Virtual Volumes capabilities do not depend on a vSphere administrator to manually assign and maintain the correct tags to a datastore. Rather, the capabilities are supplied using Storage API calls and available to choose when creating policies.

NOTE

Tag-based storage policy rules are created by virtualization administrators to differentiate datastores, and VVol storage policy rules pull available capabilities from the storage array. However, vSAN-based storage policy rules create the capabilities when the storage policies are applied to virtual machines.

Enabling and Configuring Storage I/O Control

Storage I/O Control (SIOC) provides mechanisms for deciding how storage I/O is allocated to virtual machine VMDKs, especially in times of contention. Storage I/O Control can be used to limit the IOPS available to specific VMDKs at all times and to prioritize VMDKs when resources are scarce. Without SIOC, one virtual machine can grab an excessive share of I/O and starve every other VMDK on the datastore. With SIOC enabled (and no other settings changed), just before the datastore's I/O maxes out, the virtual machines on the datastore begin receiving equal access to the storage, reducing the impact of any “noisy neighbor.”

Storage I/O Control is disabled by default and must be enabled on each datastore where you want to use it, using the Configure Storage I/O Control wizard (Figure 4.46). The wizard can be launched with the Edit button under Datastore Capabilities in the General section of the datastore's Configure tab.

FIGURE 4.46 Enabling Storage I/O Control

SIOC defaults to the Congestion Threshold triggering at 90 percent of peak throughput, or you can choose to set a millisecond threshold instead (which defaults to 30 ms). When the congestion threshold is reached, the datastore will evenly distribute storage I/O among the VMDKs on it unless the default shares for those VMDKs have been modified. Because the I/O is balanced between VMDKs and not virtual machines, a VM with two VMDKs on the same datastore will get twice the I/O of a single-VMDK VM.

When you enable Storage I/O Control, you have the option of excluding I/O statistics from Storage DRS (SDRS). This is useful if you have SDRS configured but your storage array automatically adjusts VMs or LUNs for performance, and you want the statistics used by SIOC for performance throttling but not by SDRS for virtual machine placement.

If you are using both SIOC and SDRS I/O load balancing, consider how they work together. Storage DRS is intended to work over a longer period of time, gradually balancing I/O across datastores, while SIOC is intended for short-term fixes during contention. We suggest setting the Storage DRS I/O latency threshold below the SIOC congestion threshold so that gradual imbalances are corrected before contention is reached and SIOC steps in.

Using the settings of the VM, you can adjust the shares and set an IOPS limit for each VMDK (Figure 4.47). These settings take effect only when the VMDK is on a datastore with Storage I/O Control enabled, and the shares setting takes effect only when the congestion threshold is reached.

FIGURE 4.47 Setting the shares and limits for a VMDK

The virtual machine must be powered off to change either setting; otherwise, you will see the error shown in Figure 4.48.

FIGURE 4.48 Power state notification

You can view the storage shares and IOPS limits from the VM tab of the datastore.

You can also create storage policies to apply limits and shares to VMs. There are three default storage policy components you can build a storage policy with, or you can create a custom component. Each of the default storage policy components includes an IOPS limit, and using a storage policy also exposes an IOPS Reservation value (Figure 4.49) that is not available from the virtual machine settings in the GUI.

FIGURE 4.49 Storage policy showing storage I/O reservations

Even when using storage policies, a virtual machine needs to be power cycled after changing limit or reservation values for the new values to take effect.

Storage I/O Control uses the shares assigned to the virtual machines and the total number of shares allocated to VMDKs on the datastore to set the queue slots for the virtual machines.

SIOC also adjusts the I/O queue depth of each host so that the I/O available to a host is proportional to the total VM shares on that host.

For example:

Datastore: iSCSI01 with Storage I/O control enabled at 90 percent

  • Host A:

    • Virtual machine Tiny01 with one VMDK on iSCSI01 with 1000 shares
  • Host B:

    • Virtual machine Tiny02 with one VMDK on iSCSI01 with 500 shares
    • Virtual machine Tiny03 with one VMDK on iSCSI01 with 1500 shares
  • Host A VM shares for iSCSI01: 1000

  • Host B VM shares for iSCSI01: 2000

  • Total shares: 3000

When I/O for iSCSI01 reaches 90 percent of peak throughput, SIOC will adjust the I/O queue depths of the hosts until the ratio between Host A and Host B is 1:2. On Host B, the virtual machines will have their queue slots adjusted until the ratio of slots between Tiny02 and Tiny03 is 1:3.
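
The math behind those ratios can be sketched in Python (an illustration of the proportional arithmetic only, assuming a device queue depth of 96 purely for the example; this is not how ESXi implements SIOC):

# Illustrative only: split queue slots proportionally to shares.
def proportional_split(total_slots, shares_by_name):
    total_shares = sum(shares_by_name.values())
    return {name: total_slots * shares / total_shares
            for name, shares in shares_by_name.items()}

# Host-level split: Host A has 1000 shares of VMs on iSCSI01, Host B has 2000.
print(proportional_split(96, {"Host A": 1000, "Host B": 2000}))  # {'Host A': 32.0, 'Host B': 64.0}

# Within Host B, Tiny02 (500 shares) and Tiny03 (1500 shares) split Host B's slots 1:3.
print(proportional_split(64, {"Tiny02": 500, "Tiny03": 1500}))   # {'Tiny02': 16.0, 'Tiny03': 48.0}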

SIOC's queue depth adjustment happens proactively at the congestion threshold configured for the datastore and affects each host in proportion to the VM shares in play. Hosts also have a feature called Adaptive Queuing that will cut the queue depth of the storage in half if the storage array reports that it is busy or has a full queue. The per-VM limit set by SIOC is enforced by an I/O filter.

Summary

There are many options for storing vSphere virtual machines, from local disks to traditional arrays. vSphere 6.5 adds modern twists to those options with vSAN and Virtual Volumes, which bring new capabilities and introduce the ability to set storage policies on your VMs to define the capabilities they will consume.

With vSAN, VMware has given vSphere environments the ability to have shared storage without adding third-party hardware or software, along with features such as deduplication and compression to ensure that you are making the most efficient use of your hardware. With Virtual Volumes, traditional arrays and storage servers have a much more flexible way of presenting storage to ESXi hosts. With all of these options, you want to make sure there is no single point of failure, to ensure reliability.

Exam Essentials

Understand VMware vSAN and how it is implemented. One of VMware's flagship features for vSphere, vSAN keeps adding features. You should be aware of the requirements for vSAN itself and for features such as deduplication and compression and All-Flash.

Know vSAN terminology and configuration settings. Know how RAID-5/6 erasure coding differs from RAID-1, how each is implemented, and the capacity implications of each. You should know how and why to create disk groups and fault domains. You should be able to create an iSCSI target and know the requirements for doing so.

Describe networking and multipathing for block and NFS storage. You need to know the PSA framework, what each component does, and how to configure the different settings. You should be able to describe how vSphere networking supports the different block and file storage technologies and know how to configure vSphere networking to support storage.

Understand policy-based storage management. Know why storage policies are used and how to create them for the different types of storage. Know when and how to apply different policies to different VMs.

Know how to add an RDM to a virtual machine. Be able to add an RDM to a virtual machine and know the difference between physical and virtual mode. Also be able to share VMDKs between virtual machines and know the different options for that.

Understand VVols and their requirements. Know how to create a Virtual Volume datastore for a host and the requirements. Understand how a storage array stores the files and the benefits of doing so.

Be able to describe the differences between NFS v3 and NFS v4.1. Be able to add NFS datastores to many hosts and know the requirements. Be able to configure Kerberos for the host and datastores.

Review Questions

  1. What should be considered before creating a shared VMFS6 datastore?

    1. Number of cache disks.
    2. VASA support for the array.
    3. All hosts are on version 6.5.
    4. NFS 4.1 support on the NAS.
  2. Which is the simplest method to upgrade a VMFS3 datastore to VMFS6?

    1. Create a new VMFS6 datastore and use Storage vMotion to move the VMs.
    2. Upgrade the VMFS3 datastore to VMFS5, then upgrade the datastore to VMFS6.
    3. Upgrade the VMFS3 datastore to VMFS6.
    4. Create a new VMFS6 datastore and use SIOC to move the VMs.
  3. Which storage technologies require ESXi hosts to maintain the file and folder structure? (Choose two.)

    1. VVol
    2. iSCSI LUN
    3. Local disks
    4. NFS
  4. Which options allow boot from SAN over FCoE? (Choose three.)

    1. FBFT
    2. FBPT
    3. iBFT
    4. HBA
    5. IQN
  5. What is required to support iSCSI storage arrays?

    1. HBA
    2. CNA
    3. CHAP
    4. IQN
  6. Where should CHAP be configured if you have multiple arrays with multiple capabilities?

    1. FCoE target settings
    2. iSCSI initiator adapter settings
    3. iSCSI target settings
    4. FCoE initiator adapter settings
  7. Which would prevent the use of deduplication and compression on a vSAN cluster? (Choose two.)

    1. SIOC
    2. Network card with iBFT
    3. Disk Format 2
    4. Hybrid
  8. Which storage option doesn't support Storage DRS?

    1. NFS v3
    2. NFS v4.1
    3. iSCSI with HBA
    4. iSCSI with software initiator
  9. Which options require multiple vSAN disk groups? (Choose two.)

    1. Two SSD drives and ten HDD drives
    2. Fourteen SSD drives
    3. One SSD drive and seven HDD drives
    4. Seven SSD drives
  10. Which option requires multiple vSAN disk groups to utilize all of the disks?

    1. Eight SSD drives
    2. One SSD drive and seven HDD drives
    3. Three SSD drives and ten HDD drives
    4. Seven SSD drives
  11. Which storage technology uses only local ESXi disks?

    1. NFS
    2. vSAN
    3. FCoE
    4. iSCSI with software initiator
  12. Which storage technology can vSphere leverage to supply storage to physical servers?

    1. NFS
    2. vSAN
    3. FCoE
    4. iSCSI with software initiator
  13. What security options are available for TCP/IP-based storage? (Choose two.)

    1. CHAP
    2. Kerberos
    3. TACACS
    4. SSO
  14. Which storage technology can be configured in vSphere to ensure data integrity?

    1. NFS
    2. vSAN
    3. FCoE
    4. iSCSI with software initiator
  15. Which storage technology can vSphere configure to encrypt data at rest?

    1. NFS
    2. vSAN
    3. FCoE
    4. iSCSI with software initiator
  16. Which storage profiles method creates the capabilities on the datastore when the storage policy is applied?

    1. Tagging
    2. VVol capabilities
    3. vSAN capabilities
    4. SIOC components
  17. What storage supports VVols? (Choose two.)

    1. vSAN
    2. NFS
    3. iSCSI
    4. Local disks
  18. What components are required in vCenter to use VVols? (Choose two.)

    1. Storage profile
    2. iSCSI LUN
    3. VMFS datastore
    4. Storage provider
  19. Which step could be taken to improve performance for virtual machines on an NFS v3 datastore during off-peak hours?

    1. Enable SIOC and increase the shares
    2. Increase the write percentage of the cache drive
    3. Enable Storage DRS
    4. Replace the datastore with an NFS 4.1 datastore
  20. Which technologies are supported for booting a host? (Choose two.)

    1. vSAN
    2. iSCSI
    3. TRoE
    4. Fibre Channel