Chapter 6 Allocating Resources in a vSphere Datacenter

2V0-21.19 EXAM OBJECTIVES COVERED IN THIS CHAPTER:

  • Section 1 - VMware vSphere Architectures and Technologies

    • Objective 1.5 - Manage vCenter inventory efficiently
    • Objective 1.6 - Describe and differentiate among vSphere, HA, DRS, and SDRS functionality
    • Objective 1.7 - Describe and identify resource pools and use cases
    • Objective 1.9 - Describe the purpose of cluster and the features it provides
  • Section 4 - Installing, Configuring, and Setting Up a VMware vSphere Solution

    • Objective 4.2 - Create and configure vSphere objects
  • Section 5 - Performance-tuning and Optimizing a VMware vSphere Solution

    • Objective 5.2 - Monitor resources of VCSA in a vSphere environment
  • Section 7 - Administrative and Operational Tasks in a VMware vSphere Solution

    • Objective 7.6 - Configure and use vSphere Compute and Storage cluster options
    • Objective 7.7 - Perform different types of migrations
    • Objective 7.8 - Manage resources of a vSphere environment
    • Objective 7.11 - Manage different VMware vCenter Server objects
    • Objective 7.13 - Identify and interpret affinity/anti affinity rules

This chapter focuses on how resources are allocated to virtual machines in a vSphere 6.7 datacenter. One of the greatest benefits of vSphere virtualization is the ability to efficiently utilize all of the physical resources in a datacenter. In fact, it is common to overcommit resources based on the assumption that in most cases the average virtual machine workload demands will fall within the available capacity of the datacenter. However, when peak conditions require that demand exceed available capacity, a number of controls exist to both manually and dynamically allocate resources to virtual machines in a manner that ensures that critical workloads are receiving the resources they require. This chapter focuses on how to administer those controls and effectively manage available resources.

When an ESXi host is added to a vSphere datacenter the resources of that host become available to virtual machines. When the datacenter consists of multiple hosts, these hosts are often grouped into vSphere clusters for both availability and load balancing purposes. The first half of this chapter will cover the creation and administration of resource pools, which can be used to distribute the aggregated CPU and memory resources of a group of ESXi hosts that have been configured as a vSphere cluster. I will cover the hierarchy that can be established with resource pools, including parent, sibling, and child pools and how their interaction with each other affects resource availability to virtual machines in the cluster. In vSphere 6.5, Custom Attributes were brought back, so we will look at how you can apply Custom Attributes to a resource pool. Since the whole purpose of setting up a resource pool hierarchy is to allocate cluster resources to virtual machines and vApps, we will look at how to create and remove resource pools and populate the pools with appropriate workloads. Finally, we will look at how to control CPU and memory distribution within a resource pool by utilizing shares, reservations, and limits.

Although one of the key concepts of a vSphere cluster is the aggregation of resources across all ESXi hosts in the cluster, a virtual machine is still only able to run on one of those hosts at any given time. To achieve the optimum balance of virtual machine performance and efficient resource utilization, vSphere Distributed Resource Scheduler, or DRS, is used. DRS was traditionally developed to handle CPU and memory resources only, much in the way that resource pools do. However, DRS functionality was applied to storage resources beginning with vSphere 5.0. It is important to note that DRS, as it relates to storage resources, is not configured or administered in a vSphere cluster. Rather, a special storage-based cluster called a datastore cluster is created that uses the storage resources of existing ESXi hosts or even vSphere clusters.

In the second half of this chapter we will focus on the creation of these clusters, beginning with how groups of hosts or virtual machines are managed within a given DRS-enabled vSphere cluster. We will then look at how to manage multi-app virtual machines and/or business-critical virtual machines using affinity and anti-affinity rules. A critical concept of DRS is how it operates depending on the level of automation configured, so we will spend some time looking at examples of this functionality. Finally, we will observe how DRS affinity and anti-affinity rules work with ESXi hosts as virtual machines are powered on in the cluster.

Administering and Managing vSphere 6.x Resources

A vSphere 6.x deployment begins with the installation and configuration of ESXi on a physical server. ESXi employs the VMkernel to virtualize the physical resources on that server. In a vSphere environment, there are four core resources: CPU, memory, network, and storage. These resources start out as physical, are virtualized by the VMkernel, and are then made available to virtual machines running on the host. As more and more hosts are brought online, a large amount of decentralized resources begins to accumulate, requiring mechanisms that can provide an efficient way to centrally manage and distribute those resources.

One of the fundamental components of this resource management methodology is the vSphere cluster. A vSphere cluster is a construct that aggregates a number of ESXi hosts so that their underlying resources can likewise be aggregated and distributed as needed to virtual machines and vApps running on the hosts within that cluster.

Figure 6.1 shows a vSphere cluster with two ESXi hosts viewed from a vSphere 6.7 Web Client using HTML5.

FIGURE 6.1 A vSphere 6.7 hierarchy containing a vSphere cluster, two ESXi hosts, and three virtual machines
Screenshot_345

To see what resources are available within the cluster, we could take a look at the Summary page for the cluster, as shown in Figure 6.2.

FIGURE 6.2 The Summary page for a vSphere 6.7 cluster showing Free, Used, and Capacity Resources
Screenshot_346

As you can see, the resources for the two ESXi hosts have been aggregated into a single block of resources that can now be monitored for utilization. However, there are many resource management use cases that cannot be met without further refinement. Additionally, if multiple departments have contributed budget resources to this cluster, there is no way to ensure that any given department is guaranteed the resources it paid for. These are just a couple of the use cases that can be addressed through the use of resource pools.

Resource Pools Solve an Organization's Workload Issues

An organization has recently run into an issue where several business-critical workloads have reported severely degraded performance. These workloads are vital to the company, and performance degradation can be directly tied to lost revenue. Upon inspection, it is determined that the affected workloads are memory intensive. There are other noncritical memory-intensive workloads that have continued to perform well during this period.

The organization decides to implement resource pools in order to resolve the problem. They create a resource pool for critical workloads and one for noncritical workloads. They utilize shares in the resource pools, setting the share value to high for the critical pool and low for the noncritical pool.

After two weeks, workload administrators report that no further performance degradation issues have been reported.

Configuring Multilevel Resource Pools

The vSphere Resource Management Guide defines a resource pool as “a logical abstraction for the flexible management of resources.” The part regarding logical abstraction is another way of saying that a resource pool can be used to represent an amount of CPU and memory resources that could come from any number of ESXi hosts in a vSphere cluster, with varying amounts of resources pulled from those hosts at any time. The part regarding flexibility is an indication that a resource pool can be easily adjusted to provide more (or less) CPU and/or memory resources, as well as adjusting the priority access to those resources. When resource pools are created as child pools of existing pools, an administrator can create a guaranteed, repeatable way to allocate resources and priority access to resources across a vSphere cluster.

For example, an organization has two departments, Sales and R&D. Both departments have contributed budgetary resources for IT infrastructure. The vSphere cluster has a total available capacity of 10 GHz CPU and 8 GB memory. In real-life environments, this would be a ridiculously small amount of resources, but throughout this book I have used an environment and screenshots identical to what you would see if you go through VMware's Hands-on Labs. The small size also works to our advantage when describing how multilevel resource pools function.

Figure 6.3 displays a resource pool configuration that would meet the needs of the organization. Starting from the top down, we have the actual vSphere cluster. The cluster contains two ESXi hosts, and the resources shown are aggregated. The vSphere cluster and its available resources are referred to as the root resource pool. From this pool, we are able to create child resource pools for each department. These pools are considered child pools because they get resources from the pool immediately above them (in this case the root pool). However, these pools are also considered sibling pools in relation to each other. Sibling pools are at the same level in a resource pool hierarchy and are completely isolated from one another.

When we create these pools, we would allocate the resources each department has budgeted. Workloads that run within these pools are only able to use the resources provided by the pools, unless the Expandable Reservation option has been selected (more on this later).

Let's focus a little more on the workloads themselves. In Figure 6.3, each virtual machine in the Sales resource pool is configured with 2 GHz of CPU and 1 GB RAM. However, just because a virtual machine is configured to use resources does not mean that the virtual machine is guaranteed those resources. In order for a virtual machine to have guaranteed access to CPU and memory resources, those resources must be reserved. To do this, an administrator would go to the configuration settings for the virtual machine and establish reservations for CPU and memory resources. In fact, administrators have the option of setting reservations, limits, and shares for both CPU and memory resources on virtual machines, vApps, and resource pools. Before moving further, let's take some time to ensure a good understanding of these three settings.

FIGURE 6.3 A representation of a vSphere cluster with child resource pools and virtual machines
Screenshot_347

Reservations, Limits, and Shares

A reservation is a guaranteed amount of resources. These resources are allocated to the virtual machine when it is powered on, and the resources cannot be claimed for any other workload, even if the virtual machine is idle. By default, a virtual machine does not have a reservation. So, using Figure 6.3 as an example, let's say that none of the Development virtual machines have a reservation configured. If all three were powered on and running applications, they would likely have to compete for resources since the aggregated amount of resources for the three virtual machines exceeds the amount of resources available in the pool.

I use the term likely because in order for contention to occur, all three virtual machines would need to be active to the point where they exceed the resources in the pool. If one was idle, or if they were all running but using few resources, there would be no contention and no potential performance issues.

In fact, it is this functionality that allows an administrator to fully utilize all of the available resources of a vSphere cluster by overcommitting resources and assuming that not all virtual machines will operate at peak utilization at the same time. Of course, if there is high activity and virtual machine requirements exceed available resources, contention occurs and performance may be impacted. This may be OK for some virtual machines (like our Development virtual machines), but it would not be OK for more mission-critical production virtual machines.

To achieve an optimal balance between utilization and performance, it would be necessary to put some controls in place to ensure that the environment performs as expected. We have been looking at reservations, which are one type of control. We could use a full reservation for extremely mission-critical virtual machines, which would ensure that those virtual machines have complete access to resources at any time. For less critical virtual machines, it might make sense to determine average utilization and set a reservation that ensures that the average usage can be met without contention. That way, the virtual machines have the resources they need under most conditions.

To set a reservation, edit the settings of the virtual machine as shown in Figure 6.4.

FIGURE 6.4 The Edit Settings page for a virtual machine showing the configuration of CPU and memory resources
Screenshot_348

Another control that can be used with resource allocation is the limit. A limit establishes an upper boundary for the amount of CPU and/or memory resources a virtual machine is granted. By default, there is no defined limit, so a virtual machine is limited to the amount of resources allocated to it during initial configuration. For example, our Sales virtual machines were configured with a single CPU and 1 GB of memory. Therefore, the intrinsic limit for these virtual machines is the speed of the CPU in the ESXi host and 1 GB for memory.
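If you prefer to script these settings rather than use the Edit Settings dialog, the same reservation and limit values can be applied through the vSphere API. The following is a minimal pyVmomi sketch, not a definitive procedure; the vCenter address, credentials, and VM name are placeholders, and the numbers shown (a 500 MHz CPU reservation, a 1 GB memory reservation, and a 2 GB memory limit) are purely illustrative.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

# Placeholder connection details; skipping certificate checks is for lab use only.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcsa.lab.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ctx)
content = si.RetrieveContent()

# Locate the VM by name (here, the html5-app VM used later in the exercises).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'html5-app')

# CPU values are expressed in MHz, memory values in MB; -1 means unlimited.
spec = vim.vm.ConfigSpec()
spec.cpuAllocation = vim.ResourceAllocationInfo(reservation=500, limit=-1)
spec.memoryAllocation = vim.ResourceAllocationInfo(reservation=1024, limit=2048)

# Reconfigure the VM; this is the scripted equivalent of the settings in Figure 6.4.
task = vm.ReconfigVM_Task(spec=spec)

Disconnect(si)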

The other control we want to address is shares. A share value determines priority access to resources during times of contention, or when virtual machines are forced to share resources. This distinction is important, because unless there is contention for resources, the share values assigned to virtual machines are not taken into consideration. There are three selectable share levels: High, Normal, and Low. These levels operate at a 4:2:1 ratio, so a single high share would provide four times more resources than a single low share. Figure 6.5 shows the default share values for virtual machine CPU and memory resources.

FIGURE 6.5 The number of shares assigned to a virtual machine depending on the share setting
Screenshot_349

To provide an example, let's say we had three virtual machines in a single resource pool, as shown in Figure 6.6.

FIGURE 6.6 Three virtual machines, each with one vCPU and the default share setting of Normal
Screenshot_350

Under normal operating conditions where no contention exists, each virtual machine would receive 100 percent of the resources it requires and share settings would not be a factor. However, when contention does exist, the share settings in this case would provide approximately 33 1/3 percent of available resources to each virtual machine. Let's further consider that one of the virtual machines is running a mission-critical application and should have higher-priority access to resources. Using the share setting of High, we change the configuration of the middle virtual machine, as seen in Figure 6.7.

FIGURE 6.7 Three virtual machines, the second of which has been elevated to a High share setting
Screenshot_351

Now, should contention occur, the mission-critical virtual machine would receive 50 percent of the available resources, and each of the other machines would receive 25 percent. Shares are dynamic by nature, so as additional virtual machines are powered on, the amount of resources would scale appropriately. An example of this is shown in Figure 6.8, where a fourth virtual machine with a Low share setting has been powered on.

FIGURE 6.8 Four virtual machines, the fourth of which has a Low share setting
Screenshot_352

In this final example, should contention occur, the mission-critical virtual machine would receive approximately 44 percent of the available resources, each of the Normal virtual machines would receive 22 percent, and the new virtual machine would receive 11 percent. These amounts would continue to adjust as other machines were powered on or off.
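The arithmetic behind these percentages is easy to verify with a few lines of code. The sketch below is purely illustrative; it simply assigns the 4:2:1 weights to the High, Normal, and Low levels and computes each virtual machine's slice of a contended resource, reproducing the 50/25/25 and 44/22/11 splits described above.

# Relative weights for the three selectable share levels (4:2:1 ratio).
SHARE_WEIGHTS = {'High': 4, 'Normal': 2, 'Low': 1}

def contention_split(vm_levels):
    """Return each VM's percentage of a contended resource based on its share level."""
    total = sum(SHARE_WEIGHTS[level] for level in vm_levels.values())
    return {vm: round(100 * SHARE_WEIGHTS[level] / total, 1)
            for vm, level in vm_levels.items()}

# Three VMs with one elevated to High (Figure 6.7): 50 / 25 / 25 percent.
print(contention_split({'vm1': 'Normal', 'vm2': 'High', 'vm3': 'Normal'}))

# A fourth VM powered on with Low shares (Figure 6.8): roughly 44 / 22 / 22 / 11 percent.
print(contention_split({'vm1': 'Normal', 'vm2': 'High',
                        'vm3': 'Normal', 'vm4': 'Low'}))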

Taking these concepts into consideration, let's establish that each virtual machine in Figure 6.9 has been configured with reservations in the amounts shown.

If an IT administrator were to power on all three virtual machines in the Sales resource pool, they would power on successfully since the requirements of each virtual machine can be satisfied by the resources from the pool. However, if an IT administrator were to power on all three virtual machines in the Development pool, the third virtual machine would fail to power on. This would occur even though the aggregated resource requirements of the virtual machines are 3 GHz of CPU and 3 GB of RAM, and the resource pool is configured with the same amount of resources. The reason for this is that when a deployment like this is designed, an architect must take into account the memory overhead required for each virtual machine. Those requirements are shown in Figure 6.10.

FIGURE 6.9 A multilevel resource pool deployment with two pools and six virtual machines
Screenshot_353
FIGURE 6.10 The memory overhead for a virtual machine based on various vCPU and memory configurations
Screenshot_354

To resolve this issue, the resources for the pool could be increased, or the Expandable Reservation option could be enabled. If this option is selected and additional resources are required, they can be allocated from a higher-level resource pool if available (in this case the root pool). Since the pool only needs about 78 MB (three 1vCPU, 1 GB VMs with approximately 26 MB of overhead each as seen in Figure 6.10), the requirement could be fulfilled in this way and the third virtual machine could be powered on using resources from the root pool.

The drawback of the Expandable Reservation option is that it can undermine the purpose of the resource pool configuration, since it allows a pool to allocate resources from an upper-level pool. Figure 6.11 shows a configuration similar to Figure 6.9 but with an additional virtual machine added to the Sales pool and with increased CPU reservations on the Development virtual machines. Both resource pools have also been configured to utilize the Expandable Reservation property.

FIGURE 6.11 A virtual machine has been added to the Sales pool, and Expandable Reservations is enabled.
Screenshot_355

Now, an IT administrator powers on the four virtual machines in the Sales pool. Because there are insufficient resources to meet the virtual machines' requirements and the Expandable Reservation option has been enabled, the Sales pool can look to its parent pool to see if resources are available. In this case, 9 GHz of CPU resources and 7 GB of RAM have been reserved by the two pools, leaving 3 GHz of CPU and 3 GB of RAM available. Since the resources are available, they are allocated from the root pool. If an administrator then attempts to power on virtual machines in the Development pool, the first virtual machine would power on. The second virtual machine requires additional CPU resources, but since the Expandable Reservation option is enabled, it can obtain the resources from its parent pool. Since the root pool has 1 GHz of CPU still available, it is allocated to the Development pool and the virtual machine is powered on. However, it is not possible to power on the final virtual machine. Even though enough memory could be allocated, there are no CPU resources available. Every time an administrator attempts to power on a virtual machine in a resource pool, or create a child pool from a resource pool, this same process is repeated. This process is known as resource pool admission control.
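Admission control can be reasoned about as a walk up the resource pool tree. The sketch below is only a conceptual model (it ignores per-VM memory overhead and is not how vCenter implements the check), but it shows how an expandable pool borrows unreserved capacity from its ancestors and why a power-on fails when no ancestor has enough left. The pool sizes are loosely based on the chapter's example and are otherwise arbitrary.

class Pool:
    def __init__(self, name, cpu_mhz, mem_mb, expandable=False, parent=None):
        self.name, self.parent, self.expandable = name, parent, expandable
        self.unreserved = {'cpu': cpu_mhz, 'mem': mem_mb}
        if parent:  # a child pool's reservation is carved out of its parent's capacity
            parent.unreserved['cpu'] -= cpu_mhz
            parent.unreserved['mem'] -= mem_mb

    def available(self, res):
        """Capacity this pool can offer, following Expandable Reservation up the tree."""
        extra = self.parent.available(res) if (self.expandable and self.parent) else 0
        return self.unreserved[res] + extra

    def admit(self, cpu_mhz, mem_mb):
        """Resource pool admission control check for a virtual machine power-on."""
        if self.available('cpu') < cpu_mhz or self.available('mem') < mem_mb:
            return False                      # power-on is rejected
        for res, need in (('cpu', cpu_mhz), ('mem', mem_mb)):
            pool = self
            while need:                       # reserve locally, then draw from ancestors
                take = min(need, pool.unreserved[res])
                pool.unreserved[res] -= take
                need -= take
                pool = pool.parent
        return True

root = Pool('root', cpu_mhz=10000, mem_mb=8192)   # the cluster's root resource pool
sales = Pool('Sales', cpu_mhz=4000, mem_mb=4096, expandable=True, parent=root)
dev = Pool('Development', cpu_mhz=3000, mem_mb=3072, expandable=True, parent=root)

print(dev.admit(2000, 1024))   # True: satisfied from the Development pool itself
print(dev.admit(2000, 1024))   # True: the shortfall is borrowed from the root pool
print(dev.admit(4000, 1024))   # False: not enough CPU left in the pool or its ancestors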

Consider the multilevel resource pool shown in Figure 6.12. An additional set of pools has been configured from the Sales parent pool to allow for virtual machines related to the organization's web storefront to have resources isolated from the rest of the Sales group's machines.

FIGURE 6.12 A multilevel resource pool deployment showing parent, child, and sibling relationships
Screenshot_356

As shown in the figure, if a virtual machine in the RP-SBM pool were powered on and resources were not available in the pool, the resources could be allocated from a parent. So, the RP-SBM pool could get resources from the RP-Sales pool, which in turn could get resources from the root pool. Pools cannot attempt to obtain resources from sibling pools. If insufficient resources were available, the power operation would fail.

Now that we've had an opportunity to explore critical exam concepts around creating a multilevel hierarchical structure; dealing with shares, limits, and reservations; and looking at the impact of the Expandable Reservation property, let's turn to some of the common administrative tasks that should be mastered.

Resource Pool Administration Exercises

Now that you have learned about resource pools and resource allocation mechanisms, let's apply that knowledge in some hands-on exercises. For most of these exercises, you can use either the Flash-based vSphere Web Client or the HTML5 vSphere Client. If you do not have a test environment, I would recommend using one of the Hands-on Labs available from VMware, which can be used for free. If you are using your own environment, I would recommend at least two ESXi hosts added to a vSphere cluster with a couple of virtual machines.

In the first exercise, you are creating a resource pool hierarchy for a hospital. We are assuming a cluster has already been created, labeled Northwest Regional.

EXERCISE 6.1 Create a resource pool

  1. Connect to a vCenter Server using the vSphere Web Client.

  2. Right-click the Northwest Regional vSphere cluster and click New Resource Pool.

    Screenshot_357
  3. Name the resource pool Sales. Define a 4 GHz CPU reservation for the pool. You can choose GHz from the dropdown as shown when using larger resource quantities. Make sure to uncheck the Expandable box in the Reservation Type section.

    Screenshot_358
  4. Define a 4 GB memory reservation in the same manner, again unchecking the Expandable box:

    Screenshot_359
  5. Click OK to complete the Sales resource pool configuration. Your resulting configuration should look like this.

    Screenshot_360
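The same pool can be created programmatically. The sketch below mirrors Exercise 6.1 using pyVmomi; it is a minimal example rather than a production script, and the vCenter address, credentials, and cluster name are assumptions you would replace with your own.

from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl

# Placeholder connection details; skipping certificate checks is for lab use only.
si = SmartConnect(host='vcsa.lab.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the Northwest Regional cluster; its (hidden) root resource pool is the parent.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == 'Northwest Regional')

def allocation(reservation):
    """Non-expandable allocation; CPU is in MHz, memory in MB, and -1 means unlimited.
    The shares value is ignored unless the level is set to 'custom'."""
    return vim.ResourceAllocationInfo(
        reservation=reservation, expandableReservation=False, limit=-1,
        shares=vim.SharesInfo(level='normal', shares=0))

spec = vim.ResourceConfigSpec()
spec.cpuAllocation = allocation(4000)     # 4 GHz CPU reservation, as in Exercise 6.1
spec.memoryAllocation = allocation(4096)  # 4 GB memory reservation

sales_pool = cluster.resourcePool.CreateResourcePool(name='Sales', spec=spec)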

Next, you will add a virtual machine to the Sales resource pool you just created.

EXERCISE 6.2 Add a virtual machine to a resource pool

  1. Connect to a vCenter Server using the vSphere Web Client.

  2. Click the Sales resource pool you just created, then click the VMs tab just under the Actions drop-down. Currently, no VMs are attached to the pool.

    Screenshot_361
  3. There are multiple methods for moving a VM into a resource pool. One method is to drag the VM into the pool. In this exercise, right-click the VM, in this case the html5-app VM, and click Migrate.

    Screenshot_362
  4. Click the Resource Pools filter, then choose the Sales resource pool. The Compatibility window should indicate “Compatibility checks succeeded.”

    Screenshot_363
  5. Click Next. You will be prompted to select a vMotion priority. Since this is the Sales resource pool, leave the option set to high priority.

    Screenshot_364
  6. Click Next. A summary of your choices appears. Click Finish.

    Screenshot_365
  7. The VMs tab just under the Actions drop-down will now show the html5-app VM. It takes a minute or so before the resource pool information is updated.

    Screenshot_366
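Moving a virtual machine into a pool can also be scripted; the ResourcePool managed object exposes a MoveIntoResourcePool method for this purpose. A brief sketch, assuming the si, content, and sales_pool objects from the sketch that follows Exercise 6.1; the VM name is a placeholder:

# Locate the html5-app VM using the same container-view pattern shown earlier.
vms = content.viewManager.CreateContainerView(content.rootFolder,
                                              [vim.VirtualMachine], True)
vm = next(v for v in vms.view if v.name == 'html5-app')

# Change only the VM's resource pool membership; the VM itself stays where it is.
sales_pool.MoveIntoResourcePool(list=[vm])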

Using Tags and Custom Attributes

Before the next exercise, let's take a moment to review the concept of tags and custom attributes. A tag is a label that can be applied to many vSphere inventory objects, including a resource pool. These tags can be used to add specific metadata to an object, which could then be used to identify one or more objects that have the same tag. However, before tags debuted in vSphere 5.1, they were actually known as custom attributes. Now, as of vSphere 6.5, custom attributes are back and you are free to use both tags and custom attributes to add information to your inventory objects. The key difference between the two is that when you define a custom attribute, you can then assign a specific value for that attribute to every object. With tags, you can, for example, create a category and apply that category to one or more objects, but you cannot assign the category with a unique value to those objects.

Now that we have reviewed what a custom attribute is and what it is used for, let's assign one to our resource pool.

EXERCISE 6.3 Configure a custom attribute for a resource pool

  1. Connect to a vCenter Server using the vSphere Web Client. For this exercise, it is necessary to use the Flash Client in order to see the Custom Attributes window (although you can still set a custom attribute in either client).

  2. Right-click the Sales resource pool and scroll to the Tags & Custom Attributes option. When the option expands, click Edit Custom Attributes.

    Screenshot_367
  3. We will add a custom attribute that indicates who the administrator of each resource pool is. For the Sales resource pool, that individual is Jacob Barnes. Type RP_Administrator into the Attribute box and Jacob Barnes into the Value box.

    Screenshot_368
  4. Click Add. Notice how the attribute and value now show in the box.

    Screenshot_369
  5. Click OK. The Summary page of the Sales resource pool now shows the new custom attribute in the relevant panel.

    Screenshot_370
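Custom attributes are also exposed through the API's CustomFieldsManager, so the attribute from Exercise 6.3 could be defined and populated from a script. A hedged sketch, reusing the content session and sales_pool object from the earlier sketches; the attribute name and value come from the exercise:

# The CustomFieldsManager defines attributes globally and sets per-object values.
cfm = content.customFieldsManager

# Create the RP_Administrator attribute (or reuse it if it already exists),
# scoping it to resource pool objects.
existing = {f.name: f for f in cfm.field}
field = existing.get('RP_Administrator') or cfm.AddCustomFieldDef(
    name='RP_Administrator', moType=vim.ResourcePool)

# Assign the value for the Sales resource pool, as in Exercise 6.3.
cfm.SetField(entity=sales_pool, key=field.key, value='Jacob Barnes')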

In the next exercise, you will remove a virtual machine from a resource pool. This might occur if you are moving a VM to another pool, or if you are doing development work on the VM and don't want resources in the pool impacted by the work you are doing.

EXERCISE 6.4 Remove a virtual machine from a resource pool

  1. Connect to a vCenter Server using the vSphere Web Client.

  2. Click the Sales resource pool you just created, then click the VMs tab. You should see the html5-app in the pool.

    Screenshot_371
  3. Right-click the html5-app VM and select Migrate.

    Screenshot_372
  4. To remove this VM from the pool, we need to place it into another pool, or into the root pool, which in this case is the Northwest Regional cluster. Select the Clusters filter and then select the Northwest Regional cluster. Confirm that the Compatibility window indicates “Compatibility checks succeeded.”

    Screenshot_373
  5. Click Next. You will be prompted to select a vMotion priority. Leave the option set to high priority.

    Screenshot_374
  6. Click Next. A summary of your choices appears. Click Finish.

    Screenshot_375
  7. The VMs tab will now be empty and the html5-app VM will show in the inventory under the Northwest Regional cluster. It takes a minute or so before the resource pool information is updated.

    Screenshot_376

In the next exercise, you will remove the resource pool you have created. You may need to remove a resource pool if you intend to change your resource allocation structure.

EXERCISE 6.5 Remove a resource pool

  1. Connect to a vCenter Server using the vSphere Web Client.

  2. Right-click the Sales resource pool you just created, then click Delete.

    Screenshot_377
  3. Confirm your selection by clicking Yes.

    Screenshot_378
  4. A look at the inventory should indicate that the Sales resource pool has been successfully removed.

    Screenshot_379

Configuring vSphere DRS and Storage DRS Clusters

With resource pools, we first gather resources from a number of ESXi hosts into a cluster, then allocate resources from that cluster to pools of virtual machines. However, this is only one method of resource management, and it does not cover all of the concerns an organization would have regarding the allocation of resources. For example, how do we prevent one of the hosts in the cluster from becoming resource constrained even though there are other hosts with available resources? Or how do we prevent a datastore from becoming I/O bound? These are just two of the concerns addressed by the use of the Distributed Resource Scheduler, or DRS. DRS comes in two flavors. The first is simply DRS, which governs CPU and memory resources. The second is Storage DRS, which governs storage I/O. Both of these are detailed in the following sections.

Distributed Resource Scheduler

Distributed Resource Scheduler, or DRS, is a technology designed to balance the distribution of resources in a cluster with the virtual machines running on the cluster. When DRS is enabled on a cluster, it becomes aware of the resources available across the cluster. DRS can then work to ensure that virtual machines are distributed across the cluster in a manner that balances resource utilization. It does this in two ways. First, DRS controls the initial placement of virtual machines. Initial placement involves the placement of a virtual machine on a host in the cluster based on current workloads across the cluster. When a virtual machine is first powered on, DRS looks at the available resources on the cluster as well as the individual resource utilization of each host in the cluster. Based on this information, the virtual machine is placed on the host with the most available resources. This is done for every virtual machine, balancing the resources of the hosts with the resource requirements of the virtual machines.

Next, DRS maintains the balance by monitoring resource utilization across the cluster. Should a host become resource constrained due to an uptick in utilization by the VMs running on the host, DRS is capable of migrating one or more VMs off the host and on to other hosts in the cluster. This helps to ensure that the cluster maintains a balanced load across all hosts.

Figure 6.13 shows a three-host cluster. The top part of the diagram shows the cluster before DRS load balancing is performed. After enabling DRS, migrations are made that balance out the workload, resulting in the end state shown.

FIGURE 6.13 A vSphere 6.7 cluster before and after DRS is enabled
Screenshot_380

When enabling DRS, there are a number of settings to control functionality. The first is the Automation Level set for DRS. The DRS Automation Level is the degree to which DRS will automatically control both initial placement and load balancing across the cluster. There are three levels of automation: Manual, Partially Automated, and Fully Automated. When it's set to Manual, DRS will provide recommendations for initial placement and migration but will not take any action. If it's set to Partially Automated, initial placement is automated but migration is still manual. Finally, when it's set to Fully Automated, both initial placement and migration are automatic. These options are in place to allow an administrator to decide exactly the amount of control they wish to allow the system to have, and there are adjustable settings so that even if the system is automated, the automation can be restricted to certain levels and virtual machines.

The first step to working with DRS is to enable it on a cluster and select the level of automation, as shown in Figure 6.14.

FIGURE 6.14 Enabling DRS on a cluster and selecting the level of automation
Screenshot_381

So, you are enabling DRS but have concerns about setting the automation level to Fully Automated. There are a number of logical reasons to be concerned. For example, you may be comfortable with the performance of an application on a specific host and are concerned about it being moved. Or you may have concerns that all of the migration performed to load balance may generate additional overhead. Not to worry, because there are several controls that can be adjusted to ensure that the amount of automation performed is exactly how you want it.

The first of these controls, and perhaps the most important, is the DRS Migration Threshold. The DRS Migration Threshold controls how aggressively migrations are performed in order to balance workloads across the cluster. The setting runs from priority 1 to 5, where 5 is the most aggressive setting. For details on each priority level, refer to the chart in Figure 6.15.

As you can see in the diagram, the default is priority level 3, which is suitable for most implementations. However, it's a good idea to monitor the balance level of the cluster to ensure that the current setting is ideal. You can view DRS status and current settings on the summary page of the cluster. A cluster with the default priority setting that is balanced would look like the image in Figure 6.16.

FIGURE 6.15 DRS Migration Threshold priorities based on criticality of recommendation
Screenshot_382
FIGURE 6.16 Cluster with default DRS Migration Threshold currently in balance

Predictive DRS

The options for configuring DRS to this point allow you to establish one of two methods for dealing with resource imbalance. The first is the Reactive method. The Reactive method is one in which DRS determines that a resource imbalance has occurred and supplies a recommendation but takes no action. This happens if DRS is configured for Manual mode. If in Partially Automated mode, this is the case for any migration recommendation. Finally, for Fully Automated mode, this still occurs for any recommendation below the threshold setting. While there is very little overhead associated with this method, resource imbalances are only addressed after the fact and after an administrator has acted on the recommendation.

The second method is the Balanced method. The Balanced method is one in which DRS determines that a resource imbalance has occurred and automatically acts to resolve the imbalance. This is the case when DRS is configured for Fully Automated mode and the recommendation is at or above the threshold setting. This method has the advantage of mitigating risk and keeping workloads balanced across hosts in the DRS cluster but has a higher overhead. In fact, it is possible to inject a large overhead into cluster operations if the DRS threshold is improperly configured. That said, this method is very effective at preventing most resource imbalances or in the very least quickly resolving those imbalances.

Beginning with vSphere 6.5, a third method is available: the Predictive method. The Predictive method is one in which DRS is able to predict future demand and identify when and where a resource imbalance is likely to occur. It then uses this information to move workloads before the affected resource is in contention. The Predictive method utilizes a new feature called Predictive DRS, which combines the abilities of DRS with another VMware product, vRealize Operations Manager (vROps). Because only the affected workloads are adjusted on the DRS cluster, this method requires minimal overhead.

Now, it is important to underscore that Predictive DRS is only possible by using vSphere in conjunction with vRealize Operations Manager. This is because Predictive DRS leverages the dynamic thresholds found in vROps, which use historical and live data to establish a baseline for the behavior of a given workload. The baseline is then combined with an upper and lower threshold for what is considered “typical” behavior. Anything that falls above or below these thresholds is anomalous.
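To make the idea of a dynamic threshold more concrete, the toy sketch below builds a per-hour baseline from historical samples and flags anything outside a band around that baseline as anomalous. This is only a conceptual illustration of the behavior described here, not vROps's actual analytics, and the sample numbers are invented.

from statistics import mean, stdev

def build_baseline(history, band=2.0):
    """history: {hour: [past CPU-demand samples in MHz]} -> {hour: (low, high) band}."""
    return {hour: (mean(samples) - band * stdev(samples),
                   mean(samples) + band * stdev(samples))
            for hour, samples in history.items()}

def is_anomalous(hour, demand_mhz, baseline):
    low, high = baseline[hour]
    return demand_mhz < low or demand_mhz > high

# A workload that predictably spikes at 14:00 each day (e.g., a batch job).
history = {13: [900, 950, 920, 910], 14: [4100, 3900, 4000, 4050]}
baseline = build_baseline(history)

print(is_anomalous(14, 4000, baseline))   # False: the spike is "typical" for this hour
print(is_anomalous(13, 4000, baseline))   # True: the same demand at 13:00 is anomalous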

Predictive DRS takes the information supplied by vROps and asks three simple questions. First, what resources are available on each host in the DRS cluster? Second, what VMs are powered on, and on which hosts are they running? Finally, how much of a given resource is required for each of the VM workloads over the day?

To better understand this concept, let's look at the graph in Figure 6.17.

The graph depicts the demand for a resource (like CPU or memory) by a workload over a period of time. The dotted line represents the actual utilization of the resource by the workload. The long dashed line represents the predicted utilization of the resource by the workload over the same period of time. So, for example, let's say that an accounting company performs batch operations at the same time every day, resulting in a spike in the utilization of CPU resources. Since vROps is using historical data, it is aware that this is a spike that occurs with given regularity. As a result, Predictive DRS can use this information to proactively remediate the workload to a host that has sufficient resources to handle the spike, thereby avoiding any impact to performance. This vastly reduces the likelihood that a spike in utilization will result in any noticeable impact.

FIGURE 6.17 Predictive DRS method vs. Balanced method (resource demand for a given workload over a 24-hour period)
Screenshot_384

It is important to be aware that the Predictive method does not resolve every workload imbalance. This is because Predictive DRS relies on regular changes in utilization. No method can proactively resolve an unforeseen spike in utilization, which is why it is important to properly configure Fully Automated mode with an appropriate threshold. This way, Predictive DRS can proactively resolve any known utilization changes, while DRS can react and balance workloads in the event an unforeseen change occurs.

Network-Aware DRS

Traditionally, DRS has always focused on two of the four “core” resources used by virtual machines, CPU and memory, when balancing workloads across hosts in the cluster. This is because for the most part, CPU and memory resources are the most likely to experience rapid increases in utilization that could impact other workloads on the same ESXi hosts. However, there are some workloads that can have similar spikes in network and/or storage resource utilization. Fortunately, there are tools in vSphere 6.7 to help mitigate these types of spikes and ensure that all workloads perform well no matter what resource or resources might be impacted.

In earlier releases of vSphere, DRS did not analyze network utilization and did not factor this resource into consideration when migrating a workload. As a result, a workload with heavy network resource requirements could be migrated due to a CPU spike to an ESXi host that is already network saturated. Unfortunately, this means that the workload could continue to experience potential performance issues, but this time due to contention for network resources instead of CPU resources. In addition, since earlier versions of DRS didn't look at network utilization, the problem would go unnoticed by DRS and require manual, reactive intervention.

DRS became network-aware beginning with version 6 and was significantly enhanced in 6.5. This feature is known as network-aware DRS. Network-aware DRS monitors the network send and receive rates of the physical uplinks on ESXi hosts in the cluster and avoids placing virtual machines on hosts that are network saturated. This means that DRS now considers the network utilization of ESXi hosts in the cluster as well as the network requirements of VMs during both initial placement and load balancing.

Looking at placement first, when a user powers on a VM, DRS looks at available CPU and memory resources on hosts and the CPU and memory requirements of the VM and makes an initial determination as to which host the VM will be placed on. DRS then factors in some network heuristics and makes a final decision on where the VM should be placed.

Before discussing actions taken by DRS for workload balancing, it is important to note that DRS will not move a virtual machine due to a network resource imbalance. What DRS will do is make sure that when a VM is moved due to a CPU or memory imbalance, it is moved to a host in the cluster that can accommodate the VM's network requirements.

As a result, when DRS performs a load balancing check for a VM, it starts by making a list of possible migration destinations. It then eliminates from that list ESXi hosts in the cluster that are network saturated. Finally, it makes a recommendation using a host from the remaining list of destinations that both provides the best CPU and memory balancing and contributes to network resource availability on the VM's source ESXi host.

So, at what point exactly does DRS consider a host in the cluster to be network saturated? By default, DRS considers a host to be network saturated if the host network utilization reaches or exceeds 80 percent. If needed, this can be adjusted using a DRS advanced option, NetworkAwareDrsSaturationThresholdPercent. DRS advanced options can be set using the vSphere Web Client, as shown in Figure 6.18.

FIGURE 6.18 The vSphere DRS Configure tab showing the entry point for Advanced Options
Screenshot_385

You can also view the current network utilization for the DRS cluster. The utilization is shown for each ESXi host in the cluster and is based on the average capacity across all the physical NICs (pNICs) on the host. For example, if a host has four pNICs where two are 50 percent utilized and two are 0 percent utilized, then the network utilization of the host is considered to be 25 percent. This can be seen in the vSphere Web Client, as shown in Figure 6.19.

FIGURE 6.19 The vSphere DRS Monitor tab showing network utilization for the DRS cluster
Screenshot_386
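The utilization figure reported for each host is simply the average across that host's physical uplinks, and the saturation check compares that average against the (adjustable) 80 percent threshold. A small illustrative sketch of that arithmetic:

DEFAULT_SATURATION_PCT = 80  # tunable via NetworkAwareDrsSaturationThresholdPercent

def host_network_utilization(pnic_utilization_pct):
    """Average utilization across all pNICs, e.g. [50, 50, 0, 0] -> 25.0 percent."""
    return sum(pnic_utilization_pct) / len(pnic_utilization_pct)

def is_network_saturated(pnic_utilization_pct, threshold=DEFAULT_SATURATION_PCT):
    return host_network_utilization(pnic_utilization_pct) >= threshold

print(host_network_utilization([50, 50, 0, 0]))   # 25.0, matching the example above
print(is_network_saturated([90, 85, 80, 75]))     # True: this host is filtered out by DRS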

Storage DRS

The final resource that needs to be addressed when considering the load balancing of resources across a cluster is storage. Some workloads can be very storage intensive, causing a potential bottleneck for other virtual machines accessing the same storage resource. These potential bottlenecks can be mitigated using Storage DRS. Storage DRS allows you to create a cluster of storage resources, then monitors that collection of resources to provide recommendations for virtual machine disk placement and migration in order to balance storage capacity and I/O. Storage DRS groups storage resources into a datastore cluster, much like hosts are grouped into a DRS cluster to manage compute resources. You'll find detailed information about Storage DRS in Chapter 4, “Storage in vSphere,” but as we are talking about resource allocation and consumption, remember that this is one of the core resources that must be carefully managed when administering a vSphere datacenter.

Establishing Affinity and Anti-Affinity

Now that we have discussed all of the different ways that vSphere can load balance resources across a cluster, it is important to approach the subject of affinity rules and anti-affinity rules. First, let's look at affinity. Many workloads are not confined to a single virtual machine; they may consist of multiple virtual machines working in concert. For example, a typical web application consists of at least three virtual machines, including a web server, an application server, and a database server. In a case such as this, the virtual machines pass data back and forth, so moving just one of these VMs to another host may do more harm than good in terms of performance. An affinity rule is used to keep a number of VMs together, and if a migration needs to take place, it ensures that all of the VMs in the group are moved to the same destination ESXi host.

Anti-affinity is all about availability. For example, if you had a mission-critical application or key infrastructure component that existed on multiple VMs for the sake of redundancy, the last thing you would want is for those VMs to reside on the same ESXi host. This is because if the host were to fail for some reason, the application or infrastructure component would go down as well, at least until the VMs are restarted on other hosts. An anti-affinity rule is used to ensure that a number of VMs are kept apart, and if a migration needs to take place, it ensures that the VMs do not wind up on the same ESXi host.

Affinity and anti-affinity settings can be established between individual VMs, but they can also be set up to work with groups of VMs and/or groups of hosts. This can be beneficial if you have licensing concerns that limit one or more VMs to specific hosts, or if you want to extend the capability of a standard affinity/anti-affinity rule. For example, let's say you have two domain controllers and you want to make sure they are always placed on separate hosts. You could create a VM-VM anti-affinity rule for the domain controller VMs, which would work well in a medium to large DRS cluster. However, in a small cluster a downed host might result in the need to place both VMs on the same host, which would be prevented by the rule. In a situation like this, it might be advantageous to establish groups.

The first step to using DRS groups is to create a DRS host group. A DRS host group is a subset of ESXi hosts in a DRS cluster that will be used in conjunction with a VM group to establish affinity or anti-affinity rules. A DRS host group is created by selecting Cluster → Configure → VM/Host Groups → Add and selecting the host group type. Next, add the hosts in the cluster that should be part of the group. At this point, if your goal is to create rules for individual VMs within this host group, you may think you are done here. However, if you want to establish rules for even a single VM in conjunction with a host group, you will also need to create a DRS VM group. A DRS VM group is a collection of virtual machines that will be used in conjunction with a host group to establish affinity or anti-affinity rules. The process of creating a DRS VM group is identical to the process for creating host groups, except you are adding VMs to the group.

Once you have created the groups, the next step is to create a VM/Host rule, making sure to set the type to Virtual Machines to Hosts. There are additional options when creating this type of rule. These options revolve around how strict you want the rule to be. For a VM-Host affinity rule, the options are Must Run on Hosts in Group and Should Run on Hosts in Group. If you set the rule to Must Run on Hosts in Group and the selected hosts are down, the VMs will also be down and will not be restarted on other hosts. I would recommend using this option only if you have licensing requirements tied to specific hosts. In all other cases, choosing Should Run on Hosts in Group allows DRS to prefer the selected hosts but still use other hosts in the cluster should the need arise.

For a VM-Host anti-affinity rule, the options are Must Not Run on Hosts in Group and Should Not Run on Hosts in Group. Going back to the domain controller example, we saw that a simple VM-VM anti-affinity rule could result in the second domain controller remaining offline if the only option was placing both VMs on the same host. Using a VM-Host anti-affinity rule with Should Not Run on Hosts in Group would provide the same benefit but allow both controllers to exist on the same host if no other option was available.

DRS Cluster Administration Exercises

Now that you have learned about DRS clusters and the mechanisms that control virtual machine recommendations and migrations, let's apply that knowledge in some hands-on exercises. You can use either the Flash or the HTML5 client for most of the upcoming exercises. I recommend using one of the Hands-on Labs available from VMware, which can be used for free. If you are using your own environment, I recommend at least two ESXi hosts added to a vSphere cluster with a couple of virtual machines.

In the first exercise, you will use the same cluster as you did for the resource pool exercises, labeled Northwest Regional. You will begin by enabling this cluster for DRS and configuring an automation level and threshold.

EXERCISE 6.6 Enabling a cluster for DRS

  1. Connect to a vCenter Server using the vSphere Web Client.

  2. Right-click the Northwest Regional vSphere cluster and click Settings.

    Screenshot_387
  3. The Configure panel is displayed, and vSphere DRS is highlighted under Services in the navigation pane. Click the Edit button.

    Screenshot_388
  4. The Edit Cluster Settings window is displayed. vSphere DRS is currently disabled. To enable it, click the slider.

    Screenshot_389
  5. Next, to set an Automation Level, click the drop-down and select Fully Automated.

    Screenshot_390
  6. Change the Migration Threshold to level 4 by moving the slider one notch to the right. The level will momentarily highlight as shown.

    Screenshot_391
  7. Click OK to return to the Configure panel. DRS will now show that it is enabled. If you want to see the specific configuration settings you set for automation, click the down arrow next to DRS Automation:

    Screenshot_392
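Exercise 6.6 can also be scripted against the cluster object located in the earlier sketches. The following is a minimal pyVmomi example that enables DRS in Fully Automated mode; the migration threshold is left at its default here, and as always the object names are assumptions:

# Assumes the si/content session and the 'cluster' object from the earlier sketches.
drs_config = vim.cluster.DrsConfigInfo(enabled=True,
                                       defaultVmBehavior='fullyAutomated')

spec = vim.cluster.ConfigSpecEx(drsConfig=drs_config)

# modify=True merges this change into the cluster's existing configuration.
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)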

Next, you will create a host DRS group to ensure that a licensed virtual machine is not migrated off the ESXi host it is tied to.

EXERCISE 6.7 Add a host DRS group

  1. Connect to a vCenter Server using the vSphere Web Client.

  2. Highlight the Northwest Regional cluster and click the Configure tab.

    Screenshot_393
  3. Next, select VM/Host Groups from the Configuration dropdown in the navigation pane, then click the Add button.

    Screenshot_394
  4. The Create VM/Host Group window is displayed. In the Name field, type AppLicensed, then click the drop-down menu for Type and select Host Group. Finally, click the Add button.

    Screenshot_395
  5. The Add Group Member window is displayed. Select the esx01a.corp.local host, then click OK.

    Screenshot_396
  6. The Create VM/Host Group window is displayed again, this time with the selected hosts. Click OK to complete the operation.

    Screenshot_397

In the next exercise, you will identify which VMs will run on the host group, even if it is only a single VM.

EXERCISE 6.8 Create a VM group

  1. Connect to a vCenter Server using the vSphere Web Client.

  2. Highlight the Northwest Regional cluster and click the Configure tab.

    Screenshot_398
  3. Next, select VM/Host Groups from the Configuration dropdown in the navigation pane, then click the Add button.

    Screenshot_399
  4. The Create VM/Host Group window is displayed. In the Name field, type AppVMLicensed, then click the drop-down menu for Type and select VM Group. Finally, click the Add button.

    Screenshot_400
  5. The Add Group Member window is displayed. Select the html5-app virtual machine, then click OK.

    Screenshot_401
  6. The Create VM/Host Group window is displayed again, this time with the selected virtual machines. Click OK to complete the operation.

    Screenshot_402

Now that we have a DRS host group and a DRS VM group, we can establish an affinity rule that ensures that the VM will only run on the licensed host.

EXERCISE 6.9 Create a VM/Host affinity rule

  1. Connect to a vCenter Server using the vSphere Web Client.

  2. Highlight the Northwest Regional cluster and click the Configure tab.

    Screenshot_403
  3. Next, select VM/Host Rules from the Configuration dropdown in the navigation pane, then click the Add button.

    Screenshot_404
  4. The Create VM/Host Rule window is displayed. In the Name field, type LicenseRule, then click the drop-down menu for Type and select Virtual Machines to Hosts. Finally, click the Add button.

    Screenshot_405
  5. Normally, you would select the VM group, the host group, and the rule type. Since you have only created one group of each type, they will automatically populate. However, you must still choose the appropriate rule, which in this case is Must Run on Hosts in Group. Select this rule, then click OK.

    Screenshot_406
  6. You are returned to the Configure panel, where you can now see the VM/Host rule details.

    Screenshot_407
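The groups and the rule from Exercises 6.7 through 6.9 can also be created in a single cluster reconfiguration call. This sketch assumes the session, cluster, and vm objects from the earlier examples and an ESXi host named esx01a.corp.local; the group and rule names mirror the exercises.

# Assumes si/content, cluster, and the html5-app 'vm' object from earlier sketches.
host = next(h for h in cluster.host if h.name == 'esx01a.corp.local')

host_group = vim.cluster.GroupSpec(
    operation='add', info=vim.cluster.HostGroup(name='AppLicensed', host=[host]))
vm_group = vim.cluster.GroupSpec(
    operation='add', info=vim.cluster.VmGroup(name='AppVMLicensed', vm=[vm]))

# mandatory=True corresponds to "Must Run on Hosts in Group" from Exercise 6.9;
# mandatory=False would give the softer "Should Run on Hosts in Group" behavior.
rule = vim.cluster.RuleSpec(
    operation='add',
    info=vim.cluster.VmHostRuleInfo(name='LicenseRule', enabled=True, mandatory=True,
                                    vmGroupName='AppVMLicensed',
                                    affineHostGroupName='AppLicensed'))

spec = vim.cluster.ConfigSpecEx(groupSpec=[host_group, vm_group], rulesSpec=[rule])
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)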

The last two exercises show how to remove the host and VM group entities. You do not have to remove associated rules first, but you will receive a warning.

EXERCISE 6.10 Remove a VM group

  1. Connect to a vCenter Server using the vSphere Web Client.

  2. Highlight the Northwest Regional cluster and click the Configure tab.

    Screenshot_408
  3. Next, select VM/Host Groups from the Configuration dropdown in the navigation pane, then click the Delete button.

    Screenshot_409
  4. You will receive a warning, as there is an associated rule. Click OK to acknowledge the warning and remove the VM Group.

    Screenshot_410
  5. The VM/Host Group window is displayed again, this time showing that the group has been removed.

    Screenshot_411

EXERCISE 6.11 Remove a host group

  1. Connect to a vCenter Server using the vSphere Web Client.

  2. Highlight the Northwest Regional cluster and click the Configure tab.

    Screenshot_412
  3. Next, select VM/Host Groups from the Configuration dropdown in the navigation pane, then click the Delete button.

    Screenshot_413
  4. You will receive a warning, as there is an associated rule. Click OK to acknowledge the warning and remove the host group.

    Screenshot_414
  5. The VM/Host Groups window is displayed again, this time showing that the group has been removed.

    Screenshot_415

Summary

Allocating resources in your vSphere datacenter is critical to ensure virtual machines perform as expected. Using the different features available in vSphere, you can provide resource pools for specific workloads, ensure that workloads don't use an excessive amount of resources, and control where workloads run in the datacenter.

With resource pools you can reserve resources (CPU and memory) for the virtual machines in that pool, or set limits on the pool to prevent those virtual machines from using excessive resources. By creating multilevel pools you allow for scenarios such as a department pool with specific resources shared between child pools for different projects. Child pools can also be set with expandable reservations, allowing their usage to increase if the parent pool has available resources.

The Distributed Resource Scheduler, or DRS, provides a mechanism to balance resource usage across the hosts in a cluster. Typically used to automatically migrate virtual machines from heavily used to lightly used hosts, DRS can ensure resources are not starved on one host while others sit idle. You can also use affinity rules to ensure certain virtual machines run on certain hosts and to keep specific virtual machines either together on, or apart from, the same host.

With a good understanding of how resources are shared in a vSphere environment, you can leverage these different tools to keep your datacenter balanced and efficient.

Exam Essentials

Understand how resource pools work Know how virtual machine reservations work within resource pools. Know what expandable reservations do and how they affect parent and sibling resources.

Be able to explain reservations, limits, and shares Know when each takes effect. Reservations are applied as soon as the virtual machine powers on, shares only come into play during times of contention, and limits prevent the virtual machine (or pool) from consuming resources over the set amount.

Be able to calculate shares and reservations from pools Know how shares in a pool work to determine how resources are distributed during times of contention and be able to calculate available resources in a pool hierarchy given pool and virtual machine settings.

Understand Distributed Resource Scheduler (DRS) Know the different settings for DRS, including automation level, and know the migration threshold options. Know what the requirements for Predictive DRS are: Predictive DRS requires both DRS and vROps in order to anticipate workload demands.

Know how to configure affinity rules Affinity rules can keep virtual machines on (or off) specific hosts and either keep or prevent virtual machines from running on the same host.

Review Questions

  1. Which is a valid reason for creating a snapshot of the resource pool tree?

    1. An administrator needs to disable HA for maintenance purposes.
    2. An administrator needs to disable DRS for maintenance purposes.
    3. An administrator is adding an ESXi host with resource pools to a vSphere cluster.
    4. An administrator is removing an ESXi host with resource pools from a vSphere cluster.
  2. An administrator is configuring resource pools for a vSphere 6.x cluster. The cluster has these characteristics:

    • 4 ESXi 6.x hosts
    • 8 cores per host
    • 60 virtual machines with 1 vCPU each

    The administrator configures three resource pools and places the virtual machines into the pools as follows:

    • Sales pool: High share value with 30 virtual machines
    • Engineering pool: Normal share value with 20 virtual machines
    • Test pool: Low share value with 10 virtual machines

    Given this configuration, what resources would be allotted to each pool during resource contention?

    1. The Sales pool will receive twice the amount of resources as the Engineering pool.
    2. The Engineering pool will receive twice the amount of resources as the Test pool.
    3. Each pool will receive the same amount of resources.
    4. The Test pool will perform two times as well as the Engineering pool.
  3. An administrator determines that a Windows virtual machine in a resource pool called Sales is unable to power on. Which two actions might resolve this issue? (Choose two.)

    1. Increase the memory reservation of the virtual machine.
    2. Increase the CPU shares on the resource pool where the virtual machine resides.
    3. Decrease the CPU reservation of the virtual machine.
    4. Set the Expandable Reservation property on the resource pool.
  4. Which element should be configured if a resource pool requires guaranteed memory resources?

    1. Shares
    2. Reservation
    3. Limit
    4. Expandable Reservation
  5. Which statement best describes the Expandable Reservation parameter?

    1. The Expandable Reservation parameter can be used to allow a sibling resource pool to request resources from any other sibling.
    2. The Expandable Reservation parameter can be used to allow a child resource pool to request resources from any parent.
    3. The Expandable Reservation parameter can be used to allow a child resource pool to request resources from its parent.
    4. The Expandable Reservation parameter can be used to allow a child resource pool to request resources from a sibling.
  6. Which two resources can be allocated using resource pools? (Choose two.)

    1. Memory
    2. Storage
    3. CPU
    4. Network
  7. When two pools exist at the same level in the hierarchy and are completely isolated from each other, what are they called?

    1. Sibling pools
    2. Child pools
    3. Parent pools
    4. Root pools
  8. What resource allocation mechanism should be used to guarantee that a virtual machine can only use the resources granted to it?

    1. Reservation
    2. Limit
    3. Shares
    4. Expandable Reservation
  9. What is the ratio of resources allocated when using the High, Normal, and Low settings?

    1. 4:2:1
    2. 10:5:1
    3. 3000:2000:1000
    4. 6000:3000:1000
  10. An administrator is configuring resource pools for a vSphere 6.x cluster. The cluster has these characteristics:

    • 4 ESXi 6.x hosts
    • 8 cores per host
    • 60 virtual machines with 1 vCPU each

    The administrator configures three resource pools and places the virtual machines into the pools, as follows:

    • Sales pool: High share value with 30 virtual machines
    • Engineering pool: Normal share value with 20 virtual machines
    • Test pool: Low share value with 10 virtual machines

    Given this configuration, what resources would be allotted to each pool if no contention exists?

    1. The Sales pool will receive twice the amount of resources as the Engineering pool.
    2. The Engineering pool will receive twice the amount of resources as the Test pool.
    3. Each pool will receive as many resources as it needs.
    4. The Test pool will perform two times as well as the Engineering pool.
  11. An administrator has a single VM that can only run on specific hosts due to licensing requirements. Which two steps must be taken to ensure that DRS will satisfy the requirements? (Choose two.)

    1. Create a DRS host group.
    2. Create a DRS host group and a VM group.
    3. Create a VM-VM affinity rule.
    4. Create a VM-Host affinity rule.
  12. In which case would the use of VM-VM affinity rules not be supported?

    1. The cluster is configured for HA using the Cluster resource percentage option and the percentage is greater than 25 percent.
    2. The cluster is configured for HA using the Slot policy option and more than two slots are configured.
    3. The cluster is configured for HA using the Dedicated failover hosts option and multiple failover hosts are configured.
    4. The cluster is configured for HA using the Proactive HA option and the automation level is set to Manual.
  13. An administrator has configured a DRS VM group containing four infrastructure VMs. The administrator removes one of the VMs from the cluster, then adds it back into the cluster at a later date. Which statement accurately explains the condition of the VM once it has been added back into the cluster?

    1. The VM is automatically added back into the DRS VM group.
    2. The VM must be manually added back into the DRS VM group.
    3. The DRS VM group was removed when the VM was removed from the cluster. It must be re-created, and all VMs must be added back into the group.
    4. The DRS VM group must be deleted, then re-created, and all VMs must be added back into the group.
  14. What additional VMware product must be available in order to enable Predictive DRS?

    1. vRealize Automation
    2. vRealize Orchestrator
    3. vRealize Operations Manager
    4. vRealize Code Stream
  15. Which statement is an accurate description of how network-aware DRS functions?

    1. Network-aware DRS monitors the network utilization of virtual machines and takes action in the event a network resource is saturated.
    2. Network-aware DRS monitors the network utilization of ESXi hosts in the cluster and takes action in the event a network resource is saturated.
    3. Network-aware DRS monitors the compute utilization of virtual machines and takes action in the event a compute resource is saturated while taking into consideration the network utilization of the VM.
    4. Network-aware DRS monitors the compute utilization of ESXi hosts and takes action in the event a compute resource is saturated while taking into consideration the network utilization of the host.
  16. Which two use cases would be reasons for implementing an antiaffinity VM-Host rule? (Choose two.)

    1. Licensing restrictions limit a VM to one or more ESXi hosts in the cluster.
    2. Two infrastructure VMs must be kept apart in a medium-sized cluster.
    3. The cluster has non-uniform hardware capabilities.
    4. Maximum availability is required for a clustered application.
  17. At what point is an ESXi host with three physical uplinks considered to be network saturated?

    1. When the collective utilization of the physical uplinks reaches or exceeds 80 percent
    2. When any one of the three physical uplinks reaches or exceeds 80 percent
    3. When the collective utilization of the physical uplinks reaches or exceeds 70 percent
    4. When any one of the three physical uplinks reaches or exceeds 70 percent
  18. Which three conditions would result in DRS generating a migration recommendation? (Choose three.)

    1. The DRS cluster is experiencing a CPU imbalance.
    2. The DRS cluster is experiencing a memory imbalance.
    3. The DRS cluster is experiencing a storage imbalance.
    4. A resource pool reservation must be satisfied.
    5. An ESXi host in the cluster experiences an unplanned downtime issue.
  19. An administrator creates a VM-VM affinity rule for two virtual machines. Six months later, another administrator attempts to create a VM-VM anti-affinity rule for the same VMs. Which statement accurately describes the end result of this action?

    1. The older rule will remain enabled and the new rule will be disabled.
    2. The new rule will be enabled and the older rule will be disabled.
    3. Both rules will be enabled.
    4. Both rules will be disabled and an alert will be generated.
  20. An administrator has established a VM-Host affinity rule using the Must Run On option on a DRS cluster. Which two actions are not performed in the cluster if doing so would violate the affinity rule? (Choose two.)

    1. Virtual machines are migrated off an ESXi host that is being placed into maintenance mode.
    2. An ESXi host is removed from the cluster.
    3. The cluster is imbalanced, and DRS migrates one or more virtual machines.
    4. The administrator performs some manual migrations within the cluster.