5V0-21.21 VMware Practice Test Questions and Exam Dumps


Question No 1:

In a stretched vSAN cluster, how is Read Locality established after a failover to the secondary site?

A. 100% of the reads come from vSAN hosts on the local site
B. 50% of the reads come from vSAN hosts on the local site
C. 100% of the reads come from vSAN hosts on the remote site
D. 50% of the reads come from vSAN hosts on the remote site

Answer: A

Explanation:

In a stretched vSAN cluster, data is distributed across two sites—usually one primary site and one secondary site. The key advantage of this configuration is the ability to maintain data availability and resilience in the event of a failure at one of the sites. When failover occurs, it’s crucial to understand how read locality works, especially when the secondary site takes over the operation due to a failure or planned downtime at the primary site.

Read locality refers to how the system ensures that reads are served from the local site where the VM or application is running. In the case of a stretched vSAN cluster, if failover occurs and the workloads shift to the secondary site, vSAN will still prioritize reading data from the local site to optimize performance.

After a failover, 100% of the reads will come from vSAN hosts on the local site where the workloads are now running. This is because vSAN ensures that, after a failover, the read operations are served by the closest available vSAN hosts in order to maintain high performance. Since the data is synchronized across both sites, the secondary site will have a complete copy of the data, allowing it to handle read and write operations seamlessly.

For example, if a VM originally located on the primary site fails over to the secondary site, the system ensures that the reads are retrieved from the local (secondary) site, ensuring there is no additional latency or performance degradation due to accessing data over long-distance links between sites.

If read locality were not preserved, the system might attempt to serve reads from the remote site, which would introduce unnecessary latency. Therefore, option A—that 100% of the reads come from vSAN hosts on the local site—is correct.

Options B, C, and D are incorrect because they describe reads being split across sites or served from the remote site, which is not how read locality behaves in a stretched vSAN cluster. After a failover, the local site serves all reads so that performance remains optimal.
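
As a hedged aside (the advanced option below is documented in VMware's stretched-cluster guidance, but verify it against your vSAN release before changing anything), read locality is governed per host by the /VSAN/DOMOwnerForceWarmCache advanced setting, where the default value of 0 means reads are served from the local site:

  # Check whether read locality is active on a host (0 = read locality enabled, the default)
  esxcli system settings advanced list -o /VSAN/DOMOwnerForceWarmCache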

Question No 2:

In a vSAN stretched cluster, which value must be set in the vSAN policy if there is no requirement for data mirroring across sites?

A. SFTT = 0
B. SFTT = 1
C. PFTT = 1
D. PFTT = 0

Answer: D

Explanation:

In a vSAN stretched cluster, the PFTT (Primary level of Failures to Tolerate) and SFTT (Secondary level of Failures to Tolerate) settings in the vSAN storage policy control how data is replicated and protected across the two sites. These values determine both whether objects are mirrored between the sites and how resilient each site is to local failures.

PFTT controls site-level protection: PFTT = 1 mirrors every object across both sites, while PFTT = 0 keeps each object on a single site only. SFTT controls protection within a site, that is, how many host or disk failures can be tolerated locally. If there is no requirement for data mirroring across sites, the policy must be set to PFTT = 0, so that objects are not replicated between the sites and reside on only one site (which site can be chosen with the Affinity rule).

Conversely, PFTT = 1 (option C) would mirror the data across both sites, which is exactly what this scenario does not require. SFTT = 0 or SFTT = 1 (options A and B) only affect redundancy within a site and do not control cross-site mirroring, so neither answers the question.

By setting PFTT = 0, you ensure that there is no data mirroring across the sites and that each object is stored at a single site only, as specified in the scenario where no cross-site mirroring is required. This results in a simpler and more cost-effective configuration for environments where only local resiliency (or no additional resiliency at all) is needed.
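
As a hedged illustration of what such a policy looks like in the vSAN 7 policy wizard (the exact labels can differ slightly between releases), a rule set with no cross-site mirroring but one failure tolerated within the site would read:

  Site disaster tolerance : None - keep data on Preferred (stretched cluster)   <- PFTT = 0
  Failures to tolerate    : 1 failure - RAID-1 (Mirroring)                      <- SFTT = 1

Selecting "Dual site mirroring (stretched cluster)" instead would set PFTT = 1 and mirror every object across both sites.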

Question No 3:

Which solution requires the use of VMware vSAN for automating infrastructure that supports VMware Horizon as well as VMware Tanzu?

A. VMware Cloud Foundation
B. VMware Horizon
C. VMware Tanzu
D. VMware vRealize Automation

Answer: A

Explanation:

When setting up an infrastructure that integrates VMware Horizon and VMware Tanzu, it’s crucial to understand the requirements of each solution to determine whether VMware vSAN is a mandatory component. VMware vSAN (Virtual SAN) is VMware’s software-defined storage solution, which is tightly integrated with VMware vSphere and is often required in specific VMware solutions.

Option A — VMware Cloud Foundation — is the correct choice. VMware Cloud Foundation is an integrated software platform that combines VMware vSphere, vSAN, NSX, and vRealize Suite to provide a complete software-defined data center (SDDC) solution. For VMware Cloud Foundation to function effectively, vSAN is an essential component because it provides the storage layer for the SDDC. VMware Cloud Foundation is designed to support a range of applications, including VMware Horizon for virtual desktop infrastructure (VDI) and VMware Tanzu for modern application development. Therefore, VMware Cloud Foundation requires vSAN to manage and automate the underlying storage infrastructure, making it the mandatory solution in this case.

Option B — VMware Horizon — does not require VMware vSAN specifically. VMware Horizon, a solution for virtual desktop and application delivery, can be configured with various storage options. While using vSAN can enhance performance and integration within the Horizon environment, it is not a strict requirement. Other storage solutions such as traditional SAN or NAS can also be used with Horizon.

Option C — VMware Tanzu — is a suite of tools designed for modern application development and Kubernetes management. VMware Tanzu can function with various storage solutions depending on the environment, including vSAN, but it is not explicitly dependent on it. While vSAN may improve the storage experience when running Tanzu on VMware infrastructure, it is not a required component for Tanzu itself.

Option D — VMware vRealize Automation — is a cloud automation platform that can orchestrate the deployment of applications, workloads, and infrastructure. VMware vRealize Automation does not inherently require VMware vSAN, as it can manage workloads on various types of storage systems, including traditional SANs, NAS, or other third-party storage solutions.

In summary, the only solution that mandates the use of vSAN in this context is VMware Cloud Foundation because it is an integral component of the underlying infrastructure required for Cloud Foundation’s operation. Therefore, A is the correct answer.

Question No 4:

An administrator is setting up vSAN file services on a vSAN cluster. Which two security policies on the distributed port groups are automatically enabled in the process? (Choose two.)

A. Forged Transmits
B. Promiscuous Mode
C. DVFiltering
D. Jumbo Frames
E. MacLearning

Answer: A, E

Explanation:

When vSAN file services is enabled on a cluster, the workflow deploys a set of file service agent virtual machines, and the protocol stack containers running inside them present virtual MAC addresses that differ from the MAC addresses of the agent VMs' network adapters. To allow this traffic, the setup process automatically adjusts the security policies of the distributed port group selected for file services.

Forged Transmits (A) is automatically enabled. Because the file service containers send frames whose source MAC addresses differ from the effective MAC address of the agent VM's adapter, the port group must permit forged transmits; otherwise this traffic would be dropped by the virtual switch.

MacLearning (E) is also automatically enabled (on vSphere Distributed Switch 6.6.0 and later). MAC learning allows the port group to learn the additional MAC addresses used by the file service containers and forward traffic to them without placing the ports into promiscuous mode, which keeps both the security exposure and the performance impact low.

The other options are not automatically enabled when setting up vSAN file services:

  • Promiscuous Mode (B) allows a port to receive all traffic on the network segment, not just the frames addressed to it. On older distributed switch versions that lack MAC learning, promiscuous mode may be used as a fallback, but on a current vSphere Distributed Switch the setup enables MAC learning instead because it is more secure and more efficient.

  • DVFiltering (C) is not one of the security policies of a distributed port group (those are Promiscuous Mode, MAC Address Changes, and Forged Transmits) and is not enabled by the file services workflow.

  • Jumbo Frames (D) is an MTU setting rather than a security policy. It is not changed automatically by the vSAN file services setup, although it can be configured separately if larger frames are desired for storage traffic.

Thus, the two security policies that are automatically enabled when setting up vSAN file services are Forged Transmits (A) and MacLearning (E).
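
As a hedged, purely illustrative check (net-dvs is an unsupported diagnostic tool and its output field names vary by release, so the filter below is an assumption), the settings actually pushed to a host for the file services port group can be inspected from the ESXi shell:

  # Dump the distributed switch state applied to this host and filter for
  # MAC learning / forged transmit related properties (field names vary by build)
  net-dvs -l | grep -i -E 'maclearn|forged|promisc'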

Question No 5:

What step should be taken to resolve the locked state of the vSAN disk groups after rebooting a node in an encrypted vSAN cluster?

A. Manually replace the Host Encryption Key (HEK) of each affected host.
B. Restore the communication with the KMS server, and re-establish the trust relationship.
C. Replace the caching device in each affected disk group.
D. Run /etc/init.d/vsanvpd restart to rescan the VASA providers.

Correct answer: B

Explanation:

In a vSAN cluster that uses encryption, when a node is rebooted, its disk groups can become locked if the communication with the Key Management Server (KMS) is lost or if there is an issue with the trust relationship between the node and the KMS. Encryption in vSAN relies on the Key Management Interoperability Protocol (KMIP) to ensure that the encryption keys are available and correctly used to access data.

If the node is rebooted and the vSAN disk groups become locked, the most likely cause is a disruption in the communication with the KMS or a loss of the trust relationship between the node and the KMS. To resolve this, the administrator should restore the communication with the KMS server and re-establish the trust relationship. This will allow the node to re-authenticate with the KMS and access the encryption keys required to unlock the disk groups.

Let’s look at the other options:

  • A. Manually replace the Host Encryption Key (HEK) of each affected host: Replacing the Host Encryption Key would typically be unnecessary in this case. If the keys are lost or corrupted, this could be an option, but in most cases, re-establishing communication with the KMS should restore the proper key access.

  • C. Replace the caching device in each affected disk group: Replacing the caching device might help with hardware issues or performance, but it does not address the locked state caused by encryption and KMS communication problems.

  • D. Run /etc/init.d/vsanvpd restart to rescan the VASA providers: While restarting VASA (vSphere APIs for Storage Awareness) providers may help with certain storage-related issues, it will not address the encryption or KMS-related issue causing the disk groups to be locked.

Thus, B is the correct answer, as restoring communication with the KMS server and re-establishing trust is the appropriate solution when facing locked disk groups in an encrypted vSAN cluster.
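
As a hedged troubleshooting sketch (the KMS address is a placeholder, and 5696 is only the default KMIP port), basic reachability from an affected host can be confirmed from the ESXi shell before re-establishing trust in vCenter Server:

  # Confirm the host can reach the KMS over the VMkernel network
  vmkping <kms-ip-address>

  # KMIP listens on TCP 5696 by default; look for connections to that port
  esxcli network ip connection list | grep 5696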

Question No 6:

Which boot device is supported for the vSAN ESXi nodes for this customer?

A. A 16GB single-level cell (SLC) SATADOM device must be used.
B. A 4GB USB or SD device must be used.
C. A 16GB multiple-level cell (MLC) SATADOM device must be used.
D. ESXi Hosts must boot from a PMEM device.

Answer: A

Explanation:

When deploying a vSAN cluster on ESXi nodes, there are specific requirements regarding the boot device, which is necessary for the ESXi operating system to run on each node. In this case, the customer is deploying a distributed ERP system, and the hardware specifications of their server nodes include powerful CPUs and a substantial amount of memory. Given this scenario, it is important to understand the different boot device options supported by VMware vSAN for the ESXi hosts.

Option A suggests using a 16GB single-level cell (SLC) SATADOM device, and this is the correct choice. VMware requires vSAN hosts with more than 512 GB of memory to boot from a SATADOM or disk device rather than USB or SD media, and when a SATADOM is used it must be a single-level cell (SLC) device with a capacity of at least 16 GB. SLC devices provide the write endurance needed for the logging and coredump activity of hosts with large memory configurations, which is why they are mandated over cheaper, lower-endurance alternatives.

Option B mentions the use of a 4GB USB or SD device. VMware supports booting ESXi from USB or SD media only for vSAN hosts with 512 GB of memory or less, so it is not valid for nodes with a large memory configuration. In addition, a 4GB device is below the minimum boot media size expected for ESXi 7.0 and leaves no room for persistent logs or coredumps, so this option does not meet the requirements for this customer.

Option C suggests using a 16GB multiple-level cell (MLC) SATADOM device. MLC devices are cheaper than SLC devices but have lower write endurance, and VMware specifically requires a single-level cell (SLC) device when a SATADOM is used as the boot device for a vSAN host. An MLC SATADOM therefore does not satisfy the boot device requirements, even though its capacity would be sufficient.

Option D indicates that ESXi hosts must boot from a PMEM (Persistent Memory) device. PMEM devices are used for providing non-volatile memory for specific workloads, especially for in-memory databases or large-scale data processing applications. However, they are not standard boot devices for typical ESXi deployments, especially in vSAN environments. PMEM devices are specialized and are not typically required or supported as the primary boot device for ESXi nodes in a vSAN configuration.

In summary, for vSAN ESXi nodes with large memory configurations, the supported boot device is a 16GB single-level cell (SLC) SATADOM device (Option A), as it meets VMware's size and endurance requirements for booting ESXi in a vSAN environment.

Question No 7:

A company hosts a vSAN 7 stretched cluster for all development workloads. The original sizing of a maximum of 250 concurrent workloads in the vSAN cluster is no longer sufficient and needs to increase to at least 500 concurrent workloads within the next six months.
To meet this demand, the original 8-node (4-4-1) cluster has recently been expanded to 16 nodes (8-8-1).

Which three additional steps should the administrator take to support the current growth plans while minimizing the amount of resources required at the witness site? (Choose three.)

A. Add the new vSAN witness appliance to vCenter Server.
B. Deploy a new large vSAN witness appliance at the witness site.
C. Configure the vSAN stretched cluster to use the new vSAN witness.
D. Deploy a new extra large vSAN witness appliance at the witness site.
E. Upgrade the vSAN stretched cluster to vSAN 7.0 U1.
F. Configure the new vSAN witness as a shared witness appliance.

Answer: A, B, C

Explanation:

When expanding a vSAN 7 stretched cluster to accommodate additional workloads, the witness component must be sized correctly while keeping the resource footprint at the witness site as small as possible. In this scenario, the witness must be able to support at least 500 concurrent workloads, and replacing it involves a specific sequence of steps. Here’s how each option fits into this plan:

B. Deploy a new large vSAN witness appliance at the witness site:
The witness appliance acts as the tie-breaker when connectivity between the two data sites is lost, and its size is dictated by the number of virtual machines (and therefore witness components) it has to track. The medium appliance is rated for up to 500 virtual machines, while the large appliance supports more than 500. Because the cluster must grow to at least 500 concurrent workloads, the large appliance is the smallest size that safely covers the requirement, which also satisfies the goal of minimizing resources at the witness site.

A. Add the new vSAN witness appliance to vCenter Server:
After the new witness appliance has been deployed, it must be added to vCenter Server as a host before it can be selected as the witness for the cluster. This is a required step between deploying the appliance and reconfiguring the stretched cluster to use it.

C. Configure the vSAN stretched cluster to use the new vSAN witness:
Once the new witness host is available in vCenter Server, the stretched cluster must be reconfigured to use it. Without this reconfiguration, the cluster would continue to rely on the original, smaller witness appliance, which is not sized for the expanded cluster’s demands.

Now, let’s review the other options:

D. Deploy a new extra large vSAN witness appliance at the witness site:
An “extra large” witness appliance would consume more CPU, memory, and storage at the witness site than the growth plan requires. Because the goal is to minimize resource consumption at the witness site, the large appliance is the better fit.

E. Upgrade the vSAN stretched cluster to vSAN 7.0 U1:
Upgrading would bring new features, such as the shared witness appliance for 2-node clusters, but it is not required in order to replace or resize the witness of a stretched cluster, so it does not directly address the scaling requirement.

F. Configure the new vSAN witness as a shared witness appliance:
The shared witness capability introduced in vSAN 7.0 U1 allows a single witness appliance to serve multiple 2-node clusters; it does not apply to stretched clusters, which require a dedicated witness. This option is therefore not valid for this environment.

In conclusion, to support the growth of the vSAN stretched cluster to 500 concurrent workloads while minimizing resources at the witness site, the administrator should deploy an appropriately sized (large) witness appliance, add it to vCenter Server, and reconfigure the stretched cluster to use it.

Question No 8:

Which two causes explain the high backend IOPs on a vSAN cluster during a workload performance issue? (Choose two.)

A. The cluster DRS threshold has been set to Aggressive.
B. There is a vSAN node failure.
C. The vSAN Resync throttling is enabled.
D. The object repair timer value has been increased.
E. The vSAN policy protection level has changed from FTT=0 to FTT=1.

Answer: B, E

Explanation:

In a vSAN cluster, high backend IOPs (input/output operations per second) can occur due to various reasons, typically related to the processes of rebuilding, resynchronization, or changes in the fault tolerance settings of the storage system. Let’s break down each of the options to understand why B and E are the most likely causes:

  • Option A: The cluster DRS threshold has been set to Aggressive
    DRS (Distributed Resource Scheduler) is responsible for managing VM placement across hosts within a cluster to balance load. The threshold setting being "Aggressive" means DRS will move VMs more frequently to achieve optimal performance or load balancing. However, DRS activity does not directly influence backend IOPs related to vSAN storage operations, and it primarily impacts CPU and memory utilization, not the I/O performance of the backend storage. This makes A an unlikely cause for the high backend IOPs.

  • Option B: There is a vSAN node failure
    A node failure in a vSAN cluster can trigger an automatic resynchronization process, where the system starts rebuilding lost data across the remaining nodes in the cluster. This involves significant I/O operations as vSAN re-replicates data and rebuilds objects that were stored on the failed node. The rebuilding process can cause high backend IOPs as the system strives to restore data redundancy. This is a common cause for high backend IOPs in vSAN environments, making B a valid cause.

  • Option C: The vSAN Resync throttling is enabled
    vSAN has a feature called "resync throttling," which controls the amount of I/O dedicated to resynchronization tasks. Enabling resync throttling limits the impact on storage performance by slowing down the resynchronization of data, which could potentially reduce backend IOPs rather than increase them. Therefore, C does not explain high backend IOPs, as resync throttling typically serves to minimize such spikes.

  • Option D: The object repair timer value has been increased
    The object repair timer value determines the frequency with which vSAN attempts to repair or rebuild objects. Increasing this value might delay repairs but does not directly cause an increase in backend IOPs. While this setting can affect the timing of repairs, it does not explain a spike in IOPs on its own, making D unlikely.

  • Option E: The vSAN policy protection level has changed from FTT=0 to FTT=1
    vSAN’s FTT (Failures to Tolerate) setting defines how many failures the storage layer can withstand before data becomes unavailable. FTT=0 means no redundancy (a single copy of each object), while FTT=1 means vSAN keeps two copies of each object. Changing the policy from FTT=0 to FTT=1 triggers a resynchronization that creates the second replica of every existing object, which generates a large amount of additional read and write traffic on the backend and results in higher backend IOPs. Thus, E is a plausible cause of the high backend IOPs.

In summary, the most likely causes for high backend IOPs in this scenario are a node failure (B) and a change in the vSAN policy protection level from FTT=0 to FTT=1 (E). These actions both directly increase the workload on the storage system, causing the observed spike in backend IOPs.
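
As a hedged illustration (the esxcli vsan debug namespace exists from vSAN 6.7 onward; confirm the exact sub-commands on your build), the resynchronization activity behind such a spike can be inspected directly on a host:

  # Summary of objects currently resynchronizing and the data left to move
  esxcli vsan debug resync summary get

  # Per-object view of the ongoing resynchronization
  esxcli vsan debug resync list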

Question No 9:

Why are performance metrics not displayed for workloads and their virtual disks on a vSAN cluster in the vSphere client?

A. vSAN network diagnostic mode is not enabled.
B. vSAN proactive tests haven’t been run yet.
C. vSAN performance service is turned off.
D. vSAN performance verbose mode is not enabled.

Answer: C

Explanation:

In this scenario, the absence of statistical charts for vSAN workloads and their virtual disks in the vSphere client is most likely due to the vSAN performance service being turned off. This service is crucial for collecting and presenting performance data such as I/O operations, latency, and throughput. When the performance service is disabled, vSAN will not gather or present these key metrics, leading to the issue where no statistical charts are visible in the vSphere client.

Now, let's analyze the other options:

  • A. vSAN network diagnostic mode is not enabled: This option refers to a diagnostic mode that helps in troubleshooting networking issues within a vSAN cluster. While enabling this mode can help identify problems related to network performance, it is not directly related to displaying performance metrics for virtual disks. Hence, this option does not address the root cause of the issue described.

  • B. vSAN proactive tests haven’t been run yet: Proactive tests in vSAN help identify potential hardware or configuration issues before they affect the cluster. These tests, however, are more focused on the health and configuration of the vSAN environment rather than on the collection and display of performance metrics. Therefore, the absence of proactive test execution does not explain the lack of statistical charts.

  • D. vSAN performance verbose mode is not enabled: While enabling verbose mode in vSAN can provide more detailed performance metrics, it is not the primary cause of the issue here. The verbose mode enhances the level of detail available in the metrics but does not impact the basic functionality of the performance service. The lack of charts is more likely related to the performance service being turned off, not verbose mode settings.

Therefore, the root cause is C. vSAN performance service is turned off, and enabling this service will restore the availability of performance metrics and charts in the vSphere client.

Question No 10:

During a maintenance action on a vSAN node, a vSAN administrator noticed that the default repair delay time is about to be reached. Which two commands must be run to extend the time? (Choose two.)

A. /etc/init.d/vsanmgmtd restart
B. esxcli system settings advanced set -o /VSAN/ClomRepairDelay -i 50
C. esxcli system settings advanced set -o /VSAN/ClomRepairDelay -i 80
D. /etc/init.d/clomd restart
E. /etc/init.d/vsanobserver restart

Answer: C, D

Explanation:

In a vSAN (Virtual SAN) environment, repair delay settings determine how long a node will wait before attempting to repair degraded components, such as disks that have failed or components that are out of sync due to maintenance. The repair delay can be adjusted to allow more time for maintenance actions without triggering unnecessary repairs or notifications.

The default repair delay is 60 minutes, which prevents immediate, automatic rebuilds that could be undesirable during planned maintenance activities. In this scenario, the administrator needs to extend the repair delay before it is reached, which requires two commands: one to raise the value and one to make the new value take effect.

Option B: "esxcli system settings advanced set -o /VSAN/ClomRepairDelay -i 50" is not the correct choice in this case because it suggests setting the repair delay to 50 seconds. This could be a valid command for changing the delay, but the value is not sufficient to extend the time as required by the question.

Option C: "esxcli system settings advanced set -o /VSAN/ClomRepairDelay -i 80" is the correct choice because it sets the ClomRepairDelay to 80 seconds, which would extend the repair delay and ensure that vSAN does not attempt repairs prematurely. This provides the administrator with more time for ongoing maintenance activities.

Option A: "/etc/init.d/vsanmgmtd restart" is not relevant for extending the repair delay. This command is used to restart the vSAN management daemon, which is responsible for the overall management of the vSAN environment but does not directly relate to adjusting repair delay settings.

Option D: "/etc/init.d/clomd restart" is also not the correct choice. While this command restarts the clomd (Cluster Object Manager Daemon), it does not affect the repair delay time, which is controlled by the advanced system settings in vSAN.

Option E: "/etc/init.d/vsanobserver restart" is used to restart the vSAN Observer tool, which helps monitor vSAN performance and health but does not influence the repair delay settings.

Thus, the correct approach to extend the repair delay is to raise the ClomRepairDelay value with Option C and then restart clomd with Option D so that the new setting is applied on each host.
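
As a hedged sketch of the complete sequence (the change must be made on every host in the cluster, and 80 is simply the value used in this question):

  # Show the current repair delay (the default is 60 minutes)
  esxcli system settings advanced list -o /VSAN/ClomRepairDelay

  # Raise the delay to 80 minutes so it is not reached during the maintenance window
  esxcli system settings advanced set -o /VSAN/ClomRepairDelay -i 80

  # Restart clomd so the new value takes effect
  /etc/init.d/clomd restart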

