
NCM-MCI Nutanix Practice Test Questions and Exam Dumps
Question 1
An administrator is notified about performance issues in a Nutanix environment where virtual machines are experiencing significant latency. Upon reviewing the setup, it is discovered that each node has only one SSD, which is utilized at 95%, and three HDDs, each of which is utilized at just 40%.
What is the most likely reason for the high latency affecting guest VMs?
A. CVMs are overwhelmed by disk balancing operations.
B. All VM write operations are going to HDD.
C. All VM read operations are coming from HDD.
D. VMs are unable to perform write operations.
Answer: B
Explanation:
In a Nutanix hyperconverged infrastructure (HCI), performance of virtual machines (VMs) heavily depends on how storage operations—both reads and writes—are handled by the cluster. Nutanix leverages a tiered storage architecture that uses SSDs and HDDs to balance performance and capacity. SSDs serve as the primary tier for handling high-speed I/O operations, particularly write operations, while HDDs are used for capacity storage and less performance-intensive read operations.
In this scenario, each node has one SSD that is operating at 95% utilization, and three HDDs that are only 40% utilized. This disparity in utilization suggests a storage tier imbalance. Since SSDs are the first point of contact for write operations in Nutanix’s architecture (before data is later tiered down to HDDs for capacity), an SSD operating near maximum capacity is a clear bottleneck.
Here’s why this leads to high VM latency:
SSDs handle all initial write operations. If these drives are nearly full or over-utilized, they can't absorb new write I/O efficiently, causing a queue to form. This queuing effect introduces high latency for write-heavy applications.
The HDDs, though underutilized, are not used directly for write caching. Their 40% utilization indicates they are not contributing significantly to the current performance issues.
Since Nutanix’s architecture prioritizes SSDs for performance, the inability to handle further write operations quickly causes noticeable latency in the guest VMs. This aligns directly with the symptoms: high latency and an SSD usage nearing saturation.
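The failure mode can be sketched with a deliberately simplified model (the threshold value is hypothetical and stands in for the point at which the SSD tier stops absorbing writes; this is not Nutanix's actual ILM/oplog logic):

```python
# Simplified model of tiered-write behavior (illustration only; not
# Nutanix's actual ILM/oplog logic). Assumption: once the SSD tier
# passes a utilization threshold, new writes land on the HDD tier.
SSD_SPILL_THRESHOLD = 0.75  # hypothetical cutoff

def write_target(ssd_utilization: float) -> str:
    """Return the tier that would absorb a new write."""
    return "HDD" if ssd_utilization >= SSD_SPILL_THRESHOLD else "SSD"

print(write_target(0.40))  # healthy SSD tier -> writes stay on SSD
print(write_target(0.95))  # saturated SSD tier -> writes hit HDD
```

In the question's scenario the SSD sits at 95% utilization, so under this model every new write pays HDD latency, matching the observed symptoms.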
Let’s briefly evaluate the incorrect options:
A: While disk balancing (a background task performed by the Curator service in Nutanix) does occur, it runs at low priority and would not overwhelm the CVMs enough to cause this level of latency unless there were deeper issues. The data does not support excessive disk balancing as the primary cause.
C: If read operations were coming only from HDDs, we would expect higher HDD utilization, which is not the case here (only 40%). Moreover, Nutanix uses caching and tiering strategies to keep hot data on SSDs, making this unlikely unless caching has failed system-wide.
D: This is an extreme scenario and not typically caused by a high SSD utilization alone. Writes may be slow, but not entirely blocked unless the SSD is completely full or failed, which the question doesn’t indicate.
In summary, the bottleneck lies in the SSD tier. The nearly full SSDs are preventing the efficient handling of write operations, which is critical to VM performance. Since Nutanix writes first go to SSDs, their saturation leads to high I/O latency for the VMs, especially during heavy write workloads.
Therefore, the most technically sound explanation is that VM write operations are landing on HDD: once the SSD tier is saturated and can no longer absorb incoming writes, data is written to (or rapidly drained to) the much slower HDDs, and the guest VMs experience the resulting latency.
The best choice given the options is B.
Question 2
An administrator is managing a Nutanix cluster where a Protection Domain includes 50 entities replicated to a remote single-node replication target. The replication schedule is every 6 hours, with a local retention policy of 1 snapshot and a remote retention policy of 8 snapshots. The schedule starts at 12 a.m. On Monday at 8 a.m., a VM is found to be corrupted, and the last known good state was at 2 p.m. on Sunday.
To restore the VM while keeping the current protection policy intact, what should the administrator do?
A. From the Remote site, activate the Protection Domain, then re-protect the entity.
B. From the Remote site, restore the VM from the local snapshot by selecting the correct snapshot.
C. From the local site, retrieve the correct remote snapshot, then restore the VM locally.
D. From the local site, restore the VM from the local snapshot by selecting the correct snapshot.
Answer: C
Explanation:
To determine the best recovery strategy, we must examine the facts about the Protection Domain, retention policy, and the time of the incident. The Nutanix Protection Domain (PD) replicates every 6 hours starting from midnight. So snapshots are created at:
12 a.m.
6 a.m.
12 p.m.
6 p.m.
This continues every day. The local retention policy retains only 1 snapshot, meaning only the most recent snapshot is available locally. The remote retention policy, however, stores up to 8 snapshots, which allows administrators to recover data from up to 48 hours earlier (since 8 snapshots × 6 hours = 48 hours of coverage).
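The schedule and retention arithmetic can be rebuilt in a few lines (dates are arbitrary; only the Sunday/Monday relationship matters). The script lists the remote snapshots that still exist at the moment of the incident and picks the newest one taken no later than the last known good state:

```python
from datetime import datetime, timedelta

interval = timedelta(hours=6)              # replication every 6 hours
incident = datetime(2024, 6, 3, 8, 0)      # Monday 8 a.m.
last_good = datetime(2024, 6, 2, 14, 0)    # Sunday 2 p.m.

# The schedule is anchored at midnight, so the newest snapshot at the
# time of the incident is Monday 6 a.m.; the remote site retains 8.
newest = incident.replace(hour=(incident.hour // 6) * 6)
remote = [newest - i * interval for i in range(8)]

# Safest restore point: newest snapshot taken no later than last_good.
restore_point = max(s for s in remote if s <= last_good)
print(restore_point)   # Sunday 12 p.m.
```

Any snapshot taken after the last known good state may already contain the corruption, which is why the selection filters on `s <= last_good`.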
The incident occurs at 8 a.m. on Monday, and the administrator identifies the last known good state as 2 p.m. on Sunday, 18 hours earlier. Given the schedule, snapshots were taken at 12 p.m. and 6 p.m. on Sunday. Because the corruption could have been introduced at any point after 2 p.m., the 12 p.m. Sunday snapshot is the most recent one guaranteed to predate it, and is therefore the safest restore point.
Now let's evaluate each option based on this context:
A suggests activating the PD at the remote site. However, activating a PD is typically a disaster recovery action, used when the entire primary site is down. This would temporarily shift operations to the remote site, which is not necessary here, especially since the goal is to maintain the current protection configuration and not fail over operations.
B implies restoring the VM at the remote site using a local snapshot. However, the remote site in this case is a Single Node Replication Target, typically meant only for holding replicated snapshots and not intended for running VMs or production workloads. So, this is technically not feasible or practical.
C is the correct approach. From the local site, the administrator can access the remote snapshots created as part of the replication policy. Since the remote site retains 8 snapshots (48 hours of coverage), the Sunday 12 p.m. snapshot is still available as of Monday 8 a.m. The administrator can retrieve that snapshot and restore the VM locally, which meets the goal of restoring to a known good state without breaking the protection domain setup or initiating a failover.
D is invalid because the local retention policy keeps only one snapshot, which would be the most recent one taken around 6 a.m. Monday. That snapshot postdates the corruption, so it cannot restore the VM to its good state from Sunday 2 p.m.
In conclusion, the only strategy that allows the administrator to retrieve the needed data from the correct time window while maintaining the current protection structure is to access the remote snapshot from the local site and restore the VM accordingly. This avoids unnecessary disruption, makes use of the extended retention at the remote target, and complies with Nutanix best practices for recovery.
Thus, the correct answer is C.
Question 3
An administrator is using a custom backup application on a Windows virtual machine that includes a 2TB disk. The current VM configuration includes four vCPUs (each with one core), 4GB of memory, a 50GB vDisk for the OS, and a 2TB vDisk for the application. However, the backup application’s throughput is significantly lower than expected.
What configuration change would most effectively improve throughput for the application?
A. Increase the number of cores per vCPU
B. Increase the vCPUs assigned to the VM
C. Span the 2TB disk across four vDisks
D. Add 4GB of memory to the VM
Answer: C
Explanation:
When performance issues arise in a virtualized backup application—especially when large disk throughput is required—it’s essential to understand how virtual disks, memory, CPU, and storage architecture impact I/O performance. In this scenario, the VM has a modest amount of CPU and memory, but the application relies on a single 2TB vDisk. This storage configuration is the key to the observed performance bottleneck.
Let’s explore each component and the implications of the possible configuration changes:
Virtual Disk Configuration and Throughput Bottleneck
Virtual machines in hyperconverged environments such as Nutanix interact with the underlying storage via virtual disks (vDisks). A single vDisk in many virtualization platforms—including Nutanix’s AHV and VMware ESXi—has a throughput limit, often associated with the virtual SCSI controller or disk queue depth. When all the application’s I/O is directed through a single vDisk, you’re constrained by that single data path.
By spanning the 2TB volume across four vDisks, the application can take advantage of parallel I/O streams, increasing the effective disk throughput. This works particularly well in applications that are multithreaded or perform heavy read/write operations, such as backup utilities. Each vDisk would have its own queue, reducing contention and improving performance.
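A toy model of the effect (the per-vDisk cap is a hypothetical number standing in for controller queue limits, not a Nutanix specification):

```python
# Toy model of why spanning one volume across several vDisks helps.
# Assumption: each vDisk's queue sustains a fixed throughput ceiling.
PER_VDISK_CAP_MBPS = 200  # hypothetical per-vDisk cap

def aggregate_throughput(num_vdisks: int, workload_mbps: float) -> float:
    """Throughput achieved when the workload is striped evenly."""
    return min(workload_mbps, num_vdisks * PER_VDISK_CAP_MBPS)

print(aggregate_throughput(1, 600))  # single vDisk: capped at 200
print(aggregate_throughput(4, 600))  # four parallel queues: full 600
```

Real gains depend on the hypervisor, the virtual controller, and whether the application issues I/O in parallel, but the principle is the same: more independent queues, less contention per queue.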
Evaluating the Other Options
A: Increase the number of cores per vCPU
In most hypervisors, vCPUs are scheduled individually, and the concept of “cores per vCPU” is more relevant for licensing or software that checks for physical core count. It doesn't directly impact performance in the same way that increasing the number of vCPUs would. This would not significantly affect I/O-bound performance.
B: Increase the vCPUs assigned to the VM
While this could benefit CPU-bound workloads, the backup application is suffering from low disk throughput, not CPU starvation. If the CPU usage on the VM is not consistently pegged near 100%, then more vCPUs won’t help disk throughput issues. Monitoring tools can confirm this.
D: Add 4GB of memory to the VM
Memory might improve performance marginally if the application was experiencing paging or needed more cache space. However, low throughput due to disk I/O is unlikely to be solved by simply doubling RAM from 4GB to 8GB, especially if memory usage wasn’t already maxed out. Memory expansion is more relevant for applications with large working sets or in-memory caching—not I/O-heavy backup tasks.
Best Practice in Hypervisors
Common best practices for performance tuning in virtual environments suggest using multiple virtual disks to parallelize disk I/O. This is particularly emphasized for disk-intensive applications. Additionally, using separate controllers (e.g., separate SCSI controllers) for different disks can further reduce contention.
In conclusion, the most impactful change in this case is to span the 2TB disk across multiple vDisks, ideally four. This will increase the number of parallel I/O paths available to the application, improving throughput significantly. It directly addresses the bottleneck without unnecessarily increasing resource allocations like CPU or memory.
The correct answer is C.
Question 4
An administrator manages a Nutanix Enterprise Cloud setup consisting of a central datacenter with a 20-node cluster and 1.5PB of storage, along with five remote sites, each with a 4-node cluster and 200TB of storage. The remote sites are connected to the central datacenter through 1Gbps links that average 6 milliseconds round-trip latency (RTT).
What is the minimum Recovery Point Objective (RPO) that can be achieved across this environment?
A. 0 minutes
B. 15 minutes
C. 1 hour
D. 6 hours
Answer: B
Explanation:
In Nutanix environments, the Recovery Point Objective (RPO) refers to the maximum acceptable amount of data loss measured in time between the last backup or replication point and a failure. The achievable RPO depends on several factors, including the capabilities of the replication technology used, the network bandwidth and latency, and the configuration of protection domains.
Let’s break down the key aspects of this environment:
Each remote site connects to the central datacenter via a 1Gbps link with 6ms RTT.
While 1Gbps provides decent bandwidth, it's not high enough for continuous data replication at large scales, especially when multiple remote sites are involved.
6ms RTT is relatively low latency, so latency isn't the limiting factor here—bandwidth is.
Nutanix provides several options for replication:
Asynchronous replication, whose minimum supported schedule with Protection Domains is 60 minutes; it places the lightest demands on WAN links.
NearSync replication, which offers RPOs between 1 and 15 minutes. Its most aggressive (1-minute) settings assume generous bandwidth and low-latency links and are not recommended across a shared 1Gbps WAN serving multiple remote sites, but the 15-minute end of the NearSync range remains achievable here.
Synchronous replication (Metro Availability), which achieves a 0-minute RPO, but only over very low-latency (sub-5ms RTT), high-bandwidth connections; this is not feasible given the 1Gbps links and 6ms RTT.
Given these constraints, a 15-minute RPO is the minimum that can realistically be achieved and sustained in this environment. It strikes a balance between:
Effective data protection
Network load
Operational simplicity
High likelihood of completion within each scheduled replication window
Trying to replicate more frequently than every 15 minutes (e.g., every 1 minute) over a 1Gbps WAN may lead to:
Snapshot queuing
Incomplete or failed replications
Excessive bandwidth usage
Inconsistent performance across sites
A (0 minutes): Requires synchronous replication, which is not feasible over 1Gbps links with 6ms latency.
C (1 hour) and D (6 hours): These are valid RPOs in certain configurations, but they are not the minimum achievable. They represent less aggressive, possibly legacy or low-priority configurations. Since the question asks for the minimum RPO, these are too conservative.
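A quick feasibility check of a 15-minute replication window over the stated link (the 80% efficiency factor is an assumed allowance for protocol overhead and link sharing, not a figure from the question):

```python
# Capacity of one 15-minute replication window over a 1 Gbps WAN link.
# The 0.8 efficiency factor is an assumption (protocol overhead, sharing).
LINK_GBPS = 1.0
EFFICIENCY = 0.8
WINDOW_SECONDS = 15 * 60

usable_mb_per_s = LINK_GBPS * 1000 / 8 * EFFICIENCY   # ~100 MB/s
window_capacity_gb = usable_mb_per_s * WINDOW_SECONDS / 1000
print(f"~{window_capacity_gb:.0f} GB of changed data per window")
```

If a site's change rate per interval exceeds this figure, replications queue up and the effective RPO slips, which is why schedules more aggressive than 15 minutes are risky on this link.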
In this environment, the best balance of feasibility, network capability, and Nutanix supportability yields a minimum achievable RPO of 15 minutes.
The correct answer is B.
Question 5
A Nutanix Enterprise Cloud administrator is alerted by the security team that a virtual machine has been showing suspicious network behavior. Microsegmentation is enabled, and a firewall VM is active in the environment. The administrator needs to isolate the affected VM from the production network to prevent potential spread, but still requires access to perform diagnostics and troubleshooting.
What is the most appropriate action to meet these goals?
A. Disable the vNIC on the affected VM
B. Quarantine the VM using the Forensic Method
C. Create a firewall rule that blocks VM traffic but permits diagnostic access
D. Create a security policy with a service chain directing that VM's traffic to the firewall
Answer: B
Explanation:
In Nutanix environments with microsegmentation enabled via Flow (Nutanix’s native network security and policy framework), isolating virtual machines showing abnormal or suspicious activity can be handled efficiently while maintaining control and diagnostic access. The challenge here is to both protect the environment from potential threats posed by the VM and to retain the ability to investigate the VM, which requires a balance of isolation and accessibility.
Let’s evaluate what the best method is and why Option B, the Forensic Method, is most appropriate.
Nutanix Flow provides quarantine options to isolate compromised or suspect virtual machines while still allowing specific controlled access—a technique referred to as the Forensic Method. This approach:
Completely blocks all traffic to and from the VM, except for whitelisted IPs, ports, or protocols needed by the administrator for analysis.
Isolates the VM from the rest of the production network, preventing any lateral movement by potential malware or an attacker.
Maintains microsegmentation policies and does not require shutting down the VM or altering its vNIC configuration.
Provides a secure and reversible method to analyze the VM using only designated management interfaces, jump boxes, or bastion hosts.
This strategy is explicitly designed for the kind of use case described in the question: contain a VM but still allow access for forensic investigation or diagnostic tasks.
A (Disable the vNIC on the affected VM):
While this would immediately halt all network activity from the VM, it also completely blocks any access, including from administrators trying to run diagnostics. This is too blunt an instrument and doesn’t allow for controlled investigation.
C (Create a firewall rule that blocks VM traffic but permits diagnostic access):
Although possible, this approach is manual and error-prone. It doesn’t leverage Nutanix Flow’s purpose-built quarantine features, which are safer and more auditable. Crafting precise firewall rules can take time and might not ensure full isolation from production if not perfectly configured.
D (Create a security policy with a service chain directing that VM’s traffic to the firewall):
Service chaining can route traffic through a firewall for inspection, but it doesn’t provide isolation. It’s used to inspect traffic, not to quarantine or contain a VM. This method allows the VM to continue communicating, which violates the key requirement to isolate it.
Using the Forensic Quarantine Method through Nutanix Flow enables the administrator to:
Maintain strict containment of the potentially compromised VM
Allow targeted access for investigation
Avoid production disruption
Use built-in automation and auditability features
This method is specifically recommended for suspected security incidents involving VMs in a microsegmented Nutanix environment.
The correct answer is B.
Question 6
A customer has configured asynchronous replication between Site A and Site B in a Nutanix environment. They perform a planned failover by activating the protection domain on Site B. Afterward, they run the command ncli pd deactivate_and_destroy_vms name=<protection_domain_name> on Site A.
What is the result of executing this command in the environment?
A. VMs get deleted from Site B and the protection domain is now Active.
B. VMs are powered off on Site A and must be manually powered on at Site B.
C. VMs get deleted from Site A and the protection domain is no longer active.
D. Customer must then manually power off VMs at Site A and power them on at Site B.
Answer: C
Explanation:
To understand what happens here, it's important to dissect both the functionality of asynchronous replication in Nutanix and the purpose of the ncli pd deactivate_and_destroy_vms command. Let's walk through the sequence of events and the impact on the virtual machines and protection domain status.
In Nutanix, asynchronous replication is used to periodically replicate snapshots of virtual machines from a primary site (Site A) to a secondary site (Site B) using Protection Domains (PDs). During normal operation, the PD is active on Site A, and Site B only stores replicated snapshots.
A planned failover means the administrator wants to intentionally switch operations from Site A to Site B, such as for maintenance, testing, or a migration event. This is typically done by:
Activating the Protection Domain on Site B.
Ensuring that replicated snapshots are up to date.
Performing appropriate power-on and validation procedures at Site B.
The ncli pd deactivate_and_destroy_vms command performs two actions:
Deactivates the Protection Domain on the current site (in this case, Site A).
Destroys the VMs associated with that Protection Domain on Site A.
This is a destructive operation that removes the VMs from Site A. The rationale behind this command is to prevent conflicts or accidental VM duplication when operations are switched to the recovery site (Site B).
Here’s what happens in the customer’s environment:
The administrator activates the PD on Site B, transitioning Site B into the primary role.
When the command is run on Site A (ncli pd deactivate_and_destroy_vms), it:
Deactivates the Protection Domain on Site A (meaning Site A no longer controls replication).
Permanently deletes the VMs from Site A.
At this point, Site B holds the only active and operational versions of the VMs, and the PD is active there.
A is incorrect. VMs are not deleted from Site B; they are deleted from Site A. Site B becomes active, not Site A.
B is incorrect. The command does not simply power off VMs; it destroys them from Site A. Also, the VMs should already be active on Site B if the activation step was properly done.
C is correct. The VMs on Site A are deleted, and the protection domain on Site A is deactivated.
D is incorrect. Manual VM power operations are not relevant here because the command explicitly destroys the VMs at Site A.
Running ncli pd deactivate_and_destroy_vms is a powerful action and must be used with caution. It is typically part of a planned migration or failover, where operations are fully intended to shift from one site to another. Once executed, the original site's VMs are irretrievable unless backups exist.
Thus, in this scenario, the customer effectively removes the VMs from Site A and finalizes the failover to Site B, with the protection domain now active only at the secondary site.
The correct answer is C.
Question 7
An administrator must provision and start five new virtual machines for an OLAP data analytics project. Each VM will be configured with 4 vCPUs, 64 GB of RAM, and a 1.5 TB virtual disk. The Nutanix cluster consists of four nodes, each with 24 vCPUs (20% used), 192 GB RAM (60% used), and a storage layout of 2 × 1.92 TB SSDs and 4 × 2 TB HDDs. The RF2 container at the cluster level is 30% utilized and has 13.5 TB of extent store capacity remaining.
What cluster component should the administrator be most concerned about in supporting this workload?
A. Physical RAM, because it is not enough to power on all of the new VMs.
B. Physical Cores, because they are not enough to power on all of the new VMs.
C. Storage, because the capacity is not enough to create VMs.
D. Flash Tier because it is not enough to accommodate the workloads.
Answer: D
Explanation:
To determine which component of the Nutanix cluster requires attention, we must analyze the resource requirements of the new virtual machines and compare them against the available resources in the cluster. The question involves provisioning five OLAP-type VMs, which are typically read-intensive and IOPS-heavy, especially on storage. Here's how each component measures up:
Each VM requires 4 vCPUs, so five VMs need 20 vCPUs total. The cluster has 4 nodes, each with 24 vCPUs, totaling 96 vCPUs across the cluster. With 20% usage, only 19.2 vCPUs are currently utilized, leaving roughly 76.8 vCPUs free.
Since only 20 vCPUs are needed, there is no CPU bottleneck.
→ Eliminate option B
Each VM needs 64 GB RAM, so five VMs require 320 GB total. The cluster has 4 nodes × 192 GB = 768 GB total RAM. At 60% usage, about 460.8 GB is already used, leaving 307.2 GB free.
However, we need 320 GB and have only about 307 GB available, an apparent deficit of roughly 13 GB. In practice there is some relief: ESXi supports memory overcommitment (ballooning, compression, swapping), newer AOS releases add memory overcommit on AHV, and OLAP VMs are unlikely to consume their full allocation immediately at boot.
Therefore, while RAM is tight, it is not the most critical blocker.
→ Deprioritize option A
Each VM needs 1.5 TB, so five VMs require 7.5 TB total. Nutanix uses RF2 (Replication Factor 2), which means data is duplicated across nodes. So the actual storage requirement is 7.5 TB × 2 = 15 TB.
The cluster reports 13.5 TB of extent store capacity remaining, which at first glance falls short of the 15 TB required under RF2. However, vDisks are thin-provisioned and consume space only as data is written, and the container is only 30% utilized, so the overall raw capacity deserves a closer look.
Let’s calculate that:
SSDs: 4 nodes × 2 × 1.92 TB = 15.36 TB
HDDs: 4 nodes × 4 × 2 TB = 32 TB
Total = 47.36 TB raw
At 30% utilization, about 14.2 TB is in use, so ~33 TB remains (raw). Considering RF2 overhead (effectively halving usable space), that’s still ~16.5 TB usable, just enough to accommodate the new VMs.
So while tight, storage capacity is just sufficient.
→ Eliminate option C
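The headroom arithmetic above can be reproduced with a short script (all figures come from the question):

```python
# Recompute the sizing figures stated in the question.
NODES = 4

vcpus_free = NODES * 24 * (1 - 0.20)          # ~76.8 vCPUs free
ram_free_gb = NODES * 192 * (1 - 0.60)        # ~307 GB free

ssd_tb = NODES * 2 * 1.92                     # 15.36 TB flash tier
hdd_tb = NODES * 4 * 2.0                      # 32 TB cold tier
raw_free_tb = (ssd_tb + hdd_tb) * (1 - 0.30)  # ~33 TB raw remaining

need_vcpus = 5 * 4
need_ram_gb = 5 * 64
need_raw_tb = 5 * 1.5 * 2                     # RF2 doubles the footprint

print("CPU ok:", vcpus_free >= need_vcpus)    # comfortable headroom
print("RAM ok:", ram_free_gb >= need_ram_gb)  # tight: ~307 GB vs 320 GB
print("Raw capacity ok:", raw_free_tb >= need_raw_tb)
print("Potential hot data vs flash tier:", 5 * 1.5, "TB vs", ssd_tb, "TB")
```

The last line is the crux of the question: 7.5 TB of potential OLAP hot data competes for a 15.36 TB flash tier that must also serve every existing workload.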
The Flash Tier is critical in Nutanix for:
Metadata storage
Caching hot data
Write buffer before tiering down to HDD
Each node has 2 × 1.92 TB SSDs, totaling 15.36 TB flash storage across the cluster. OLAP workloads are read-heavy and performance-sensitive, meaning they rely heavily on SSDs for fast I/O.
Here’s the problem:
Five OLAP VMs × 1.5 TB = up to 7.5 TB of potential hot data, against 15.36 TB of total SSD capacity
The flash tier must also keep servicing metadata, the oplog, and hot data for existing workloads
If the existing workload already utilizes SSDs heavily, there may not be enough SSD space to accommodate the new VMs’ active data. Moreover, Nutanix doesn’t store all 1.5 TB of a vDisk on SSD. Instead, only hot data is promoted to SSD from HDD. But OLAP systems generate large working sets and may try to promote a significant portion of their dataset to SSDs, exceeding SSD capacity, which results in:
SSD contention
Higher latency
Frequent tiering and cache eviction
This makes the Flash Tier the bottleneck, as it may not be able to absorb or serve the I/O demand from the OLAP workloads, leading to degraded performance.
CPU is sufficient.
RAM is tight, but manageable.
Storage capacity is close, but acceptable.
Flash Tier is the most likely performance bottleneck and the component that requires proactive management, especially for read-heavy OLAP workloads that depend on SSDs for performance.
The correct answer is D.
Question 8
An administrator is managing two Nutanix AOS 5.15 clusters—Corp-cluster-01 and Corp-cluster-02—registered with Prism Central. The administrator needs to ensure that VM images are available only on Corp-cluster-01 and are not accessible by Corp-cluster-02 or any additional clusters that may be added to Prism Central in the future.
Which two configuration settings must be selected when creating the appropriate image placement policy? (Choose two.)
A. Create an image placement policy that identifies cluster Corp-cluster-01 as the target cluster
B. Set the policy enforcement to Soft.
C. Set the policy enforcement to Hard.
D. Create an image placement policy that identifies cluster Corp-cluster-02 as the target cluster
Answers: A and C
Explanation:
Nutanix Prism Central allows administrators to centrally manage images across multiple registered clusters using Image Placement Policies. These policies determine where images can be stored, replicated, or accessed, and include enforcement rules to define whether the policy is optional or mandatory.
Let’s break down the requirements:
Images must be available only on Corp-cluster-01.
Images must not be accessible from Corp-cluster-02.
Images must not be accessible from any future clusters added to Prism Central.
Enforcement must strictly prevent distribution outside of Corp-cluster-01.
This implies a restrictive and exclusive configuration is needed, and strict policy enforcement is required.
Image placement policies in Prism Central involve two main decisions:
Target cluster(s) where the image should reside.
Enforcement level — either Soft or Hard.
A: Create an image placement policy that identifies Corp-cluster-01 as the target cluster ✔ This is essential.
To ensure that the image is only available on Corp-cluster-01, the policy must explicitly designate that cluster as the placement target. If this isn't specified, Prism Central may replicate the image to other clusters, either immediately or when requested.
C: Set the policy enforcement to Hard ✔ This is also critical.
A Hard enforcement setting ensures the policy is strictly enforced, meaning:
The image will only be stored on Corp-cluster-01.
No checkouts (temporary or permanent copies) are allowed on any other clusters—including Corp-cluster-02 or future clusters.
Violations are blocked, not just logged.
This meets the requirement to prohibit image checkout to other clusters completely.
B: Set the policy enforcement to Soft ✘ Incorrect.
Soft policies are not restrictive. They act more like guidelines: Prism Central will prefer the designated cluster (Corp-cluster-01), but if needed, images may still be copied or checked out to other clusters. This fails to meet the security and exclusivity requirements.
D: Create an image placement policy that identifies Corp-cluster-02 as the target cluster ✘ Incorrect.
This contradicts the requirement. Corp-cluster-02 should not have access to the images. Creating a policy that includes it as a target cluster directly violates the access restriction goal.
Create an Image Placement Policy with:
Corp-cluster-01 as the only target cluster
Hard enforcement selected
Ensure that Corp-cluster-02 and any future clusters are not added to the policy’s scope.
This guarantees the image is locked to Corp-cluster-01 and not replicable or accessible from any other location.
The correct answers are A and C.
Question 9
An administrator is managing a business-critical environment and has deployed metro availability for a zero data loss configuration. The two clusters are connected via a 1GbE link. A new workload with the following requirements is being deployed:
150 MB/s of sustained write throughput
20 MB/s of sustained read throughput
What change must the administrator make to ensure the workload can be deployed successfully on this cluster?
A. The bandwidth must be increased to support this workload.
B. The workload must be configured to read at greater than 12.5 MB/s.
C. The replication frequency must be less than 60 minutes.
D. Zero data loss nearsync must be used to support this workload.
Answer: A
Explanation:
In a Metro Availability configuration, data is continuously replicated across geographically dispersed clusters, ensuring zero data loss in the event of a site failure. For this to be effective, especially in business-critical environments, the replication bandwidth and network capacity must be sufficient to handle the workload’s data transfer needs.
Workload write throughput: 150 MB/s (sustained)
Workload read throughput: 20 MB/s (sustained)
Cluster connection: 1 GbE link
Bandwidth Calculations:
1 GbE link speed translates to a 1,000 Mbps (megabits per second) connection.
To convert to megabytes per second (MB/s), divide by 8 (since 1 byte = 8 bits):
1,000 Mbps ÷ 8 = 125 MB/s of total available throughput on the link.
The workload requires:
150 MB/s of write throughput
20 MB/s of read throughput
Total required throughput = 150 MB/s + 20 MB/s = 170 MB/s.
Comparison with available bandwidth:
The available 1 GbE link provides only 125 MB/s.
The workload demands 170 MB/s, which exceeds the available bandwidth by 45 MB/s.
Thus, the bandwidth is insufficient to support the workload's combined read and write throughput requirements.
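The arithmetic can be verified in a couple of lines (figures from the question):

```python
# Reproduce the link-capacity arithmetic from the question.
link_mbps = 1000                   # 1 GbE
link_mb_per_s = link_mbps / 8      # 125 MB/s of raw throughput

write_mb_per_s = 150
read_mb_per_s = 20
required = write_mb_per_s + read_mb_per_s     # 170 MB/s combined

print(link_mb_per_s)               # 125.0 MB/s available
print(required - link_mb_per_s)    # 45.0 MB/s shortfall
```

Note this uses the raw line rate; real-world protocol overhead would make the shortfall slightly worse.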
A: To support the new workload, the bandwidth must be increased to at least 170 MB/s to accommodate the combined read and write throughput. Since the current 1 GbE link is limited to 125 MB/s, the administrator should consider upgrading the connection to a higher-speed link, such as 10GbE or 25GbE, depending on available resources.
B: This option is misleading. The required read throughput is 20 MB/s, and changing how the workload reads would not address the shortfall. The primary issue is the lack of sufficient total bandwidth to support the combined write and read throughput demands.
C: The replication frequency controls how often data is replicated between two clusters, but Metro Availability replicates synchronously, and frequency is not the bottleneck here. The issue is the total available bandwidth, which cannot handle the workload's sustained throughput at the current 1 GbE link speed.
D: Zero data loss is already provided by Metro Availability. NearSync allows more frequent asynchronous replication but doesn't address the underlying bandwidth shortfall, and would place even more strain on the existing 1 GbE link, exacerbating the problem rather than solving it.
The issue is that the current 1 GbE link does not have enough bandwidth to meet the workload's throughput requirements. Therefore, the correct action is to increase the bandwidth to at least 170 MB/s to accommodate both the write and read throughput needs. The correct answer is A.