3V0-42.20 VMware Practice Test Questions and Exam Dumps




Question 1

Which is a family of solutions for data center designs that span compute, storage, networking, and management, serving as a blueprint for a customer’s Software Defined Data Center (SDDC) implementations? (Choose the best answer.)

A. VMware SDDC Design
B. VMware Validated Design
C. VMware POC Design
D. VMware Cloud Foundation

Answer: B. VMware Validated Design

Explanation:

  • A. VMware SDDC Design: While this refers to designing a Software Defined Data Center (SDDC), it doesn't necessarily represent a complete solution for data center design that spans compute, storage, networking, and management. It's more of a conceptual approach.

  • B. VMware Validated Design: This is the correct answer. VMware Validated Design is a comprehensive family of solutions that serve as blueprints for building out a Software Defined Data Center (SDDC). It covers all aspects, including compute, storage, networking, and management, and ensures that all the components are compatible and work together seamlessly for the customer’s SDDC implementation.

  • C. VMware POC Design: A Proof of Concept (POC) design is typically used for testing and validating a solution in a controlled environment, but it’s not a family of solutions for full data center design.

  • D. VMware Cloud Foundation: While VMware Cloud Foundation is a comprehensive platform that integrates compute, storage, and networking, it is a product, not a design framework or blueprint. VMware Cloud Foundation can be a part of the VMware Validated Design.

The correct answer is B. VMware Validated Design, as it provides a proven, validated set of designs and best practices for implementing a fully functional Software Defined Data Center.


Question 2

Which three IPv6 features are supported in an NSX-T Data Center design? (Choose three.)

A. IPv6 OSPF
B. IPv6 static routing
C. IPv6 switch security
D. IPv6 DNS
E. IPv6 Distributed Firewall
F. IPv6 VXLAN

Answer:
B. IPv6 static routing
C. IPv6 switch security
E. IPv6 Distributed Firewall

Explanation:

  • A. IPv6 OSPF: NSX-T does not support OSPFv3 for IPv6. Dynamic routing between the NSX-T gateways and the physical network is provided by BGP (which supports IPv6 address families), and IPv6 reachability can also be handled with static routes, but OSPF for IPv6 is not part of an NSX-T design.

  • B. IPv6 static routing: This is correct. NSX-T supports static routes for IPv6 prefixes on Tier-0 and Tier-1 gateways, allowing fixed paths to be defined for IPv6 networks.

  • C. IPv6 switch security: This is correct. The NSX-T segment (switch) security profiles include IPv6-aware protections, such as filtering of DHCPv6 and IPv6 Router Advertisement traffic, to guard segments against rogue or misbehaving endpoints.

  • D. IPv6 DNS: DNS is an application-layer service rather than an NSX-T data-plane feature. NSX-T does not provide a distinct IPv6 DNS capability as part of the data center design.

  • E. IPv6 Distributed Firewall: This is correct. The Distributed Firewall supports rules for both IPv4 and IPv6, providing micro-segmentation and security policies at the individual VM or interface level.

  • F. IPv6 VXLAN: This is incorrect. NSX-T Data Center does not use VXLAN at all; its overlay encapsulation is Geneve. There is therefore no "IPv6 VXLAN" feature in an NSX-T design.

The correct answers are B. IPv6 static routing, C. IPv6 switch security, and E. IPv6 Distributed Firewall. These are the IPv6 capabilities supported in an NSX-T Data Center design.
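As a concrete illustration of IPv6 static routing, the sketch below builds the kind of JSON body the NSX-T Policy API accepts for a static route on a Tier-0 gateway. Treat it as a hedged sketch: the prefix, next hop, and the commented request path are illustrative placeholders, and the exact schema should be confirmed against the NSX-T API reference for your version.

```python
import json

def ipv6_static_route(network, next_hop, admin_distance=1):
    """Build a JSON payload for an IPv6 static route on a Tier-0 gateway.

    Mirrors the general shape of the NSX-T Policy API static-route
    schema; the prefix and next hop used below are illustrative only.
    """
    return {
        "network": network,                  # destination prefix (CIDR)
        "next_hops": [
            {
                "ip_address": next_hop,      # IPv6 next hop
                "admin_distance": admin_distance,
            }
        ],
    }

# Route a documentation prefix via a hypothetical next hop; the payload
# would be PATCHed to a path of the form
# /policy/api/v1/infra/tier-0s/<tier-0-id>/static-routes/<route-id>
body = ipv6_static_route("2001:db8:100::/64", "2001:db8:1::1")
print(json.dumps(body, indent=2))
```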


Question 3

An architect is helping an organization with the Physical Design of an NSX-T Data Center solution. This information was gathered during a workshop:
✑ Some workloads should be moved to a Cloud Provider.
✑ Extend networks (VLAN or VNI) across sites on the same broadcast domain.
✑ Enable VM mobility use cases such as migration and disaster recovery without IP address changes.
✑ Support 1500 byte MTU between sites.

Which selection should the architect include in their design? (Choose the best answer.)

A. Load Balancer
B. Reflexive NAT
C. SSL VPN
D. L2 VPN

Answer:
D. L2 VPN

Explanation:

  • A. Load Balancer: A Load Balancer distributes traffic across multiple servers or resources to optimize performance, but this does not address the requirements outlined in the scenario. The needs mentioned focus on network extension and VM mobility, which a Load Balancer doesn’t directly address.

  • B. Reflexive NAT: Reflexive NAT is used for handling outbound connections and typically works for return traffic in situations where one network device establishes a session with an external service and NAT is required. This does not fulfill the need to extend networks across sites with the same broadcast domain or support VM mobility without IP address changes.

  • C. SSL VPN: SSL VPN provides secure remote access to the network over the internet, but it does not help with extending the network between sites, supporting VM migration, or ensuring that the network infrastructure is in place for workload mobility between on-premises and cloud environments.

  • D. L2 VPN: L2 VPN (Layer 2 VPN) enables the extension of Layer 2 networks (VLANs or VNIs) between sites, which supports extending the same broadcast domain across different locations. L2 VPN allows for VM mobility without the need for IP address changes, as it allows workloads to seamlessly migrate between locations while maintaining their IP addresses. Additionally, L2 VPN can support the required MTU (Maximum Transmission Unit) size (1500 bytes), making it an ideal solution for the requirements mentioned in the scenario.

The best answer is D. L2 VPN, as it supports extending VLANs or VNIs across sites and enables VM mobility with no IP address changes, addressing the architect's needs for the design.
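The 1500-byte inter-site MTU requirement is worth quantifying, because any Layer 2 extension adds encapsulation headers: either the underlay carries larger frames, or the effective payload available to workloads shrinks. The snippet below is generic MTU/MSS arithmetic, not an NSX-T API; the 80-byte tunnel overhead is an assumed, illustrative figure, since the exact cost depends on the chosen encapsulation (e.g. GRE over IPsec).

```python
def effective_mtu(site_mtu, tunnel_overhead):
    """Bytes left for the inner (extended L2) frame after tunnel headers."""
    return site_mtu - tunnel_overhead

def tcp_mss(mtu, ip_header=40, tcp_header=20):
    """Largest TCP segment that fits in `mtu`; defaults assume IPv6 headers."""
    return mtu - ip_header - tcp_header

# With a 1500-byte site-to-site MTU and an assumed 80 bytes of tunnel
# overhead, workloads on the extended segment effectively see a 1420-byte
# MTU; clamping TCP MSS accordingly avoids fragmentation across the tunnel.
inner = effective_mtu(1500, 80)
print(inner, tcp_mss(inner))
```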


Question 4

An architect is helping an organization with the Physical Design of an NSX-T Data Center solution.
This information was gathered during a workshop:
✑ There are six hosts and hardware has already been purchased.
✑ Customer is planning a collapsed Management/Edge/Compute cluster.
✑ Each host has two 10Gb NICs connected to a pair of switches.
✑ There should be no single point of failure in any proposed design.

Which virtual switch design should the architect recommend to the organization? (Choose the best answer.)

A. Create a vSphere Distributed Switch (vDS) for Management VMkernel traffic and assign one NIC. Also, create an NSX-T Virtual Distributed Switch (N-VDS) for overlay traffic and assign one NIC.
B. Create an NSX-T Virtual Distributed Switch (N-VDS) for Management VMkernel traffic and assign one NIC. Also, create an NSX-T Virtual Distributed Switch (N-VDS) for overlay traffic and assign one NIC.
C. Create an NSX-T Virtual Distributed Switch (N-VDS) for Management VMkernel and overlay traffic and assign both NICs.
D. Create an NSX-T Virtual Distributed Switch (N-VDS) for Management VMkernel and overlay traffic and assign a new virtual NIC.

Answer:
C. Create an NSX-T Virtual Distributed Switch (N-VDS) for Management VMkernel and overlay traffic and assign both NICs.

Explanation:

The key considerations in this design are:

  1. No single point of failure: With six hosts and two 10Gb NICs per host, it is crucial to ensure that both NICs are used for redundancy, as losing a single NIC should not cause failure in traffic.

  2. Collapsed Management/Edge/Compute cluster: This design requires a simplified yet robust network infrastructure, as the management, edge, and compute functionalities will be hosted on the same set of hosts.

  3. No failure in the design: Redundancy is critical to ensure high availability, especially in the context of both management and overlay traffic.

Let's look at the options:

  • A. Create a vSphere Distributed Switch (vDS) for Management VMkernel traffic and assign one NIC. Also, create an NSX-T Virtual Distributed Switch (N-VDS) for overlay traffic and assign one NIC.
    This option dedicates a single NIC to each switch, so the Management VMkernel traffic and the overlay traffic each depend on one physical NIC. The loss of either NIC takes down that traffic type, so this design contains single points of failure.

  • B. Create an NSX-T Virtual Distributed Switch (N-VDS) for Management VMkernel traffic and assign one NIC. Also, create an NSX-T Virtual Distributed Switch (N-VDS) for overlay traffic and assign one NIC.
    Similar to option A, this design gives each traffic type only one NIC. Each N-VDS then has a single point of failure, violating the "no single point of failure" requirement.

  • C. Create an NSX-T Virtual Distributed Switch (N-VDS) for Management VMkernel and overlay traffic and assign both NICs.
    This is the best choice. By using both NICs for the NSX-T Virtual Distributed Switch (N-VDS), this solution ensures redundancy for both the Management VMkernel traffic and the overlay traffic, avoiding a single point of failure. This solution leverages the dual 10Gb NICs efficiently for both types of traffic, ensuring high availability.

  • D. Create an NSX-T Virtual Distributed Switch (N-VDS) for Management VMkernel and overlay traffic and assign a new virtual NIC.
    This option is unclear, as creating a "new virtual NIC" isn’t typically how physical NICs are assigned for network traffic. It also does not indicate using both physical NICs for redundancy, so it’s not the best option.

Option C is the best design because it maximizes redundancy by using both NICs for the management and overlay traffic, ensuring that there is no single point of failure in the network design.
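The recommended design maps onto a single uplink profile with two active uplinks and no standby, so either NIC can carry all traffic if the other fails. The sketch below shows the approximate shape of such a payload; the profile name, uplink names, and the choice of teaming policy are assumptions for illustration, and the exact NSX-T uplink profile schema should be verified against the API reference.

```python
def dual_uplink_profile(name, teaming_policy="LOADBALANCE_SRCID"):
    """Uplink profile with two active physical uplinks and no standby,
    so the loss of either NIC leaves the other carrying all traffic."""
    return {
        "resource_type": "PolicyUplinkHostSwitchProfile",  # assumed type name
        "display_name": name,
        "teaming": {
            "policy": teaming_policy,
            "active_list": [
                {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
                {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
            ],
            "standby_list": [],
        },
    }

profile = dual_uplink_profile("collapsed-cluster-uplinks")
assert len(profile["teaming"]["active_list"]) == 2  # both pNICs stay active
```

Source-based teaming (as assumed here) pins each VMkernel interface or VM to one uplink while keeping both uplinks in use, which fits the "no single point of failure" requirement without requiring LACP on the physical switches.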


Question 5

Which selection is the key design benefit provided by a dedicated Edge Cluster VM or Bare Metal? (Choose the best answer.)

A. reduced administrative overhead
B. predictable network performance
C. multiple Tier-0 gateways per Edge Node Cluster
D. support for Edge Node Clusters with more than 10 Edge Nodes

Answer:
B. predictable network performance

Explanation:

A dedicated Edge Cluster VM or Bare Metal typically offers several benefits, but the key benefit it provides is predictable network performance. Here's why:

  • A. Reduced administrative overhead: While having dedicated Edge clusters may reduce administrative overhead in some cases, this isn't the primary reason for deploying a dedicated Edge Cluster VM or Bare Metal. The key benefit is related to performance, not overhead reduction.

  • B. Predictable network performance: When you deploy a dedicated Edge Cluster (whether VM or Bare Metal), the resources are reserved specifically for the Edge functionality. This isolation from other workloads ensures predictable network performance as the Edge nodes aren't sharing resources with other types of workloads or services. This is particularly important in environments where consistent and reliable networking is crucial for performance.

  • C. Multiple Tier-0 gateways per Edge Node Cluster: This is an important feature but not the key design benefit of a dedicated Edge Cluster. While a dedicated Edge Cluster can support multiple Tier-0 gateways, this feature is secondary to the focus on network performance and isolation.

  • D. Support for Edge Node Clusters with more than 10 Edge Nodes: While this may be a design consideration, it isn't the primary benefit of using a dedicated Edge Cluster. The focus is more on performance predictability rather than the specific scaling to more than 10 nodes.

The primary benefit of using a dedicated Edge Cluster VM or Bare Metal is to ensure predictable network performance, as the resources are dedicated and isolated to handle network traffic effectively without interference from other workloads. Therefore, B. predictable network performance is the best answer.


Question 6

An architect is helping an organization with the Logical Design of an NSX-T Data Center solution.
This information was gathered during the Assessment Phase:
✑ There is a performance-based SLA for East-West traffic.
✑ The business-critical applications require prioritization of their traffic.
✑ One of the services is a file share and has a high demand for bandwidth.

Which selection should the architect include in their design? (Choose the best answer.)

A. Review average North/South traffic from the core switches and firewall.
B. Include a segment QoS profile and review the impact of utilizing this feature.
C. Meet with the organization’s application team to get additional information.
D. Monitor East-West traffic throughout normal business cycles.

Answer:
B. Include a segment QoS profile and review the impact of utilizing this feature.

Explanation:

In this scenario, the architect is asked to help with a logical design for NSX-T Data Center that involves managing East-West traffic and ensuring prioritization for business-critical applications, particularly one with high bandwidth demands (a file share).

Here's a breakdown of the options:

  • A. Review average North/South traffic from the core switches and firewall: This option focuses on North-South traffic (i.e., traffic between the data center and external resources), but the question specifically focuses on East-West traffic (i.e., traffic within the data center), which is more relevant to the business-critical applications and file share use case.

  • B. Include a segment QoS profile and review the impact of utilizing this feature: Quality of Service (QoS) is the key to ensuring prioritization of critical traffic. With QoS profiles, the architect can define how the traffic should be prioritized, ensuring that the high-demand file share and critical applications are given the necessary bandwidth and lower-latency treatment. This aligns well with the requirement of prioritizing business-critical applications and managing performance in line with the SLA for East-West traffic.

  • C. Meet with the organization’s application team to get additional information: While gathering additional information is generally a good practice, this option does not directly address how the performance-based SLA and prioritization of East-West traffic can be implemented. The focus here should be on practical design actions like applying QoS profiles rather than further discovery.

  • D. Monitor East-West traffic throughout normal business cycles: Monitoring is an important step, but monitoring alone does not directly solve the problem of prioritizing traffic or meeting the SLA. The actual design change involves applying QoS and other traffic management techniques.

The best action to include in the design to meet the performance-based SLA, prioritize critical applications, and manage high-demand services like file shares is to use a segment QoS profile to manage East-West traffic effectively. Therefore, B. Include a segment QoS profile and review the impact of utilizing this feature is the most suitable answer.
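To make the QoS recommendation concrete, the sketch below builds a payload in the approximate shape of an NSX-T segment QoS profile, combining CoS/DSCP marking with a rate shaper. All field names and values here are illustrative assumptions to be checked against the NSX-T API reference; the bandwidth numbers are placeholders for the file-share scenario.

```python
def segment_qos_profile(name, cos, dscp_priority, avg_mbps, peak_mbps, burst_bytes):
    """Approximate shape of a segment QoS profile: CoS/DSCP marking plus
    a rate shaper. Field names and units are illustrative assumptions."""
    return {
        "resource_type": "QoSProfile",
        "display_name": name,
        "class_of_service": cos,                        # 802.1p CoS, 0-7
        "dscp": {"mode": "UNTRUSTED", "priority": dscp_priority},
        "shaper_configurations": [
            {
                "resource_type": "IngressRateLimiter",  # assumed shaper type
                "enabled": True,
                "average_bandwidth": avg_mbps,          # Mb/s
                "peak_bandwidth": peak_mbps,
                "burst_size": burst_bytes,
            }
        ],
    }

# Example: mark the file-share segment as high priority and shape its rate.
profile = segment_qos_profile("file-share-qos", cos=5, dscp_priority=46,
                              avg_mbps=4000, peak_mbps=8000, burst_bytes=65536)
```

Reviewing the impact, as the answer suggests, matters because marking only helps if the physical fabric honors CoS/DSCP, and shaping trades raw throughput for predictability.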


Question 7

Which NSX-T feature is used to allocate the network bandwidth to business-critical applications and to resolve situations where several types of traffic compete for common resources? (Choose the best answer.)

A. Network I/O Control Profiles
B. LLDP Profile
C. LAG Uplink Profile
D. Transport Node Profiles

Answer:
A. Network I/O Control Profiles

Explanation:

The correct answer is A. Network I/O Control Profiles. Here's why:

  • A. Network I/O Control Profiles:
    This feature is used to manage and allocate network bandwidth to different types of traffic in NSX-T. It helps resolve situations where multiple types of traffic (such as business-critical applications, regular traffic, and less important services) are competing for the same network resources. Network I/O Control allows administrators to apply different levels of bandwidth control, ensuring that critical applications receive the necessary network resources, even during periods of high traffic demand. This is the feature you would use to allocate network bandwidth based on application priority.

  • B. LLDP Profile:
    LLDP (Link Layer Discovery Protocol) is used for discovering network devices and neighbors in a network. While LLDP is important for network topology and device identification, it is not related to bandwidth allocation or traffic prioritization.

  • C. LAG Uplink Profile:
    LAG (Link Aggregation Group) uplink profiles are used for aggregating multiple physical network links into a single logical link to provide higher bandwidth and redundancy. While LAG provides increased throughput and redundancy, it does not directly handle the allocation of bandwidth to different types of traffic.

  • D. Transport Node Profiles:
    Transport node profiles are used in NSX-T to define the characteristics of transport nodes (such as ESXi hosts or KVM hosts). These profiles include settings for configuring the transport network and its connectivity. However, transport node profiles do not directly manage or allocate bandwidth to specific types of traffic.

The feature that directly manages and allocates bandwidth for business-critical applications and ensures fairness among different traffic types is A. Network I/O Control Profiles. This feature helps resolve network congestion and ensures that important applications receive the necessary resources.
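The proportional-share model behind Network I/O Control can be illustrated with simple arithmetic: under contention, each traffic type receives link bandwidth in proportion to its configured shares. This is a conceptual sketch of the share mechanism, not an NSX-T or vSphere API; the share values are example numbers.

```python
def nioc_allocation(link_mbps, shares):
    """Split link bandwidth among traffic types in proportion to shares.

    `shares` maps traffic type -> share value (e.g. low=25, normal=50,
    high=100). Shares only matter under contention; idle bandwidth
    remains available to any traffic type.
    """
    total = sum(shares.values())
    return {kind: link_mbps * s / total for kind, s in shares.items()}

# 275 total shares on a 10 Gb/s NIC: vSAN and VM traffic, holding 100
# shares each, are each guaranteed roughly 3636 Mb/s under contention.
alloc = nioc_allocation(10000, {"vmotion": 50, "vsan": 100, "vm": 100, "mgmt": 25})
print(alloc)
```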


Question 8

An architect is helping an organization with the Logical Design of an NSX-T Data Center solution.
This information was gathered during the Assessment Phase:
✑ Customer currently has a single 10-host vSphere cluster.
✑ Customer wants to improve network security and automation.
✑ Current cluster utilization and business policies prevent changing the existing vSphere deployment.
✑ High-availability is important to the customer.

Which three selections should the architect include in their design? (Choose three.)

A. Apply vSphere DRS VM-Host anti-affinity rules to the virtual machines of the NSX-T Edge cluster.
B. Deploy at least two NSX-T Edge virtual machines in the vSphere cluster.
C. Deploy the NSX Controllers in the management cluster.
D. Apply vSphere Distributed Resource Scheduler (vSphere DRS) VM-Host anti-affinity rules to NSX Managers.
E. Remove 2 hosts from the cluster and create a new edge cluster.
F. Remove vSphere DRS VM-Host affinity rules to the NSX-T Controller VMs.

Answer:

A. Apply vSphere DRS VM-Host anti-affinity rules to the virtual machines of the NSX-T Edge cluster.
B. Deploy at least two NSX-T Edge virtual machines in the vSphere cluster.
D. Apply vSphere Distributed Resource Scheduler (vSphere DRS) VM-Host anti-affinity rules to NSX Managers.

Explanation:

  • A. Apply vSphere DRS VM-Host anti-affinity rules to the virtual machines of the NSX-T Edge cluster:
    This is correct. Deploying two Edge VMs only delivers high availability if they do not run on the same host. DRS anti-affinity rules keep the Edge VMs on separate hosts, so a single host failure cannot take down both Edge nodes at once.

  • B. Deploy at least two NSX-T Edge virtual machines in the vSphere cluster:
    This is correct. At least two Edge VMs are required so that Edge services, such as routing, load balancing, and VPN, survive the loss of one Edge node. A single Edge VM would be a single point of failure.

  • C. Deploy the NSX Controllers in the management cluster:
    This is not valid here. The customer has a single 10-host cluster, and business policies prevent changing the existing vSphere deployment, so there is no separate management cluster to deploy into. In addition, in NSX-T 3.x the controller function is converged into the NSX Manager cluster rather than deployed as separate controller VMs.

  • D. Apply vSphere Distributed Resource Scheduler (vSphere DRS) VM-Host anti-affinity rules to NSX Managers:
    This is correct. The NSX Manager cluster consists of three nodes; anti-affinity rules distribute them across different hosts so that a single host failure cannot take out the management plane, preserving redundancy and cluster quorum.

  • E. Remove 2 hosts from the cluster and create a new edge cluster:
    This contradicts the stated constraint that the existing vSphere deployment cannot be changed, and it is unnecessary: Edge VMs can run in the collapsed cluster with anti-affinity rules providing host-level separation.

  • F. Remove vSphere DRS VM-Host affinity rules to the NSX-T Controller VMs:
    Removing rules does nothing to guarantee separation of VMs. High availability is achieved by adding anti-affinity rules, not by removing affinity rules; removing them could even allow critical VMs to land on the same host.

The best options for the design are:

  • A: Keep the NSX-T Edge virtual machines on separate hosts with DRS anti-affinity rules.

  • B: Deploy at least two NSX-T Edge virtual machines in the vSphere cluster for high availability.

  • D: Apply vSphere DRS anti-affinity rules to the NSX Managers to ensure their redundancy and high availability.
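The DRS anti-affinity intent behind this design can be expressed as a simple placement invariant: no two of the protected VMs (NSX Managers, or Edge node VMs) may run on the same host. The helper below is a conceptual check of that invariant in Python, not a vSphere API call; the VM and host names are made up for illustration.

```python
from collections import Counter

def violates_anti_affinity(placement):
    """placement maps VM name -> host name. Returns the hosts carrying
    more than one of the rule's VMs; an empty dict means compliant."""
    counts = Counter(placement.values())
    return {host: n for host, n in counts.items() if n > 1}

# Three (hypothetical) NSX Manager VMs spread across the cluster: compliant.
ok = violates_anti_affinity({"nsx-mgr-1": "esx01", "nsx-mgr-2": "esx02",
                             "nsx-mgr-3": "esx03"})
# Two managers on the same host: a single host failure would take out two
# of three managers, which is exactly what DRS anti-affinity prevents.
bad = violates_anti_affinity({"nsx-mgr-1": "esx01", "nsx-mgr-2": "esx01",
                              "nsx-mgr-3": "esx03"})
print(ok, bad)
```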


Question 9

An architect is helping an organization with the Conceptual Design of an NSX-T Data Center solution.
This information was gathered by the architect during the Discover Task of the Engagement Lifecycle:
✑ There are applications which use IPv6 addressing.
✑ Network administrators are not familiar with NSX-T Data Center solutions.
✑ Hosts can only be configured with two physical NICs.
✑ There is an existing management cluster to deploy the NSX-T components.
✑ Dynamic routing should be configured between the physical and virtual network.
✑ There is a storage array available to deploy NSX-T components.

Which constraint was documented by the architect? (Choose the best answer.)

A. Dynamic routing should be configured between the physical and virtual network.
B. There are applications which use IPv6 addressing.
C. Hosts can only be configured with two physical NICs.
D. There are enough CPU and memory resources in the existing management cluster.

Answer: C. Hosts can only be configured with two physical NICs.

Explanation:

In the context of a conceptual design, a constraint refers to any limitation or restriction that impacts how the solution can be implemented. Constraints are typically things that cannot be easily changed or need to be worked around during the design process.

  • A. Dynamic routing should be configured between the physical and virtual network:
    This is a requirement for the solution, not a constraint. It describes a functional need (dynamic routing between physical and virtual networks), but it does not limit the design in a way that a constraint does.

  • B. There are applications which use IPv6 addressing:
    This is also a requirement, indicating that IPv6 addressing is required for certain applications, but it is not a constraint as it does not limit the physical design or configuration capabilities.

  • C. Hosts can only be configured with two physical NICs:
    This is a constraint because it limits the number of physical NICs available for the NSX-T deployment, impacting the design of redundancy, network configurations, and fault tolerance. This limitation must be addressed in the design.

  • D. There are enough CPU and memory resources in the existing management cluster:
    This is an assumption, not a constraint. It is taken to be true without verification and does not restrict the design; if it later proved false, it would become a risk to manage rather than a limitation to design around.

The correct answer is C because "Hosts can only be configured with two physical NICs" represents a constraint that will directly influence the NSX-T design (e.g., NIC redundancy, distributed firewall traffic, etc.).


Question 10

Which two benefits can be achieved using in-band management of an NSX Bare Metal Edge Node? (Choose two.)

A. Reduces storage requirements.
B. Reduces cost.
C. Preserves packet locality.
D. Reduces egress data.
E. Preserves switchports.

Answer:

C. Preserves packet locality.
E. Preserves switchports.

Explanation:

In-band management involves managing the device over the same network that it serves (i.e., data traffic flows over the same interface as management traffic). The in-band management approach for NSX Bare Metal Edge nodes provides the following benefits:

  • C. Preserves packet locality:
    Since management traffic is transmitted over the same network interface as the application or data traffic, packet locality is preserved. This means there is no need to route management traffic through different network paths, preserving the locality of the traffic and improving performance and latency for applications.

  • E. Preserves switchports:
    By using in-band management, you do not require a dedicated physical port for management. This preserves switchports since you can use the same interface for both data and management traffic, effectively optimizing the use of the physical infrastructure.

Why the other options are incorrect:

  • A. Reduces storage requirements:
    In-band management does not directly affect storage requirements. Storage needs are related to the data and application components and not the management plane.

  • B. Reduces cost:
    While in-band management may reduce the need for separate management interfaces or ports, which could reduce infrastructure complexity, it does not directly reduce overall costs. The reduction in costs depends on other factors, such as the overall network design or the number of devices.

  • D. Reduces egress data:
    In-band management does not specifically reduce egress data. Egress data pertains to the data sent from the network to external destinations. In-band management only influences how management traffic is routed, not data traffic out of the network.

The correct answers are C. Preserves packet locality and E. Preserves switchports because in-band management allows for more efficient use of network interfaces by reducing the need for additional interfaces and ensuring that management and data traffic stay localized.

