Download Free 2V0-41.20 Exam Questions

File Name | Size | Downloads | Votes
vmware.test-king.2v0-41.20.v2022-09-23.by.maximilian.38q.vce | 115.7 KB | 632 | 1
vmware.train4sure.2v0-41.20.v2021-09-10.by.ryan.42q.vce | 118.25 KB | 995 | 1
vmware.pass4sure.2v0-41.20.v2021-06-10.by.blackdiamond.42q.vce | 118.25 KB | 1082 | 1
vmware.testking.2v0-41.20.v2021-01-28.by.theodore.42q.vce | 229.89 KB | 1251 | 2

VMware 2V0-41.20 Practice Test Questions, VMware 2V0-41.20 Exam Dumps

Examsnap's complete exam preparation package covers the VMware 2V0-41.20 practice test questions and answers; a study guide and video training course are also included in the premium bundle. The VMware 2V0-41.20 exam dumps and practice test questions come in VCE format to provide you with an exam testing environment and boost your confidence.

NSX-T Management and Control Plane

1. NSX-T vs. NSX-V

I want to start by comparing NSX-T to its predecessor, NSX-V. Now, if you have NSX-V, or NSX for vSphere, knowledge, that's going to be very helpful in this course, but it's not necessarily a requirement. You can still understand the concepts of this course without an NSX for vSphere background, but I do want to take a little time to differentiate the two solutions and explain why NSX-T is a superior option to NSX-V.

NSX for vSphere is managed with vCenter-based tools. Everything is going to be managed using tools like the vSphere Web Client, for example, whereas NSX-T is decoupled from vCenter; we're going to have standalone tools specifically dedicated to NSX-T that don't require vCenter in order to function. NSX-V is specifically for vSphere environments, whereas NSX-T supports vSphere, but it also supports KVM, bare-metal servers, Kubernetes, OpenShift, AWS, and Azure. As a result, NSX-T is platform-agnostic, whereas NSX for vSphere is solely a vSphere tool. NSX-T is also the networking solution for AWS Outposts, and if you're not familiar with AWS Outposts, it's a fully managed service where you can deploy AWS infrastructure and AWS services, and utilize AWS APIs, inside a physical data center or colocation space that you manage. So you're using the same AWS hardware, services, and APIs to build and run your applications in your own physical data center.

NSX-T version 2.4 has feature parity with NSX-V, which really makes it the preferred network virtualization solution. Prior to version 2.4, there were some features that weren't supported in NSX-T, but that's no longer the case: NSX-T 2.4 has feature parity with NSX for vSphere. Some of the features of NSX-T are that it supports containers as well as virtual machines, and every node in an NSX-T environment contains a management plane agent. We're going to take a close look at the control, management, and data planes shortly in a future lecture.

The process of installing NSX-V is very different from NSX-T. NSX for vSphere has an NSX Manager virtual appliance that's registered with vCenter, whereas NSX-T is a standalone solution, but it can integrate with vCenter in order to register hosts as compute nodes. NSX for vSphere has separate virtual appliances for NSX Manager, and then it has virtual appliances dedicated to the NSX Controller cluster. So with the recommended configuration, you're going to have one virtual appliance for NSX Manager and three for the NSX Controller cluster. NSX-T has taken both of those functions and built them into the same virtual appliances. So now there's no longer a separate NSX Manager and NSX Controller virtual appliance; they're all baked into the same cluster of three virtual appliances, so we can eliminate one of those components.

NSX-V and NSX-T both leverage overlay networks to create layer 2 networks that can span a layer 3 physical network. With NSX for vSphere, that overlay encapsulation is VXLAN. With NSX-T, there's a new overlay encapsulation called Geneve that replaces VXLAN. The licenses for NSX-V and NSX-T are actually the exact same license. So, if you already have NSX for vSphere license keys, you can use the same license key in NSX-T and it will work. Now, the virtual appliances are different, so you're going to need different installation media for NSX-T, but at the moment, VMware has not created different licenses for these different products. NSX for vSphere features the logical switch; this is the layer 2 switching solution for NSX for vSphere.
Logical switches rely upon an underlying vSphere Distributed Switch, which has to be managed using vCenter, and one of the big features of NSX-T is that it's decoupled from vCenter. So the N-VDS, or NSX Virtual Distributed Switch, is the newest type of virtual switch that VMware provides, and it's decoupled from vCenter. It's a host switch that is created on transport nodes like an ESXi host or a KVM host. It features cross-platform support, so it doesn't just run in a vSphere environment; it can run in the different environments that are supported by NSX-T, making it very beneficial from a multi-cloud computing perspective.
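To illustrate the point that NSX-T is managed without going through vCenter, here is a minimal sketch (my own example, not part of the course) that queries the NSX-T Manager REST API directly for the transport nodes it knows about. The manager address and credentials are placeholders, and the /api/v1/transport-nodes endpoint should be checked against the API guide for your NSX-T version.

```python
import requests
from requests.auth import HTTPBasicAuth

# Placeholder values: replace with your own NSX-T Manager address and credentials.
NSX_MANAGER = "https://nsx-manager.lab.local"
AUTH = HTTPBasicAuth("admin", "VMware1!VMware1!")

# List the transport nodes (ESXi, KVM, edge) registered with NSX-T Manager.
# Note that this call goes straight to NSX-T Manager; vCenter is not involved.
resp = requests.get(
    f"{NSX_MANAGER}/api/v1/transport-nodes",
    auth=AUTH,
    verify=False,  # lab only: skips certificate validation
)
resp.raise_for_status()

for node in resp.json().get("results", []):
    print(node.get("display_name"), node.get("id"))
```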

2. NSX-T Management, Control, and Data Planes

We'll start with the management plane. The management plane of NSX-T is a cluster of three virtual appliances. We'll deploy these virtual appliances and create a cluster of three, and the cluster provides availability in the event that one or more of these NSX Manager nodes fails. It also provides scalability, allowing operations to be carried out on multiple NSX Manager nodes. And this is our user interface to NSX Manager. So as we get deeper into this course and you start performing some of the hands-on labs, you'll access the NSX-T user interface, and this user interface is hosted by NSX Manager.

NSX Manager also allows us to integrate with a cloud management platform. So we can use a cloud management platform to establish our desired configuration: how do we want our NSX-T environment to be configured? Then we can push that configuration from the cloud management platform into the NSX Manager node. The desired configuration that we've established is validated and then replicated across all three of the nodes in the NSX Manager cluster. In a future lesson, we'll explain a little more in depth how this cluster works and how it utilizes an underlying shared database to keep this configuration consistent across the cluster nodes. NSX Manager also hosts an API so that we can make API calls and programmatically modify our NSX-T environment.

The control plane of NSX-T is provided by the same three virtual appliances that provide the management plane. With NSX for vSphere, we used to have an NSX Manager appliance and then a separate three-node controller cluster. That's no longer the case. Now we just have one three-node cluster, and it hosts the control plane as well. So, when we think about the management plane, we're thinking about how we make configuration changes. If I want to modify my configuration, that's something that's done in the management plane. And so, if the management plane is down, I simply lose my ability to make changes; that's what the management plane gives me.

The control plane has a more immediate impact on performance. The dynamic state of our NSX-T environment is tracked by the control plane: information about things like the NSX virtual distributed switch, the logical routers that are built into NSX-T, the distributed firewall, and the changes we make to this dynamic configuration. For example, if new virtual machines are created or virtual machines are moved from one host to another, it's the control plane that tracks all of those changes to our environment. If a dynamic routing table update needs to be performed, that is handled by the control plane. So it learns the topology information and pushes all of that forwarding information down to the data plane so that the data plane can successfully forward traffic.

Also built into this three-node virtual appliance cluster is the Policy Manager. The NSX Policy Manager is an intent-based system that simplifies the consumption of NSX-T services. It gives us a GUI, or we can access it with an API, and basically we push our intent as far as what we want to be present in NSX Manager. The NSX Policy Manager accepts that intent and carries out the necessary configurations in NSX-T.

And finally, we've got the data plane. But before we get too deep into the data plane, let's go back for a moment and just review. We have the management plane. The management plane is where we make configuration changes.
If the management plane is down, we simply can't make changes. The control plane tracks the dynamic state of our NSX-T environment. That means if our control plane components are down, things like dynamic routing updates or adding new virtual machines to existing NSX virtual distributed switches can become problematic. But any existing workloads are still going to be able to communicate over the data plane; that's where my actual traffic flows.

So the data plane includes multiple endpoints. Think of these endpoints as the servers that are running our VMs or containers: things like ESXi hosts, things like KVM hosts. The data plane is where our traffic is actually flowing to and from our virtual machines or containers. So if there is a failure on the data plane, those failures are service-impacting; that's where the actual traffic is flowing. So it's critical to design the data plane for the highest possible availability.

Now, the NSX Edge is also part of the data plane, because traffic is actually flowing through and being processed by the NSX Edge. The NSX Edge is our north-south router. It's what provides the boundary between the NSX-T domain and the rest of the world. It also serves all sorts of functions, like network address translation, firewall, load balancer, etc. So our virtual machine traffic is going to actively flow through the NSX Edge, and therefore it is a data plane component, and we have to keep that in mind when we're thinking about the availability of the NSX Edge. Now, the data plane that I just mentioned was for on-premises physical resources like KVM and ESXi, but we also have public cloud support for NSX-T as well. So the data plane can be instantiated and extended to these public cloud solutions as well.
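As a concrete illustration of the intent-based Policy API mentioned above, here is a minimal sketch (my own example, not from the course) that declares a segment by PATCHing the desired state to NSX Manager. The manager address, credentials, segment name, and transport zone path are placeholder assumptions, and the /policy/api/v1/infra/segments endpoint should be verified against the API guide for your NSX-T version.

```python
import requests
from requests.auth import HTTPBasicAuth

NSX_MANAGER = "https://nsx-manager.lab.local"          # placeholder
AUTH = HTTPBasicAuth("admin", "VMware1!VMware1!")      # placeholder credentials

# Declare the desired state of a segment; the Policy Manager figures out how
# to realize that intent across the transport nodes.
segment = {
    "display_name": "web-segment",
    # Hypothetical transport zone path; look up the real one under /policy/api/v1/infra.
    "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/OVERLAY-TZ",
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/web-segment",
    json=segment,
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()
print("Segment intent accepted, HTTP", resp.status_code)
```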

3. NSX Manager Architecture

But before we get into that, I just want to take a moment to share with you the NSX-T Reference Design. This is a document that I will be referring to constantly throughout this course. It's a reference design guide for NSX-T, and it covers all sorts of design aspects: everything from routing to firewalls, logical switching, VPNs, and layer 2 bridges is covered in this design guide, and there are a lot of great diagrams and other documentation pieces here that are going to really help you understand how NSX-T works. So I definitely recommend downloading a copy of this document, and as we're going through this course, you may want to refer to it from time to time. When we finish this course, it's probably worth taking some time to actually read it, and ideally, after you finish this course, you will be comfortable with many of the concepts within this document. That'll make reading it a little bit easier and the content a lot easier for you to digest. So that's the NSX-T Reference Design, and again, that's something that we are going to be consistently referring to throughout this course.

So let's break down the NSX Manager and what it does. The NSX Manager has three different roles that it fulfills: a policy role, a manager role, and a controller role. And we will be deploying NSX Manager in a three-node cluster. Each of these nodes is an independent virtual appliance. They're going to run as virtual machines on our ESXi hosts, or we could potentially roll these out on KVM as well. As we make API calls or as we configure things in the user interface, the changes that we want to make can be handled by these three manager nodes. So, by having three of these nodes, we have scalability, we have efficiency, and we also have the ability to tolerate failures. Now, in order to get the full functionality of NSX Manager, we need at least two of these nodes running at all times.

So ideally, we'll take these three NSX Manager virtual appliances, or virtual machines, and we'll put them on different ESXi hosts. That way, a single ESXi host failure won't take down all three of my NSX Manager nodes. We want to configure a DRS anti-affinity rule, and by the way, that's not created by default (one way to create such a rule is sketched below). So, if I put my three NSX Manager nodes on a cluster of ESXi hosts with high availability enabled, the DRS anti-affinity rule will ensure that all three NSX Manager nodes are running on different ESXi hosts. Let's say, for example, I have four ESXi hosts in my cluster. If one of those hosts fails, one of the manager nodes can simply reboot on one of the surviving hosts. I'll be back up to three NSX Manager nodes, and I'll have at least two of them up at all times. So that's the ideal design here: we don't want the failure of a single ESXi host to impact us.

We also don't want the failure of a single storage device to take us down. Just like any other virtual machine, I'm going to choose where to store the files for each of these VMs, and ideally, I'll put all three NSX Manager nodes on different datastores that are on different physical hardware, so that the failure of a single storage system doesn't take down more than one NSX Manager node.
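Here is that DRS anti-affinity rule expressed in code. This is a hedged sketch of my own (not from the course) using pyVmomi against vCenter; the vCenter address, credentials, cluster name, and NSX Manager VM names are all placeholder assumptions you would replace with your own.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter details; the cluster and NSX Manager VM names are assumptions.
VCENTER = "vcenter.lab.local"
USER, PASSWORD = "administrator@vsphere.local", "VMware1!"
CLUSTER_NAME = "Management-Cluster"
NSX_MGR_VMS = ["nsx-mgr-01", "nsx-mgr-02", "nsx-mgr-03"]

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
content = si.RetrieveContent()

def find_objects(vimtype):
    """Return all inventory objects of the given type."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    objs = list(view.view)
    view.Destroy()
    return objs

cluster = next(c for c in find_objects(vim.ClusterComputeResource) if c.name == CLUSTER_NAME)
vms = [v for v in find_objects(vim.VirtualMachine) if v.name in NSX_MGR_VMS]

# Anti-affinity rule: keep the three NSX Manager VMs on separate ESXi hosts.
rule = vim.cluster.AntiAffinityRuleSpec(
    name="nsx-manager-anti-affinity", enabled=True, mandatory=False, vm=vms)
rule_spec = vim.cluster.RuleSpec(info=rule, operation="add")
cluster_spec = vim.cluster.ConfigSpecEx(rulesSpec=[rule_spec])

task = cluster.ReconfigureComputeResource_Task(spec=cluster_spec, modify=True)
print("Submitted DRS anti-affinity rule task:", task.info.key)
Disconnect(si)
```

With DRS in fully automated mode, the cluster should separate the three VMs onto different hosts shortly after the rule is applied.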
As far as the database is concerned, there is a distributed, replicated database across all three of these nodes. So we have this shared, replicated database that is identical on all three nodes: if a change is made on one of the nodes, that change is replicated to the other nodes. And we could potentially have a cloud management platform on top of this. The NSX Manager virtual appliance is designed to handle large numbers of API calls from a cloud management platform, and it fully supports the OpenStack Neutron plugin.

OK, so let's talk about virtual IPs. All three of these NSX Manager nodes need to be on the same subnet. Now, they could be in different physical racks, but they have to have that layer 2 adjacency. These NSX Manager nodes are going to hold an election, and one of the NSX Manager nodes is going to become the leader. So let's assume that the NSX Manager node on the far left here is the leader for this NSX Manager cluster. We will establish a shared virtual IP for the entire cluster, and that shared virtual IP is attached to the NSX Manager node that is the leader. This is very similar to protocols like VRRP or HSRP, if you've worked with those in the past: you have this one shared IP address, but it really belongs to one specific node. And this single IP address will be used for all of our management traffic. So the leader is going to be the point of contact for API calls, and it's going to be the contact that is used when we use the graphical user interface.

So the leader basically owns that virtual IP, and each of the NSX Manager nodes is a different VM, so they're all going to have unique MAC addresses. If we think about the ARP tables, the Address Resolution Protocol tables in our systems, they are going to associate the VIP with the MAC address of the leader node. So here's my virtual IP, and ARP tables are going to think that that virtual IP belongs to the MAC address of the leader.

Now, what happens if that leader node were to fail? So now my leader node has gone down. The VIP still exists, but the MAC address that it used to be associated with is now unavailable because that NSX Manager node is down. At this point, a different NSX Manager node is going to become the leader, and this other NSX Manager node will take over that virtual IP. At that point, the NSX Manager node that has taken over as the leader is going to send out a gratuitous ARP response. That gratuitous ARP response basically informs everyone on this layer 2 network that this IP address is now associated with this MAC address: if you're on this layer 2 network and you need to send something to this IP address, here's the MAC address that's now associated with the VIP. So it proactively goes out and updates ARP tables to let them know there's a new MAC address associated with this IP.

Let's back up just a little bit here, because there's one other thing I want to make a note of. Each of these NSX Manager nodes does have its own IP address. Aside from the VIP, each node actually has its own IP address as well, and they use those IP addresses to communicate with each other. So the VIP is really only for management traffic, API calls, and things like that; the nodes communicate with each other through their own unique node IP addresses.

So what we're seeing in this slide is availability, right? If one of these NSX Manager nodes were to fail, the other NSX Manager nodes have the ability to take over. So it's availability, but it's not load balancing. The leader node is doing all of the work; there is no load balancing going on here. So instead of doing it that way, we could connect to an external load balancer. It could be an NSX load balancer.
It could be something from a third-party vendor, like an F5, for example. And we still have this concept of a virtual IP, but in this case the VIP actually exists on the load balancer, and that's still the single point of contact for all of our management systems. But then we've got these three NSX Manager nodes, and they are in different subnets, as you can see. Now, I don't require that layer 2 adjacency anymore, because they're just behind this load balancer. The load balancer has the VIP, and the load balancer can distribute all requests across these NSX Manager nodes. So maybe in our data center, each rack has its own subnet, and now I want to take my NSX Manager nodes and spread them across multiple racks. This is a way that I could do that.

All the nodes will be in an active-active configuration now. You can connect to any of them with the UI or with the API, you can make changes, and those changes are pushed down to the distributed database that is replicated constantly across the three nodes, which keeps the configuration of all these NSX Manager instances in sync. So the external load balancer option gives us this benefit of load balancing.

What is the downside of this approach? Well, it adds complexity, right? And as you'll see as we go through this course, NSX-T is pretty complex; it's complex compared to NSX-V, for sure. So we may want to do things to limit the complexity. If I do not need availability across multiple different subnets, then I probably do not need an external load balancer. And if my workload on NSX Manager can be handled by a single node, then I probably don't need an external load balancer either. So those are really the use cases in which we would consider utilizing a load balancer: number one, if I need multiple nodes to divide up that workload, or number two, if I want to spread my NSX Manager nodes across multiple fault domains, multiple physical racks, where they have to have IP addresses on different subnets. In those cases, a load balancer might make sense.
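For reference, the cluster virtual IP described above can be viewed or assigned through the NSX Manager API. The sketch below is my own illustration, not from the course; the manager address, credentials, and example IP are placeholders, and the /api/v1/cluster/api-virtual-ip endpoint with its set_virtual_ip action should be verified against the API guide for your NSX-T version.

```python
import requests
from requests.auth import HTTPBasicAuth

NSX_MANAGER = "https://nsx-mgr-01.lab.local"       # any one manager node (placeholder)
AUTH = HTTPBasicAuth("admin", "VMware1!VMware1!")  # placeholder credentials

# Read the cluster virtual IP that is currently configured, if any.
current = requests.get(f"{NSX_MANAGER}/api/v1/cluster/api-virtual-ip",
                       auth=AUTH, verify=False)
print("Current VIP:", current.json())

# Assign a shared virtual IP to the cluster; the leader node answers on it.
result = requests.post(
    f"{NSX_MANAGER}/api/v1/cluster/api-virtual-ip",
    params={"action": "set_virtual_ip", "ip_address": "10.1.1.100"},  # example IP
    auth=AUTH, verify=False,
)
result.raise_for_status()
```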

4. NSX Controller Concepts

The NSX Controller is now a function of the NSX Manager cluster. If you're used to working with NSX for vSphere, in NSX-V we used to have different appliances for the NSX Manager and the NSX Controller. But now this is all based on a single cluster of virtual appliances. So that's great news: we don't have separate appliances for NSX Manager and the NSX Controller anymore.

So now that we know they are all built into a single cluster of virtual appliances, let's talk a little bit about what the NSX Controller does. First things first, it handles logical switching. Essentially, what the NSX Controller is going to do is track the MAC addresses of your virtual machines and identify which MAC addresses exist on which transport nodes. So, for example, let's say I have virtual machines that are connected to an NSX-T segment that exists on a single ESXi host. Well, we have to know how to get to those MAC addresses. If a new VM powers up, how do we know where it is? That's what the NSX Controllers do for logical switching.

When a new virtual machine is created or powers on for the first time, it's up to the transport node (and again, at this point in the course, when you think transport node, think ESXi host) to say, hey, I've got this new virtual machine. It's connected to this layer 2 segment. You can get to this virtual machine through me; that's where it resides. So they're going to update the NSX Controller, and they'll also push some information to the other transport nodes as well. Transport nodes have what's called the local control plane, or LCP, and the LCP exists on every transport node. Basically, it's the job of the LCP to tell the CCP, the central control plane, what's going on: for example, a new virtual machine, or maybe a virtual machine has been vMotioned, things like that. The LCP updates the central control plane on the NSX Controller cluster. And again, remember, the NSX Controller cluster runs right on the NSX Manager.

So, logical switching is one of the functions that it performs. It also handles logical routing, doing things like learning about dynamic routes and routing table updates and pushing those down to the transport nodes. And it handles the distributed firewall: if I create a new firewall rule in NSX Manager, the controller is going to be used to push that firewall rule down to the transport nodes. Again, if you're accustomed to NSX-V, this is very different from what we're used to dealing with; in NSX-V, the controller wasn't really involved in those firewall rules, that was strictly NSX Manager. That's different here in NSX-T.

So we've got the CCP, the central control plane, running in the NSX Manager, and the LCP running in these transport nodes, and that's what they're going to use to communicate with each other. So let's take a look at an example of a vMotion. Right now, I've got this virtual machine running on the transport node at the top, and so my NSX Controller has an awareness that basically looks something like this: the NSX Controller knows, hey, if I have traffic that is destined for the MAC address of that virtual machine, I'll send that traffic to this transport node, and the transport node can forward it to the VM. Well, what if we now take that virtual machine and vMotion it to a different transport node? We have to communicate those changes to the central control plane.
And so that's one of the jobs of the NSX Controller: to ensure that we can track which virtual machines are where, since we have different VMs on different layer 2 segments. We want to be able to keep those tables updated. If a new virtual machine boots up, we need to be able to keep those tables updated so that all of our virtual machines that are connected to those NSX-T segments are reachable.
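To make that MAC-address tracking a bit more tangible, here is a small sketch of my own (not from the course) that asks NSX Manager for the MAC table of a logical switch, which shows which transport node each VM's MAC address currently sits behind. The manager address, credentials, and logical switch ID are placeholders, and the /api/v1/logical-switches/<id>/mac-table endpoint should be checked against your NSX-T version's API guide.

```python
import requests
from requests.auth import HTTPBasicAuth

NSX_MANAGER = "https://nsx-manager.lab.local"               # placeholder
AUTH = HTTPBasicAuth("admin", "VMware1!VMware1!")           # placeholder credentials
LOGICAL_SWITCH_ID = "11111111-2222-3333-4444-555555555555"  # placeholder UUID

# Ask NSX Manager for the MAC table of one logical switch (segment).
resp = requests.get(
    f"{NSX_MANAGER}/api/v1/logical-switches/{LOGICAL_SWITCH_ID}/mac-table",
    params={"source": "cached"},  # "cached" = what the control plane has learned
    auth=AUTH, verify=False,
)
resp.raise_for_status()

for entry in resp.json().get("results", []):
    # Each entry maps a VM MAC address to the transport node / tunnel endpoint
    # behind which it currently lives (exact field names vary by NSX-T version).
    print(entry)
```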

5. NSX Controller Plane Sharding

The control plane is sharded across the NSX Manager node cluster. So we've got three controllers, one of which is running in each NSX Manager virtual appliance. And the number one thing I want to mention is that the NSX Manager and the NSX Controller are in the management and control planes, and we're going to talk about those different planes in more detail in a little while. The NSX Controller cluster is in the control plane, so it's not in the data plane. What that means to us is that if the NSX Controllers were to fail, they're not going to take down traffic. Traffic may flow less efficiently, or changes like route table updates might not happen, but even if all of our NSX Controller nodes are down, traffic can still flow.

So here we see the three NSX Controllers running on our three NSX Manager nodes, and these are being used to control all of our transport nodes. If you've worked with NSX-V in the past, we used to have something called slicing, where different logical switches would be controlled by different NSX Controller node instances. That's not how it works in NSX-T. Each transport node is now controlled by one of these NSX Controllers. That's what we mean by the word sharding: the NSX Controllers on each of these NSX Manager nodes are going to basically divide up the transport nodes, and each transport node is going to be controlled by one of these controllers.

So let's dig a little bit deeper into this concept. What happens if one of our controllers fails? First off, it's important to keep in mind that the controllers have this shared distributed database. Basically, think about it this way: controller one might be in charge of receiving information from two transport nodes. Let's call one of them transport node one. So here in our diagram, this one is transport node one, and we've got the central control plane running in the controller and the local control plane running in the transport node. So where does the controller put all the information that it learns about that node? It's going to put it in the distributed database. Down here at the bottom of our diagram, we see this database that is shared and replicated across all of these controller nodes. That's the first key concept here: if a controller node fails, there is no data loss, because all of the data is replicated across the database instances of all three controllers.

So if a controller node fails, it basically means that the connection between the transport nodes and the controller that was managing them has now been broken. In this case, controller one has now failed. At that point, the transport nodes will simply be reassigned to other members of the cluster, and since the contents of the database are exactly the same for all three of these controllers, it doesn't result in any sort of data loss or anything like that. It's just going to basically re-establish things, and they're going to function exactly the way they were beforehand. So the way these controllers distribute this workload is significantly different from what we saw in NSX-V. The concept to keep in mind is that each transport node is controlled by one specific controller, and all of the data is stored in that replicated database distributed across the controller nodes, so that if a single controller fails, there is no data loss and there's no need to rebuild all of that data.
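Since the health of this combined management and control cluster matters so much, a quick way to check it is the cluster status API. The sketch below is my own illustration; the /api/v1/cluster/status endpoint and the response fields shown are assumptions based on the NSX-T Manager API, so check them against your version's API guide.

```python
import requests
from requests.auth import HTTPBasicAuth

NSX_MANAGER = "https://nsx-manager.lab.local"      # node IP or cluster VIP (placeholder)
AUTH = HTTPBasicAuth("admin", "VMware1!VMware1!")  # placeholder credentials

resp = requests.get(f"{NSX_MANAGER}/api/v1/cluster/status", auth=AUTH, verify=False)
resp.raise_for_status()
status = resp.json()

# Overall health of the management plane and the control plane, which both
# run inside the same three-node NSX Manager cluster.
print("Management cluster:", status.get("mgmt_cluster_status", {}).get("status"))
print("Control cluster:", status.get("control_cluster_status", {}).get("status"))
```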

ExamSnap's VMware 2V0-41.20 Practice Test Questions and Exam Dumps, study guide, and video training course are included in the premium bundle. Exam updates are monitored by industry-leading IT trainers with over 15 years of experience, and the VMware 2V0-41.20 Exam Dumps and Practice Test Questions cover all the exam objectives to make sure you pass your exam easily.
