VMware 2V0-21.20 Exam Dumps, Practice Test Questions

100% Latest & Updated VMware 2V0-21.20 Practice Test Questions, Exam Dumps & Verified Answers!
30 Days Free Updates, Instant Download!

VMware 2V0-21.20 Premium Bundle
$69.97
$49.99

2V0-21.20 Premium Bundle

  • Premium File: 109 Questions & Answers. Last update: Jan 18, 2023
  • Training Course: 100 Video Lectures
  • Study Guide: 1129 Pages
  • Latest Questions
  • 100% Accurate Answers
  • Fast Exam Updates


Download Free 2V0-21.20 Exam Questions

File Name                                                  Size       Downloads  Votes
vmware.pass4sure.2v0-21.20.v2022-12-10.by.luca.62q.vce     713.1 KB   96         1
vmware.prep4sure.2v0-21.20.v2021-09-11.by.colton.65q.vce   633.67 KB  554        1
vmware.test-king.2v0-21.20.v2021-05-26.by.tyler.65q.vce    633.67 KB  651        1
vmware.test4prep.2v0-21.20.v2021-01-19.by.gracie.65q.vce   856.69 KB  810        2

VMware 2V0-21.20 Practice Test Questions, VMware 2V0-21.20 Exam Dumps

ExamSnap's complete exam preparation package covers the VMware 2V0-21.20 Practice Test Questions and Answers; a study guide and a video training course are also included in the premium bundle. VMware 2V0-21.20 Exam Dumps and Practice Test Questions come in VCE format to provide you with an exam testing environment and boost your confidence. Read More.

Managing Storage in vSphere 7

2. VMFS and NFS Datastores on vSphere 7

So let's go with a simple example for VMFS. Let's say that we're deploying an iSCSI storage array. iSCSI is just one of the possible options we can choose from: Fibre Channel, Fibre Channel over Ethernet, or local disks. Those are all going to be VMFS datastores for our ESXi hosts. So let's start by just moving away from the ESXi host, getting rid of those parts of this diagram, and focusing on the iSCSI storage array. And so here's our iSCSI storage array. We've got Ethernet switches. That's the network that our ESXi host is going to use to communicate with the iSCSI storage array. So we're always connecting to an iSCSI storage array using an Ethernet network. And then on the storage device itself, we've got a couple of storage processors. The storage processors are basically the brains of the operation. They're the CPUs, and they're also the interfaces that we use to communicate with the storage array. And then on the storage array itself, I've got a bunch of storage capacity. These could be traditional magnetic disks, so I could have 7,200 RPM SATA drives or 15,000 RPM SAS drives. Those are traditional physical disks. We call them hard disk drives, spinning disks, or magnetic disks; they all mean essentially the same thing. Or it could be solid state, which is going to provide us with much higher speeds. But regardless of the actual storage technology, we've got a bunch of storage devices. And the capacity of those storage devices is aggregated, which means we take all these storage devices and put them all together. We would use RAID to accomplish that: RAID 5, RAID 6, or RAID 10. We would use some RAID technology to combine all of those storage devices and make them seem like one big storage device. And that's called an aggregate.
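The RAID arithmetic behind an aggregate can be sketched with some illustrative numbers. The drive count and per-drive size below are assumed values, not figures from the lesson:

```shell
# Usable capacity of an aggregate built from 8 x 2 TB drives.
# (Drive count and size are assumed, illustrative values.)
n=8      # drives in the RAID set
size=2   # TB per drive

raid5=$(( (n - 1) * size ))    # RAID 5 keeps one drive's worth of parity
raid6=$(( (n - 2) * size ))    # RAID 6 keeps two drives' worth of parity
raid10=$(( n / 2 * size ))     # RAID 10 mirrors everything, halving raw space

echo "RAID 5: ${raid5} TB  RAID 6: ${raid6} TB  RAID 10: ${raid10} TB"
```

Whatever the RAID level, the result is one pool of usable capacity that the storage administrator can then carve into LUNs.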
So once we've got that aggregate capacity, the storage administrator can go into the storage array and break that aggregate capacity into smaller chunks of capacity called LUNs, which stands for logical unit numbers. It's simply a way to say, hey, we've got a total of five terabytes of capacity; let's break off a 500 GB chunk of capacity and call that a LUN, and let's break off a 1 TB chunk of capacity. It's a way to break that aggregated storage into smaller logical units. Once these LUNs are created, they are basically chunks of raw, unformatted disk capacity. And that's the biggest difference between a VMFS storage solution and NFS: this storage array is not going to format the space for me. It's going to be the virtual equivalent of a disk that we just took right out of the box, 100% raw, unformatted disk capacity. So with a VMFS storage device, what needs to happen now is target discovery. One way or another, the ESXi host needs to find out about the available storage. Again, we're just going to focus on iSCSI for the moment. With iSCSI, we'll configure an iSCSI target. The iSCSI target points to the IP address of the storage array. And so now the ESXi host can issue what's called a SendTargets request, basically a request to the storage array: hey, tell me about your LUNs. Tell me about the LUNs that you have available. That way, the ESXi host can get a comprehensive list of all of the LUNs that are available on that storage device. And now the ESXi host can format those disks with the VMFS file system. Again, that's the biggest difference between VMFS and NFS. The ESXi host is going to learn about these LUNs. Let's say we want to create a datastore on LUN 1. Great. What the ESXi host is going to do is reach out over the storage network and format LUN 1 with the VMFS file system. So the end result of this is that LUN 1 is essentially a volume that the ESXi host can own.
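On the command line, that target-discovery workflow looks roughly like the sketch below. This is a hedged sketch that only runs on an ESXi host; the adapter name (vmhba65) and the array's IP address are assumed example values:

```shell
# Sketch only: vmhba65 and 192.168.100.10 are assumed example values.
# Give the software iSCSI adapter the array's address (the iSCSI target):
esxcli iscsi adapter discovery sendtarget add -A vmhba65 -a 192.168.100.10:3260

# Rescan so the host issues its SendTargets request and learns the LUNs:
esxcli storage core adapter rescan --adapter vmhba65

# The discovered devices then show up here, ready to be formatted with VMFS:
esxcli storage core device list
```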
It's a volume that the ESXi host could potentially even boot from. So if we want to boot an ESXi host that has no local storage, we can do that. We can set up the ESXi host with a DHCP server. The ESXi host can boot up, get configuration data from the DHCP server, use that information to reach out to the storage array, grab a boot image, and actually boot from it. You can't do that with NFS. Booting from SAN is something that's only supported with a VMFS datastore. Now, NFS is very different from VMFS. ESXi hosts cannot boot from an NFS device. There's no VMFS involved at all here. ESXi does not own that file system; the actual NFS device does. So if I create an NFS datastore, what the ESXi host is basically doing is creating a shared folder. It's creating that folder within the file system that was configured by the NFS device. So there's no formatting involved here. The ESXi host cannot boot from an NFS datastore. It's basically just allowing you to consume the capacity of that NFS device by creating a shared folder on it. The file system is owned and operated by the NFS server, so just think of it as creating files in a shared folder. Another important feature that NFS does not support is raw device mapping. So, for example, if I wanted to give a virtual machine direct access to physical storage, that's called a raw device mapping. That is something you can do with a LUN. If I have iSCSI, Fibre Channel, or one of those block storage devices, I could potentially create a raw device mapping for a VM, tell it about some LUN that's out there, and give that virtual machine direct access to that raw, unformatted disk space. This is known as raw device mapping. We cannot do that with NFS. We can only do that with Fibre Channel, Fibre Channel over Ethernet, iSCSI, or local physical disks. So let's take a quick look at how our ESXi host actually connects to the NFS datastore.
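For reference, a raw device mapping is created as a small pointer file on a VMFS datastore. This is a hedged sketch for an ESXi host; the NAA device ID and the paths are placeholder values:

```shell
# Sketch only: the naa. device ID and both paths are placeholders.
# Create a physical-compatibility RDM pointer for a LUN (block storage only;
# there is no NFS equivalent):
vmkfstools -z /vmfs/devices/disks/naa.60060160abcd /vmfs/volumes/datastore1/vm1/vm1-rdm.vmdk
```

Using `-r` instead of `-z` would create a virtual-compatibility RDM, which still allows VMFS-level features like snapshots.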
So again, our guest operating system has no idea that any of this is even going on. Our operating system is going to see a virtual SCSI controller. And if you watched the prior lesson, where we went through storage fundamentals for virtualization, you're familiar with this concept. Every virtual machine is going to have a driver for a virtual SCSI controller. And as that virtual machine issues SCSI commands, those SCSI commands are going to hit something called a storage adapter in the ESXi host. The job of the storage adapter is to take these raw SCSI commands that the operating system issues and package them up in a format that can be transmitted across our network. And it's also associated, in this case, with a virtual switch. So the storage adapter takes in those SCSI commands, prepares them for transmission across the network, and sends them to the virtual switch, which in this case has two physical VMNICs, two physical adapters. We are going to use something called a VMkernel port in the virtual switch to get this data out onto the physical network. The storage adapter will actually be associated with a VMkernel port. And the job of the VMkernel port is to receive these NFS-formatted storage commands and pump them onto one of these physical VMNICs so that traffic can actually hit the network. And we can actually use both of these physical adapters as well. There are ways to do that; as we get into more advanced NFS topics, we can learn how to utilise both of those physical adapters. But that's the job of the VMkernel port: simply to accept those NFS-formatted storage commands. Those NFS storage commands also include a destination IP address. It's the IP address of the NFS server. So the NFS server has an IP address that we'll have to configure in our ESXi host to set up this NFS target for the storage commands to be sent to.
And so once we've got all of these underlying pieces of the puzzle configured, then we can actually create a datastore. And the datastore is essentially a shared folder on that NFS server. So in review, VMFS is used for iSCSI, Fibre Channel, Fibre Channel over Ethernet, and also for direct-attached storage. Anytime you hear the term direct-attached storage, think: these are the local physical disks of my ESXi host. We're going to use VMFS for any of those storage systems. And within those VMFS-based storage systems, we have an aggregate that's the total disk capacity. Within that aggregate, we can create multiple raw, unformatted LUNs (logical unit numbers) that our ESXi host can then format using that VMFS file system. So with VMFS, it's very much like my ESXi host is getting access to raw, unformatted disk space and then formatting it. NFS is different. NFS has its own file system. Formatting that capacity is never done by the ESXi host; it's a file system that is owned and operated by the NFS device. And when we create a datastore on an NFS device, it's essentially the equivalent of creating a shared folder. It's actually called an export. So when we create an NFS export, we're creating a shared folder that can be used to create a datastore.
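As a concrete example, on a Linux NFS server such an export is just one line in the exports table. The path and the client subnet below are assumed values:

```shell
# /etc/exports on a hypothetical Linux NFS server
# (export path and client subnet are assumed values):
# rw             = read-write access
# no_root_squash = allow root access (required for NFS v3 datastores, next lesson)
/share 192.168.100.0/24(rw,sync,no_root_squash)
```

After editing `/etc/exports`, running `exportfs -ra` re-reads the table and publishes the share.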

3. NFS 3 and 4.1

Start by looking at NFS version 3. The first characteristic of NFS version 3 is that the traffic between the ESXi host and the network-attached storage device is all unencrypted. So as my virtual machine creates SCSI commands, those commands are pushed out onto this storage network between the ESXi host and the network file system server. And that traffic is all going to be unencrypted as it traverses the physical network with NFS version 3. Therefore, with NFS version 3, we want to utilise this on a trusted network. NFS version 3 uses a single connection for I/O, and only a single IP address can be used for the NFS server. This can make load balancing quite difficult, because load balancing is typically accomplished using the IP hash load balancing policy. But if all the traffic is going to a single destination IP address, then with IP hash load balancing, all of that traffic is going to flow over a single physical adapter on my virtual switch. So in this scenario, even though I have multiple VMNICs, multiple physical adapters, if we're using IP hash load balancing, only one of those physical adapters is actually going to get used. Therefore, load balancing can be a challenge with NFS version 3. And then the other thing that we have to be aware of with NFS version 3 is that the ESXi host requires root-level access to the NFS server. So, if I want to create a datastore on an NFS server using NFS version 3, the ESXi host must have root-level access to the NFS server's file system. This means we have to configure the no_root_squash option, which is not the default on many NFS servers. Basically, what an NFS server will typically do is squash attempts to attach to it using the root credentials. We have to disable that on the NFS server, and by nature, that makes the NFS server less secure. Now let's compare that to NFS version 4.1. Number one, we get improved security.
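A toy model shows why a single destination IP pins all NFS v3 traffic to one uplink. This is not ESXi's actual hash function; the IPs are reduced to their last octets purely for illustration:

```shell
# Toy model of IP-hash uplink selection (NOT ESXi's real hash).
# IPs are represented by their last octets for illustration only.
uplinks=2
src=21         # the VMkernel port, e.g. x.x.x.21 (assumed value)
nfs_server=50  # the single NFS v3 server IP (assumed value)

# Pick an uplink from a source/destination pair:
pick_uplink() { echo $(( ($1 ^ $2) % uplinks )); }

# The source/destination pair never changes, so every NFS v3 frame
# hashes to the same physical uplink:
first=$(pick_uplink $src $nfs_server)
second=$(pick_uplink $src $nfs_server)
echo "uplink: $first (and again: $second)"
```

With a second server IP (as NFS 4.1 allows), the hash would yield a second distinct destination pair, so traffic could spread across both uplinks.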
The headers are signed and encrypted as the traffic traverses the physical network. So that gives us an additional level of security above and beyond what we get with NFS version 3. The other big feature is that with NFS version 4.1, multipathing is supported using multiple IP addresses. So we can have multiple IP addresses associated with a single NFS datastore. Let's think about what that means. As my virtual machine sends traffic towards the NFS storage device, there are two or more IP addresses that can be associated with that datastore on the NFS server. So with IP hash load balancing, those are two distinct destinations: the traffic for one of those destination IPs will flow out through one physical adapter, and the traffic for the other destination IP will flow out through the other physical network adapter. So this is a big enhancement in terms of load balancing traffic over multiple physical VMNICs. Another enhancement with NFS version 4.1 is that we no longer need root account access to the NFS server, so we don't have to enable that no_root_squash option on the NFS server. From a security perspective, Kerberos support is included in NFS version 4.1. That means that we don't need root account access. It also means that encryption is supported in the headers of all traffic between the ESXi host and the NFS device. The credentials that we configure for Kerberos must match across all of the ESXi hosts using that datastore. So with Kerberos, we can actually configure credentials that will be used to access these NFS version 4.1 datastores instead of just using root access. But we have to make sure that all the hosts using those datastores have matching sets of credentials. You'll also need Active Directory, an Active Directory domain controller, and a key distribution centre in order to configure Kerberos authentication.
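The NFS 4.1 multipathing feature shows up directly on the ESXi command line, where one datastore can be mounted through several server addresses. This is a hedged sketch that only runs on an ESXi host; the server IPs, export path, and datastore name are assumed values:

```shell
# Sketch only: server IPs, export path, and datastore name are assumed.
# NFS 4.1 accepts a comma-separated list of server addresses for one datastore:
esxcli storage nfs41 add -H 192.168.100.11,192.168.100.12 -s /share -v nfs41-demo
```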

4. Demo: Create an NFS Datastore in vSphere 7

I'm logged into the vSphere Client, and I want to create a new datastore. So I'm going to start by just going to Storage here. And you can see the datastores that currently exist in my Training virtual data center. If I right-click the Training virtual data center and go to Storage, I have the option to add a new datastore. So let's do that. And this is going to be an NFS datastore. Now, I have two options here: I can choose either an NFS version 3 datastore or an NFS version 4.1 datastore. And we just saw a video breaking down the differences between those two NFS versions. So in this demo, I'm going to create an NFS 3 datastore. I'm just going to call the datastore NFS-demo. And on my actual NFS device, I've created a folder called Share. So that's the NFS export that I'm going to be creating this datastore on. And then I'll just go ahead and put in the address of my NFS server here, and I'm going to mount this as read-write. You'll notice here I have the option, if I want to, to mount this as read-only. I'm not going to do that. I'm going to mount it as a read-write datastore. So essentially, in this scenario, there's already this prebuilt share, this prebuilt folder, on an NFS physical storage solution somewhere, and here's the address of that NFS server. I'm basically just accessing a shared folder. That's really all I'm doing here. I'm not going to specify the size of my datastore. My ESXi host isn't going to format that datastore with any file system. I'm basically just accessing a shared folder and using that shared folder to store virtual machines, using that shared folder as a datastore. So I'll go ahead and click Next here. I'm going to make this datastore accessible to my one and only ESXi host. I'll go ahead and hit Next again and hit Finish. And there it is. There's my NFS-demo datastore that I just created.
I can go to Summary and see exactly how much capacity is available on this datastore. And at the moment, you can see that 23.8 GB are being used, 40 GB is the total capacity, and I have free space of around 16.2 GB. Now you may be thinking to yourself, "Rick, you just created this datastore. Why are 23.8 GB already used?" Well, because I'm just simply accessing a shared folder on the NFS device. So let's take a little peek behind the scenes here. Now, I don't actually have a physical NFS device in my lab environment. So what I've done is create an NFS server on a Windows Server 2016 system. And I just want to take you behind the scenes a little bit in the hopes that it will help to reinforce what's actually going on here with NFS. So I'm going to go to File and Storage Services here, and under File and Storage Services, I'm going to click on Shares. And here it is. Here's the NFS share that I've created. So basically, it's a directory on the hard drive of my Server 2016 machine here. And the C: drive has a 40 GB capacity, 16 GB free, 23.8 GB used. That's why my datastore looks the way that it does. So this hopefully drives home exactly what we're doing with NFS here. With NFS, we're not using raw physical disks. We're not using LUNs. We're not using VMFS. We're simply accessing a folder that has been shared by some network file server, and that's it. So now we've got this usable datastore that we can store virtual machines on, and those virtual machine files and folders are going to simply be created inside that shared folder on that NFS server. Okay, so now that we've created our NFS datastore successfully and we've got it all up and running and ready to go, I want to take a closer look at some of the configuration behind the scenes here. Now, I'm showing you an NFS share that I created on Windows Server 2016, but it doesn't really matter what type of system you're using. These configuration options are pretty universal.
So here you can see my share and its path, and the protocol is NFS. Let's take a look at the authentication configuration. You'll notice I have not required any sort of server authentication; I have not configured Kerberos. Kerberos is available in NFS 4.1, but I haven't set that up here yet. So I'm allowing unauthenticated creation of NFS datastores here. And if we look at the share permissions, I'm allowing any machine to access this NFS share with read/write access. And I'm also allowing root access. This is a requirement for NFS version 3: I cannot block root access to this NFS share. So let's click on Edit here and look at some of the changes that we can make to these permissions. I can pick and choose certain machines or certain hosts that I want to allow access to the share. So in real life, I probably want to lock this down a little bit more than I have here. At a bare minimum, I probably want to lock it down to a certain range of IP addresses. But I do have to allow root access in order for NFS 3 to work. Now, one final thing I want to show you before we move on. I've created this new NFS datastore. If I decide that I don't actually need it, anytime I want, I can right-click this NFS datastore and unmount it. I could also potentially mount this datastore to additional hosts. You see, when I created the datastore, I only mounted it on one ESXi host. If I start adding more hosts to my inventory, I'm going to have to mount this datastore to those hosts if I want to give them access to it. But what I'm going to do now is go ahead and unmount this datastore. So basically, what we're saying here is, "I don't want access to this datastore anymore." We're not deleting any data. We're not deleting any folders. We're not purging any content. We're simply unmounting that NFS datastore so my ESXi host will no longer have access to it.
So that's what unmounting an NFS datastore means. You're not actually destroying any data. You're simply removing that datastore from one or more hosts. If those hosts have virtual machines running on that datastore, we'll have to address those first before we can unmount it. So unmounting a datastore removes that datastore from the host, but it doesn't actually destroy any of the data contained within it.
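The demo's mount and unmount steps can also be done from the ESXi command line. This is a hedged sketch that only runs on an ESXi host; the server address, export path, and datastore name are assumed values loosely matching the demo:

```shell
# Sketch only: server address, export path, and datastore name are assumed.
esxcli storage nfs add -H 192.168.100.20 -s /Share -v NFS-demo   # mount (NFS v3)
esxcli storage nfs list                                          # confirm it's mounted
esxcli storage nfs remove -v NFS-demo                            # unmount; server-side data is untouched
```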

5. iSCSI Storage Review

So let's start out by taking a look at the iSCSI storage array and the associated networking. For the moment, we're going to remove our ESXi host and just focus on the components of the diagram that are specifically part of my iSCSI storage array and its network. If you've seen my videos on Fibre Channel, many of these concepts are going to be similar. The biggest difference between the two is the network. So here you can see some Ethernet switches that are connected to my storage processors. The storage processors are essentially the brains of the operation. These are the CPUs for my storage array, and they're also the connectivity to my network. And then in the iSCSI storage array itself, I have my aggregate; that's all of my storage capacity. The aggregate could be made up of many solid-state drives or many traditional magnetic disks. We take all that space, we combine it into a RAID array, and then we can carve up smaller chunks of that space called logical unit numbers, or LUNs. So those are the basic components of the iSCSI storage array. And so, how are we actually going to connect our ESXi hosts to this storage array? Well, let's break down this diagram of a software iSCSI initiator. The starting point is always our virtual machine, and our guest operating system is generating some storage commands. The guest operating system sends storage commands to the virtual SCSI controller. The virtual SCSI controller is essentially a driver that the operating system has. It's a way to take these storage commands out of the operating system and push them into a storage adapter. And that's the job of the hypervisor: my ESXi host is going to receive those storage commands and forward them to the appropriate storage adapter. And in this case, our storage adapter is a software iSCSI initiator that we have configured on this ESXi host. So the job of the iSCSI initiator is similar to any storage adapter.
It's going to receive these storage commands and prepare them to traverse the iSCSI network. Another software component that we're going to require with software iSCSI is a VMkernel port on a vSphere virtual switch. So there's going to be a VMkernel port bound to the iSCSI initiator to essentially act as an entry point into the Ethernet network. And on our virtual switch, we're going to have one or more physical adapters. These are our VMNICs that connect the ESXi host to the Ethernet network. So we now have a complete path for our storage commands to follow. Let's point out a few key elements of software iSCSI. Number one, you are going to create a storage adapter in software, and you're going to be creating a VMkernel port in software. These software components introduce some CPU overhead on the ESXi host. So rather than installing dedicated physical hardware, I'm avoiding the cost of dedicated physical hardware. But there's always a tradeoff, and the tradeoff is more CPU overhead on my ESXi host. And there's one major flaw with my diagram as it exists right now: I don't have any redundancy. I've got one switch and one VMNIC, and if either one of those components fails, my entire storage network is down. So I may want to adjust my architecture to account for some level of redundancy. What I may do is set up multiple VMkernel ports on different virtual switches, each of which has different physical adapters, different VMNICs. And what this does is allow me to configure multipathing for my storage adapter, using round robin to send half the storage traffic to each of these VMkernel ports. So now, if a single VMkernel port were to fail, I would still have the ability to send storage traffic through the other VMkernel port. If a single physical Ethernet switch or storage processor fails, I can still reach the storage array; the only single physical component whose failure can disrupt my storage connectivity is the ESXi host itself. So that's the software iSCSI initiator.
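Enabling the software initiator and binding it to multiple VMkernel ports can be sketched on the ESXi command line like this. This is a hedged sketch that only runs on an ESXi host; the vmhba and vmk names are assumed example values:

```shell
# Sketch only: the vmhba and vmk names below are assumed example values.
esxcli iscsi software set --enabled=true           # enable the software initiator
esxcli iscsi adapter list                          # note the new adapter name, e.g. vmhba65
esxcli iscsi networkportal add -A vmhba65 -n vmk1  # bind the first VMkernel port
esxcli iscsi networkportal add -A vmhba65 -n vmk2  # bind a second for redundancy
```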
The software iSCSI initiator is one option for connecting an ESXi host to an iSCSI storage array. The second option we're going to talk about is called a dependent hardware iSCSI initiator, and the main difference between the two is that with a dependent hardware iSCSI initiator, we're actually going to install a physical piece of hardware to handle the iSCSI workload. So my storage adapter is actually going to be a dedicated physical piece of hardware that receives all the storage commands, prepares them for transmission across a network, and then forwards those storage commands through one or more VMkernel ports. As a result, I must still configure VMkernel ports with a dependent hardware iSCSI initiator. So I still want to configure multiple VMkernel ports with separate VMNICs connected to separate Ethernet switches to ensure that a single device failure doesn't take down my storage network. So that's what we call a dependent hardware iSCSI initiator: I'm going to buy some physical hardware to be an iSCSI initiator, but it's still going to rely on a VMkernel port in my vSphere virtual switch. Now, let's compare that to independent hardware. An independent hardware iSCSI initiator comes with physical ports built right into it. So now everything is being performed in hardware. My storage commands hit the storage adapter; it has its own Ethernet interfaces that we've configured with IP addresses. It handles multipathing and distributes traffic across those physical adapters to these different Ethernet switches. Independent hardware is really the most expensive option, but it greatly reduces the CPU overhead on the ESXi host. Now the ESXi host doesn't need a VMkernel port, and it also doesn't need a software iSCSI initiator. Everything is done in hardware. Okay, so those are the three options to connect an ESXi host to an iSCSI network.
Now, once we've got the network connected, there's one pretty important step that we still have to accomplish: how does the ESXi host actually learn about all of these LUNs that are available on the storage array? In order for us to create a datastore, the ESXi hosts need to learn about these LUNs. And the way we're going to do this is with a method called dynamic discovery. The first step is to configure the ESXi host with the IP address of the storage array so that it has an address to query. That's what we're going to call our iSCSI target: the address of the iSCSI storage array. And then what you can do is perform a rescan on the ESXi host. If you rescan this storage adapter, what it's basically doing is telling the storage adapter: hey, reach out to the storage array and issue a SendTargets request to find out about all of the available LUNs on that storage array. And when the storage array receives that SendTargets request, it responds with a SendTargets response, giving the host a list of all the LUNs available on the storage array. Any LUNs that are already in use will get filtered out automatically. So let's assume one of the three is already in use. That one's going to get filtered out, but we can create datastores on any of the other ones. One final note: you can configure CHAP authentication between the ESXi host and the iSCSI storage array. iSCSI is the only storage technology that you can use with ESXi that supports CHAP authentication. You may need to know that for your exam. This can be used to ensure that both the initiator and the target are legitimate devices. So, in review, iSCSI uses an Ethernet network to connect your ESXi hosts to the iSCSI storage array. It uses dynamic discovery, issuing SendTargets requests and receiving SendTargets responses, to discover the LUNs that are available on the storage array. And we can use CHAP to secure our iSCSI connections.
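CHAP can be configured per adapter from the ESXi command line. This is a hedged sketch that only runs on an ESXi host; the adapter name, CHAP name, and secret are assumed placeholders:

```shell
# Sketch only: adapter name, CHAP name, and secret are assumed placeholders.
# Require one-way (unidirectional) CHAP on the software iSCSI adapter:
esxcli iscsi adapter auth chap set -A vmhba65 \
    --direction=uni --level=required \
    --authname=esxi-initiator-01 --secret='chap-secret-value'
```

Setting `--direction=mutual` (with a separate target-side secret) would make both sides authenticate each other.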
And again, iSCSI is the only storage technology that works with vSphere that supports CHAP authentication.

ExamSnap's VMware 2V0-21.20 Practice Test Questions and Exam Dumps, study guide, and video training course are included in the premium bundle. Exam updates are monitored by industry-leading IT trainers with over 15 years of experience. VMware 2V0-21.20 Exam Dumps and Practice Test Questions cover all the exam objectives to make sure you pass your exam easily.


Purchase Individually

  • 2V0-21.20 Premium File: 109 Questions & Answers. $43.99 $39.99
  • 2V0-21.20 Training Course: 100 Video Lectures. $16.49 $14.99
  • 2V0-21.20 Study Guide: 1129 Pages. $16.49 $14.99


Download Free Demo of VCE Exam Simulator

Experience Avanset VCE Exam Simulator for yourself.

Simply submit your e-mail address below to get started with the free trial of our interactive software demo.

Free Demo Limits: In the demo version you will be able to access only the first 5 questions from the exam.