VMware VCAP6-NV 3V0-643 – Physical NICs

  1. Use the hardware compatibility list

I'm going to start off by looking at the physical network adapters to see which ones are supported by ESXi. I can use the hardware compatibility list to do that: I go to vmware.com/go/hcl to access the VMware Compatibility Guide, also known as the HCL or hardware compatibility list. On the first page, we land on systems and servers, but I want to go down to IO Devices, as I'm interested in the network cards. From here, I can select the specific release of ESXi that I'm interested in. I can also select, in this case, Network for the I/O device type. If I'm looking for a specific brand of adapter, I can select that from the list. There are two types of drivers available for ESXi. The first is VMware inbox. These are drivers that are included "in the box," meaning they come with ESXi, and you don't have to install them separately.

The second type is partner async. These are drivers that VMware partners release asynchronously from a specific release of ESXi; these drivers must be downloaded and installed, either onto the ESXi image before installation using Image Builder or onto a running system as a VIB. Oftentimes the same network card will have both inbox and async drivers, but they'll differ in which features are supported. Prior to ESXi 5.5, the drivers used by ESXi were actually Linux drivers that were wrapped using vmklinux so that they were available to the VMkernel.

As of ESXi 5.5, VMware started working with its hardware partners to produce drivers that are native to ESXi. Most drivers are still vmklinux drivers, but you'll start seeing more and more native drivers become available. Over here on the right, I can enter the vendor ID and the device ID for a specific card that I'm looking for. The vendor ID and device ID are pieces of information embedded in the card that are provided to the operating system during boot, and the operating system uses this information to determine which driver to load. Let's go to the command line and take a look at our drivers. I'm logged into an ESXi host using SSH.

I've already enabled SSH on the ESXi host through the troubleshooting options. This gives me command-line access to the ESXi host so that I can run troubleshooting and hardware-related commands. I'm going to run lspci, which lists out all of my PCI devices. The output, however, is very large, and I'm only interested in the network devices, so I'm going to pipe that output to grep, which searches for specific text, and search for "network."

I also want to see the line immediately below each match, so I'm going to add -A 1, and this lists out all of my network adapters. I have four installed on this ESXi host. The first column is the PCI address, and then we have the name of the card. If the card already has a driver loaded, it shows the vmnic name here; otherwise, that spot is blank. The vendor ID and device ID are listed on the following line. I can take that information and plug it into the hardware compatibility list to see what drivers are available. I can also see how that information maps to specific drivers by going to /etc/vmware/driver.map.d.
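
For reference, a minimal sketch of that step looks like this; the -v flag and the exact output layout can vary between ESXi builds, and the sample output lines are illustrative only:

```
# List PCI devices verbosely and keep the network controllers plus the
# line that follows each match, which carries the vendor:device IDs.
lspci -v | grep -A 1 -i network
# Illustrative output:
#   0000:02:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection [vmnic0]
#            Class 0200: 8086:10d3
```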

Every time a driver is installed, it installs a map file into this folder. I can take a look at, for instance, the e1000e driver's map, and it lists the vendor ID, which will always be 8086 for Intel, and then the various device IDs that all use that same driver. As you can see, there are many different devices that all use the same e1000e driver. When a card is detected during boot-up, this information is searched to determine which exact driver to load. Let's go back to my list: my card's vendor ID is 8086 and the device ID is 10d3, so we'll look through the compatibility list for 8086 and 10d3. There are several cards listed here. The first one is actually an HP-branded card, so it has a subvendor ID and a subsystem ID defined as well.
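
From the shell, the same mapping can be inspected directly; the sample entry below only illustrates the general shape of a map file, and the exact format differs between vmklinux and native drivers:

```
# List the map files installed by the drivers, then inspect one of them.
ls /etc/vmware/driver.map.d/
cat /etc/vmware/driver.map.d/e1000e.map
# Each entry ties a PCI vendor:device pair to a driver, roughly like:
#   regtype=linux,bus=pci,id=8086:10d3 0000:0000,driver=e1000e,class=network
```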

The rest of them are blank. If I click on this link here, it shows me that the driver is an inbox driver, which isn't too surprising considering it's already installed on my system, and it shows that it is a vmklinux driver. Because it's included in the box, there's nothing to download from here. In the third column, I can select a specific feature that I want to verify is supported on my card; for example, I can select VXLAN offload. Let me clear these fields out so that I don't get false results, and then I'm going to select Intel for the brand name, and this lists all of the Intel cards that support VXLAN offload. If I select the first one in the list, we can see that this is a partner async driver, so it is not included with ESXi; I'll have to download and install it. If I click on the little plus sign here, I can see which features are supported by this specific driver, and then I have a link here to download the driver.

But before I get to that link, I want to talk about the configuration maximums for our ESXi host. There is a limit to how many networking devices can be installed on an ESXi host, and that varies somewhat depending on the specific card that we're talking about. To find that information, I can refer to the configuration maximums guide. I'm interested in the networking maximums for my ESXi host, and here I can see specific cards and the maximum number of NICs that are supported, so I can verify that the number installed on my ESXi host is below that. I can also see information for VMDirectPath: the number of devices per host, as well as the number of devices per virtual machine. There's additional information for SR-IOV and other networking features as well that we'll talk about later on, plus information for my standard and distributed switches as far as the number of ports and the number of active devices that are supported on those. Always verify that your environment is within these maximums; otherwise, you might run into issues. Now let's take a look at downloading and installing a driver on an ESXi host.

  1. Installing drivers

To download the driver, I click on the link under Footnotes, scroll down, and make sure that I've selected the correct driver, then click Download. Now, if you're not already logged into My VMware, you might have to log in at this point, but then it will download the driver. What gets downloaded is what's referred to as an "offline bundle": a compressed folder that includes the VIB itself as well as some additional information. I can use that offline bundle with Image Builder to create a completely new ESXi image that also includes this driver.

That's necessary because when I install ESXi, it has to be able to detect at least one network card. If all of the network cards require async drivers, I'll have to rebuild the image. Most of the time, that's done by your manufacturer: when you purchase hardware intended for installing ESXi, you'll get what's called an OEM image of the software. If, for some reason, the driver is not included, then we can use Image Builder to add it. For more information on building a new image using Image Builder, see my Learning vSphere 6.5 class. All I need from this bundle is the VIB, so I'll go ahead and extract it here, and inside, as we'll see, is the VIB file. Now I need to get that over to the ESXi host. I can do that through the web client: I select one of my datastores, and here I've created a new directory where I want to put the VIBs.

From here, I can click on the upload button and specify the file that I want. Now, I've already uploaded this to my vSAN datastore, so let's go to an ESXi host. The command is esxcli software vib install, followed by the path to the file. I put it on a datastore, so the path starts with /vmfs/volumes, then the name of the datastore and the file, and then I press Enter. It doesn't take long to install the VIB, and once done, it displays the changes it made to the system.
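
Written out, the install command looks roughly like this; the datastore name, folder, and file names are placeholders for wherever you uploaded the driver:

```
# Install a single driver VIB that was uploaded to a datastore.
esxcli software vib install -v /vmfs/volumes/vsanDatastore/vibs/net-i40e.vib

# Alternatively, point -d at the whole offline bundle zip instead.
esxcli software vib install -d /vmfs/volumes/vsanDatastore/vibs/i40e-offline-bundle.zip
```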

Now, in order for that driver to take effect, I would have to reboot the ESXi host. For me, it doesn't make any difference because I don't have any of those cards installed anyway, but we can see that the driver really was installed: esxcli software vib list, and then I can grep it for i40e. If I take a look at a host where the driver has been installed and the host has already been rebooted, in the driver.map.d directory I'll now see the i40e map, and I can take a look at it to see which device IDs it covers. For now, let's take a look at some of the features that are available on our physical network cards.
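
The verification steps from this paragraph, as commands (the i40e names simply match the driver used in this example):

```
# Confirm the VIB is registered on the host.
esxcli software vib list | grep -i i40e

# After a reboot, the driver's map file lists the device IDs it claims.
cat /etc/vmware/driver.map.d/i40e.map
```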

  1. Understanding NetQueue and RSS

The goal behind both NetQueue and receive side scaling, or RSS, is the same. Without either technology, a single queue, and thus a single CPU, handles all traffic entering a physical NIC. With NetQueue or RSS, multiple queues are created to handle traffic and spread it out across multiple CPU cores. The primary difference between NetQueue and RSS is how they manage the queues. With NetQueue, all traffic flows into a single default queue initially, and when a particular MAC address starts receiving a significant amount of traffic, it is moved to a separate queue.

With RSS, all queues are used from the start, and traffic is typically hashed based on the IP 5-tuple, though this can vary between vendors. The exact specifics on the number of queues created by default, the maximum number of queues available, and how to configure them vary from vendor to vendor, so it's best to consult the vendor documentation. Let's examine NetQueue and RSS from the command line. NetQueue and RSS are both configured via module parameters, so we can look at how they're currently configured using esxcli system module parameters list and then -m to specify the module. I'm going to look at the ixgbe module, which is an Intel driver, and looking through the output, I can see that I can specify the number of RSS queues as well as VMDQ, which is the same thing as NetQueue.

The default for VMDQ is set to eight, but there is currently no value set for RSS on this particular NIC: VMDQ is enabled by default, RSS is not. To the best of my knowledge, no driver supports NetQueue and RSS at the same time. Both of these are set as an array of integers. The reason it's an array is that we can set the value differently for every NIC installed on the host; we just specify them in a comma-separated list. So if I had four NICs and I wanted eight queues per NIC, I could enter 8,8,8,8, and the same applies to RSS. I can see my current queues using esxcli network nic queue count get, which shows my available NetQueues. Because the NICs on this host are themselves virtual (this is a nested lab environment), I don't have any extra NetQueues, so each one shows a single queue. Finally, for statistical information, I can use vsish, go under /net/pNics, specify my vmnic, and get its stats. Notice that I see information here for receive queue zero and, up here, for transmit queue zero; if I had additional queues, they would show up here as well. Keep in mind again that each vendor's output will look slightly different, but you can typically tell which queues are actually active at the moment. In addition, with NetQueue, we'll see entries in the VMkernel log file: one indicating that a queue was allocated, and a second one indicating which MAC address was assigned to that specific queue. Let's look at DirectPath I/O now.
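
Pulling that walkthrough together, a minimal sketch of the commands looks like this; vmnic0, ixgbe, and the 8,8,8,8 values are examples, and parameter names differ between drivers:

```
# Which driver does each uplink use? (Tells you which module to query.)
esxcli network nic list

# Current module parameters for the ixgbe driver, including VMDQ and RSS.
esxcli system module parameters list -m ixgbe

# Example only: request eight queues per NIC on a host with four ixgbe NICs.
# The change takes effect after the module is reloaded or the host reboots.
esxcli system module parameters set -m ixgbe -p "VMDQ=8,8,8,8"

# How many queues each uplink actually has available right now.
esxcli network nic queue count get

# Per-queue statistics for a specific uplink (vsish paths can vary by version).
vsish -e get /net/pNics/vmnic0/stats
```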

  1. Configuring VMDirectPath I/O

DirectPath I/O is a technology that allows me to bypass the vSwitch and the VMkernel and present PCI hardware directly to the virtual machine. Because we're eliminating the middleman, the typical use case for this in networking is extremely latency-sensitive applications. DirectPath I/O does come with some stringent requirements and limitations, however. On the requirements side, our hardware has to support a hardware memory management unit (I/O MMU).

This is fairly common in newer hardware. For PCI devices, there is not a list of supported devices but rather a list of PCI device requirements, outlined in Knowledge Base article 2142307. Most standard network cards should be supported, but it's best to review the guidelines. And finally, because we have to allocate portions of memory, a full memory reservation is required for the virtual machine. We are piercing the virtual-to-physical abstraction layer when we enable DirectPath I/O, which means that some of our virtualization features will no longer be available: vMotion, high availability, and fault tolerance will not work.

We can't hot-add devices to our virtual machines, we can't take snapshots, and we can't suspend or resume the virtual machine. I'm going to demonstrate a special case for DirectPath I/O in a lab environment. I want to present my onboard Wi-Fi adapter to a virtual machine, as it's not a piece of hardware that's supported by ESXi; if I can present it to a virtual machine, that allows me to connect my lab environment to Wi-Fi. I do want to emphasize that while this works, it's not officially supported by VMware. My first step is to remove the PCI device from ESXi so that I have the option of presenting it directly to a virtual machine. To do this, I select my ESXi host, go to Configure, scroll all the way down to Hardware and PCI Devices, and there's nothing currently showing up in the list that I could present to a virtual machine.
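
If you'd rather look at the PCI inventory from the ESXi shell than the UI, esxcli can show the same devices; the grep pattern below is just an example for narrowing down the output:

```
# Full PCI inventory: address, vendor/device IDs, and the module currently
# claiming each device (recent builds also report passthrough capability).
esxcli hardware pci list

# Narrow it down; the search string is only an example.
esxcli hardware pci list | grep -i -B 4 -A 12 wireless
```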

So I'm going to click on Edit, and we can see that there are multiple options here. This is my dual-band wireless adapter; notice there's also my video card and my audio adapter. This is the Ethernet adapter that I'm currently using for the wired connection, so I don't want to remove that one. Be very cautious about removing USB, as that would, in this case, prevent my host from booting, because it boots off of a USB device. The audio I could, in theory, present to a virtual machine if I wanted to play audio through it. Graphics cards are a little more challenging to get working through DirectPath I/O, and as of yet, I haven't managed to get my graphics card to present properly to a virtual machine. So for now, I'm just going to select my dual-band wireless adapter and click OK, and now the device shows up as "Available (pending)," meaning that I have to reboot the host before the change can take effect.

So that's my next step. Once my ESXi host comes back online, I can present this device to a virtual machine. When I add it, the virtual machine needs to be shut down, and it already is, so I can go to Edit Settings and select a new device: a PCI device. And there is my dual-band wireless adapter. Notice that it indicates here that I have to reserve all memory, and it gives me a warning that some virtual machine operations are unavailable: you cannot suspend, migrate with vMotion, or take or restore snapshots of such virtual machines. I also cannot move this virtual machine to another host, even if it's shut down, because this device doesn't exist on the other host. I click OK, and once that task is finished, I can power on my virtual machine. Now, if I look at the hardware on this virtual machine, I can see that the wireless adapter shows up, and I can see the corporate networks that are available. Now let's take a look at DirectPath I/O's very close relative, SR-IOV.

  1. Use SR-IOV

SR-IOV and DirectPath I/O are somewhat related technologies, but they function differently and have different requirements. SR-IOV is a technology that allows a single PCI device to be presented to multiple virtual machines. SR-IOV uses something called physical and virtual functions to accomplish this. Physical functions are full PCI functions, and they're available to the ESXi host, while virtual functions are their simpler counterparts that lack configurability, and they're presented to the guest OS. As a result, the driver inside the virtual machine needs to know that the device it's accessing lacks full functionality, which means that a special driver has to be loaded.
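
As a rough sketch of the host-side setup, assuming an Intel ixgbe adapter: virtual functions are typically requested through a driver module parameter (max_vfs here; the parameter name, values, and method vary by vendor and ESXi version, so check the vendor documentation), and the sriovnic namespace can then confirm what was created. The vmnic4 name is a placeholder.

```
# Example only (ixgbe): request eight virtual functions on each of two
# physical ports; a host reboot is needed before the VFs appear.
esxcli system module parameters set -m ixgbe -p "max_vfs=8,8"

# After the reboot, list SR-IOV capable NICs and their virtual functions.
esxcli network sriovnic list
esxcli network sriovnic vf list -n vmnic4
```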

SR-IOV has many of the same requirements as DirectPath I/O, plus some additional ones. It still requires a hardware MMU; in addition, SR-IOV has to be supported and enabled in the BIOS. SR-IOV also has special guest OS requirements and is supported only on Windows Server and Red Hat Enterprise Linux, with a relatively small number of adapters; this information can be found in Knowledge Base article 2045704. Once again, a full memory reservation is required, as well as the proper physical function driver on the host and a virtual function driver in the guest. SR-IOV is tied to the physical hardware and has the same limitations as DirectPath I/O: no support for vMotion, high availability, or fault tolerance; no ability to hot-add virtual devices; no snapshots; and no suspend and resume. Now let's take a look at some advanced functions in our distributed switch.
