VMware VCAP6-NV 3V0-643 – vNICs

  1. Understand the types of vNICs

When adding a virtual network adapter to a virtual machine, you have three choices. The oldest of these is referred to as the Flexible adapter. This adapter appears as either a Vlance or a VMXNET adapter, depending on whether VMware Tools is installed. There’s very little reason to use this adapter unless you’re working with an operating system that doesn’t support any other options. Next is the E1000 or E1000E. The E1000 is an emulated Intel 82545EM NIC; on hardware version 8 and newer, certain operating systems get the E1000E instead, which emulates the newer Intel 82574 Gigabit NIC. While this adapter does have some advanced feature support, it’s still only recommended when you can’t use the VMXNET3 adapter, for example because of an unsupported OS or some other incompatibility.
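
If you want to check which adapter types a virtual machine is currently using, here’s a minimal pyVmomi sketch that lists each vNIC and its device class (VirtualPCNet32 is the Flexible adapter, alongside VirtualE1000, VirtualE1000e, and VirtualVmxnet3). The vCenter address, credentials, and the VM name "web01" are placeholders for your own environment.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter and credentials.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name with a simple search through a container view.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "web01")

# Every vNIC is a subclass of VirtualEthernetCard; the class name tells you
# which adapter type it is.
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualEthernetCard):
        print(f"{dev.deviceInfo.label}: {type(dev).__name__}  MAC {dev.macAddress}")

Disconnect(si)
```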

This brings me to the VMXNET3 adapter, which is the adapter of choice in almost all situations. The VMXNET3 is a paravirtualized adapter, which means that it has no physical counterpart and is aware of the fact that it’s running in a virtualized environment, which can significantly improve performance. The only disadvantage of the VMXNET3 is that the driver is not included by default with many operating systems, so it will only be installed when you install VMware Tools. If we look at the features supported by each of the adapters, the VMXNET3 supports TCP segmentation offload, large receive offload, checksum offload, disabling interrupt coalescing, receive side scaling, jumbo frames, and SplitRx, while the E1000 does not support large receive offload, disabling interrupt coalescing, receive side scaling, or SplitRx. And as you can see, the Flexible adapter has very limited feature support. Let’s take a look at each of these features individually.
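
Since the VMXNET3 is the adapter you’ll almost always want, here is a hedged pyVmomi sketch of adding one to an existing VM with a standard ReconfigVM_Task. The port group name "VM Network", the VM name "web01", and the connection details are assumptions for a lab environment.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "web01")

# Build a VMXNET3 device backed by a standard port group named "VM Network".
nic = vim.vm.device.VirtualVmxnet3()
nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
nic.backing.deviceName = "VM Network"
nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
    startConnected=True, allowGuestControl=True)

# Wrap the device in a device-change spec and reconfigure the VM.
dev_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=nic)
task = vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[dev_spec]))
print("Reconfigure task:", task.info.key)

Disconnect(si)
```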

  2. VMXNET3 features

TCP segmentation offload, or TSO, is a technology that offloads the segmenting, or breaking up, of a large stream of data from the operating system to the physical NIC. Inside a virtual machine, however, this creates an issue. Without TSO, we take the large stream of data and segment it using the virtual CPUs, which are backed by the physical CPUs, and it’s passed down the rest of the chain as individual packets. If we enable TSO only inside the guest operating system, then rather than segmenting at the vCPU level, the data stream is handed off to the virtual NIC for segmentation. However, the virtual NIC is also backed by the physical CPUs, so we haven’t gained anything. Instead, when a large data stream is passed to the virtual NIC, we need it to pass the entire data stream on to the physical NIC, completely bypassing the physical CPUs for segmentation and offloading that work to the physical NIC. Large receive offload is essentially TCP segmentation offload in reverse.
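
On the ESXi side, hardware TSO for the physical NICs is governed by host advanced settings such as Net.UseHwTSO (and Net.UseHwTSO6 for IPv6). Here’s a read-only pyVmomi sketch that queries them; treat the host selection, credentials, and the exact option keys as assumptions to verify against your own build.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Grab the first host in the inventory (substitute your own selection logic).
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

# Net.UseHwTSO / Net.UseHwTSO6 control hardware TSO for IPv4 / IPv6 traffic.
adv = host.configManager.advancedOption
for key in ("Net.UseHwTSO", "Net.UseHwTSO6"):
    for opt in adv.QueryOptions(key):
        print(f"{host.name}  {opt.key} = {opt.value}")

Disconnect(si)
```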

As packets come into the physical NIC, the smaller packets are combined to create one larger packet that is handed off to the virtual NIC and then to the guest operating system, thus reducing CPU overhead. Keep in mind that large receive offload can increase latency, since the NIC has to wait for all of the packets to arrive before combining them and passing them up to the virtual NIC. Thus, it should be disabled for latency-sensitive applications. TCP checksum offload is similar to TSO in that we’re offloading a calculation to the physical NIC; in this case, it’s the calculation of the data checksum inside the TCP header. Much like TSO, this reduces CPU utilization, decreases latency, and improves throughput. One thing to be aware of with checksum offload is that if you do a packet capture from inside the virtual machine, you might see checksum errors because the checksum is invalid. This is simply because the checksum hasn’t been calculated yet. When I was discussing NetQueue on the VMXNET3, I mentioned receive side scaling.
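
LRO behaviour for VMXNET3 adapters is likewise controlled by host advanced settings. The keys used in this sketch (Net.Vmxnet3HwLRO, Net.Vmxnet3SwLRO, and Net.TcpipDefLROEnabled) come from VMware’s performance guidance, but treat them as assumptions and confirm they exist on your build before scripting against them.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
adv = host.configManager.advancedOption

# Assumed keys: hardware LRO, software LRO, and LRO for the host TCP/IP stack.
for key in ("Net.Vmxnet3HwLRO", "Net.Vmxnet3SwLRO", "Net.TcpipDefLROEnabled"):
    try:
        for opt in adv.QueryOptions(key):
            print(f"{opt.key} = {opt.value}")
    except vim.fault.InvalidName:
        print(f"{key} not present on this build")

Disconnect(si)
```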

This is also referred to as “multiple queue support.” As a reminder, RSS creates multiple queues and load balances among the queues based on the TCP stream. For this to work in a guest OS, the virtual machine does need to have multiple vCPUs configured, and RSS must be supported by the guest operating system. SplitRx is another feature of the VMXNET3 adapter that has some similarities with receive side scaling, in that it allows multiple cores to process packets. However, rather than creating multiple queues, it allows multiple cores to process multicast and broadcast packets from the same queue in vSphere 5.1 and later.
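
RSS itself is enabled inside the guest (through the vmxnet3 driver’s settings), but you can at least confirm the multiple-vCPU prerequisite from the vSphere side. A small sketch, assuming the same placeholder connection details and VM name as the earlier examples:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

vm = next(v for v in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view if v.name == "web01")

# RSS spreads receive processing across queues, one per core, so a single-vCPU
# VM gains nothing from it.
cpus = vm.config.hardware.numCPU
print(f"{vm.name}: {cpus} vCPU(s) -",
      "meets the multi-queue prerequisite" if cpus > 1 else "RSS will not help")

Disconnect(si)
```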

This feature is automatically enabled for a virtual machine with a VMXNET3 adapter, which is the only adapter type that supports SplitRx, when ESXi detects that a single network queue on a physical NIC is being heavily utilized and receiving more than 10,000 broadcast or multicast packets per second.
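
SplitRx can also be forced on or off per vNIC with the ethernetX.emuRxMode advanced parameter ("1" enables, "0" disables), which VMware documents for VMXNET3 adapters. A hedged sketch that writes it through extraConfig; the NIC index ethernet0, the VM name, and the connection details are assumptions, and the change generally takes effect on the next power-on.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

vm = next(v for v in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view if v.name == "web01")

# ethernet0.emuRxMode: "1" forces SplitRx on, "0" forces it off for that vNIC.
# The value is written to the VMX file and applied at the next power-on.
opt = vim.option.OptionValue(key="ethernet0.emuRxMode", value="0")
task = vm.ReconfigVM_Task(vim.vm.ConfigSpec(extraConfig=[opt]))
print("Reconfigure task:", task.info.key)

Disconnect(si)
```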

In Ethernet, a standard frame carries a data payload of no more than 1,500 bytes, with the header added on top of that. A jumbo frame is anything larger than this, although typically when we refer to jumbo frames, we’re referring to frames with a maximum payload of 9,000 bytes. Keep in mind that since data is still being transmitted serially, we don’t get six times the throughput just because each frame carries six times the data. The improvement comes from the fact that we’re sending fewer headers: for the same amount of data, a jumbo frame needs five fewer headers, and the CPU and physical switches have to process five fewer packets, thus reducing utilization, which can also lead to an improvement in throughput.
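
To actually use jumbo frames, the MTU has to be raised along the whole path. Here’s a sketch of one piece of that, bumping a standard vSwitch to an MTU of 9000 through the host’s networkSystem; the vSwitch name "vSwitch1" and the connection details are placeholders, and you would still need to raise the MTU on any VMkernel ports, in the guest OS, and on the physical switches.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
net_sys = host.configManager.networkSystem

# Reuse the vSwitch's current spec and only change the MTU.
for vsw in net_sys.networkInfo.vswitch:
    if vsw.name == "vSwitch1":          # placeholder vSwitch name
        spec = vsw.spec
        spec.mtu = 9000
        net_sys.UpdateVirtualSwitch(vswitchName=vsw.name, spec=spec)
        print(f"{vsw.name} MTU set to {spec.mtu}")

Disconnect(si)
```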

Keep in mind that the entire data path has to support jumbo frames, from the virtual and physical NICs to the physical switches, and possibly to the physical router if you’re routing your jumbo frames. Interrupt coalescing refers to reducing the number of interrupts generated by the adapter by waiting briefly after receiving a frame to see if another frame arrives. This reduces CPU overhead, but it can also increase latency, so it should be disabled for any latency-sensitive application. The only adapter that supports disabling interrupt coalescing is the VMXNET3.
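
For latency-sensitive VMs, coalescing can be disabled per vNIC through the ethernetX.coalescingScheme parameter. A minimal sketch, again assuming the placeholder VM name and connection details used above and that ethernet0 is the VMXNET3 adapter in question:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

vm = next(v for v in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view if v.name == "web01")

# "disabled" turns interrupt coalescing off for ethernet0 (VMXNET3 only);
# like other VMX parameters, it generally takes effect on the next power-on.
opt = vim.option.OptionValue(key="ethernet0.coalescingScheme", value="disabled")
task = vm.ReconfigVM_Task(vim.vm.ConfigSpec(extraConfig=[opt]))
print("Reconfigure task:", task.info.key)

Disconnect(si)
```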
