Chapter 3 Networking in vSphere

2V0-21.19 EXAM OBJECTIVES COVERED IN THIS CHAPTER:

  1. Section 1 - VMware vSphere Architectures and Technologies

    1. Objective 1.1 - Identify the pre-requisites and components for vSphere implementation
    2. Objective 1.8 - Differentiate between VDS and VSS
  2. Section 2 - VMware Products and Solutions

    1. Objective 2.1 - Describe vSphere integration with other VMware products
  3. Section 4 - Installing, Configuring, and Setting Up a VMware vSphere Solution

    1. Objective 4.2 - Create and configure vSphere objects
    2. Objective 4.5 - Configure virtual networking
  4. Section 7 - Administrative and Operational Tasks in a VMware vSphere Solution

    1. Objective 7.1 - Manage virtual networking
    2. Objective 7.8 - Manage resources of a vSphere environment
    3. Objective 7.11 - Manage different VMware vCenter Server objects

This chapter addresses how ESXi hosts and the virtual machines running on them communicate with other systems. While vSphere hosts require networking for management and monitoring, the hosts also utilize networking for key functions, including fault tolerance and vMotion. Virtual machines require networking to exchange information with clients, servers, and other resources both on the same host and outside the host.

vSphere enables host and virtual machine networking by creating virtual network interfaces for the host and virtual machines and then providing virtual network switches to connect them with the physical network. This network virtualization allows multiple network needs to be met by limited hardware. The exam (and this chapter) will assume you know basic networking concepts such as TCP/IP (including addresses and ports), switching (including VLANs), routing, and the OSI network model. If these concepts are new or unfamiliar, you might consider a basic networking book or training course.

Understanding vSphere Networking

One ESXi host with a single physical network connection can provide virtual machines access to thousands of separate networks. By adding virtual machines with routing capabilities, you can create complex networks inside of a single physical server.

The ability of vSphere to create virtual network objects includes a variety of options and choices for connectivity, performance, reliability, and security. vSphere networks can be designed to avoid single points of failure or prioritize the network performance of some virtual machines. While vSphere networking provides limited security options, other products, such as VMware NSX, add considerable security capabilities to vSphere virtual networking.

NOTE

vSphere natively provides only switching functions; it has no native routing capabilities. For two virtual machines on different VLANs on the same host to communicate, the traffic must leave the host, be routed to the destination network, and return to the host.

vSphere networking utilizes virtual switches to connect a virtual machine's virtual network interface cards (vNICs), the host's physical network interface cards (pNICs), and the host's VMkernel ports, which are used for ESXi management.

A VMkernel port is a virtual network adapter used by the host. At least one VMkernel port is required for host management, which includes vCenter, SSH, DNS, and NTP services for the host. The virtual adapters can also be used for some storage types (iSCSI, NFS, FCoE). Multiple VMkernel ports can be used to separate optional host functions such as vSAN, vMotion, and fault tolerance onto separate networks.
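You can view a host's VMkernel adapters from the ESXi shell as well as from the client. A minimal sketch (these are standard esxcli namespaces; output columns vary by ESXi version):

# List all VMkernel adapters on this host, including MTU and the switch and port group each uses
esxcli network ip interface list

# Show the IPv4 configuration (address, netmask, DHCP or static) of each VMkernel adapter
esxcli network ip interface ipv4 get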

vSphere offers two types of virtual network switches: vSphere Standard Switches (vSS) and vSphere Distributed Switches (vDS); the latter is sometimes also called a distributed virtual switch (DVS). Standard switches offer basic functionality in all license levels of vSphere (including the free vSphere Hypervisor version of ESXi) and are created and managed on each host. vSphere distributed switches provide advanced functionality and are created and managed by vCenter. Figure 3.1 shows a simple standard switch and a slightly more complex distributed switch.

FIGURE 3.1 Standard switch vs. distributed switch

Standard Switches

While standard switches are addressed by the vSphere Foundations exam and do not appear directly on the VCP6.5-DCV exam, we will cover them briefly for completeness.

A newly built ESXi host will have a standard switch created with vmnic0 (what ESXi determines is the lowest-numbered physical network adapter) and a VMkernel port set to use DHCP. These are the components you are changing when you use the host console to modify the management interface, and this is the configuration the host will revert to if you choose Restore Network Settings in the ESXi console.

Standard switches and distributed switches vary in a few key ways:

  • Standard switches and the port groups using them are created and configured on each host while distributed switches and port groups are created and configured on the vCenter server.
  • Operations such as vMotion depend on the port groups having the same name on each host. Distributed switches and port groups are identically maintained on the hosts by vCenter while standard switches require either carefully creating the identical port group on each host or using an automated creation method such as host profiles or scripting. See Figure 3.2 for the Network tab's Networks view, which is useful for verifying port group names.

    FIGURE 3.2 Viewing standard switches in the vSphere client
  • Because standard switches are created on each host, vCenter is not required to create or maintain them. However, since standard switches and their port groups are created separately on each host, their settings (such as VLAN tags and teaming) can vary from host to host. With a vDS, vCenter keeps these settings in sync across all hosts.

  • Port statistics for a standard switch (available via API or command line) are reset when a virtual machine is moved to another host using vMotion. These statistics are maintained per port in a vDS, as shown in Figure 3.3.

    FIGURE 3.3 Distributed switch port statistics

When you add a VMkernel port to a standard switch, it creates a new dedicated port group. No other VMkernel port or virtual machine can connect to that dedicated port group. For distributed switches, VMkernel ports connect to an existing port group.

Standard switches lack many advanced options available to virtual distributed switches, including NetFlow, PVLAN, NIOC, and the ability to export and restore their configuration.

Distributed switches are available with the vSphere Enterprise Plus and vSphere Remote Office Branch Office licenses. Some VMware products, such as NSX, include a license for distributed switches. Distributed switches are also required for opaque networks, which are virtual network objects created by other products such as NSX. Opaque networks may be fully or only partially visible in the vSphere client, and in most cases they will be managed by the product that created them.

Virtual Distributed Switches

Virtual switches in vSphere are separated into two logical components: the management plane, where the configuration is created and maintained, and the data plane, where the traffic carried by the switch flows and is manipulated, such as with filters or VLAN tags. For standard switches, both management and data planes exist on the host the switch is created on. With distributed virtual switches, the management plane is located on the vCenter server and the data plane is spread across all the hosts that are attached to that switch. This means virtual machine traffic only travels from the host the VM is running on through the proxy switch (what VMware calls the host's instantiation of the distributed switch) to the physical network switches connected to that host. Virtual machine traffic never flows to the vCenter server.

NOTE

While the distributed switch is a datacenter-level object that spans hosts, VMware refers to the individual instance of a distributed switch on a host as a proxy switch.
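You can see the host's side of a distributed switch from its ESXi shell. A minimal sketch, which lists each proxy switch along with its uplinks and MTU:

# List this host's instantiation (proxy switch) of each distributed switch it participates in
esxcli network vswitch dvs vmware list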

With the vCenter server holding the management plane for distributed switches, it must be available for changes to be made to the virtual distributed switch. However, once changes are made, the configuration is pushed to each host. If the vCenter server becomes unavailable, hosts will continue to use the last vDS configuration they received but new hosts cannot be added to the vDS and no changes can be made to the vDS or its port groups. However, virtual machines and VMkernel ports can be added to and removed from vDS port groups using the VMware Host Client as shown in Figure 3.4.

NOTE

Port groups are logical groupings of ports for a virtual switch and are the logical network component that virtual machines and VMkernel ports connect to. They are most often used to manage VLAN tagging, but a number of other port group settings are available and will be discussed in the coming pages.

FIGURE 3.4 ESXi host client network view

Creating Distributed Switches

There are several reasons to create distributed switches, including the many features that are not available on standard switches, centrally managed network configuration, and support for separate products such as NSX or vRealize Network Insight.

NOTE

Remember that to create a distributed switch, you need to have a vSphere license that includes the feature or a separate vDS license such as the one included with NSX.

Distributed switches are created at the datacenter level, and any host in that datacenter can be connected to the same distributed switch, regardless of cluster configuration. Clusters are usually created to consolidate workloads or similarly equipped hosts, and in most cases you would want to ensure that all hosts in a cluster are connected to the same distributed switch(es). Having identical network configurations simplifies host management, but more important, it ensures that virtual machines can be run from any host in the cluster. If a cluster has four hosts and only three are connected to a distributed switch, the host that does not connect to the vDS will not support the same virtual machines as the other three.

Spanning a distributed switch across clusters is a way to ensure a consistent network configuration for networks used by both clusters. However, if the virtual machines in the clusters do not share the same network(s), then we would recommend using a separate distributed switch for each cluster. Basic security principles include limiting access to only what is needed and following the Keep It Simple philosophy; it is preferred for all objects grouped together to be as identical as possible. Those two ideas combine to suggest that clusters with different network requirements use separate distributed switches (Figure 3.5).

FIGURE 3.5 A shared vDS for the infrastructure with all hosts connected and separate vDS for the virtual machines in each cluster

Configuring all of the hosts with the same network configuration (Figure 3.6) requires all hosts to have access to the same networks, even if some are not used. However, if some networks are shared, for example the infrastructure networks for host management, then a shared distributed switch could be created for those networks, with separate switches for the networks unique to each cluster.

FIGURE 3.6 A separate vDS created for each cluster with infrastructure and virtual machine traffic

Either of these configurations allows similar physical network configurations for the hosts in each cluster while limiting them to just the networks they require.

Creation of a distributed switch is accomplished from the Networking tab of the vSphere web client. As shown in Figure 3.7, you can create a new vDS using the New Distributed Switch wizard or by using the Import Distributed Switch wizard with a previously saved backup file. The backup file can also be used to reset an existing vDS by using the Restore Configuration Wizard.

FIGURE 3.7 Combining Distributed Switch → New/Import with Settings → Restore

To restore the settings of a distributed switch to a previous backup, use the Restore Configuration option from the distributed switch's Actions menu. To create a copy of an existing distributed switch, use the Import Distributed Switch option from the Datacenter Actions menu and do not check Preserve Original Distributed Switch and Port Group Identifiers. There are only a few settings available in the New Distributed Switch configuration wizard, and all can be changed later. The settings are as follows:

  • Name The name displayed in the GUI for the virtual distributed switch.

  • Version Tied to major vSphere releases, this determines what features are available for the switch and the minimum version of ESXi a host can be running to connect to the distributed switch.

  • Number of Uplinks Maximum number of physical network adapters any configured host can connect to this switch.

  • Network I/O Control The ability to determine bandwidth availability for different traffic types on the switch. See “Understanding Network I/O Control” later in this chapter.

You also have the option of creating and naming an initial distributed port group for the distributed switch. After a distributed switch has been created, there are a few general distributed switch settings that can be changed, including the number of uplinks (see the section “Adding and Removing Uplink Adapters”) and whether Network I/O Control (NIOC) is enabled (see “Understanding Network I/O Control”).

The following distributed switch settings are also available from the Advanced section of the Settings wizard for a vDS:

  • MTU (Bytes) The maximum transmission unit, or MTU, is the largest packet size allowed on the switch. Most often increased to support VMware NSX or to meet network storage requirements, this setting defaults to the industry standard of 1500 bytes.

    Note that any virtual machine or VMkernel port needing an increased MTU would also need to be changed. MTU is set at the switch and VMkernel or VM, not at the port group.

  • Multicast Filtering Mode Distributed virtual switches default to the Basic multicast filtering mode, where any virtual machine connected receives all multicast packets for the destination MAC address of the multicast group. You could also configure multicast snooping, which, if your virtual machines are configured for it, would restrict the multicast packets they receive to just those destined for their multicast group. Per the VMware documentation, “This mode supports IGMPv1, IGMPv2, and IGMPv3 for IPv4 multicast group addresses, and MLDv1 and MLDv2 for IPv6 multicast group addresses.” This is most likely to be changed to meet a specific application's or vendor's stated needs.

  • Discovery Protocol This is a very useful setting that pulls configuration information from the physical switch connected to the vDS. When this setting is enabled and matches the configuration of the physical switch, each NIC will report information such as the physical switch port it is connected to and the name and IP address of the switch.

    This setting defaults to Cisco Discovery Protocol (CDP), which is appropriate for Cisco switches, but it can be changed to Link Layer Discovery Protocol (LLDP) for switches that support LLDP. It can also be set to Disabled, which could be required by your security team.

  • Operation This changes how the discovery protocol in use works; by default it only listens for information from the physical switch, but you could also set it to only tell the physical switch the settings of the vDS (using Advertise) or set it to Both.

  • Administrator Contact This information is passed along if the discovery protocol is set to Advertise or Both.

Upgrading and Deleting Distributed Switches

If you have a vDS in your environment that is not the most recent version, you can use the Upgrade wizard available from the Actions menu of the distributed switch. Note that you will not be able to upgrade a switch to a version above the lowest-version host connected to it. So a distributed switch with 31 ESXi 6.5 hosts and one ESXi 6.0 host will only be able to be at vDS version 6.0.

NOTE

If you decide to upgrade, make sure you back up the switch first. If the upgrade fails, you will be able to quickly recover, and if you change your mind, you will be able to roll back, since you cannot downgrade a switch version without a backup.

Deleting a distributed switch is accomplished by right-clicking the switch and choosing Delete. (In my experience, the Flash-based web client is more reliable when deleting distributed switches.) However, be aware that you will not be able to delete a virtual switch until all virtual machines and VMkernel ports are disconnected from the port groups on the switch. If you see an error status similar to “The resource ‘14’ is in use” (Figure 3.8), you will need to check what is connected to the virtual switch port (in this case, switch port 14) listed in the error message, which is achieved by selecting the Ports tab of the distributed switch (Figure 3.9) and scrolling down the list.

FIGURE 3.8 Port group delete error
FIGURE 3.9 List of ports by port number

The object connected to the port could be either a virtual machine or a host's VMkernel port, and the name of the object will be listed in the Connectee column. A VMkernel port will have the name of the port listed after the hostname; in the example in Figure 3.9, the hostname is esx-02a.corp.local and the VMkernel port is vmk1.
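The same information can be gathered from the host's shell if the vSphere client is unavailable. A minimal sketch (the world ID shown is an example value taken from the first command's output):

# List the networking world IDs and names of running VMs on this host
esxcli network vm list

# Show the ports (switch, port ID, MAC address, uplink) used by a specific VM
esxcli network vm port list -w 12345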

The Importance of Physical Network Configurations

Proper physical network configurations are crucial when configuring vSphere networking. An improper physical network switch configuration can cause all kinds of issues with a vSphere environment.

I once installed a new vSphere host with two network uplinks using the default load balancing of Route Based on Originating Virtual Port, which uses a calculation to determine which switch port sends traffic out of which uplink. This generally distributes the virtual machines (VMs) evenly between uplinks. However, as virtual machines were migrated to the host, some VMs using that switch worked (were available on the network) and some VMs did not.

When troubleshooting virtual networking, one of the first steps is to determine if traffic flows between VMs on the same host, which in this case worked fine; traffic was just not leaving the host for some VMs. I tried migrating machines around and found that every other machine I migrated to the host would not send traffic out, but if I took a VM that was not working, migrated it to another switch, and then migrated it back to the malfunctioning switch, it might start working.

As it turned out, the physical network ports were not configured identically: one of the ports was not configured for the proper VLANs, and any virtual switch port that used the uplink connected to the misconfigured physical port was not passing traffic.

In the host's Physical Adapters tab, the CDP/LLDP information and the Observed IP Ranges are useful from the vSphere side to start troubleshooting physical network issues, but the physical switches should be checked carefully for configuration issues during implementation.

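From the host's shell, the physical side can be checked quickly as well. A minimal sketch (vmnic0 is an example adapter name):

# Show each physical NIC's link state, speed, duplex, and driver
esxcli network nic list

# Get detailed driver and link information for a single adapter
esxcli network nic get -n vmnic0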

Adding and Removing Hosts from a vDS

When a distributed switch is created, no hosts are connected to it. You are not required to add hosts; however, a virtual machine cannot be connected to the vDS until the host the VM is assigned to has been added to the vDS. Note that when a host is added to a distributed switch, you are not required to connect any of the host's physical adapters to the switch. However, without connected physical adapters, any VMkernel port or virtual machine on that host using the vDS would not be able to send traffic outside the host.

Hosts can be added to a distributed switch using the Add and Manage Hosts wizard (Figure 3.10) from the action menu of the switch. The first option of the Add and Manage Hosts wizard is Add Hosts, which allows new hosts to be connected to the vDS, and (optionally) their physical adapters, VMkernel adapters, and virtual machines can be configured when the hosts are added to the switch.

FIGURE 3.10 Add and Manage Hosts wizard

The second option, Manage Hosts Networking, allows you to manage physical adapters, VMkernel adapters, and virtual machine connectivity for hosts already connected to the switch.

Hosts can be removed from the virtual distributed switch using the third option (Remove Hosts) or from the Hosts tab of the switch (Figure 3.11).

FIGURE 3.11 Removing the host from the distributed switch

NOTE

The host must have its VMkernel adapters and any virtual machines on the host disconnected from the distributed switch before it can be removed from a vDS.

The fourth option, Add Host and Manage Host Networking, allows you to add hosts and modify the new and existing hosts' physical adapter, VMkernel adapters, and virtual machine connectivity all in the same wizard.

EXERCISE 3.1 Add a host to a distributed switch.

  1. Connect to the host using the vSphere web client and open the Networking view.

  2. Right-click the distributed switch and choose Add and Manage Hosts from the Actions menu.

  3. Using the Add hosts task, on the Select Hosts screen select the host to be added to the switch.

  4. On the Select Network Adapter Tasks screen, select Manage Physical Adapters.

  5. On the Manage Physical Network Adapters screen, select an unused adapter on the host and click Assign Uplink.

  6. On the Select an Uplink screen, make sure each physical adapter has an uplink selected. (In a production environment, you will want to make sure all hosts are configured identically, including which pNIC is associated with which uplink.)

  7. Ensure that the uplinks and pNICs are associated correctly before clicking Next.

  8. Click Next on the Analyze Impact screen and Finish on the Ready screen to complete the wizard.

Using dvPort Groups

Distributed virtual port groups, or dvPort groups, are collections of switch ports that share the same settings. Port groups are the objects that virtual machines and VMkernel ports connect to; when you look at a virtual machine's network adapter in the vSphere web client, it will list the port group connected. Port groups are most often used to enable VLAN usage by VMs and VMkernel ports, which can improve security by preventing snooping, for instance by ensuring that vMotion traffic is separate from virtual machine traffic.

Creating and Configuring dvPort Groups

Distributed virtual port groups are created on a distributed virtual switch and only exist on that virtual switch. With standard switches, virtual machines will vMotion between identically named port groups on different switches when changing hosts since each host creates and manages its own switches. However, with distributed switches, the virtual machine remains on the same port in the same distributed port group and distributed switch during vMotion since all of the network objects are created and maintained by vCenter.

Port groups can be created using the New Distributed Port Group option on the Distributed Port Group action menu item of the switch. Note that port group names must be unique among port groups in the datacenter, not just the distributed switch on which you are creating the port group.

After a port group is created, you can edit its settings by right-clicking the port group and choosing Edit Settings. To easily configure identical settings for multiple port groups on a vDS, choose the Manage Distributed Port Groups option on the Distributed Port Group action menu item of the switch (Figure 3.12).

FIGURE 3.12 Launching the Manage Distributed Port Groups wizard

This wizard is especially useful to make sure all of your port groups have identical security and failover settings.

Removing a distributed port group is as simple as right-clicking on the distributed port group in the Networking view and choosing Delete. However, all VMkernel ports and virtual machine adapters must be disconnected before the distributed port group can be deleted.

Port group configuration options include the following:

  • Port Binding The default setting is Static, which assigns a virtual machine to a specific port when the VM is connected to the vDS. Dynamic binding has been deprecated (meaning VMware intends to remove the setting in a future release) and assigns the port when the VM first powers on. Ephemeral binding doesn't permanently assign a port; the VM is given a port when it turns on and the port is unassigned when the VM powers off. This setting has implications at scale because there is a maximum of 4096 network ports per host and 60,000 ports per vCenter. A large number of powered-off virtual machines (static binding) could theoretically max out one of those numbers.

  • Port Allocation Leaving the Port allocation default of Elastic will allow the port group to automatically adjust the number of ports based on the connected adapters. If Port allocation is set to Fixed, the Number of ports setting becomes a hard limit on how many connections (virtual machines and VMkernel ports) can be attached to that port group.

  • Number of Ports Defaulting to 8, this number adjusts automatically if Port allocation is set to Elastic. If Port allocation is set to Fixed, then this represents the maximum number of adapters that can connect to the port group.

  • Network Resource Pool This can be set if NIOC is configured (see “Understanding Network I/O Control”).

  • Configure Reset at Disconnect Per-port settings (if overridden) are reset if the port is disconnected.

  • Override Policies These options allow the individual ports in the group to have a different setting than the group setting.

  • Promiscuous Mode Often referred to as “turning the switch into a hub.” If set to Accept, this allows all VMs on a port group to receive all packets handled by the switch on that host. This behavior is affected by the port group VLAN settings as a VM will only receive packets for the VLAN configured on the port group unless the port group is set to VLAN Trunking. Promiscuous mode is usually only configured for port groups connected to security virtual machines as directed by the vendor, or for nesting hypervisors.

  • MAC Address Changes Defaulting to Reject, this setting prevents a virtual machine from receiving traffic destined for a MAC address not set in the virtual machine configuration. This would be set to Accept if the guest operating system of the VM needs to change the MAC address in the OS.

  • Forged Transmits Defaulting to Reject, this setting prevents a virtual machine from sending traffic from a MAC address not set in the virtual machine configuration. This would be set to Accept if the guest operating system of the VM needs to send traffic from a MAC address it has changed in the OS.

  • Traffic Shaping Ingress and egress traffic shaping can be set on a per-port group basis. Separate from NIOC, the average, peak, and burst values are set for each port in the group. This is useful for limiting chatty virtual machines.

  • VLAN There are four settings:

    • None: Also called External Switch Tagging (EST) mode. This requires an access port on the physical switch. Any traffic outbound from the port group will not receive a VLAN tag.

    • VLAN: Virtual Switch Tagging (VST) mode, which requires a VLAN number to be set on the port group. Any traffic outbound from the port group to the physical network will receive this VLAN tag. Any traffic inbound from a physical switch with the same VLAN tag will be passed to this port group after the VLAN tag is removed. A VLAN trunk port is required on the physical switch.

    • VLAN Trunking: Virtual Guest Tagging (VGT) mode. A VLAN range or VLAN set is also configured with this option. Inbound traffic tagged with any VLAN ID in the set will be passed to this port group, and the port group will not add or remove any VLAN tags. This mode is usually used for security virtual appliances along with Promiscuous mode. A VLAN trunk port is required on the physical switch.

    • Private VLAN: See the section “Network Isolation” later in this chapter.

  • Load Balancing See the section “Load Balancing and Failover Policies” later in this chapter.

  • Network Failure Detection Defaults to Link Status Only, which detects only whether there is a signal on the network cable; it won't detect physical switch issues or configuration problems. Beacon probing sends beacon probes out every second to help determine if valid connections are available between the network adapters. However, there must be at least three active or standby NICs on the port group to ensure an accurate response; if there are only two NICs and the beacon fails, the host can't determine which uplink is the problem.

  • Notify Switches This will alert the connected switch if a failover occurs. This setting defaults to Yes but should be changed to No if directed by an application vendor to support a specific application.

  • Failback Defaults to Yes; this will allow an Active NIC that failed to be immediately used when it comes back up. You might change this to No during testing or troubleshooting to avoid “flapping,” where the adapter is repeatedly going up and down.

  • NetFlow NetFlow is a monitoring protocol used to send traffic metadata to a monitoring tool such as vRealize Network Insight. Disabled by default, this would be set to Enabled when the NetFlow settings on the vDS are edited to send the flows to the monitoring tool.

  • Traffic Filtering and Monitoring This is configured when you need network packets dropped or tagged for QoS or need the QoS packets retagged. It can be set for specific MAC or IP addresses or ranges or even different types of host traffic such as vMotion and vSAN. Traffic can be ingress, egress, or both.

  • Block All Ports This setting will stop all traffic in and out of all of the ports on the distributed port groups.

Adding and Removing Uplink Adapters

Physical NICs on each host connect to the distributed switch using uplinks. On the distributed switch itself, a special group called a dvUplink group exists to manage the global settings for the host uplinks. Only one dvUplink group can exist per distributed switch, and it is primarily used to set the maximum number of physical connections a host can have to that distributed switch as well as a few other optional settings.

By default, a dvUplink group has four connections, so any host could connect four physical NICs or link aggregation groups (which are covered in the section “Link Aggregation” later in this chapter). Hosts are not required to associate any physical connections with the vDS, but without them no local traffic would be able to leave the host and no external traffic would be delivered to the host's proxy switch. This would result in an outage if a functioning virtual machine connected to the distributed switch was migrated to the host.

The number of uplinks should be adjusted to your environment to meet the maximum number of connections required. Products such as VMware NSX will make configuration settings based on the number of uplinks set for a dvUplink group. Note that the number of uplinks is adjusted on the vDS settings, not on the dvUplink group.

The Add and Manage Hosts wizard and the distributed switch configuration on the host will allow you to add, change, or remove physical NICs associated with a vDS. Care should be taken when migrating NICs for networks that are in use to ensure that outages do not occur. Best practice would ensure two physical connections per switch, which for a migration would allow one to be moved at a time to avoid an outage. Migrate one physical NIC, migrate the VMkernel adapters or virtual machines using the networks carried by the physical NIC, then move the second NIC.

The Add and Manage Hosts wizard in the Flash client has an Add Host task, which is useful for ensuring that all hosts have the same configuration. The Manage Host Networking task provides similar capabilities, without the option of adding new hosts at the same time. Using either task, you can either visually compare the hosts or enable template mode by checking Configure Identical Network Settings on Multiple Hosts (Template Mode) on the Select Hosts tab (Figure 3.13).

After this option is selected, an additional step appears in the task list: Select Template Host (Figure 3.14), allowing you to pick the host with the optimal configuration, or the host you are going to configure and have all other hosts match. Note that you can use the template host to set physical connections (dvUplinks), VMkernel ports, or both.

FIGURE 3.13 Configuring identical network settings on multiple hosts
FIGURE 3.14 Select Template Host

Once Select Template Host is selected, you can approve the template host's configuration or change it and apply that setting to all hosts attached to the vDS (Figure 3.15).

FIGURE 3.15 Manage Physical Network Adapters (Template Mode)

Working with Virtual Adapters

While an initial VMkernel port is created during host installation, additional VMkernel ports can be added to isolate host traffic. Host adapters can be added in two ways (Figure 3.16): either from the host's Configure tab under Networking → VMkernel adapters or with the Add and Manage Hosts wizard from the virtual distributed switch action menu. When VMkernel ports are added to standard switches, a dedicated port group is created, but when added to distributed virtual switches, VMkernel ports are assigned to existing distributed port groups.

FIGURE 3.16 Two ways to add host adapters

During the creation of the VMkernel adapter, you have the option of choosing IPv4, IPv6, or both to meet your network configuration. You can also choose a TCP/IP stack. There are three stacks initially: Default, Provisioning, and vMotion. The default stack carries all host TCP/IP traffic until you assign traffic to other stacks. If you create a VMkernel adapter and assign it to the vMotion stack, only VMkernel ports assigned to that stack can be used for vMotion. The same is true of the Provisioning stack and Provisioning traffic. These settings are used to ensure that those traffic types are completely separated from other host traffic. If needed, you can create custom TCP/IP stacks to separate other management traffic such as replication.

The following host traffic types can be assigned to VMkernel adapters:

  • vMotion Each host that will participate in a vMotion virtual machine move (including DRS) requires a VMkernel port to be flagged for vMotion traffic.

  • Provisioning This includes the traffic for cold (virtual machine powered off) migrations, cloning, and snapshot migrations.

  • Fault Tolerance Logging Only one VMkernel adapter can be flagged to carry the traffic required to keep fault-tolerant virtual machine instances in sync.

  • Management The only required traffic type. The first VMkernel adapter created is tagged for management traffic. Traffic types include vCenter, fat client, and SSH.

  • vSphere Replication/vSphere Replication NFC These two options handle incoming and outgoing replication data when vSphere Replication is in use on the host. The NFC (Network File Copy) traffic type is for incoming replication traffic.

  • vSAN Each host participating in a virtual storage area network (vSAN) cluster must have a VMkernel port flagged for vSAN.
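These traffic types correspond to tags on each VMkernel adapter and can also be viewed or assigned from the ESXi shell. A minimal sketch (vmk1 is an example adapter, and exact tag names such as VMotion can vary slightly between ESXi builds):

# Show the traffic types currently tagged on a VMkernel adapter
esxcli network ip interface tag get -i vmk1

# Flag the adapter for vMotion traffic
esxcli network ip interface tag add -i vmk1 -t VMotion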

An additional traffic type that cannot be specifically assigned using the VMkernel settings is TCP/IP storage: both NFS and iSCSI. NFS traffic will typically use the lowest-numbered VMkernel port that can access the NFS file server. If you want to dedicate a VMkernel port to NFS, make sure it is on the same VLAN as the NFS server, because the host will use a VMkernel adapter on the same subnet before trying to route to NFS over the default TCP/IP stack's gateway. For iSCSI, there is a method covered in Chapter 4, “Storage in vSphere,” that will dedicate NICs to iSCSI storage traffic.

An adapter can be configured to carry any or all of the traffic, unless it has been configured as dedicated to iSCSI traffic or unless the vMotion or Provisioning stacks have been assigned to other VMkernel adapters.

However, best practice is to assign one VMkernel port per distributed port group, one type of network traffic per VMkernel port, and one type of network traffic per stack other than Default.

The default TCP/IP stack has a default gateway that is set on the host in the TCP/IP configuration section of the Networking menu. When you create a VMkernel adapter, you are given the opportunity to use the default gateway or set a custom gateway.

You might notice (see Figure 3.17) that if you set a static IP when you configure the host, vmk0 (the first VMkernel port) is set to override the default gateway for the adapter even though the same gateway IP is set for the default stack and vmk0.

FIGURE 3.17 VMkernel adapter default gateway and override option

When configuring subsequent VMkernel ports, you may choose to use different default gateways to take advantage of alternate routing capabilities on your management network. You can also use the vmkping command from the command line of the host to ensure the VMkernel adapters can access their default gateways and other hosts on the same network.
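For example, a few vmkping invocations might look like the following sketch (adapter names, the stack name, and addresses are examples for illustration):

# Ping a gateway using a specific VMkernel adapter
vmkping -I vmk1 192.168.10.1

# Ping using a non-default TCP/IP stack (here, the built-in vMotion stack)
vmkping -I vmk2 -S vmotion 192.168.20.1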

The Add and Manage Hosts wizard also includes the ability to migrate VMkernel networking. Be careful during this process because migrating the VMkernel of a host to an improperly configured vDS could break connectivity to the host and thus keep vCenter from managing the host. If you lose connectivity to the host in a manner that does not trigger vDS rollback (see the section "Automatic Rollback" later in this chapter), you can use the console GUI to reset the network settings or use the console command line to move or edit the VMkernel ports.

NOTE

The Add and Manage Hosts wizard will allow you to edit VMkernel adapters on several hosts at once, which is useful for consistency in the environment.

If you move or edit a VMkernel port that is currently carrying traffic for a vMotion or provisioning activity, that activity (vMotion or provisioning) will complete successfully.

If the wizard detects that you are moving or changing a VMkernel port or adapter that is dedicated to iSCSI traffic, you will see a message in the Analyze Impact section of the wizard (Figure 3.18). There are three levels of messages: No Impact, Important Impact, and Critical Impact.

FIGURE 3.18 iSCSI impact warning

If you need to migrate a VMkernel port to a standard switch, you will need to use the VMkernel adapter tools in the host view because the Add and Manage Hosts wizard cannot move VMkernels to a vSS. Deleting VMkernel ports is pretty straightforward; just be sure you are not deleting the last management port.

Custom TCP/IP Stacks

The ability to create custom stacks is key for advanced networking, as it allows separate configurations for network types beyond the Default, Provisioning, and vMotion stacks. While custom stacks are most commonly used by NSX for VXLAN traffic, they also allow for advanced configurations like separate routing tables for replication traffic or a separate DNS server for NFS traffic.

Custom stacks are managed on a per-host basis, and while they are edited using the TCP/IP configuration section of the Networking menu from the Configure tab, they can only be created from the command line of the host using esxcli network ip netstack add -N="stack_name" as shown in Figure 3.19. You can see the new stack in Figure 3.20.

FIGURE 3.19 Adding a new TCP/IP stack from the host command line
FIGURE 3.20 The new stack from the web client

After the stack is created, you can rename it or set the DNS and routing settings as shown in Figure 3.21. If there is a VMkernel port with DHCP configured, you can use it to set the DNS settings, or you can configure them manually.

FIGURE 3.21 TCP/IP stack settings

You can also change the TCP congestion algorithm between CUBIC and New Reno (New Reno is the default), but the differences are beyond the scope of this book.
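Putting the pieces together, the whole workflow can also be done from the shell. A minimal sketch under a few assumptions: the stack name, port group, and addresses are examples; -p assumes a standard switch port group (distributed ports use the --dvs-name and --dvport-id options instead); and the -N netstack flag on the route command assumes a reasonably current esxcli build:

# Create a custom TCP/IP stack and confirm it exists
esxcli network ip netstack add -N="Replication"
esxcli network ip netstack list

# Create a VMkernel adapter on that stack and give it a static address
esxcli network ip interface add -i vmk3 -p "Replication-PG" -N="Replication"
esxcli network ip interface ipv4 set -i vmk3 -t static -I 192.168.30.10 -N 255.255.255.0

# Give the new stack its own default route
esxcli network ip route ipv4 add -N "Replication" -n default -g 192.168.30.1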

EXERCISE 3.2 Create a new TCP/IP stack and create a VMkernel adapter to use it. Enable jumbo frames.

  1. Connect to an ESXi host using SSH and log in as root.

  2. Run the command esxcli network ip netstack add -N="NAS" to create the new TCP/IP stack on the host.

  3. Connect to the host using the vSphere web client and open the VMkernel Adapters menu under the Configure tab.

  4. Click the Add Host Networking button to launch the wizard.

  5. Select the VMkernel Network Adapter on the first screen of the wizard and choose an existing network for the second screen (here we are using the port group we created in Exercise 3.1).

  6. Under TCP/IP Stack, choose the new stack created in step 2.

  7. Leave all other settings at the default. Click Next twice, then Finish.

  8. Identify the new VMkernel adapter using the custom TCP/IP stack. Click the adapter and then click Edit.

  9. On the second page of the Edit wizard, set the MTU to 9000.

  10. Click OK to complete the exercise. Note that if you do not use the switch from Exercise 3.1, you will need to set the switch to use jumbo frames also.

Long-Distance vMotion

Long-distance vMotion supports live migration across links of at least 250 Mbps and up to 150 ms of round-trip time. If the vMotion traffic needs to be routed, you should enable the vMotion TCP/IP stack for the VMkernel ports responsible for the vMotion traffic.

vCenter makes multiple checks to ensure that vMotion will work, such as preventing migration to a switch without an uplink NIC. However, vCenter doesn't check to make sure the broadcast domain in use by the virtual machine exists at the destination, so it is possible for the virtual machine to lose connectivity if you do not ensure that the destination is working on the correct network.

NOTE

You cannot vMotion from a distributed switch to a standard switch, but you can always transfer to a distributed switch.

Migrating Virtual Machines to or from a vDS

Virtual machines can be migrated to or from a vDS using the Migrate VMs to Another Network or the Add and Manage Hosts wizard on the Distributed Switch action menu or on the virtual machine settings window.

The Migrate VMs to Another Network wizard can move one or many virtual machines to or from standard switches or distributed switches, but only one port group can be the source and only one port group can be the destination. This wizard is best when a single network is being moved.

The Add and Manage Hosts wizard can move any virtual machine connected to the hosts selected to any port group on the distributed virtual switch being configured. This wizard is best during the initial adoption of the distributed switch.

The virtual machine configuration window allows you to move the virtual machine to any network connected to the host. If the destination port group is configured correctly, moving the VM between networks will have no more impact than a vMotion of that VM.

Performance and Reliability

Distributed switches provide a couple of options for improving bandwidth and ensuring that there is no single point of failure. However, you need to make sure the hosts are configured correctly to take advantage of the distributed switch settings. Also, be aware that LAG assignments, load balancing, and failover policies are set per distributed port group although the Manage Distributed Port Groups wizard can be used to ensure that all have the same settings.

Link Aggregation

To increase network bandwidth, multiple physical NICs can be grouped together into link aggregation groups (LAGs), which use the Link Aggregation Control Protocol (LACP) to manage load balancing and dynamic handling of the links making up the LAG. In an actual production environment, you would use your switch manufacturer's documentation to configure LAGs because its naming conventions and settings could differ from VMware's vendor-agnostic guides.

NOTE

Virtual distributed switches starting with version 5.5 feature Enhanced LACP mode. If you upgrade a version 5.1 vDS that is configured with LAGs to version 6.5 or later, LAGs should be upgraded to enhanced mode during the distributed switch upgrade. There is a manual upgrade available if needed.

LAGs are created from the LACP menu found under the distributed virtual switch's Configure tab → Settings section (Figure 3.22).

FIGURE 3.22 LACP menu

The number of ports selected for the LAG should match the ports configured on the physical switch for the LAGs and the number of NICs allocated to the LAG on each host.

A port group can only be configured to use a single active LAG (Figure 3.23); all other LAGs and stand-alone uplinks must be set to Unused. An Active LAG will send LACP packets to the switch for negotiation, while Passive will only receive LACP packets. This should be set according to the switch vendor's guidelines.

FIGURE 3.23 Single active LAG

As noted in Figure 3.23, the failover settings of the LAG group override the failover settings of the port group.

WARNING

A couple of important notes of caution regarding LAGs:

LAGs are not compatible with software iSCSI initiator multipathing.

LAGs are not compatible with host profiles and thus are not available to Auto Deploy configurations.

Load Balancing and Failover Policies

There are five load balancing options for distributed virtual switches, configured on each port group (Figure 3.24). These settings determine which uplink is used for each virtual machine's traffic, assuming there is more than one uplink.

FIGURE 3.24 Load balancing choices
  • Route Based on IP Hash This hashes the source and destination IP addresses of each packet, which could send packets for the same virtual machine out several uplinks. This load balancing method requires your physical switch to be configured for Etherchannel or IEEE 802.3ad and should be configured according to your switch vendor's documentation.

  • Route Based on Source MAC Hash The switch uses the MAC address of the VM and the number of uplinks to calculate which uplink to use. If the VM changes switch ports, it will still use the same uplink, as the MAC doesn't change.

  • Route Based on Originating Virtual Port The default load balancing algorithm. The virtual switch uses the port ID to determine which uplink is used. This generally provides a round-robin effect, distributing the ports evenly between the uplinks. However, actual traffic or load is not taken into account, and you could find that the uplinks vary greatly in the volume of traffic they handle.

  • Use Explicit Failover Order Most commonly used for iSCSI port binding, this setting allows you to manually determine which uplinks are active, standby (only used if no active uplinks are available), or unused (not used at all). If your server had mismatched NICs (for instance, 10 Gbps and 1 Gbps) available, you could assign the faster NICs as active and the slower NICs as standby.

  • Route Based on Physical NIC Load The switch tests the actual load on the physical NICs on each host every 30 seconds to determine virtual machine uplink usage. If a physical NIC's usage exceeds 75%, the uplink for the virtual machine using the most traffic is changed.

Traffic Shaping

When configuring port groups, you can enable Traffic shaping, which allows either ingress or egress network traffic limits to be set. Note that the average, peak, and burst values are per port, not for the whole port group. If you want to limit a virtual machine that is sending (or receiving) too much traffic, this is one option to restrict it. (Since ingress shaping happens after traffic has already been received at the host, you're only saving the virtual machine from being swamped with packets; the host still has to process the incoming packets and restrict those over the caps you have set.) The Ingress setting could help virtual machines struggling to keep up with traffic, the Egress setting helps with VMs that send too much traffic, and both could be configured to mimic a restricted environment, such as replicating a 1 Gbps connection when your host has 10 Gbps uplinks.

TCP Segmentation Offload

TCP segmentation offload (TSO) is a way to push some network tasks (the breaking up of large packets into smaller ones) onto the physical network card, reducing the CPU load of the host. Both virtual machines and VMkernel ports can take advantage of TCP segmentation offload.

The physical NICs installed in the host must be capable of TSO and be configured to use TCP Offload (Figure 3.25). This can be checked on the host with the esxcli command.

FIGURE 3.25 Checking TSO status using esxcli

By default, hosts will use TSO if it is supported by the physical adapters. TSO is enabled on VMkernel ports by default and is also enabled on VMXNET 2 and 3 adapters connected to virtual machines. Note that this requires VMware Tools to be installed on the guests. Windows guests can disable TSO by disabling Large Send Offload V2 (IPv4) and Large Send Offload V2 (IPv6) from the Advanced setting of the VMXNET adapter. Linux guests can disable TCP Offload by running ethtool -K eth0 tso off.
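You can verify TSO support and status from the host's shell. A minimal sketch (the network nic tso namespace assumes a reasonably current ESXi build):

# Show whether TSO is enabled for each physical NIC
esxcli network nic tso get

# Check the host-wide hardware TSO setting (an Int Value of 1 means enabled)
esxcli system settings advanced list -o /Net/UseHwTSO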

Jumbo Frames

The default MTU size on a network is 1500 bytes. Packets larger than 1500 bytes are considered “jumbo.” Jumbo frames are used to improve network efficiency by reducing overhead; each packet has the same header size (the part of the packet before the data), so increasing the amount of data in each packet improves the ratio of data to header. However, not all workloads support or take advantage of jumbo frames. Jumbo frame sizes are most often configured for IP-based storage, backups, and network products such as VMware NSX.

All devices that handle traffic on the network must be configured for jumbo frames for proper functionality. Common settings for jumbo frames are 1600 and 9000. Whoever is requesting jumbo frame support (your storage team, network admin, or application owner) should provide the proper value for the MTU.

To enable jumbo frames for VMkernel ports, the MTU size must be set in the port properties of the VMkernel port (Figure 3.26).

FIGURE 3.26 VMkernel MTU

To enable jumbo frames for virtual machines, the MTU must be set in the properties of the network adapter. The virtual switch that the VMkernel port and/or virtual machines connect to must also be configured for jumbo frames. This configuration can be found in the Advanced settings of the switch (Figure 3.27).

FIGURE 3.27 MTU switch
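On a standard switch, the same MTU settings can be made from the host's command line, and vmkping can verify that jumbo frames pass end to end. A minimal sketch with example names and addresses (8972 bytes leaves room for the IP and ICMP headers within a 9000-byte MTU):

# Raise the MTU on a standard switch and on a VMkernel adapter
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000

# Test with a large payload and the don't-fragment flag set
vmkping -I vmk1 -d -s 8972 192.168.40.20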

EXERCISE 3.3 Create a new distributed switch and enable jumbo frames.

  1. Connect to the host using the vSphere web client and open the Networking view.

  2. Right-click the datacenter and choose New Distributed Switch under Distributed Switch in the Actions menu.

  3. Enter a name for the new distributed switch and click Next. On the third screen, enter a port group name and click Next.

  4. Verify the settings and click Finish.

  5. Right-click the new switch and choose Settings → Edit Settings.

  6. On the Advanced page, change the MTU (Bytes) to 9000 and choose OK.


Network Isolation

Virtual local area networks (VLANs) and private virtual local area networks (PVLANs) are methods to isolate different networks that are utilizing the same network switches. When VLANs and PVLANs are implemented with distributed virtual switches, it is critical to have the physical switches configured correctly to ensure traffic is handled properly.

When VLANs are used on a physical switch, the switch generates a table listing the ports that are participating in each network. The ports in each network are configured with the same VLAN ID and traffic is allowed to pass between them on the switch. If other ports on the same switch are configured with different VLAN IDs, traffic would need to be routed by a network device with access to both VLANs in order to flow between the two VLANs.

When the switch needs to connect to another network device, it can use a trunk connection and send the traffic for multiple VLANs on the same wire. To ensure that the traffic is handled properly, each Ethernet frame has a VLAN tag inserted into its header. This way the destination device knows which network each packet belongs to and will handle the packets appropriately. Only frames traveling across trunk connections carry the VLAN tag.

Private VLANs add an additional level of information, where one VLAN is configured with one or more secondary VLAN IDs. While all of the VLANs (primary and secondary) are considered part of the same network, the network devices will treat the packets differently. Secondary PVLANs can be configured as Community or Isolated, while the primary PVLAN is always promiscuous. Ports tagged with a Community secondary PVLAN can communicate with other ports with the same secondary PVLAN or any port configured with the primary PVLAN ID. Ports tagged with an Isolated secondary PVLAN ID can only communicate with ports tagged with the primary PVLAN ID.

VLANs are configured on port groups so that VMs can participate on the proper networks. PVLANs must be defined on the vDS before they can be configured on port groups to provide additional VLAN isolation. To define the private VLANs, select the vDS, click Configure, and select Private VLAN. When creating private VLANs, keep these guidelines in mind:

  • Only one secondary PVLAN can be set as Isolated. Only one is needed, since every port associated with that PVLAN can communicate only with primary PVLAN ports.

  • Community PVLANs are useful for VMs that will communicate with each other; isolated PVLANs are useful for VMs that do not need to communicate with any other VM on the same network.

  • The router for the private VLAN must be connected to the primary VLAN so that it can route traffic to/from all VMs on the network, regardless of their community or isolated membership.

Again, it is critical that the physical switches be configured correctly to ensure that traffic is handled properly. Note that this only provides VLAN isolation, not true security.

Automatic Rollback

If a network change is made to a host that disrupts the host's ability to communicate with vCenter, the host should automatically roll back the last change to the VMkernel ports. Changes include MTU sizes, VLAN settings, physical NIC speed or duplex, VMkernel IP settings, default gateway changes, and removing the VMkernel port or physical adapters.

After a host detects a vCenter connection loss and rolls back the last change (which usually happens very rapidly), you might see a few alerts to let you know what happened (Figure 3.28).

FIGURE 3.28 A few variations of rollback alerts

Distributed switches also have rollback mechanisms in case changes such as teaming, MTU, or VLAN settings cause problems.

If the rollback mechanism does not correct the problem, you may need to restore the vDS from an earlier backup or update the network settings directly on the host using the Direct Console User Interface (DCUI) or ESXi shell, or both if you find that the vDS doesn't realize it needs to roll back and continues to push the change to the host, leaving the host out of sync with the vDS (Figure 3.29).

FIGURE 3.29 Out of sync error
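If you do need to repair a host's management network from the DCUI or ESXi shell, a minimal sketch (addresses are examples, and the default route syntax assumes a reasonably current esxcli build):

# Review the current VMkernel adapters
esxcli network ip interface list

# Reset vmk0 to a known-good static address
esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.1.50 -N 255.255.255.0

# Replace the default route if it was lost
esxcli network ip route ipv4 add -n default -g 192.168.1.1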

If you enable the health check routines for the distributed switch as shown in Figure 3.30, you can get more information on what went wrong (Figure 3.31).

FIGURE 3.30 Configuring the vDS health check

From the host's virtual switch menu, you can use the rectify option to resolve this (Figure 3.32).

FIGURE 3.31 Monitoring the health of a vDS
FIGURE 3.32 Rectify a vDS from a connected host

Monitoring and Mirroring

VMware distributed switches include the capability of mirroring traffic from one port to either another port on the vDS or a remote destination. This is useful for security purposes but also for examining packets with a utility such as Wireshark when troubleshooting an application.

Encapsulated Remote Switched Port Analyzer (ERSPAN) is a means of delivering mirrored traffic to a remote destination, which VMware implements as Encapsulated Remote Mirroring (L3) Source.

To configure port mirroring, in the Port Mirroring section of the virtual distributed switch's Configure tab, select New → Encapsulated Remote Mirroring (L3) Source (Figure 3.33). This will allow sending traffic to an ERSPAN destination.

FIGURE 3.33 Configuring port mirroring
Screenshot_188

Name the session as desired and make sure you set the status to Enabled. You can change the other settings if required by your ERSPAN destination. Note that the default is to send every packet, but the sample rate can be adjusted as needed.

Select the specific ports (VMkernel or virtual machine) you wish to monitor (Figure 3.34), or enter a range of ports.

The ports will default to sending both egress and ingress traffic to the destination, but you can change that on a per-port or per-range basis.

FIGURE 3.34 Selecting the source distributed ports
Screenshot_189
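
If you script against vCenter, the following rough pyVmomi sketch creates a session like the one in Figure 3.33. The vCenter name, credentials, switch name, port keys, and collector address are all hypothetical, and the exact data-object class paths should be verified against the vSphere API reference for your version; treat this as a sketch, not a definitive implementation.

```python
# Rough pyVmomi sketch: create an Encapsulated Remote Mirroring (L3) Source
# session on a vDS. Hostnames, credentials, port keys, and the collector
# IP are hypothetical; verify class paths against the vSphere API reference.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab use only
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "Prod-vDS")   # hypothetical name

session = vim.dvs.VmwareDistributedVirtualSwitch.VspanSession(
    name="erspan-to-collector",
    enabled=True,
    sessionType="encapsulatedRemoteMirrorSource",
    samplingRate=1,                              # mirror every packet
    sourcePortReceived=vim.dvs.VmwareDistributedVirtualSwitch.VspanPorts(
        portKey=["10", "11"]),                   # ingress on these ports
    sourcePortTransmitted=vim.dvs.VmwareDistributedVirtualSwitch.VspanPorts(
        portKey=["10", "11"]),                   # egress on the same ports
    destinationPort=vim.dvs.VmwareDistributedVirtualSwitch.VspanPorts(
        ipAddress=["192.0.2.50"]))               # the ERSPAN destination

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion,
    vspanConfigSpec=[vim.dvs.VmwareDistributedVirtualSwitch.VspanConfigSpec(
        operation="add", vspanSession=session)])
dvs.ReconfigureDvs_Task(spec)
```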

Using NetFlow

One of the advanced settings available on virtual distributed switches is NetFlow. NetFlow exports metadata about the traffic passing through the switch, which is very useful for traffic analysis and is consumed by utilities such as vRealize Network Insight. To configure NetFlow on your vDS, use the NetFlow section of the switch's Configure tab (Figure 3.35).

FIGURE 3.35 Configuring NetFlow
Screenshot_190

Here you set the IP address of the network analysis tool that will receive the NetFlow statistics. The port, domain ID, and advanced settings, if needed, should be obtained from that tool. The switch IP address option is important because it lets the collector group all of the flows from the switch under a single exporter identity; without it, each host reports its flows separately.
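
The same settings can be applied programmatically. Below is a rough pyVmomi sketch of the NetFlow (IPFIX) settings shown in Figure 3.35; the collector address, port, and switch IP are placeholders, it assumes `dvs` was located as in the port mirroring sketch earlier, and the class paths should be verified for your pyVmomi version.

```python
# Rough pyVmomi sketch of the NetFlow (IPFIX) settings in Figure 3.35.
# Assumes `dvs` was located as in the port mirroring sketch above; the
# collector address, port, and switch IP below are placeholder values.
from pyVmomi import vim

ipfix = vim.dvs.VmwareDistributedVirtualSwitch.IpfixConfig(
    collectorIpAddress="192.0.2.100",   # the analysis tool receiving flows
    collectorPort=2055,
    observationDomainId=1,
    activeFlowTimeout=60,
    idleFlowTimeout=15,
    samplingRate=0,                     # 0 = process every packet
    internalFlowsOnly=False)

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion,
    ipfixConfig=ipfix,
    switchIpAddress="192.0.2.10")       # groups all exports under one switch
dvs.ReconfigureDvs_Task(spec)
```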

Understanding Network I/O Control

Enabling vSphere Network I/O Control (NIOC) lets you set shares, reservations, and limits on network bandwidth for system traffic and/or virtual machines. NIOC is enabled by default when you create a distributed virtual switch, and the values are set on a per-distributed-switch basis.

The shares, limits, and reservations work much the same as they do with memory, CPU, and storage settings. However, while the system traffic settings are on a per-host basis, virtual machine settings (specifically, reservations) have implications across hosts. Virtual machine reservations (if configured) are used for Distributed Resource Scheduler virtual machine migration decisions and for HA placement decisions.

NOTE

All calculations for shares, limits, and reservations are on a per-adapter basis, and between system traffic and virtual machine traffic, you can reserve a maximum of 75 percent of the bandwidth of the slowest physical NIC connected.
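
As a quick arithmetic illustration of the 75 percent rule, here is a minimal Python sketch; the NIC speeds are hypothetical.

```python
# The maximum total reservation (system plus virtual machine traffic)
# is 75 percent of the bandwidth of the slowest pNIC attached to the vDS.
def max_total_reservation_mbit(pnic_speeds_mbit):
    return 0.75 * min(pnic_speeds_mbit)

print(max_total_reservation_mbit([10_000, 10_000]))  # 7500.0 Mbit/s
print(max_total_reservation_mbit([10_000, 1_000]))   # 750.0, slowest NIC governs
```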

System traffic settings are configured in the Resource Allocation menu on the Configure tab of the vDS (Figure 3.36).

FIGURE 3.36 NIOC system settings
Screenshot_191

WARNING

Only configure the settings for the system traffic that will actually be carried on this vDS. If the distributed switch has no VMkernel adapters connected to it, you should not change the system settings at all. Reservations set for system traffic can only be used by that type of system traffic, so setting a reservation for vSAN at 1000 Mbit/s on a switch with no VMkernel adapters connected puts an artificial limit on the virtual machine traffic it can carry.

System traffic shares are used to determine how bandwidth is allocated on saturated links. (See the next section, “Configuring NIOC Reservations, Shares, and Limits,” for more details.) Reservations set a guaranteed amount of network bandwidth per adapter for that traffic type. Limits set a maximum amount of bandwidth that a specific traffic type can consume.

Reservations are useful to guarantee a performance level, shares are good for adjusting the balance between traffic types during contention, and limits can be used to rein in chatty VMs or to work around known issues, such as replication traffic causing WAN problems when it hits a certain throughput.

vSphere 6.0 introduced Network I/O Control version 3, which allows for bandwidth settings per VM. If you have a vDS created with version 2, you can upgrade it to version 3, but settings such as user-defined network resource pools and CoS tagging for system traffic will be removed. Note that virtual machines connected to a vDS with NIOC v3 cannot use SR-IOV.

Configuring NIOC Reservations, Shares, and Limits

To apply network I/O control to virtual machines, you first need to set a reservation for virtual machine traffic. By default the reservation is 0.

  • If there will not be any system traffic on the vDS, set the reservation to the max (75 percent of the slowest physical link).

  • If there will be system traffic, you will need to decide how much bandwidth you would like to guarantee to virtual machines. The value needs to be at least the total amount you would like to reserve for individual virtual machines.

  • If you will not be reserving bandwidth for individual virtual machines, set the reservation to 1 Mbit/s (Figure 3.37), which is enough to enable network resource pools.

Once the virtual machine traffic reservation is set, you can create network resource pools. These pools will then be assigned to distributed port groups on the vDS to set a reservation quota (really, a limit on the total reservations) for the VMs connected to the port group and to enable the VMs to set individual limits and shares. Multiple port groups can be assigned to the same network resource pool, but all VMs assigned to the pool will share the same reservation quota.

FIGURE 3.37 Virtual Machine Traffic set to 1 Mbit/s
Screenshot_192
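
Conceptually, the quota acts as an admission check on the VM reservations under it. Here is a minimal Python sketch of that rule, with hypothetical values.

```python
# The reservation quota caps the sum of the individual VM reservations
# for all VMs in port groups assigned to the network resource pool.
def reservation_fits(quota_mbit, existing_vm_reservations, new_vm_reservation):
    return sum(existing_vm_reservations) + new_vm_reservation <= quota_mbit

print(reservation_fits(1000, [250, 250], 400))  # True:  900 <= 1000
print(reservation_fits(1000, [250, 250], 600))  # False: 1100 > 1000
```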

NOTE

Shares give you a way to allocate resources during times of contention. Objects allocated more shares receive a larger portion of the resource when demand exceeds supply. Only unreserved resources are shared. The default setting of Normal provides a value of 50; the other settings are Low with 25 and High with 75, or you can set a custom share value of 1-100.

For shares, there are two stages of calculation when an adapter is saturated: the traffic type shares and the virtual machine shares. First, the share values of all the traffic types carried on the adapter are added together, and each traffic type's share setting is divided by that total. By default, each system traffic type has 50 shares and virtual machine traffic has 100 shares. Here are three examples of share calculations:

  • If a 10 Gbit/s adapter is carrying vSAN and virtual machine traffic and the adapter is saturated with traffic and no reservations are set, the virtual machines would be allocated 2/3 of the bandwidth. (50 shares for vSAN plus 100 shares for virtual machines equals 150 shares total. Virtual machines are granted 100 shares of the 150 total shares, 100 / 150 = 2/3 of the shares and thus 2/3 of the traffic under contention.)

  • If a 10 Gbit/s adapter is carrying vSAN with a 1 Gbit/s reservation and virtual machine traffic with no reservation and the adapter is saturated with traffic, the virtual machines would be allocated 2/3 of the bandwidth available after the reservation, or about 6 Gbit/s.

  • If a 10 Gbit/s adapter is carrying vSAN with a 1 Gbit/s reservation and virtual machine traffic with a 4 Gbit/s reservation and the adapter is saturated with traffic, the virtual machines would be allocated the 4 Gbit/s reservation plus 2/3 of the bandwidth available after the reservations (about 3.3 Gbit/s), for a total of about 7.3 Gbit/s to share among the virtual machines.

In each of these examples, each virtual machine has 50 shares by default and thus would share equally in the traffic available to virtual machines.
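
The arithmetic in these examples can be reproduced with a short Python sketch; the traffic mix and share values below mirror the bullets above.

```python
# Reproduces the three examples above: reservations come off the top,
# and the unreserved bandwidth is split by share ratio under saturation.
def vm_bandwidth_gbit(link_gbit, reservations, shares):
    unreserved = link_gbit - sum(reservations.values())
    vm_fraction = shares["vm"] / sum(shares.values())
    return reservations.get("vm", 0) + unreserved * vm_fraction

shares = {"vsan": 50, "vm": 100}                            # defaults: 50 and 100
print(vm_bandwidth_gbit(10, {}, shares))                    # 6.67 (2/3 of 10)
print(vm_bandwidth_gbit(10, {"vsan": 1}, shares))           # 6.0  (2/3 of 9)
print(vm_bandwidth_gbit(10, {"vsan": 1, "vm": 4}, shares))  # 7.33 (4 + 2/3 of 5)
```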

If a virtual machine has a reservation set, that VM will receive a guaranteed amount of bandwidth, and traffic beyond the reservation will contend using the share settings. In the last example, if a virtual machine with a 1 Gbit/s reservation tried to use 1.2 Gbit/s of bandwidth, the last 0.2 Gbit/s would be allocated using shares and would receive equal priority if all VMs have the default of 50 shares.

A virtual machine that only requires 1000 Mbit/s can have a limit set, and the vDS will ensure that the VM only consumes that amount of bandwidth.

Set limits and reservations sparingly. A reservation is permanently in effect for system traffic, and for virtual machines it applies whenever the VM is powered on; either way, it reduces the bandwidth available to other VMs and system traffic. Limits set an artificial performance cap on the resource and, if not documented, could cause troubleshooting headaches later on.

Determining NIOC Requirements

Network I/O Control requires only a distributed virtual switch. For best results, all hosts should have identical NICs and the same number of NICs connected to the vDS. NIOC v3 requires a vDS of version 6.0 or higher.

Traffic shaping settings take precedence over NIOC settings: if traffic is restricted to 1000 Mbit/s on a port group where a VM has a reservation of 1500 Mbit/s, the VM will still be limited to 1000 Mbit/s.

Network I/O Control is best monitored from the Resource Allocation menu, where you can see the bandwidth settings under System Traffic and the virtual machines in the resource pools (refer back to Figure 3.36).

EXERCISE 3.4 Configure Network I/O Control on a distributed switch.

  1. Connect to the vCenter server using the vSphere web client and open the Networking view.

  2. Select the distributed switch and choose System Traffic from the Configure menu.

    Screenshot_193
  3. Select the Virtual Machine Traffic type and click the edit icon.

  4. Enter a reservation of 1000 Mbit/s and click OK (this will give virtual machines at least 10 percent of each pNIC's bandwidth to share).

    Screenshot_194
  5. Select the vMotion traffic and set the shares to High and the reservation to 500 (this will give vMotion at least 5 percent of each pNIC and higher priority during contention).

    Screenshot_195
  6. Select the NFS Traffic type and set the shares to Low and a limit of 200 Mbit/s (this will restrict NFS access to no more than 2 percent of each pNIC and de-prioritize even that amount during contention).

    Screenshot_196
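
For reference, the same settings can be applied through the API. The following rough pyVmomi sketch mirrors steps 3 through 6 by editing the switch's existing traffic resource list; it assumes `dvs` was located as in the earlier sketches and is a NIOC v3 distributed switch, and the traffic-class keys should be verified against the vSphere API reference for your version.

```python
# Rough pyVmomi sketch of Exercise 3.4, steps 3-6. Assumes `dvs` was
# located as in the earlier sketches and is a NIOC v3 distributed switch.
from pyVmomi import vim

resources = dvs.config.infrastructureTrafficResourceConfig
for res in resources:
    if res.key == "virtualMachine":
        res.allocationInfo.reservation = 1000     # step 4: 1000 Mbit/s reserved
    elif res.key == "vmotion":
        res.allocationInfo.shares.level = "high"  # step 5: High shares
        res.allocationInfo.reservation = 500      #         500 Mbit/s reserved
    elif res.key == "nfs":
        res.allocationInfo.shares.level = "low"   # step 6: Low shares
        res.allocationInfo.limit = 200            #         capped at 200 Mbit/s

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion,
    infrastructureTrafficResourceConfig=resources)
dvs.ReconfigureDvs_Task(spec)
```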

Summary

This chapter has covered host and virtual machine networking, including advanced host network settings, vSphere Distributed Switches, and network I/O control. Host networking using VMkernel adapters has a variety of options available to meet the networking requirements of the modern datacenter, including allowing multiple default gateways on the same network and providing the host with multiple TCP/IP stacks. VMkernel adapters are configured per host, requiring care to be taken during configuration to ensure that all hosts are set up correctly.

Distributed switches offer an extensive list of enhancements over standard switches, including centralized management, the ability to mirror ports or forward port traffic to a remote destination, the ability to load balance across physical NICs based on the load, and network I/O control.

NIOC brings the concepts of shares, reservations, and limits to networking, allowing you to guarantee bandwidth to some virtual machines or types of host networking, and to ensure that during contention, network I/O is distributed evenly (or deliberately unevenly, depending on your use case).

Exam Essentials

Know how vSphere Distributed Switches are different from standard switches. Know how the control plane and data plane are different and how the distributed switch model makes management easier. With most settings moved to vCenter control, you can be sure the switch and port group implementation is the same on all hosts. However, uplink and VMkernel ports are configured per host, so be sure to understand where to look to make changes or troubleshoot those components.

Understand how VMkernel adapters are configured for different traffic types. While one VMkernel adapter can carry all of the different traffic types, know how to create a VMkernel adapter for each type of traffic for security and performance, and know how to create new TCP/IP stacks and use multiple default gateways for different networking design considerations.

Know how to add hosts to or remove hosts from a vDS. Know that you can manage hosts with the wizard in the Networking view, manage the hosts' uplinks and VMkernel adapters in the Hosts view, or use the Host Client to manage the host components if vCenter is unavailable.

Know the different load balancing options, including LAGs. Know that port groups have four load balancing options, and understand the differences between them. Also be able to create LAGs using the switch's LACP menu and know the limitations of LAGs.

Understand how automatic rollback works. An automatic rollback will attempt to keep incorrect switch changes from affecting host connectivity. Be aware of what can trigger an automatic rollback and how to identify when it has occurred.

Know how Network I/O Control works and is configured. Network I/O Control adds the concepts of shares, reservations, and limits to host and virtual machine networking. Understand how to configure NIOC, how shares are calculated, and why shares only matter during times of contention.

Review Questions

  1. A virtual distributed switch with two 10 Gbit/s NICs per host has the default system traffic settings and a resource pool with a quota of 500 Mbit/s. There is one virtual machine in the resource pool with network shares set to Low, a reservation of 250 Mbit/s, and a limit of 500 Mbit/s. What change would improve network performance for that virtual machine at all times?

    1. Set the Virtual Machine Traffic type reservation to 1000 Mbit/s.
    2. Set the Limit on the virtual machine to 1000 Mbit/s.
    3. Set the Reservation Quota on the resource pool to 1000 Mbit/s.
    4. Set the Reservation on the virtual machine to 1000 Mbit/s.
  2. What is the simplest way to restrict the traffic for a collection of virtual machines that are all on the same VLAN?

    1. Network I/O Control
    2. Distributed Port Group traffic shaping
    3. Network Protocol Profiles
    4. Traffic filtering and marking
  3. What could account for a virtual machine dropping off the network after moving to a new host via DRS? (Choose two.)

    1. Improper VLAN configuration on the distributed port group
    2. No NIC configured on the virtual machine
    3. Improper VLAN configuration on the physical switch
    4. No NIC configured on the host
  4. What can be used to prevent a virtual machine from communicating with other virtual machines on the same broadcast domain but allow its traffic to route to virtual machines? (Choose two.)

    1. Private VLAN
    2. Virtual switch with no uplinks
    3. Traffic filtering and marking
    4. Network I/O Control
  5. A virtual distributed switch with two 10 Gbit/s NICs per host has the default system traffic settings and a resource pool with a quota of 500 Mbit/s. There is one virtual machine in the resource pool with network shares set to Low and a reservation of 250 Mbit/s. What two changes would improve network performance for that virtual machine at all times? (Choose two.)

    1. Set the Virtual Machine Traffic type reservation to 1000 Mbit/s.
    2. Set the Limit on the virtual machine to 1000 Mbit/s.
    3. Set the Reservation Quota on the resource pool to 1000 Mbit/s.
    4. Set the Reservation on the virtual machine to 1000 Mbit/s.
  6. Which are valid services you can enable for a VMkernel adapter?

    1. NFS
    2. iSCSI
    3. vSAN
    4. NIOC
  7. Which VMkernel service is responsible for incoming vSphere replication traffic?

    1. Management
    2. vSphere Replication
    3. vSphere Replication NFC
    4. vSphere Replication Appliance
  8. What can be used to ensure that a host with two NICs has the proper connectivity? (Choose two.)

    1. LLDP
    2. vmkping
    3. Beacon probing
    4. NetFlow
  9. An administrator wishes to improve the performance of virtual machine cloning. Which option could be one step in improving that performance?

    1. Configure traffic shaping on the VM's port group.
    2. Create a new VMkernel adapter for Provisioning traffic.
    3. Upgrade the distributed switch to version 6.5.0.
    4. Set the virtual machine traffic to High in NIOC.
  10. Which load balance option is not available when using software iSCSI initiator multipathing?

    1. Route Based on IP Hash
    2. Explicit
    3. Route Based on Physical NIC Load
    4. LACP LAG
  11. A datacenter has separate networks for management, iSCSI, and Network Attached Storage (NAS) traffic. Both management and NAS traffic requires routing to remote networks, but those networks do not route to each other. What option would allow an ESXi host to use these networks?

    1. Custom TCP/IP stacks
    2. Override default gateway
    3. NIOC
    4. Traffic filtering and marking
  12. Consider the figure here. A datacenter has separate networks for management and Network Attached Storage (NAS) traffic. Both management and NAS traffic requires routing to remote networks, and both have gateways on the same network. What option would allow an ESXi hosts to use these networks with the fewest steps possible?

    Screenshot_197
    1. Custom TCP/IP stacks
    2. Override default gateway
    3. NIOC
    4. Traffic filtering and marking
  13. Which object sets the maximum number of uplinks a host can use for a virtual distributed switch?

    1. Port group on the vDS
    2. Virtual switch on the host
    3. Uplink group on the vDS
    4. vSphere Distributed Switch (vDS)
  14. Which port group mode should be used for a network-monitoring virtual appliance that needs access to Ethernet frames with VLAN headers?

    1. External Switch Tagging (EST)
    2. Virtual Switch Tagging (VST)
    3. Virtual Guest Tagging (VGT)
    4. Private VLAN (PVLAN)
  15. Which port group option should be used when the pNIC is connected to an access port?

    1. External Switch Tagging (EST)
    2. Virtual Switch Tagging (VST)
    3. Virtual Guest Tagging (VGT)
    4. Private VLAN mode (PVLAN)
  16. Which port group settings should be enabled to allow any VM in the port group to receive packets not intended for it?

    1. Traffic filtering and marking
    2. Promiscuous mode
    3. Virtual Guest Tagging (VGT) mode
    4. Private VLAN (PVM) mode
  17. If you want a particular port group to only use one NIC regardless of fail conditions, which object and setting would you choose?

    1. Uplink group, Use Explicit Failover Order
    2. Port group, Use Explicit Failover Order
    3. Port group, Route Based on Physical NIC
    4. Uplink group, Route Based on Physical NIC
  18. Several VMs on different hosts connected to the same port group lose connectivity during a network test. All hosts have two NICs connected to the switch. What setting could cause the problem?

    1. Route Based on IP Hash with the switches in Etherchannel mode
    2. TSO Offload not enabled on all hosts
    3. Explicit Failover configured using Unused
    4. Jumbo frames enabled on the physical switches but not the vDS
  19. What would prevent you from setting the Virtual Machine Traffic reservation above 0 Mbit/s?

    1. NIOC not enabled
    2. NIOC version 2
    3. No pNICs connected
    4. No port groups created
  20. Which settings on the physical switches could cause VMs to behave differently on different hosts? (Choose two.)

    1. Jumbo frames
    2. CDP/LLDP
    3. VLAN tagging
    4. NetFlow