VMware 2V0-731 VCP Cloud Management – VMware Cloud on AWS Services Overview
VMware Software-Defined Data Center (SDDC) overview
When it comes to the software-defined data centre (SDDC) and its implementation in VMware Cloud on AWS, it’s important to understand that the SDDC software is essentially delivered as an on-demand service. It is powered by VMware Cloud Foundation, which integrates vSphere, vSAN, and NSX, and it is optimised to run on dedicated, elastic, bare-metal AWS infrastructure with VMware vCenter management capabilities.
The software-defined data centre stack from VMware includes the VMware vSphere, VMware vSAN, and VMware NSX virtualization technologies. From a customer perspective, all of these areas of the stack are fully managed and maintained by VMware. The software-defined approach extends vSphere’s virtualization well beyond compute to the network and storage, making data centre services as easy and inexpensive to configure and manage as virtual machines. It is also purpose-built, unified, and policy-driven, and it automates and manages its services across heterogeneous cloud implementations.
When it comes to choosing the right cluster configuration in VMware Cloud on AWS, it’s important to understand that there are two supported cluster configurations: the base cluster configuration and the scaled-out cluster configuration. When you’re looking at a specific configuration, it’s important to understand the scalability requirements of your VMware SDDC. You should consider the amount of memory that your applications will require.
You also want to look at the number of hosts it will take to support that memory, as well as the amount of storage you may require. The configurations scale from 18 cores up to 144 cores for the base cluster and from 18 cores up to 576 cores for the scaled-out cluster. Once again, check the currently supported VMware cluster configurations at the time of deployment, since these are subject to change at any time with this managed service. Also included in the subscription are the cluster features High Availability (HA), DRS, and vSAN, which are fully supported and integrated into the SDDC service. vSphere DRS and vSphere HA are enabled and configured to provide the best availability and resource utilization. This is managed by VMware, so it is not something that you as a customer need to be concerned with.
A vSAN cluster is also configured for storage. On a note about vSAN, it’s important to understand that if you need to scale out your storage, this is done by adding additional hosts to your clusters. Hosts must be added in pairs: two, four, or six at a time; you can’t add an odd number of hosts. Host failure remediation is the responsibility of VMware. As for cluster resource pools, the cloud SDDC is configured with two DRS resource pools: one contains the management VMs needed to operate the cloud SDDC, and the second is created to manage customer workloads. This approach to the DRS configuration enables VMware to provide you with the best agility and scalability, but also reliability. Customers do have the option to create additional child resource pools; remember that a child pool is dependent on its parent pool. There are also additional add-on services that VMware supports, at an extra charge to your subscription. When considering VMware Site Recovery or VMware Hybrid Cloud Extension support, it’s critical to understand the proper use cases for these additional services.
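The add-in-pairs rule above is easy to encode. Here is a minimal sketch (illustrative only, not an official API) that checks whether a proposed host addition follows the rule:

```python
def valid_host_addition(additional_hosts: int) -> bool:
    """Hosts are added to a VMware Cloud on AWS cluster in pairs
    (2, 4, 6, ...), so the count must be a positive even number."""
    return additional_hosts > 0 and additional_hosts % 2 == 0

# Adding 2 or 4 hosts is fine; adding 3 is not.
print(valid_host_addition(2), valid_host_addition(3))  # True False
```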
VMware Site Recovery essentially provides you with an expansive and simplified disaster recovery operation for your VMware environment, optimised for VMware Cloud on AWS. The other add-on capability is the VMware Hybrid Cloud Extension (HCX). The Cloud Extension is a software-as-a-service solution that delivers secure and seamless application mobility, as well as hybrid infrastructure connectivity, across vSphere 5.0+ environments on premises and in the cloud. The goal of the extension is to abstract your on-premises and cloud resources and then present them to the apps as a continuous hybrid resource.
In this module, we’ll discuss how vSAN is implemented in VMware Cloud on AWS. vSAN is essentially a private storage platform that uses flash SSDs on the bare-metal hosts. Each SDDC cluster includes a vSAN all-flash array, and each host provides a total of ten terabytes of raw capacity for the virtual machines to consume. Deduplication, compression, and erasure coding are all included in vSAN. When it comes to the configuration of a cluster, it’s important to understand that the default cluster essentially provides 40 terabytes of raw capacity. If you want to scale the raw capacity up or down, you adjust the number of hosts in the cluster.
The capacity a virtual machine consumes depends on its configured storage policy. You can adjust the policies if you want; by default, the default policy specified in the VMware storage policy is used, which applies a RAID-1 fault tolerance method. RAID-5 and RAID-6 are also available with vSAN. Each ESXi host contains eight NVMe devices, distributed across two vSAN disc groups. Within each disc group, the write-caching tier leverages one NVMe device with 1.7 terabytes of storage. Regarding vSAN security: in the first release, datastore encryption was not available, and neither was VM-level encryption with vSphere; these are now available in the second release. What was available in the first release was provided by AWS: the local storage NVMe devices are encrypted at the firmware level. When it comes to the VMC datastores, there are two types.
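To see how the storage policy affects consumption, here is a rough back-of-the-envelope sketch. The 10 TB of raw capacity per host comes from the discussion above; the overhead multipliers are the standard vSAN figures (RAID-1 with FTT=1 writes two full copies, RAID-5 uses a 3+1 stripe, RAID-6 a 4+2 stripe) and ignore slack space and metadata, so treat the results as estimates only:

```python
# Rough usable-capacity estimate per vSAN storage policy.
RAW_TB_PER_HOST = 10  # raw capacity each host contributes (from the text)

# Raw-to-usable overhead factors (standard vSAN figures, simplified):
POLICY_OVERHEAD = {
    "RAID-1 (FTT=1)": 2.0,    # full mirror: two copies of every object
    "RAID-5 (FTT=1)": 4 / 3,  # 3 data + 1 parity
    "RAID-6 (FTT=2)": 1.5,    # 4 data + 2 parity
}

def usable_tb(hosts: int, policy: str) -> float:
    """Estimated usable TB for a cluster, given host count and policy."""
    raw = hosts * RAW_TB_PER_HOST
    return raw / POLICY_OVERHEAD[policy]

# Default 4-host cluster: 40 TB raw, roughly 20 TB usable under RAID-1.
print(usable_tb(4, "RAID-1 (FTT=1)"))  # 20.0
```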
You have the workload datastore, which is managed by the cloud administrator, and the vSAN datastore, which is managed by VMware. The workload datastore essentially provides storage for your workload VMs, templates, ISO images, and any other files that you choose to upload to your SDDC. You also have full permission to browse the datastore, create folders, upload files, delete files, et cetera. The vSAN datastore, however, is used to provide storage for the management VMs in your SDDC, such as your vCenter Server, NSX controllers, et cetera. The management and troubleshooting of this vSAN storage are handled by VMware; you cannot adjust the vSAN cluster settings or monitor the vSAN cluster. As for virtual machine policies, the vSAN policy is defined by the default VM storage policy. You can define additional policies and assign them to either datastore. Once again, be aware that the storage policy can be configured at the cluster level.
NSX is the foundational networking capability of VMware Cloud on AWS. NSX is essentially the SDDC network virtualization platform, delivered as a service, which is the way it’s utilized in this implementation. It is primarily designed to provide VM networking in the cloud SDDC, and it effectively abstracts the Amazon Virtual Private Cloud network as well. VMware NSX originally came from a combination of vCNS, which was VMware’s original networking platform, and NVP, which VMware acquired with Nicira.
At the time of the initial release, users connected to VMware Cloud on AWS via a layer-3 VPN connection; Direct Connect was not supported at release, but with newer versions it now is. Previously, cross-cloud vMotion was not supported, but it is now. From a default configuration standpoint, NSX supports 10-gigabit-per-second NICs on the bare-metal hosts. It features many capabilities, such as switching, routing, firewalling, VPN, and load balancing. With NSX in VMware Cloud on AWS, the customer cannot change the compute gateway or the DLR configurations. The customer can, however, supply the IP and subnet ranges that are going to be used for their configuration. NSX security is essentially micro-segmentation for cloud workloads.
You define the security policy once and then apply it to all workloads. NSX provides logical networks with firewall security capabilities, and it supports NAT and VPN as well. With this specific implementation in VMware Cloud on AWS, there is a networking mode called Simplified Networking Mode. This allows you to access the console while VMware pre-creates a default network for you. Essentially, this provides network connectivity by performing tasks such as establishing virtual private network connectivity and configuring firewall access rules. This is a prescriptive consumption model intended to ease the networking learning curve: it allows you, the customer, to not have to worry about provisioning networking at a highly technical level.
Let’s go ahead and discuss the operational model of the VMware Cloud on AWS service. The offering is sold and operated as a service, which means that VMware manages and supports it. You, as the customer, do not have to worry about maintaining NSX or vSAN, for example; VMware fully maintains the systems. VMware is the sole contact point for customers: if there’s a support issue, you call VMware, not AWS. The VMware Cloud on AWS service also retains administrator privileges. In other words, VMware maintains full administrator privileges for the infrastructure that is managed in the AWS cloud. Remember that this is a managed service, and VMware is reselling this AWS service; essentially, it is a hosted service as well.
The operational model is as follows: vSphere DRS and vSphere HA are enabled and configured to provide the best availability and resource utilization, and the vSAN cluster is configured for storage. If there are host failures, remediation is the responsibility of VMware. VMware Cloud on AWS is responsible for the cloud SDDC software patching, updates, and monitoring of resources, but also for hardware and software failures. In addition to the traditional vCenter Server user model, the VMware Cloud on AWS service introduces a new cloud administrator role, which will be covered in greater detail in the following modules. The customer cloud administrator has full control over their workloads while having a read-only view of the management workloads and the infrastructure itself. The customer cloud administrator cannot reconfigure appliances, due to the prescriptive service being provided. In other words, the consoles, appliances, and other servers will be maintained by VMware and not the customer. Customers cannot use root access, install VIBs (vSphere Installation Bundles), or make other administrative changes.
With VMware Cloud on AWS, there is a new link mode called Hybrid Linked Mode. It’s critical to understand the advantages of Hybrid Linked Mode and why you might want to use this new feature. Hybrid Linked Mode enables the management of resources on premises and in the cloud through a single pane of glass. Without it, you would have two separate vCenter Server instances and would have to view them separately in two different windows. Hybrid Linked Mode is different from Enhanced Linked Mode (ELM): ELM is the vCenter Enhanced Linked Mode introduced in vSphere 6.0, which replaced the existing Linked Mode capabilities, whereas Hybrid Linked Mode is meant to provide a consistent experience between your VMC and your on-premises infrastructure.
With Hybrid Linked Mode, you can log in to the vCenter Server instances in your SDDC using your on-premises credentials. You can also view and manage the inventories of both your VMware Cloud on AWS resources and your on-premises resources, migrate workloads, and share tags and tag categories across vCenter instances. Some important notes around Hybrid Linked Mode: the SSO domains will be different between on-premises and the VMC, but they have a one-to-one relationship. This essentially means that you will have two domains, but you will be able to replicate your on-site domain to your VMC. For a single pane of glass, you need to log in to your VMC vCenter Server using the HTML client that the VMC trusts, and data is synchronized unidirectionally from on premises to the VMC.
Some other important notes: it can be installed or uninstalled at any time, and roles are not replicated either; however, the cloud admin role for the VMC, as well as its configuration, is built in. As for additional prerequisites, do ensure that you meet the following before configuring HLM: add an identity source to the domain, add a cloud admin group, and link it to your on-premises environment; you can also unlink an SSO domain. For example, you could use the h5 client, as it’s known (the HTML5 client), go over to the Administration > Hybrid Cloud > Linked Domains menus, and then go through the wizard to set up your Hybrid Linked Mode configuration. It’s essentially three steps, categorized for you, so you can easily set this up in two or three minutes.
Let’s discuss the migration of virtual machines to VMware Cloud on AWS. There are several factors and several approaches you’ll want to consider when migrating virtual machines to the VMC, the VMware Cloud. Essentially, the three types of migration are vMotion, bulk migration, and cold migration. vMotion essentially migrates your virtual machines while they are powered on, without any downtime; this is considered a hot migration or live migration. It is generally the best option for migrating small workloads without having to accept any downtime during your migration process. Bulk migration is performed in a lot of cases when you need to migrate a large number of VMs, when you have different configurations to deal with, or when you can migrate with a minimal amount of downtime.
Essentially, you may need to turn VMs off or on, and bulk migration allows you to customise that process. Cold migration is essentially moving your powered-off VMs from one host to another; it is a good option when you can tolerate downtime, since it keeps the migration process simple. When it comes to HCX, the VMware Hybrid Cloud Extension, this is typically used for bulk migration. When you use HCX to perform a bulk migration, you can migrate VMs at a larger scale between your on-premises data centre and the cloud SDDC. When it comes to vMotion, it’s important to understand that there are several restrictions and configuration challenges you’ll want to appreciate before you consider using it.
With vMotion, you’ll want to be aware of specific restrictions involving EVC mode, virtual switches, and Broadwell chipsets. Keep in mind that VMs that use standard virtual switches, for example, cannot typically be migrated back to the on-premises data center after being migrated to the SDDC. As for Broadwell chipsets, it’s important to note that when you’re migrating VMs on these chipsets, they may have to be power cycled, and you can’t migrate them back to on-premises hosts or clusters from the SDDC. So look at these limitations and restrictions and make sure you’re aware of them.
With cold migrations, there are no such limitations to be aware of, so plan your migration accordingly. When it comes to migration options, choose the best one based on two criteria. The main factor is how many VMs you want to migrate: in general, if you’re only moving a few VMs, such as fewer than 20 or 30, vMotion may be a good option. A lot of the concern with vMotion is not just the number of VMs, but also the amount of time and bandwidth you have available. The second factor to consider is the interface you want to use: whether you’re going to use the command line, APIs, or PowerCLI, for example, and whether vMotion will work for you. If you want to use HCX, you’ll need to use bulk migration, perhaps through the API or PowerCLI; you can also look at a cold migration. However, when you’re migrating with vMotion, you’ll want to use the user interface.
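The decision logic above can be sketched as a simple helper. This is purely illustrative: the function name is made up, and the 30-VM threshold is just the rough figure mentioned here, not an official limit:

```python
def choose_migration(vm_count: int, downtime_ok: bool) -> str:
    """Pick a migration type from the two criteria discussed above:
    VM count and whether downtime is acceptable."""
    if downtime_ok:
        return "cold migration"        # simplest when downtime is acceptable
    if vm_count <= 30:
        return "vMotion"               # live migration for small batches
    return "bulk migration (HCX)"      # larger-scale moves, minimal downtime

print(choose_migration(10, downtime_ok=False))   # vMotion
print(choose_migration(200, downtime_ok=False))  # bulk migration (HCX)
```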
In this module, I’m over at the VMware Cloud on AWS hands-on lab. What we want to do is essentially show you how to sign up and log in to the hands-on lab. Let’s proceed. You can see that we have the VMware Cloud on AWS lab here. If you have trouble finding it, the easiest way is to go over to your search engine and type in “VMware cloud on AWS hands-on lab,” and it should bring you directly to the page. If you haven’t signed up for a VMware Hands-on Lab account, you will need to do that before you can use the lab.
Go over here to register, fill in your name and information, and go through the process; essentially, it’s a few steps. In this case, I already have an account, and it’s fairly straightforward, so there’s no need to walk you through the whole process. You do need to agree to the end-user license, and the good news is that no credit card is required. Let’s go ahead and go over here to sign in. I’m already pretty much signed in, so let’s select Login. Now, when I select Start Lab, this will bring up the hands-on lab console. It is loading the console, and now it is fully up and operational. Before we begin, we want to make certain that you understand the layout and operation of the lab. The VMware hands-on lab gives you an hour and 30 minutes; however, if you need more time, you can extend up to eight times in 1-hour increments.
When it comes to the workflow of the lab itself, what you want to do is go over and look at the manual here and its different subsections; I encourage you to go through each one to make sure you understand the content and how to get started. When you go to the lab scenario, you will see that it gives you a scenario about a company, and during the lab you’ll need to set up a specific configuration of the AWS cloud to meet the requirements of the customer, go through the solution, and read the documentation. A lot of it is basic, so it’s pretty quick to read through. You’ll see that it goes through all the documentation that you might want to look at. If I keep scrolling over to get to the start of the lab, which is somewhere around page 30 or so (unfortunately there is no easy way to search, so you just keep on clicking, and of course they would encourage you to read all of this), in this case, for time’s sake, we want to just go over to the lab. So we do that here and go to the student check-in. The student check-in asks you to go through and check into the lab. In the next demo, we’ll go through the VMware Cloud on AWS setup by initiating the student check-in process.