
Nested ESXi with OpenStack

For all of you who want to run VMware’s ESXi 5.x on an OpenStack cloud that uses vSphere as the hypervisor, I have a tiny little tip that might save you some research. The difficulty I faced was: “How do I enable nesting (vHV) for an OpenStack-deployed instance?”. I was almost going to write a script to add

featMask.vm.hv.capable="Min:1"
vhv.enable="True"

and run it after the “nova boot” command, and then I found what I am going to show you now.
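For the record, here is roughly what that script would have looked like: a minimal pyVmomi sketch (not what I ended up using) that appends the two extraConfig options to an already deployed instance via the vSphere API. The vCenter host, the credentials and the instance name are placeholders.

# Sketch only: add the two vHV-related extraConfig options to a deployed
# instance via pyVmomi. Connection details and the VM name are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "nested-esxi-instance")

    spec = vim.vm.ConfigSpec(extraConfig=[
        vim.option.OptionValue(key="featMask.vm.hv.capable", value="Min:1"),
        vim.option.OptionValue(key="vhv.enable", value="True"),
    ])
    vm.ReconfigVM_Task(spec=spec)  # takes effect after a power cycle
finally:
    Disconnect(si)

Luckily, there is a much cleaner way.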

Remember that when uploading an image into Glance you can specify key/value pairs called properties? Well, you are probably already aware of this:

root@controller:~# glance image-show 9eb827d3-7657-4bd5-a6fa-61de7d12f649
+-------------------------------+--------------------------------------+
| Property                      | Value                                |
+-------------------------------+--------------------------------------+
| Property 'vmware_adaptertype' | ide                                  |
| Property 'vmware_disktype'    | sparse                               |
| Property 'vmware_ostype'      | windows7Server64Guest                |
| checksum                      | ced321a1d2aadea42abfa8a7b944a0ef     |
| container_format              | bare                                 |
| created_at                    | 2014-01-15T22:35:14                  |
| deleted                       | False                                |
| disk_format                   | vmdk                                 |
| id                            | 9eb827d3-7657-4bd5-a6fa-61de7d12f649 |
| is_public                     | True                                 |
| min_disk                      | 0                                    |
| min_ram                       | 0                                    |
| name                          | Windows 2012 R2 Std                  |
| protected                     | False                                |
| size                          | 10493231104                          |
| status                        | active                               |
| updated_at                    | 2014-01-15T22:37:42                  |
+-------------------------------+--------------------------------------+
root@controller:~#

At this point, take a look at the vmware_ostype property, which is set to “windows7Server64Guest”. This value is passed to the vSphere API when an image is deployed through ESXi’s API (VMwareESXDriver) or the vCenter API (VMwareVCDriver). Looking at the vSphere API/SDK Reference you can find the valid values, and since vSphere 5.0 the list includes “vmkernelGuest” and “vmkernel5Guest”, representing ESXi 4.x and 5.x respectively. According to my testing, this works with Nova’s VMwareESXDriver as well as the VMwareVCDriver.

This is how you change the property in case you set it differently:

# glance image-update --property "vmware_ostype=vmkernel5Guest" IMAGE
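If you prefer setting the property from Python rather than the CLI, python-glanceclient can do the same thing. A small sketch against the v1 Glance API (the endpoint and the token are placeholders):

# Sketch: set vmware_ostype through python-glanceclient (v1 API).
# The Glance endpoint and the auth token are placeholders.
from glanceclient import Client

glance = Client('1', endpoint='http://controller:9292', token='ADMIN_TOKEN')
glance.images.update('9eb827d3-7657-4bd5-a6fa-61de7d12f649',
                     properties={'vmware_ostype': 'vmkernel5Guest'})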

And to complete the picture, this is the code in Nova that implements this functionality:

def get_vm_create_spec(client_factory, instance, name, data_store_name,
                       vif_infos, os_type="otherGuest"):
    """Builds the VM Create spec."""
    config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
    config_spec.name = name
    config_spec.guestId = os_type
    # The name is the unique identifier for the VM. This will either be the
    # instance UUID or the instance UUID with suffix '-rescue' for VM's that
    # are in rescue mode
    config_spec.instanceUuid = name

    # Allow nested ESX instances to host 64 bit VMs.
    if os_type == "vmkernel5Guest":
        config_spec.nestedHVEnabled = "True"

You can see that vHV is only enabled if the os_type is set to vmkernel5Guest. I would assume this means you cannot nest Hyper-V or KVM this way, but I haven’t validated that.
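If you want to double-check on the vSphere side that the flag actually ended up on a deployed instance, nestedHVEnabled is visible in the VM’s config. Another small pyVmomi sketch, again with placeholder connection details:

# Sketch: print the nestedHVEnabled flag of all VMs deployed with a
# vmkernel* guest OS type. Connection details are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config and vm.config.guestId.startswith("vmkernel"):
            print(vm.name, "nestedHVEnabled =", vm.config.nestedHVEnabled)
finally:
    Disconnect(si)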

Pretty good already. But what I am really looking for is running ESXi on top of KVM, as I need nested ESXi combined with Neutron to create properly isolated tenant networks. The most current progress on this can probably be found in the VMware Community.

Nesting ESXi on ESXi 5.1 in vCloud Director 5.1

Nesting hypervisors – especially ESXi – is becoming more and more popular for lab, testing and development purposes. I just set up such an environment using vCloud Director and found some bad news:

We used vCloud Director 5.1 with vSphere 5.1 underneath and everything running on a Distributed vSwitch at version 5.1, too (it has to be, as vCloud Director needs VXLAN). Deploying ESXi 5.1 in vCloud Director 5.1 has become quite easy, as the option to expose hardware-assisted virtualization (vHV) to the guest has made its way into the GUI.

Networking

But there are two issues with networking:

  1. The virtual port group that the virtual ESXi hosts connect to has to have promiscuous mode enabled.
  2. That same port group has to have forged transmits enabled, too!

While issue number one can be resolved by editing the vCloud Director database (read http://www.virtuallyghetto.com/2011/10/missing-piece-in-creating-your-own.html for more information), problem number two is very bad news. Why? Well, the latest version of the Distributed vSwitch rejects all three security policies by default. That means promiscuous mode, MAC address changes and forged transmits are all set to “reject”. In earlier versions those were set to reject/accept/accept, remember!? (Refer to http://kb.vmware.com/kb/2030982 to see how the default settings evolved.) So forged transmits used to be accepted already.

As a result, editing the vCloud Director database to enable promiscuous mode on provisioned port groups is not enough anymore. Right now, the only solution is to reconfigure the port groups manually or via script every time a vApp with virtual ESXi hosts is started.
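If you go the script route, here is roughly what it could look like: a pyVmomi sketch that sets promiscuous mode and forged transmits to accept on a given dvPortgroup. The portgroup name and the connection details are placeholders, and you would still have to trigger this after every vApp deployment – which is exactly the annoying part.

# Sketch: allow promiscuous mode and forged transmits on a dvPortgroup.
# Portgroup name and connection details are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    pg = next(p for p in view.view if p.name == "vcd-nested-esxi-pg")

    security = vim.dvs.VmwareDistributedVirtualSwitch.SecurityPolicy(
        inherited=False,
        allowPromiscuous=vim.BoolPolicy(inherited=False, value=True),
        forgedTransmits=vim.BoolPolicy(inherited=False, value=True))

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=pg.config.configVersion,
        defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            securityPolicy=security))
    pg.ReconfigureDVPortgroup_Task(spec)
finally:
    Disconnect(si)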

That really sucks, my friends! And I think we need a solution here quick! So in case you know one, please share!

VM Configuration

SCSI Adapter

I followed the instructions on http://www.virtuallyghetto.com/2011/10/missing-piece-in-creating-your-own.html and executed the following SQL commands to add “ESXi 5.x” as an OS type in vCloud Director:

INSERT INTO guest_osfamily (family,family_id) VALUES ('VMware ESX/ESXi',6);
INSERT INTO guest_os_type (guestos_id,display_name, internal_name, family_id, is_supported, is_64bit, min_disk_gb, min_memory_mb, min_hw_version, supports_cpu_hotadd, supports_mem_hotadd, diskadapter_id, max_cpu_supported, is_personalization_enabled, is_personalization_auto, is_sysprep_supported, is_sysprep_os_packaged, cim_id, cim_version) VALUES (seq_config.NextVal,'ESXi 4.x', 'vmkernelGuest', 6, 1, 1, 8, 3072, 7,1, 1, 4, 8, 0, 0, 0, 0, 107, 40);
INSERT INTO guest_os_type (guestos_id,display_name, internal_name, family_id, is_supported, is_64bit, min_disk_gb, min_memory_mb, min_hw_version, supports_cpu_hotadd, supports_mem_hotadd, diskadapter_id, max_cpu_supported, is_personalization_enabled, is_personalization_auto, is_sysprep_supported, is_sysprep_os_packaged, cim_id, cim_version) VALUES (seq_config.NextVal, 'ESXi 5.x', 'vmkernel5Guest', 6, 1, 1, 8, 3072, 7,1, 1, 4, 8, 0, 0, 0, 0, 107, 50);

This sets the diskadapter_id value to 4, which refers to the LSI Logic SAS adapter. The problem with this adapter and virtual ESXi hosts is that disks on this controller appear as remote disks to ESXi:

[Screenshot: the disk on the LSI Logic SAS controller shows up as a remote disk in ESXi]

This might not seem like a big problem, as ESXi still installs on the disk it can see and boots properly. But when working with Host Profiles you will run into trouble: a Host Profile grabbed from this ESXi host will include this disk and will then fail to find it when you apply the profile to a different host. As a result, you would have to dig into the Host Profile and disable sub-profiles before the other host shows up as compliant.

To avoid this problem, use LSI Logic Parallel instead. If you haven’t already executed the big SQL statements above, execute these instead:

INSERT INTO guest_osfamily (family,family_id) VALUES ('VMware ESX/ESXi',6);
INSERT INTO guest_os_type (guestos_id,display_name, internal_name, family_id, is_supported, is_64bit, min_disk_gb, min_memory_mb, min_hw_version, supports_cpu_hotadd, supports_mem_hotadd, diskadapter_id, max_cpu_supported, is_personalization_enabled, is_personalization_auto, is_sysprep_supported, is_sysprep_os_packaged, cim_id, cim_version) VALUES (seq_config.NextVal,'ESXi 4.x', 'vmkernelGuest', 6, 1, 1, 8, 3072, 7,1, 1, 3, 8, 0, 0, 0, 0, 107, 40);
INSERT INTO guest_os_type (guestos_id,display_name, internal_name, family_id, is_supported, is_64bit, min_disk_gb, min_memory_mb, min_hw_version, supports_cpu_hotadd, supports_mem_hotadd, diskadapter_id, max_cpu_supported, is_personalization_enabled, is_personalization_auto, is_sysprep_supported, is_sysprep_os_packaged, cim_id, cim_version) VALUES (seq_config.NextVal, 'ESXi 5.x', 'vmkernel5Guest', 6, 1, 1, 8, 3072, 7,1, 1, 3, 8, 0, 0, 0, 0, 107, 50);

Should those entries already be in your database, execute this to change the diskadapter_id value from 4 to 3:

UPDATE guest_os_type SET diskadapter_id=3 WHERE display_name='ESXi 4.x';
UPDATE guest_os_type SET diskadapter_id=3 WHERE display_name='ESXi 5.x';

Don’t forget to restart the vcd service after that:

$ service vmware-vcd restart

IP Assignment

Every virtual NIC attached to a vCloud Director-controlled VM is most likely going to be connected to a virtual network. Once connected, an IP address has to be assigned. This assignment can be done in one of the following ways:

  • Static – IP Pool
  • Static – Manual
  • DHCP

But where is the “None” option? Here are some reasons for having “None” as an option:

  1. What if I want to use NIC teaming on a vSwitch for the management network? In that case the same IP address would be valid for both NICs, which cannot be configured here.
  2. With ESXi, no NIC ever has an IP address configured directly on it, so none of the options above applies!
  3. For VM networks, we probably use the NICs only to forward VM traffic. ESXi itself might not even have an IP address in that network.

So far, I have used DHCP for NICs that should not have an IP address at all, and “Static – Manual” for the first NIC of a NIC teaming group that carries VMkernel port traffic. That works – but it’s not perfect.

Hope that helped!
