
Nested ESXi with OpenStack

For all of you who want to run VMware's ESXi 5.x on an OpenStack cloud that uses vSphere as its hypervisor, I have a little tip that might save you some research. The difficulty I faced was: how do I enable nesting (vHV) for an OpenStack-deployed instance? I was almost going to write a script that adds

featMask.vm.hv.capable="Min:1"
vhv.enable="True"

to the instance's .vmx file after every "nova boot", but then I found what I am going to show you now.
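
For the record, that workaround would have looked roughly like the sketch below, run on the ESXi host with the instance powered off (the datastore path and VM ID are placeholders, not actual values):

VMX=/vmfs/volumes/datastore1/<instance-uuid>/<instance-uuid>.vmx   # placeholder path
echo 'featMask.vm.hv.capable = "Min:1"' >> $VMX
echo 'vhv.enable = "TRUE"' >> $VMX
vim-cmd vmsvc/getallvms       # look up the numeric VM ID
vim-cmd vmsvc/reload <vmid>   # make ESXi re-read the edited .vmx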

Remember that when uploading an image into Glance you can specify key/value pairs called properties? Well, you are probably already aware of this:

root@controller:~# glance image-show 9eb827d3-7657-4bd5-a6fa-61de7d12f649
+-------------------------------+--------------------------------------+
| Property                      | Value                                |
+-------------------------------+--------------------------------------+
| Property 'vmware_adaptertype' | ide                                  |
| Property 'vmware_disktype'    | sparse                               |
| Property 'vmware_ostype'      | windows7Server64Guest                |
| checksum                      | ced321a1d2aadea42abfa8a7b944a0ef     |
| container_format              | bare                                 |
| created_at                    | 2014-01-15T22:35:14                  |
| deleted                       | False                                |
| disk_format                   | vmdk                                 |
| id                            | 9eb827d3-7657-4bd5-a6fa-61de7d12f649 |
| is_public                     | True                                 |
| min_disk                      | 0                                    |
| min_ram                       | 0                                    |
| name                          | Windows 2012 R2 Std                  |
| protected                     | False                                |
| size                          | 10493231104                          |
| status                        | active                               |
| updated_at                    | 2014-01-15T22:37:42                  |
+-------------------------------+--------------------------------------+
root@controller:~#
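
Those properties are set at upload time, by the way; the image above, for instance, could have been created with something like this (a sketch, the file name is made up):

# glance image-create --name "Windows 2012 R2 Std" --is-public True \
    --disk-format vmdk --container-format bare \
    --property vmware_adaptertype=ide \
    --property vmware_disktype=sparse \
    --property vmware_ostype=windows7Server64Guest < win2012r2.vmdk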

At this point, take a look at the vmware_ostype property, which is set to "windows7Server64Guest". This value is passed to the vSphere API as the guest OS identifier when an image is deployed through ESXi's API (VMwareESXDriver) or the vCenter API (VMwareVCDriver). The vSphere API/SDK Reference lists the valid values, and since vSphere 5.0 the list includes "vmkernel4Guest" and "vmkernel5Guest", representing ESXi 4.x and 5.x respectively. According to my testing, this works with Nova's VMwareESXDriver as well as with its VMwareVCDriver.

This is how you change the property in case you set it differently:

# glance image-update --property "vmware_ostype=vmkernel5Guest" IMAGE

And to complete the picture, this is the code in Nova that implements this functionality:

def get_vm_create_spec(client_factory, instance, name, data_store_name,
                       vif_infos, os_type="otherGuest"):
    """Builds the VM Create spec."""
    config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
    config_spec.name = name
    config_spec.guestId = os_type
    # The name is the unique identifier for the VM. This will either be the
    # instance UUID or the instance UUID with suffix '-rescue' for VM's that
    # are in rescue mode
    config_spec.instanceUuid = name

    # Allow nested ESX instances to host 64 bit VMs.
    if os_type == "vmkernel5Guest":
        config_spec.nestedHVEnabled = "True"

You can see that vHV is only enabled if os_type is set to vmkernel5Guest. I would assume this means you cannot nest Hyper-V or KVM this way, but I haven't validated that.

Pretty good already. But what I am really looking for is running ESXi on top of KVM, as I need nested ESXi combined with Neutron to create properly isolated tenant networks. The most recent progress on this can probably be found in the VMware Community.
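
If you want to experiment in that direction yourself, the KVM side of nesting starts roughly like this (a sketch for Intel hosts; the file names are made up, and getting ESXi to actually install on KVM needs further tweaking):

echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf   # persist nested VT-x
modprobe -r kvm_intel && modprobe kvm_intel                           # reload the module
cat /sys/module/kvm_intel/parameters/nested                           # should now print Y
# expose the VMX flag to the guest by passing through the host CPU model:
qemu-system-x86_64 -enable-kvm -cpu host -m 4096 -hda esxi-disk.img -cdrom ESXi-installer.iso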

Testing vSAN in a Nested vSphere Environment

The new vSAN feature coming up soon REALLY excites me! That is such a cool feature 8-)! Of course, I was interested in testing it in my lab right away when I could get access to it, and I faced a problem: vSAN requires a locally attached SSD drive. Well, I guess most of us build virtual (nested) labs, and right now vSphere cannot be configured to present a virtual SSD to a VM. Damn! Fortunately, there is a way to fake one ;-) I remembered that it was possible to mark a disk as SSD in case ESXi didn't recognize it as one. So I thought maybe we can use this to fake an SSD for virtual ESXi hosts, and guess what: yes, we can!

Recipe:

  1. Get a shell on your ESXi host (ESXi Shell or SSH).
  2. Find the canonical name of the locally attached regular disk.
  3. Create a claim rule that tags the drive as SSD.
  4. Reclaim the device.
  5. Verify.

Find the canonical name of the locally attached regular disk:

~ # esxcli storage core device list | grep -E "(^\s+Display Name)|(^\s+Size)|SSD"
Display Name: QUADSTOR iSCSI Disk (naa.6ed5603489a66917daa052a5de9197ad)
Size: 204800
Is SSD: false
Display Name: Local VMware Disk (mpx.vmhba1:C0:T1:L0)
Size: 8192
Is SSD: false
Display Name: Local VMware Disk (mpx.vmhba1:C0:T0:L0)
Size: 16384
Is SSD: false
Display Name: Local NECVMWar CD-ROM (mpx.vmhba0:C0:T0:L0)
Size: 300
Is SSD: false
Display Name: QUADSTOR iSCSI Disk (naa.6edd1360f61f663586050a01b6571f84)
Size: 204800
Is SSD: false

Create a claim rule that tags the drive as SSD:

~ # esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba1:C0:T1:L0 --option=enable_ssd

Reclaim the device:

~ # esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T1:L0

Verify:

~ # esxcli storage core device list | grep -E "(^\s+Display Name)|(^\s+Size)|SSD"
Display Name: QUADSTOR iSCSI Disk (naa.6ed5603489a66917daa052a5de9197ad)
Size: 204800
Is SSD: false
Display Name: Local VMware Disk (mpx.vmhba1:C0:T1:L0)
Size: 8192
Is SSD: true
Display Name: Local VMware Disk (mpx.vmhba1:C0:T0:L0)
Size: 16384
Is SSD: false
Display Name: Local NECVMWar CD-ROM (mpx.vmhba0:C0:T0:L0)
Size: 300
Is SSD: false
Display Name: QUADSTOR iSCSI Disk (naa.6edd1360f61f663586050a01b6571f84)
Size: 204800
Is SSD: false
(Credit for the SSD trick: http://www.virtuallyghetto.com/2011/07/how-to-trick-esxi-5-in-seeing-ssd.html)
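
By the way, should you ever want to undo the fake, the claim rule can be removed again and the device reclaimed (a sketch, reusing the device name from above; the device must not be in use):

~ # esxcli storage nmp satp rule remove --satp VMW_SATP_LOCAL --device mpx.vmhba1:C0:T1:L0 --option=enable_ssd
~ # esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T1:L0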


Tadaaa! This way we can test vSAN as well as other cool features already around, such as Swap to SSD and Host Caching in VMware Horizon View VDI environments! Awesome!

For simulating the vSAN feature in a nested environment, just use the detailed guide from David Hill at virtual-blog.

Once you have followed those steps, you will see a shared vSAN datastore and the corresponding Storage Provider entries.

[Screenshot: the shared vSAN datastore]
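
The cluster state can also be checked from the shell (assuming the esxcli vsan namespace that ships with the vSAN-enabled builds):

~ # esxcli vsan cluster get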

Now you are able to create a VM Storage Policy out of the vendor-specific vSAN capabilities.

[Screenshot: VM Storage Policy based on vSAN capabilities]

Just attach the newly created profiles to your virtual machine's hard disk and voilà... that's it ;-) Have fun testing this really cool new feature in your nested environment.

Register for the public beta right here
