
Loading VMware Appliances into OpenStack

Today, I want to talk about the challenges of loading VMware appliances into OpenStack Glance and give you a recipe for doing it.

I migrated my lab to OpenStack, but I need to be able to test the latest VMware products in order to keep myself up to speed. As VMware provides more and more of its software as virtual appliances in OVF or OVA format, it makes sense to have them in Glance for provisioning on OpenStack.

The following procedure is illustrated using the vCAC Appliance 6.0 as an example:


Challenge 1: Format Conversion

If you get to download the appliance as an OVF package you are already one step ahead, because the disks come as separate VMDK files. The OVF descriptor itself is simply an XML-based configuration file and does not contain any information required to run the VM on OpenStack.

OVAs, on the other hand, need to be unpacked first. Luckily, an OVA is nothing but a TAR archive:

$ file VMware-vCAC-Appliance-6.0.0.0-1445145_OVF10.ova
VMware-vCAC-Appliance-6.0.0.0-1445145_OVF10.ova: POSIX tar archive (GNU)
$

So we continue by extracting the archive:

$ tar xvf ../VMware-vCAC-Appliance-6.0.0.0-1445145_OVF10.ova
VMware-vCAC-Appliance-6.0.0.0-1445145_OVF10.ovf
VMware-vCAC-Appliance-6.0.0.0-1445145_OVF10.mf
VMware-vCAC-Appliance-6.0.0.0-1445145_OVF10.cert
VMware-vCAC-Appliance-6.0.0.0-1445145-system.vmdk
VMware-vCAC-Appliance-6.0.0.0-1445145-data.vmdk

The .ovf, .mf and .cert files can be deleted right away; we will not need them anymore. After that, the VMDK files must be converted to QCOW2 or RAW:

$ qemu-img convert -O qcow2 VMware-vCAC-Appliance-6.0.0.0-1445145-system.vmdk VMware-vCAC-Appliance-6.0.0.0-1445145-system.img
$ qemu-img convert -O qcow2 VMware-vCAC-Appliance-6.0.0.0-1445145-data.vmdk VMware-vCAC-Appliance-6.0.0.0-1445145-data.img
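Before moving on, you can sanity-check the conversion with qemu-img info; the file format reported for the new files should be qcow2:

$ qemu-img info VMware-vCAC-Appliance-6.0.0.0-1445145-system.img
$ qemu-img info VMware-vCAC-Appliance-6.0.0.0-1445145-data.img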


Challenge 2: Multi-Disk Virtual Machines

Unfortunately, OpenStack does not support images consisting of multiple disks. Bad luck that VMware has the habit of distributing their appliances with a system disk and a separate data disk (*-system.vmdk and *-data.vmdk). To still load this appliance into Glance, we need to merge the two disks into a single one.

First, we use guestfish to get some information on the filesystems inside the disk images:

$ guestfish

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

> add VMware-vCAC-Appliance-6.0.0.0-1445145-system.img
> add VMware-vCAC-Appliance-6.0.0.0-1445145-data.img
> run
>
> list-filesystems
/dev/sda1: ext3
/dev/sda2: swap
/dev/sdb1: ext3
/dev/sdb2: ext3
>

It is safe to assume that the ext3 filesystem on /dev/sda1 is the root filesystem, so we mount it and have a look at /etc/fstab to see where the other filesystems are supposed to be mounted:

> mount /dev/sda1 /
> cat /etc/fstab
/dev/sda2 swap swap defaults 0 0
/dev/sda1 / ext3 defaults 1 1
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
debugfs /sys/kernel/debug debugfs noauto 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0
/dev/sdb1 /storage/log ext3 rw,nosuid,nodev,exec,auto,nouser,async 0 1
/dev/sdb2 /storage/db ext3 rw,nosuid,nodev,exec,auto,nouser,async 0 1

>

Next, as we want to get rid of sdb altogether, we remove its entries from /etc/fstab

> vi /etc/fstab

and mount /dev/sdb1 to /mnt. This way we can copy the contents over to /storage/log:

> mount /dev/sdb1 /mnt
> cp_a /mnt/core /storage/log/
> cp_a /mnt/vmware /storage/log/

Of course, the same has to happen for /dev/sdb2:

> umount /mnt
> mount /dev/sdb2 /mnt
> cp_a /mnt/pgdata /storage/db/

And we are done! Please note that it is important to use cp_a instead of cp_r, as it preserves permissions and ownership.

> exit
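By the way, if you have to repeat this for several appliances, the whole session can be scripted, since guestfish also reads its commands from stdin. Here is a rough, untested sketch using the file names from this example (the /etc/fstab cleanup is not included and still has to be done, e.g. interactively as above or with virt-edit):

$ guestfish --rw \
    -a VMware-vCAC-Appliance-6.0.0.0-1445145-system.img \
    -a VMware-vCAC-Appliance-6.0.0.0-1445145-data.img <<'EOF'
run
# mount the root filesystem and copy the data-disk contents onto it
mount /dev/sda1 /
mount /dev/sdb1 /mnt
cp_a /mnt/core /storage/log/
cp_a /mnt/vmware /storage/log/
umount /mnt
mount /dev/sdb2 /mnt
cp_a /mnt/pgdata /storage/db/
umount /mnt
EOF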


Challenge 3: Disk Space

Well, so far so good! But the image that originally served as the system disk now has to hold all the data as well. Therefore, we need more space! So we have to resize the disk image, the partition(s) and filesystem(s). Here is the easiest way I’ve found:

$ qemu-img create -f qcow2 newdisk.qcow2 50G
$ virt-resize --expand /dev/sda1 VMware-vCAC-Appliance-6.0.0.0-1445145-system.img newdisk.qcow2
Examining VMware-vCAC-Appliance-6.0.0.0-1445145-system.img ...
**********

Summary of changes:

/dev/sda1: This partition will be resized from 12.0G to 47.0G. The
filesystem ext3 on /dev/sda1 will be expanded using the 'resize2fs'
method.

/dev/sda2: This partition will be left alone.

**********
Setting up initial partition table on newdisk.qcow2 ...
Copying /dev/sda1 ...
Copying /dev/sda2 ...
Expanding /dev/sda1 using the 'resize2fs' method ...

Resize operation completed with no errors. Before deleting the old
disk, carefully check that the resized disk boots and works correctly.

$

What’s happening here? First, we create a new empty disk of the desired size with “qemu-img create”. After that, virt-resize copies the data from the original file to the new one, resizing partitions on the fly. Finally, the filesystem on /dev/sda1 is expanded using resize2fs.
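If you want to double-check the result before uploading it, virt-filesystems from the same libguestfs toolbox will show the new partition and filesystem sizes:

$ virt-filesystems --long -h -a newdisk.qcow2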

The image can now be uploaded into Glance. Please make sure to add a property that sets the disk controller properly; for example, IDE is going to work. The reason is the naming of the disks: with VirtIO the disk would show up as /dev/vda1, and we would have to adjust those names, e.g. in /etc/fstab, as well. I have only had luck with IDE so far:

glance image-update --property hw_disk_bus=ide 724ad050-a636-4c98-8ae5-9ff58973c84c
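Alternatively, if you have not uploaded the image yet, the property can be set right away during the upload. Something along these lines should do (the image name is of course up to you):

glance image-create --name "vCAC Appliance 6.0" --disk-format qcow2 \
    --container-format bare --is-public True \
    --property hw_disk_bus=ide --file newdisk.qcow2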

Have fun!

Nested ESXi with OpenStack

For all of you who want to run VMware’s ESXi 5.x on an OpenStack cloud that uses vSphere as the hypervisor, I have a tiny little tip that might save you some research. The difficulty I faced was: “How do I enable nesting (vHV) for an OpenStack-deployed instance?”. I was almost going to write a script to add

featMask.vm.hv.capable="Min:1"
vhv.enable="True"

and run it after the “nova boot” command, and then I found what I am going to show you now.

Remember that when uploading an image into Glance you can specify key/value pairs called properties? Well, you are probably already aware of output like this:

root@controller:~# glance image-show 9eb827d3-7657-4bd5-a6fa-61de7d12f649
+-------------------------------+--------------------------------------+
| Property                      | Value                                |
+-------------------------------+--------------------------------------+
| Property 'vmware_adaptertype' | ide                                  |
| Property 'vmware_disktype'    | sparse                               |
| Property 'vmware_ostype'      | windows7Server64Guest                |
| checksum                      | ced321a1d2aadea42abfa8a7b944a0ef     |
| container_format              | bare                                 |
| created_at                    | 2014-01-15T22:35:14                  |
| deleted                       | False                                |
| disk_format                   | vmdk                                 |
| id                            | 9eb827d3-7657-4bd5-a6fa-61de7d12f649 |
| is_public                     | True                                 |
| min_disk                      | 0                                    |
| min_ram                       | 0                                    |
| name                          | Windows 2012 R2 Std                  |
| protected                     | False                                |
| size                          | 10493231104                          |
| status                        | active                               |
| updated_at                    | 2014-01-15T22:37:42                  |
+-------------------------------+--------------------------------------+
root@controller:~#

At this point, take a look at the vmware_ostype property, which is set to “windows7Server64Guest”. This value is passed to the vSphere API when deploying an image through ESXi’s API (VMwareESXDriver) or the vCenter API (VMwareVCDriver). The vSphere API/SDK Reference lists the valid values, and since vSphere 5.0 it includes identifiers for ESX(i) 4.x and 5.x guests, the latter being “vmkernel5Guest”. According to my testing, this works with Nova’s VMwareESXDriver as well as with the VMwareVCDriver.

This is how you change the property in case you set it differently:

# glance image-update --property "vmware_ostype=vmkernel5Guest" IMAGE
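Or, if you are uploading a fresh ESXi image anyway, you can set the property directly at upload time; roughly like this (name and file are just placeholders, and you will probably want to set vmware_disktype and vmware_adaptertype for your image as shown in the listing above as well):

# glance image-create --name "ESXi 5.5" --disk-format vmdk --container-format bare \
    --is-public True --property vmware_ostype=vmkernel5Guest --file esxi55.vmdk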

And to complete the picture, this is the code in Nova that implements this functionality:

def get_vm_create_spec(client_factory, instance, name, data_store_name,
                       vif_infos, os_type="otherGuest"):
    """Builds the VM Create spec."""
    config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
    config_spec.name = name
    config_spec.guestId = os_type
    # The name is the unique identifier for the VM. This will either be the
    # instance UUID or the instance UUID with suffix '-rescue' for VM's that
    # are in rescue mode
    config_spec.instanceUuid = name

    # Allow nested ESX instances to host 64 bit VMs.
    if os_type == "vmkernel5Guest":
        config_spec.nestedHVEnabled = "True"

You can see that vHV is only enabled if the os_type is set to vmkernel5Guest. I would assume that this means you cannot nest Hyper-V or KVM this way, but I have not validated that.

Pretty good already. But what I am really looking for is running ESXi on top of KVM, as I need nested ESXi combined with Neutron to create properly isolated tenant networks. The most current progress with this can probably be found in the VMware Community.
