
Loading VMware Appliances into OpenStack

Today I want to talk about the challenges of loading VMware appliances into OpenStack Glance and give you a recipe for doing it.

I migrated my lab to OpenStack, but I need to be able to test the latest VMware products in order to keep myself up to speed. As VMware provides more and more of its software as virtual appliances in OVF or OVA format, it makes sense to have them in Glance for provisioning on OpenStack.

The following procedure is illustrated using the vCAC Appliance 6.0 as an example:


Challenge 1: Format Conversion

If you can download the appliance as an OVF, you are already one step ahead. The OVF descriptor is simply an XML-based configuration file and contains nothing we need to run the VM on OpenStack.

OVAs, on the other hand, need to be unpacked first. Luckily, an OVA is nothing but a TAR archive:

$ file VMware-vCAC-Appliance-6.0.0.0-1445145_OVF10.ova
VMware-vCAC-Appliance-6.0.0.0-1445145_OVF10.ova: POSIX tar archive (GNU)
$

So we continue extracting the archive:

$ tar xvf ../VMware-vCAC-Appliance-6.0.0.0-1445145_OVF10.ova
VMware-vCAC-Appliance-6.0.0.0-1445145_OVF10.ovf
VMware-vCAC-Appliance-6.0.0.0-1445145_OVF10.mf
VMware-vCAC-Appliance-6.0.0.0-1445145_OVF10.cert
VMware-vCAC-Appliance-6.0.0.0-1445145-system.vmdk
VMware-vCAC-Appliance-6.0.0.0-1445145-data.vmdk
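Since an OVA is plain tar, the unpacking step can also be scripted. Here is a minimal Python sketch using the standard tarfile module; the helper name unpack_ova is mine, not part of any tool:

```python
import tarfile

def unpack_ova(ova_path, dest="."):
    """Extract an OVA (which is just a POSIX tar archive) and return
    the names of the disk images it contained."""
    with tarfile.open(ova_path) as tar:
        tar.extractall(dest)
        return [m.name for m in tar.getmembers() if m.name.endswith(".vmdk")]
```

The returned list tells you which VMDK files to feed into the conversion step below.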

The .ovf, .mf, and .cert files can be deleted right away; we will not need them anymore. After that, the VMDK files must be converted to QCOW2 or RAW:

$ qemu-img convert -O qcow2 VMware-vCAC-Appliance-6.0.0.0-1445145-system.vmdk VMware-vCAC-Appliance-6.0.0.0-1445145-system.img
$ qemu-img convert -O qcow2 VMware-vCAC-Appliance-6.0.0.0-1445145-data.vmdk VMware-vCAC-Appliance-6.0.0.0-1445145-data.img
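When several appliances need converting, it is handy to build the qemu-img command line per VMDK in a loop. A small Python sketch (the convert_all_vmdks helper is hypothetical; it mirrors the commands above and only runs them if asked):

```python
import subprocess
from pathlib import Path

def convert_all_vmdks(directory=".", run=False):
    """Build (and optionally run) one 'qemu-img convert' per VMDK found."""
    cmds = []
    for vmdk in sorted(Path(directory).glob("*.vmdk")):
        # foo.vmdk -> foo.img, converted to QCOW2 as in the commands above
        cmd = ["qemu-img", "convert", "-O", "qcow2",
               str(vmdk), str(vmdk.with_suffix(".img"))]
        cmds.append(cmd)
        if run:
            subprocess.run(cmd, check=True)  # requires qemu-img on PATH
    return cmds
```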


Challenge 2: Multi-Disk Virtual Machines

Unfortunately, OpenStack does not support images consisting of multiple disks. Bad luck that VMware has the habit of distributing its appliances with a system disk and a separate data disk (*-system.vmdk and *-data.vmdk). To load such an appliance into Glance anyway, we need to merge the two disks into one.

First, we use guestfish to get some information on the filesystems inside the disk images:

$ guestfish

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

> add VMware-vCAC-Appliance-6.0.0.0-1445145-system.img
> add VMware-vCAC-Appliance-6.0.0.0-1445145-data.img
> run
>
> list-filesystems
/dev/sda1: ext3
/dev/sda2: swap
/dev/sdb1: ext3
/dev/sdb2: ext3
>

It is safe to assume that the ext3 filesystem on /dev/sda1 is the root file system, so we mount it and look at /etc/fstab to see where the other filesystems are supposed to be mounted:

> mount /dev/sda1 /
> cat /etc/fstab
/dev/sda2 swap swap defaults 0 0
/dev/sda1 / ext3 defaults 1 1
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
debugfs /sys/kernel/debug debugfs noauto 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0
/dev/sdb1 /storage/log ext3 rw,nosuid,nodev,exec,auto,nouser,async 0 1
/dev/sdb2 /storage/db ext3 rw,nosuid,nodev,exec,auto,nouser,async 0 1

>

Next, since we want to get rid of sdb altogether, we remove its entries from /etc/fstab

> vi /etc/fstab

and mount /dev/sdb1 to /mnt so that we can copy its contents over to /storage/log:

> mount /dev/sdb1 /mnt
> cp_a /mnt/core /storage/log/
> cp_a /mnt/vmware /storage/log/

Of course, the same has to happen for /dev/sdb2:

> umount /mnt
> mount /dev/sdb2 /mnt
> cp_a /mnt/pgdata /storage/db/

And we are done! Please note that it is important to use cp_a instead of cp_r, as it preserves permissions and ownership.

> exit
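For reference, the fstab edit above boils down to dropping every /dev/sdb line. Scripted outside of vi, it might look like this (drop_device_lines is a hypothetical helper, not a guestfish command):

```python
def drop_device_lines(fstab_text, prefix="/dev/sdb"):
    """Return fstab contents with every entry for the given device removed,
    mirroring the manual edit done with vi in the guestfish session."""
    kept = []
    for line in fstab_text.splitlines():
        fields = line.split()
        if fields and fields[0].startswith(prefix):
            continue  # skip the /dev/sdb1 and /dev/sdb2 entries
        kept.append(line)
    return "\n".join(kept)
```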


Challenge 3: Disk Space

Well, so far so good! But the image that originally served as the system disk now has to hold all the data as well, so we need more space. That means resizing the disk image, the partition(s), and the filesystem(s). Here is the easiest way I have found:

$ qemu-img create -f qcow2 newdisk.qcow2 50G
$ virt-resize --expand /dev/sda1 VMware-vCAC-Appliance-6.0.0.0-1445145-system.img newdisk.qcow2
Examining VMware-vCAC-Appliance-6.0.0.0-1445145-system.img ...
**********

Summary of changes:

/dev/sda1: This partition will be resized from 12.0G to 47.0G. The
filesystem ext3 on /dev/sda1 will be expanded using the 'resize2fs'
method.

/dev/sda2: This partition will be left alone.

**********
Setting up initial partition table on newdisk.qcow2 ...
Copying /dev/sda1 ...
Copying /dev/sda2 ...
Expanding /dev/sda1 using the 'resize2fs' method ...

Resize operation completed with no errors. Before deleting the old
disk, carefully check that the resized disk boots and works correctly.

$

What's happening here? First, we create a new empty disk of the desired size with qemu-img create. After that, virt-resize copies the data from the original file to the new one, resizing partitions on the fly, and finally expands the filesystem using the resize2fs method.
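How big the new disk should be is a judgment call; one simple rule is to add up both virtual disk sizes and leave some headroom. A tiny Python sketch of that rule (the helper and the 25% headroom figure are my own illustration, not something virt-resize computes):

```python
import math

def suggested_size_gib(system_gib, data_gib, headroom=1.25):
    """Suggest a merged-disk size: both virtual sizes plus ~25% headroom,
    rounded up to a whole GiB."""
    return math.ceil((system_gib + data_gib) * headroom)

# e.g. a 12 GiB system disk plus 25 GiB of data suggests 47 GiB
```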

The image can now be uploaded into Glance. Please make sure to add a property that sets the disk controller properly. The reason is the device naming: with VirtIO the system disk would appear as /dev/vda1, and the device names in /etc/fstab would have to be adjusted accordingly. I have only had luck with IDE so far:

glance image-update --property hw_disk_bus=ide 724ad050-a636-4c98-8ae5-9ff58973c84c

Have fun!

Fixing Failed OVF Deployment Due to Integrity Error

We were just about to deploy the latest version of VCE's UIMP when the vSphere Client failed with an error message about a failed integrity check of the VMDK file. Although not the best idea, the fix was easy. When an integrity check is performed at all, the OVF comes with an additional file, the manifest (.mf). In our case, it contained SHA1 checksums of the .ovf and the .vmdk file:

mathias@x1c:/media/mathias/Volume$ ls -hl UIMP*
-rw------- 1 mathias mathias 2,6G Mär  5 19:25 UIMP-4.0.0.2.359-Install-Media.iso
-rw------- 1 mathias mathias 1,3G Mär  6 09:28 UIMP_OVF10-disk1.vmdk
-rw------- 1 mathias mathias  133 Mär  6 10:54 UIMP_OVF10.mf
-rw------- 1 mathias mathias 125K Jan  4 23:13 UIMP_OVF10.ovf
mathias@x1c:/media/mathias/Volume$ cat UIMP_OVF10.mf
SHA1(UIMP_OVF10.ovf)= 881533ff36aebc901555dfa2c1d52a6bd4c47d99
SHA1(UIMP_OVF10-disk1.vmdk)= f175f150decb2bf5a859903b050f4ea4a3982023

Interestingly, the whole OVF data was contained in an ISO file that had passed the MD5 check after download, so something must have gone wrong while packaging the appliance. I calculated the SHA1 checksums for the files and compared them to the ones in the manifest:

mathias@x1c:/media/mathias/Volume$ sha1sum UIMP_OVF10.ovf
881533ff36aebc901555dfa2c1d52a6bd4c47d99  UIMP_OVF10.ovf
mathias@x1c:/media/mathias/Volume$ sha1sum UIMP_OVF10-disk1.vmdk
f175f150decb2bf5a859903b050f4ea4a3982023  UIMP_OVF10-disk1.vmdk
mathias@x1c:/media/mathias/Volume$

VMDK: f175f150decb2bf5a859903b050f4ea4a3982023
MF: 9371305140e8c541b0cea6b368ad4af47221998e

Strange 😀 Well, the fix was to simply edit the manifest with the correct SHA1 checksum. But keep in mind that forging the checksum does not mean the data is intact. In this case, since the MD5 check of the ISO was successful, we assumed the data was probably fine and decided to give it a try. In the end, the file was still corrupt and produced errors mounting the root file system.
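Checking a manifest by hand gets tedious with more files. Here is a small Python sketch that compares every SHA1(&lt;file&gt;)= &lt;digest&gt; entry against the actual file; the helper names are mine, and referenced files are looked up next to the manifest:

```python
import hashlib
import os
import re

def sha1_of(path):
    """SHA1 of a file, read in 1 MiB chunks so large VMDKs fit in memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def manifest_mismatches(mf_path):
    """Return the names of files whose actual SHA1 does not match the
    'SHA1(<file>)= <digest>' entry in the manifest."""
    base = os.path.dirname(os.path.abspath(mf_path))
    bad = []
    with open(mf_path) as mf:
        for line in mf:
            m = re.match(r"SHA1\((.+?)\)=\s*([0-9a-fA-F]{40})", line.strip())
            if m and sha1_of(os.path.join(base, m.group(1))) != m.group(2).lower():
                bad.append(m.group(1))
    return bad
```

An empty result means the manifest is consistent; any names returned are the candidates for repackaging (or, as above, for editing the manifest at your own risk).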

Using OVFTool via PowerCLI with a Session Ticket – Lessons Learned

PowerCLI is a really great tool for automation. Nevertheless, from time to time we need other tools as well to fulfil our needs. If you want to automate the distribution of templates in your environment, OVFTool is a really nice way to achieve this. I wanted to deploy a template to multiple clusters, so, given two arrays holding all the vCenters and all the clusters in the environment, the following would do the trick.

function DistributeTemplates {
    param(
        [Parameter(Mandatory=$True)]
        [String]$templateLocation
    )

    $ovftool = "C:\Program Files\VMware\OVFtool\ovftool.exe"

    foreach ($vCenter in $vCenterList) {
        Connect-VIServer $vCenter
        foreach ($cluster in $clusterList) {
            # Double quotes so that $templateLocation, $vCenter and $cluster are expanded
            $arglist = "--name=TemplateName --network=NetworkName -ds=DatastoreName $templateLocation vi://vCenterUser:Password@$vCenter/DCNAME/host/$cluster"
            Start-Process $ovftool -ArgumentList $arglist -Wait
        }
    }
}

Even though it was working, I was not happy about the authentication mechanism being used (password in cleartext... noooo way).

Luckily, I found a post at geekafterfive.com which explained how to use a ticketing mechanism with OVFTool:

$Session = Get-View -Id SessionManager
$Ticket = $Session.AcquireCloneTicket()

Unfortunately, I struggled with two things:

1. Make sure you are only connected to one vCenter; with multiple connections, Get-View returns an array of session managers, so

 $Ticket = $Session.AcquireCloneTicket()

will throw the error "Method invocation failed because [System.Object[]] doesn't contain a method named 'AcquireCloneTicket'."

2. I could only upload the templates to the first cluster; the second one always failed. It seemed my ticket was no longer valid. Luckily, a closer look at the vSphere SDK Programming Guide told me that "A client application executing on behalf of a remote user can invoke the AcquireCloneTicket operation of SessionManager to obtain a one-time user name and password for logging on without entering a subsequent password" ... ahhh ... a one-time password. I had thought the ticket would stay valid for multiple operations once I was connected to a vCenter, but since my thoughts don't count on this topic (yeah yeah, what a rough world), I needed to acquire a new ticket before every OVFTool operation.

So the following script completely satisfied my (template-automation) needs.

function DistributeTemplates {
    param(
        [Parameter(Mandatory=$True)]
        [String]$templateLocation
    )

    $ovftool = "C:\Program Files\VMware\OVFtool\ovftool.exe"

    foreach ($vCenter in $vCenterList) {
        Connect-VIServer $vCenter
        foreach ($cluster in $clusterList) {
            # Acquire a fresh one-time ticket for every OVFTool run
            $Session = Get-View -Id SessionManager
            $Ticket = $Session.AcquireCloneTicket()
            $arglist = "--I:targetSessionTicket=$Ticket --name=TemplateName --network=NetworkName -ds=DatastoreName $templateLocation vi://$vCenter/DCNAME/host/$cluster"
            Start-Process $ovftool -ArgumentList $arglist -Wait
        }
    }
}

Now a fresh session ticket is generated for every OVFTool run, and I no longer have to deal with storing any credentials while deploying a template to the whole wide world :)... yeha...

Import vCenter Operations (vCOps) as an OVF into a vCloud Director environment

Having a standard like OVF or OVA in the virtualization world is a really cool thing. Nevertheless, from time to time I stumble over issues with it. That just happened while trying to import vCenter Operations into a vCloud environment at a customer site.

Even though you can import OVF files (unfortunately no OVA files; extract the OVA first, e.g. with ovftool or tar), it did not work in the vCloud environment, since the vCOps vApp consists of two virtual machines and asks for additional information (EULA, network, etc.) during the deployment process.

So my next idea was to deploy the OVF in vCenter and import it into vCloud Director afterwards.


After the import process, I was not able to start the VM either:


Unable to start vAPP ….., Invalid vApp properties: Unknown property vami.ip0.VM_2 referenced in property ip.


To solve this problem, I moved the two virtual machines (analytics and UI) out of the vApp in the vCenter and deactivated their vApp properties in the virtual machine settings.


If an error message occurs during this configuration step and a prompt asks you to refresh the browser, ignore it by clicking No (this worked only in Chrome for me).


Now you can navigate to vCloud Director and import both virtual machines into your vApp,


and voilà: the vCenter Operations VMs can boot in your vCloud environment.


Keep in mind that you may need to change the IP addresses of your vCenter Operations components. Just follow the instructions on virtualGhetto.

© 2019 v(e)Xpertise
