
CPU ready spikes & Host Power Management

We just observed some strange frequently occurring CPU ready spikes in our environment (screenshot).

[Screenshot: CPU ready spikes]

This effect occurred very frequently on each virtual machine. What caused it?

-> The Host Power Management mode, which was set to "Balanced". After switching it to "High Performance", the spikes disappeared.

I know this might be specific to the hardware we are using, but since I hear about such effects from time to time, consider disabling the balanced power management mode on your ESXi hosts once you observe strange performance symptoms. IMO servers deserve high performance and nothing less 😉
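For reference, here is a rough sketch of how the policy can be inspected and switched from the ESXi shell instead of the vSphere Client. Note: the advanced option name /Power/CpuPolicy and its value strings are my assumption from memory, so verify them against your ESXi build before running anything.

```shell
# Untested sketch: inspect and change the host power policy from the ESXi shell.
# The option name /Power/CpuPolicy and the value strings ("Balanced",
# "High Performance", ...) are assumptions; verify them on your build.
policy="High Performance"
if command -v esxcli >/dev/null 2>&1; then
  # show the currently active policy
  esxcli system settings advanced list -o /Power/CpuPolicy
  # switch to high performance
  esxcli system settings advanced set -o /Power/CpuPolicy -s "$policy"
fi
```

The `command -v` guard simply makes the snippet a no-op when run anywhere other than an ESXi host.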

I will change my mind once I have measured the financial benefit the balanced power management mode might bring. So if you have any concrete facts and numbers, please post them here.



IMO: #VMworld 2014 recap VMware vCloud Air (part 3)

This is part 3 (and the first as a VCAP-DCD 🙂 ) of my IMO #VMworld wrap-up. Read about my thoughts on a new product called vCloud Air.

IMO: #VMworld 2014 recap on VMware NSX (part 1)

IMO: #VMworld 2014 recap VMware EVO:RAIL (part 2)

IMO: #VMworld 2014 recap VMware vCloud Air (part 3)

IMO: #VMworld 2014 recap vSAN and vVol (part 4)

IMO: #VMworld 2014 recap Automation & Orchestration (part 5)


A big part of the keynote during VMworld was about vCloud Air and the progress VMware is making in building new datacenters all over the world to offer public cloud services. From a top-level approach, the idea of a hybrid cloud is a really good one. IT services need to be delivered more quickly, with changing workloads and so on. Instead of increasing capex and risking investment in unused resources, we can transform capex into opex by simply ordering infrastructure resources on demand from a public cloud provider and paying as we go. That is a great thing from a management perspective, and looking at the use cases it also makes sense to transform long-term into a hybrid solution.

So what do we need for a hybrid solution? An integration between our local datacenter and a public provider. Using the same technologies within both datacenters, ours and vCloud Air, we are able to connect them seamlessly. vCloud Automation Center… pardon, I meant vRealize Automation, NSX, and long-distance vMotion are hybrid-cloud enablers from a technological perspective. With all those technologies the hybrid cloud is starting to become reality (honestly, how many of you have heard of or been involved in a fully functional hybrid-cloud integration project?).

Buuuuuuut IMO I honestly doubt that VMware's vision will be successful in the short to mid term here in Germany (maybe in Europe at all). With all the potential $$$ signs in their eyes (I know, I know… server virtualization is becoming a commodity as well, and VMware needs to find/grow into new markets if it wants to stay successful), there is one thing that was forgotten, or at least not communicated well (honestly, not communicated at all during VMworld).

What about data privacy?

And I am not talking about securing the borders of your datacenter against unauthorized access. I am talking about authorized access by US organizations like the NSA and so on.

As a trainer and consultant, I think I have a good feeling for the mood "on the streets". You get to know and discuss a lot with many people from different companies and backgrounds, with all kinds of use cases. The big reason for not using public services is the following: the data/information we have in our datacenter is our capital. It is the driver and enabler of our business, and we need to protect it.

I don't want to get into any conspiracy theories, but organizations like the NSA have a reputation for also being involved in economic espionage. No matter whether this is a fact or not, it is a general belief in IT organizations nowadays. So the general opinion is: "We are not giving another organization a key directly to our valuable data."

Sanjay Poonen mentioned during the keynote that VMware is proud to be creating a datacenter for vCloud Air in Germany in compliance with our (Germany's) pretty strict data privacy rules. This is only a valid argument as long as our data-privacy rules cannot be overridden by certain US rules/laws.

Microsoft is currently fighting in US courts to ensure that data physically located in a non-US country MUST NOT be disclosed to specific US organizations.

The result of this process will accelerate or slow down (I deliberately don't say enable or disable; the transformation to public services will happen anyway) the usage of public/hybrid cloud solutions in our region.

IMO it's a funny thing that Microsoft (as a competitor of VMware) will be partly responsible for the success of vCloud Air (of course, Microsoft is doing this to enable/accelerate its own Azure business). What I would have liked is a statement by VMware about this specific situation and how they are going to deal with it. Not talking about things like data privacy is something that won't work in a "conservative" market like Germany. And since I heard sooo many German-speaking folks at VMworld, I can't imagine that this market can be ignored by VMware.

Microsoft vs. US law:

vCloud Air overview:

vCloud Air elearning:

IMO: #VMworld 2014 recap VMware EVO:RAIL (part 2)

This is part 2 of my IMO #VMworld wrap-up. Read about my thoughts on a new product called EVO:RAIL.




EVO:RAIL is a pretty cool so-called hyper-converged solution provided by VMware and partner vendors (Dell, EMC, Fujitsu, Inspur, Net One, Supermicro, HP, Hitachi). Summarized: EVO:RAIL delivers a complete vSphere suite (including vCenter, vSAN, Enterprise Plus & the vRealize suite) bundled with 4 computing nodes, which from a technical perspective is ready for production in less than 30 minutes (the record at the EVO:RAIL challenge was under 16 minutes).

Such a solution is something I thought about a long time ago (it was one of the outcomes of my master's thesis on the software-defined datacenter in small/medium-sized enterprises), especially for small environments where the admins want to focus on operating the running systems (or better: delivering an IT service) rather than implementing, installing and configuring basic infrastructure. (Yeah, I know this is going to be a shift in the future for me as a trainer who delivers a lot of install, configure, manage classes and did installations as part of my consultancy/implementation jobs.)

IMO VMware made a very smart move in not taking on the role of a hardware vendor, instead cooperating with existing and well-known partners to deliver the solution, specified/managed via the EVO:RAIL engine by VMware. The established sales channels to customers and companies can be reused. Especially small and medium-sized businesses will be attracted by this solution, as long as the pricing/capex is affordable for them. From a business perspective this means the following: VMware delivers the software (vSphere, vRealize and the EVO engine) and the vendor delivers the hardware & support. The business-management (#beersherpa) guy inside of me says… perfect: everyone sticks to their core competencies and they bundle their strengths to bring a much better solution to the customer (one contact point for support, a completely integrated and supported virtualization stack, shortest implementation times).

I believe that for the big x86 vendors this solution is just the next step in becoming a commodity. Isn't the whole software-defined datacenter idea about decoupling software from hardware, creating/using a smart VMware-controlled control plane and a commodity data plane which is responsible for the concrete data processing based on the control-plane logic? We don't, or soon won't, care anymore whether the hardware (switch, storage, computing nodes) is HP, Cisco, Juniper, IBM, etc. We will care about the control plane.

With EVO:RAIL it will get even tougher for the hardware vendors to differentiate themselves from each other, and in the end the competition (in the small/medium-sized market) can only be won on price. I want to add that I missed the chance in the EVO:RAIL demo room to discuss this topic from a vendor perspective (damn you, VEEAM party 😉 ), so if you have done anything similar or have your own opinions, please comment on this post or contact me directly.

The use cases of EVO:RAIL can vary a lot (management clusters, DMZ, VDI, small production environments), and I believe this product is a pretty good solution that will be pushed from a bottom-up perspective within companies (I am referring to my bottom-up / top-down approach to bringing innovation into companies in the NSX post). Administrators will love reducing the setup time of a complete vSphere environment.

Especially for VDI solutions I can imagine a brilliant use case for EVO:RAIL, which means next step: VMware, please bundle the VMware Horizon View license into EVO:RAIL and integrate the View setup into the EVO engine :-).

Useful links around EVO:RAIL:

IMO: #VMworld 2014 recap on VMware NSX (part 1)

It has been a long time since I put any content on this blog, but the number of discussions during VMworld Europe this year has led to a situation where I somehow need to get out my thoughts/opinions (IMO) on all these new trending VMware topics. Feel free to confront me with my statements, and I would love to defend or readjust them. (That's how knowledge expansion works, doesn't it?!)

While writing I realized that there was suddenly much more content than I had in mind in the first place, so I split the article into several parts. All articles reflect my personal opinion (IMO) and differ a little from the other posts we have published so far.


VMware NSX

NSX is VMware's newest technology for enabling the software-defined network (and a part of the software-defined datacenter). I have put a lot of effort into NSX over the last days and must admit: this is a really cool concept and solution. We create a logical switch within and across all of our datacenters. You can define rule-based networks (who can communicate with whom: DMZ, multi-tier applications) and have it integrated inside the VMkernel (e.g. IP traffic routed inside the ESXi host instead of touching the physical devices).

Pat Gelsinger described it very well during his keynote: "The datacenter today is like a hard-boiled egg: hard on the outside, soft on the inside." NSX makes it possible to deliver security mechanisms within the virtualized datacenter, integrated into the VMkernel of ESXi.

NSX will offer us great flexibility, managed at a central point (NSX Manager) via a UI or API which can be used by orchestration engines like vCO.

From a technological perspective this is definitely awesome, but will we see a development of NSX similar to what we saw with the x86 virtualization products? IMO not in the short to mid term.

The advantages of NSX come into play in very large environments with high flexibility and security requirements (e.g. financial services, IT providers), which means I don't see a mass market out there in the next few years. This does not mean it won't be a financial benefit for VMware (good things never come for free), but only a few of us will be confronted with an NSX installation or with customers who are going to implement it.

The second thing I see is that those large enterprises will face organizational challenges when implementing NSX. From my experience and the chats I had during VMworld, large enterprises typically have separate organizational units for networking and virtualization. Technologies like NSX will have a huge impact on the folks in the network team, and from my personal feeling (I know a lot of network people and have discussed these topics with them) I doubt that the network folks want this product out of their own conviction.

This leads to the fact that with the implementation of a software-defined network, an organizational transformation within companies will be mandatory. The network and virtualization teams (storage and programmers, of course, as well) would need to be reorganized into a single… (yes, I hate buzzwords, but I think this describes it best) software-defined datacenter unit.

This means that the (software-defined) evolution inside the datacenter needs to be driven top-down by management, which might meet high resistance in current organizations and require time-intensive process changes (network processes have matured a lot over the years). VMware will need to convince its customers on a much higher (organizational) level than for vSAN/EVO:RAIL, which are IMO products wanted by the admins.

This does not mean I don't believe in NSX. I believe it is a great technology, but we should be aware that the transformation to a software-defined network is not only a technical thing we implement and which will automatically be adopted by the network admins (which would be something like bottom-up innovation). Adoption on both the technical and the organizational level will be crucial for the success of NSX.

I wish VMware good luck on this task, since I would love to get involved in some NSX projects in 2015.

Useful links around NSX:

Loading VMware Appliances into OpenStack

Today I want to talk about the challenges of loading VMware appliances into OpenStack Glance and give you a recipe for how to do it.

I migrated my lab to OpenStack but need to be able to test the latest VMware products in order to keep myself up to speed. As VMware provides more and more of its software as virtual appliances in OVF or OVA format, it makes sense to have them in Glance for provisioning on OpenStack.

The following procedure is illustrated using the vCAC Appliance 6.0 as an example:


Challenge 1: Format Conversion

If you get to download the appliance as an OVF, you are already one step ahead. The OVF descriptor is simply an XML-based configuration file and does not include any information required to run the VM on OpenStack.

OVAs on the other hand need to be unpacked first. Luckily, OVA is nothing but TAR:

$ file VMware-vCAC-Appliance-
VMware-vCAC-Appliance- POSIX tar archive (GNU)

So we continue extracting the archive:

$ tar xvf ../VMware-vCAC-Appliance-

The .ovf, .mf and .cert files can be deleted right away; we will not need them anymore. After that, the VMDK files must be converted to QCOW2 or RAW:

$ qemu-img convert -O qcow2 VMware-vCAC-Appliance- VMware-vCAC-Appliance-
$ qemu-img convert -O qcow2 VMware-vCAC-Appliance- VMware-vCAC-Appliance-
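When an appliance ships with several VMDKs, the two conversions above generalize to a small loop. A sketch, assuming qemu-img is on the PATH; it derives each target filename from the source:

```shell
# Convert every extracted VMDK in the current directory to QCOW2,
# deriving the target name from the source (disk.vmdk -> disk.qcow2).
# Assumes qemu-img is installed.
for vmdk in *.vmdk; do
  [ -e "$vmdk" ] || continue            # no VMDKs here, nothing to do
  qemu-img convert -O qcow2 "$vmdk" "${vmdk%.vmdk}.qcow2"
done
```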


Challenge 2: Multi-Disk Virtual Machines

Unfortunately, OpenStack does not support images consisting of multiple disks. Bad luck, then, that VMware has the habit of distributing its appliances with a system disk and a separate data disk (*system.vmdk and *-data.vmdk). To still load such an appliance into Glance, we need to merge the disks into a single one:

First, we use guestfish to get some information on the filesystems inside the disk images:

$ guestfish

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
'man' to read the manual
'quit' to quit the shell

> add VMware-vCAC-Appliance-
> add VMware-vCAC-Appliance-
> run
> list-filesystems
/dev/sda1: ext3
/dev/sda2: swap
/dev/sdb1: ext3
/dev/sdb2: ext3

It is safe to assume that the ext3 filesystem on /dev/sda1 is the root filesystem, so we mount it and have a look at /etc/fstab to see where the other filesystems are supposed to be mounted:

> mount /dev/sda1 /
> cat /etc/fstab
/dev/sda2 swap swap defaults 0 0
/dev/sda1 / ext3 defaults 1 1
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
debugfs /sys/kernel/debug debugfs noauto 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0
/dev/sdb1 /storage/log ext3 rw,nosuid,nodev,exec,auto,nouser,async 0 1
/dev/sdb2 /storage/db ext3 rw,nosuid,nodev,exec,auto,nouser,async 0 1


Next, as we want to get rid of sdb altogether, we remove its entries from /etc/fstab

> vi /etc/fstab

and mount /dev/sdb1 to /mnt. This way we can copy the contents over to /storage/log:

> mount /dev/sdb1 /mnt
> cp_a /mnt/core /storage/log/
> cp_a /mnt/vmware /storage/log/

Of course, the same has to happen for /dev/sdb2:

> umount /mnt
> mount /dev/sdb2 /mnt
> cp_a /mnt/pgdata /storage/db/

And we are done! Please note that it is important to use cp_a instead of cp_r, as it preserves permissions and ownership.

> exit


Challenge 3: Disk Space

Well, so far so good! But the image that originally served as the system disk now has to hold all the data as well. Therefore, we need more space! So we have to resize the disk image, the partition(s) and the filesystem(s). Here is the easiest way I've found:

$ qemu-img create -f qcow2 newdisk.qcow2 50G
$ virt-resize --expand /dev/sda1 VMware-vCAC-Appliance- newdisk.qcow2
Examining VMware-vCAC-Appliance- ...

Summary of changes:

/dev/sda1: This partition will be resized from 12.0G to 47.0G. The
filesystem ext3 on /dev/sda1 will be expanded using the 'resize2fs'

/dev/sda2: This partition will be left alone.

Setting up initial partition table on newdisk.qcow2 ...
Copying /dev/sda1 ...
Copying /dev/sda2 ...
Expanding /dev/sda1 using the 'resize2fs' method ...

Resize operation completed with no errors. Before deleting the old
disk, carefully check that the resized disk boots and works correctly.


What's happening here? First, we create a new empty disk of the desired size with "qemu-img create". After that, virt-resize copies the data from the original file to the new one, resizing partitions on the fly. Finally, the filesystem is expanded using resize2fs.

The image can now be uploaded into Glance. Please make sure to add properties that set the disk controller chipset properly; for example, IDE is going to work. The reason for this is the naming of the disks: using VirtIO we would get /dev/vda1 and would have to adjust those names, e.g. in /etc/fstab, too. I have only had luck with IDE so far:

glance image-update --property hw_disk_bus=ide 724ad050-a636-4c98-8ae5-9ff58973c84c

Have fun!

Deleting vCloud Director Temporary Data

Recently, I learned a bit about vCloud Director's internal database and table structure, which I am going to share with you here.

vCD holds two types of tables worth pointing out: the QRTZ and INV tables. The latter store information about the vCenter Server inventory (INV) and are kept up to date as the inventory changes. This could be due to changes vCD makes itself or those carried out by an administrator. When the vCD service starts up, it connects to vCenter to read in its inventory and update its INV tables. The QRTZ tables are used for heartbeating and synchronization between cells (at least from what I understood, no guarantees).

Why am I telling you this? Both types of tables can be cleared without losing any vital data. You can make use of this knowledge whenever you feel your vCloud DB is out of sync with the vCenter inventory. This happens, for example, when you have to restore your vCenter database from a backup without restoring vCloud's database.

WARNING: This procedure is completely unsupported by VMware GSS. Follow the instructions below at your own risk, or when you have no support for your installation anyway 😉

  1. Shut down both vCloud Director cells
  2. Clear the QRTZ and INV tables using “delete from <table name>”
  3. Start one of the cells and watch it start (/opt/vmware/vcloud-director/log/cell.log)
  4. Start the other cells

Here are some SQL statements that will automate step 2 for you:

delete from QRTZ_CALENDARS;
delete from QRTZ_TRIGGERS;
delete from QRTZ_JOB_DETAILS;

delete from compute_resource_inv;
delete from custom_field_manager_inv;
delete from cluster_compute_resource_inv;
delete from datacenter_inv;
delete from datacenter_network_inv;
delete from datastore_inv;
delete from datastore_profile_inv;
delete from dv_portgroup_inv;
delete from dv_switch_inv;
delete from folder_inv;
delete from managed_server_inv;
delete from managed_server_datastore_inv;
delete from managed_server_network_inv;
delete from network_inv;
delete from resource_pool_inv;
delete from storage_pod_inv;
delete from storage_profile_inv;
delete from task_inv;
delete from vm_inv;
delete from property_map;
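If you would rather not paste the list by hand, the same statements can be generated by a small shell loop and piped into whatever SQL client your vCD database uses. A sketch; the table list is simply copied from the statements above:

```shell
# Generate the DELETE statements for the QRTZ and *_inv tables.
# Pipe the output into your database client of choice.
sql=""
for t in QRTZ_CALENDARS QRTZ_TRIGGERS QRTZ_JOB_DETAILS \
         compute_resource_inv custom_field_manager_inv cluster_compute_resource_inv \
         datacenter_inv datacenter_network_inv datastore_inv datastore_profile_inv \
         dv_portgroup_inv dv_switch_inv folder_inv managed_server_inv \
         managed_server_datastore_inv managed_server_network_inv network_inv \
         resource_pool_inv storage_pod_inv storage_profile_inv task_inv \
         vm_inv property_map; do
  sql="${sql}delete from ${t};
"
done
printf '%s' "$sql"
```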


Looking Back at a Day of vShield Manager Troubleshooting

Dear diarrhea diary,

this entire day was f***** up by a VMware product called vShield Manager …

This, or something similar, is what today's entry in my non-existent diary would look like. It was one of those typical "piece of cake" tasks that turn into nightmares 😀 Literally, the task read "Configure VXLAN for the ***** cluster". Easy, hmm!?

1. Ok, let's go: the physical switch configuration turned out to be easy, as it had already been done for me 🙂 CHECK.

2. So, naive me: I connected to the vShield Manager UI, went to Datacenter -> Network Virtualization -> Prepare, added the cluster, gave it the name of the already existing Distributed Switch and the VLAN ID, and let it run. FAIL: "not ready".


VSM itself doesn't give a lot of details, but I knew that the deployment and installation of the VXLAN VIB package had probably failed. Looking at esxupdate.log, I could see a "temporary failure in DNS lookup" (the exact wording was probably different). Looking at the ESXi hosts' DNS configuration: empty. Cool! Fix -> CHECK. Later I found out that I myself blogged about this a while ago 😀

3. Now let's try again, but first we have to "unprepare" the cluster, so I removed the check mark in VSM: Error. Of course. VSM didn't create the Port Group or the VMkernel ports, and now it tries to remove them … computer logic 😀 At this point, simply hit "Refresh" and the error will be gone. Now we can try the preparation process once more: Error:

domain-c3943 already has been configured with a mapping.

Grrrr … luckily, I found this: To be honest, the sentence "VMware support was able to help… and I suggest unless you don't care about your cluster or vShield implementation that you call them to solve it" scared me a bit, BUT to balls to gain (wait, is that right?). WORKS! PARTYYY! But once again: preparation failed (devil)

4. I can't quite remember which error message or log entry helped me find VMware KB 2053782. Following the steps sounded simple but hey, why should anything work today?! 😀 Check my other blog post about this particular one. After applying the (I like to call it) "curl" hack to VSM (see the step before) again, I prepared the cluster one more time and finally the VXLAN VIB could be deployed, BUT …

5. … the Port Group was not created … f*** this sh***. After 30-ish minutes of blind flight through VSM and vCD, I figured out that other clusters could not deploy VXLANs anymore either. Based on this insight and a good amount of despair, I just rebooted VSM. Then unprepare, "curl" hack, prepare … and: WORKS!


The Port Group is there. BUT:

6. No VMkernel Ports were created (I ran out of curses by that time). Another 30min passed until I unprepared, “curl”-hacked and prepared the cluster one last time before the VMkernel Ports were then magically created. THANK GOD! So I went ahead creating a Network Scope for the cluster.

I tested creating VXLAN networks via VSM a couple of times and it seemed to properly create additional Port Groups. You think the day was over yet? WROOONG!

7. Next, I tried through vCloud Director. The weird thing was that a Network Pool for that cluster already existed, with a different name than the Network Scope I had just created. It had to be some relic from before my time in the project. Trying to deploy a vApp, I ran into something I am going to write about tomorrow. As this was fixed, I kept receiving this:


Judging from the error message, vCloud Director tries to allocate a network from a pool for which VSM has no network scope defined. These things did not work out:
– Clicking "Repair" on the Network Pool
– Creating a Network Scope with the same name as the Network Pool (vCD apparently uses some kind of ID instead of the name of the Network Scope)

The only possible solutions I could come up with are deleting and re-creating the Provider vDC or going into the vCD database and doing some magic there. The only information on this I could find was in the VMware Communities, so I am going to open a ticket.

Good night.

VMware vSphere Update Manager causes VXLAN Agent to fail on install and uninstall

The title of this article is that of VMware KB article 2053782. Following the steps seems simple but turned out to involve a couple of pitfalls and inaccuracies:

Mean pitfall:

Open a browser to the MOB at: https://vCenter_Server_IP/eam/mob/

When I opened this URL to my vCenter server, I received the following error message:

The vSphere ESX Agent Manager (vEAM) failed to complete the command.

The thing to point out here is the slash (/) after "mob" in the URL! Without this slash, it won't work.

Unclear Instructions:

b. In the <config> field, change the value from true to false:

Reading this, the way I understood the instructions was to leave the XML data as is and just turn "true" into "false" for the bypassVumEnabled element. In their code example, they removed all the other elements, but I thought that was probably just to save space in the KB article. WRONG! It turned out you have to:

  1. Delete all the XML data from the text field
  2. Paste the code above (config element with nested bypassVumEnabled element) – nothing else!
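For clarity, the text field should end up containing nothing but the config element with the flag flipped. From memory it looks roughly like this (verify against the KB before submitting; the snippet is my reconstruction, not a copy from the KB):

```xml
<config>
  <bypassVumEnabled>false</bypassVumEnabled>
</config>
```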


Hope that helps 🙂


Nested ESXi with OpenStack

For all of you who want to run VMware's ESXi 5.x on an OpenStack cloud that uses vSphere as the hypervisor, I have a tiny little tip that might save you some research. The difficulty I faced was: "How do I enable nesting (vHV) for an OpenStack-deployed instance?" I was almost going to write a script to add


and run it after the “nova boot” command, and then I found what I am going to show you now.

Remember that when uploading an image into Glance you can specify key/value pairs called properties? Well, you are probably already aware of this:

root@controller:~# glance image-show 9eb827d3-7657-4bd5-a6fa-61de7d12f649
| Property                      | Value                                |
| Property 'vmware_adaptertype' | ide                                  |
| Property 'vmware_disktype'    | sparse                               |
| Property 'vmware_ostype'      | windows7Server64Guest                |
| checksum                      | ced321a1d2aadea42abfa8a7b944a0ef     |
| container_format              | bare                                 |
| created_at                    | 2014-01-15T22:35:14                  |
| deleted                       | False                                |
| disk_format                   | vmdk                                 |
| id                            | 9eb827d3-7657-4bd5-a6fa-61de7d12f649 |
| is_public                     | True                                 |
| min_disk                      | 0                                    |
| min_ram                       | 0                                    |
| name                          | Windows 2012 R2 Std                  |
| protected                     | False                                |
| size                          | 10493231104                          |
| status                        | active                               |
| updated_at                    | 2014-01-15T22:37:42                  |

At this point, take a look at the vmware_ostype property, which is set to "windows7Server64Guest". This value is passed to the vSphere API when deploying an image through ESXi's API (VMwareESXDriver) or the vCenter API (VMwareVCDriver). Looking at the vSphere API/SDK Reference, you can find the valid values, and since vSphere 5.0 the list includes "vmkernel4Guest" and "vmkernel5Guest", representing ESXi 4.x and 5.x respectively. According to my testing, this works with Nova's VMwareESXDriver as well as the VMwareVCDriver.

This is how you change the property in case you have set it differently:

# glance image-update --property "vmware_ostype=vmkernel5Guest" IMAGE

And to complete the picture, this is the code in Nova that implements this functionality:

def get_vm_create_spec(client_factory, instance, name, data_store_name,
                       vif_infos, os_type="otherGuest"):
    """Builds the VM Create spec."""
    config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
    config_spec.name = name
    config_spec.guestId = os_type
    # The name is the unique identifier for the VM. This will either be the
    # instance UUID or the instance UUID with suffix '-rescue' for VM's that
    # are in rescue mode
    config_spec.instanceUuid = name

    # Allow nested ESX instances to host 64 bit VMs.
    if os_type == "vmkernel5Guest":
        config_spec.nestedHVEnabled = "True"

You can see that vHV is only enabled if os_type is set to vmkernel5Guest. I would assume that this means you cannot nest Hyper-V or KVM this way, but I haven't validated that.

Pretty good already. But what I am really looking for is running ESXi on top of KVM, as I need nested ESXi combined with Neutron to create properly isolated tenant networks. The most current progress on this can probably be found in the VMware Community.

Fixing Failed OVF Deployment Due to Integrity Error

We were just about to deploy the latest version of VCE's UIMP when the vSphere Client failed with an error message about a failed integrity check of the VMDK file. Although not the best idea, the fix was easy. When an integrity check is performed at all, the OVF comes with an additional file: the manifest (.mf). In our case, it contained SHA1 checksums for the .ovf and the .vmdk file:

mathias@x1c:/media/mathias/Volume$ ls -hl UIMP*
-rw------- 1 mathias mathias 2,6G Mär  5 19:25 UIMP-
-rw------- 1 mathias mathias 1,3G Mär  6 09:28 UIMP_OVF10-disk1.vmdk
-rw------- 1 mathias mathias  133 Mär  6 10:54
-rw------- 1 mathias mathias 125K Jan  4 23:13 UIMP_OVF10.ovf
mathias@x1c:/media/mathias/Volume$ cat
SHA1(UIMP_OVF10.ovf)= 881533ff36aebc901555dfa2c1d52a6bd4c47d99
SHA1(UIMP_OVF10-disk1.vmdk)= f175f150decb2bf5a859903b050f4ea4a3982023

Interestingly, the whole OVF data was contained in an ISO file which passed the MD5 check after download. So something must have gone wrong when packaging the appliance. I calculated the SHA1 checksums for the files and compared them to those in the manifest:

mathias@x1c:/media/mathias/Volume$ sha1sum UIMP_OVF10.ovf
881533ff36aebc901555dfa2c1d52a6bd4c47d99  UIMP_OVF10.ovf
mathias@x1c:/media/mathias/Volume$ sha1sum UIMP_OVF10-disk1.vmdk
f175f150decb2bf5a859903b050f4ea4a3982023  UIMP_OVF10-disk1.vmdk

VMDK: f175f150decb2bf5a859903b050f4ea4a3982023
MF: 9371305140e8c541b0cea6b368ad4af47221998e

Strange 😀 Well, the fix was to simply edit the manifest with the correct SHA1 checksum. But please keep in mind that just because we forged the checksum doesn't mean the data is not corrupt. In this case, as the MD5 check of the ISO was successful, we assumed that the data was probably fine and decided to give it a try. In the end, the file was still corrupt and caused errors mounting the root file system.
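For completeness, the manual manifest edit boils down to regenerating the .mf entries from the files actually on disk. A sketch with hypothetical filenames; only forge the manifest like this when you have an independent integrity check (here: the MD5 of the containing ISO):

```shell
# Regenerate the OVF manifest from the files on disk (hypothetical names).
# Each line has the form: SHA1(<filename>)= <checksum>
mf="UIMP_OVF10.mf"
: > "$mf"                              # start with an empty manifest
for f in UIMP_OVF10.ovf UIMP_OVF10-disk1.vmdk; do
  [ -e "$f" ] || continue              # skip files that are not present
  printf 'SHA1(%s)= %s\n' "$f" "$(sha1sum "$f" | cut -d' ' -f1)" >> "$mf"
done
```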

© 2020 v(e)Xpertise
