IMO: Is SMP fault tolerance even useful? My view on it!

A few weeks ago, Maish Saidel-Keesing wrote a post about fault tolerance for VMs with multiple vCPUs. He makes valid points in his argumentation, but I still want to give you a little bit of my view on this topic (IMO).

With Fault Tolerance (FT), two VMs run nearly in lockstep on two different ESXi hosts: one (the primary) processes I/O, while the other (the secondary) drops it. With the release of vSphere 6.0, VMware will support this feature for VMs with up to 4 vCPUs and 64 GB of memory. [More Details here]

Let me try to summarize Maish's argumentation:

FT is not that big a deal, since it only protects against a hardware failure of the ESXi host without any interruption of the protected VM's service. It does NOT detect or deal with failures at the operating-system or application level.

So what Maish thinks we really need are clustering mechanisms at the application level, even where legacy applications don't offer them.

In general I would not disagree with this opinion. In an ideal world all applications would be stateless, scalable and protectable with a load balancer in front of them. But it will take 10 or more years until all applications are built in such a new 'modern' way. We will not get rid of legacy applications in the short term.

Within my last four years as an instructor, I have received one question nearly every time I deliver a vSphere class:

‘Can we finally protect our SMP-VMs now with Fault Tolerance? No?! Awww :(‘

So I would not say there is no need out there for this feature. Being involved in several biddings last year, we very often had the requirement to deliver a system for automation solutions within large building complexes (airports, factories, etc.).

Software used in such domains is sometimes legacy application par excellence (ironic), programmed with a paradigm from long before agile/RESTful/virtualization played a role in the tech world. Sometimes you can license a cluster feature (and pay ten times as much as for a one-node license); sometimes you can't cluster it at all and need other ideas or workarounds to increase availability.

Some biddings were lost to competitors who were able to deliver solutions that can (on paper) tolerate a hardware outage without any service or session impact.

For me, typical design considerations come into play with SMP-FT:

  • How does the cluster work? Does it work on the application/OS level, or does it only protect against a general outage?
  • What were the failure/failover reasons in the past? (e.g. for vCenter: in most cases where I saw a failure, it was caused by a database problem [40%], an Active Directory/SSO problem [10%], a hardware failure [45%] or something else [5%]) -> A feature like FT would have protected against a large share of the failures experienced in the past. The same considerations apply to all kinds of applications (e.g. virtual load balancers, Horizon View Connection Servers etc.)
  • How much would a suitable solution cost to make, buy or update?

Sure, we need to get rid of legacy applications, but to be honest … this will be a very long road (the business decides and pays for it), and once we have reached the point where today's legacy applications are gone, the next generation of legacy applications will be in place and need to be transformed (Docker?! 😉 ).

We should see FT for what it is: a new tool within our VMware toolkit to fit specific requirements and protect VMs (legacy and new ones) on a new level, with pros and cons (as always). IMO every tool or feature that gives us more options to protect our IT is very welcome.

VMware EVO:RAIL: How Vendors differentiate (#IMO edition)

During my last #IMO on EVO:RAIL I asked myself how vendors are going to differentiate themselves on these standardized, hyper-converged solutions:

  • pricing: Make or buy, the good old discussion from economics, hits us again. EVO:RAIL vendors will try to figure out how the market reacts to the hyper-converged solution. The implementation (cost) block always carries a certain risk (a bad external service provider, human mistakes). This risk is somewhat mitigated by using the EVO engine for the implementation/configuration task. But since the hyper-converged market is pretty new, our vendors and their shareholders expect a higher margin out of it. The vendors will learn their lesson, and their (EVO) margin will decrease over the next years. Howard Marks did a nice analysis of the pricing part of EVO:RAIL. Let's wait and see how the real price will differ from the list price.
  • vendor-support
  • vendor-specific software and bundles: This is a very interesting topic, since a lot of companies currently try to bundle EVO:RAIL with specific software packages (for management/monitoring) or even hardware (storage). I try to summarise the main differences in the next lines according to the currently public information.


Dell has announced that it will bundle its EVO:RAIL solution with NexentaStor, a software-defined storage (SDS) solution. NexentaStor offers a storage solution based on the ZFS filesystem.

So the big question is: why combine VMware's SDS vSAN with another SDS solution? The Nexenta part should not be seen as a substitute for vSAN. It is more of an added functionality, offering NFS (v3, v4), SMB, snapshot and deduplication features integrated into vSphere management via NexentaConnect. IMO this might be a useful extension for very specific use cases.

Besides the Nexenta integration, Dell has announced a VDI package for EVO:RAIL (haven't I asked for that? 😉 ). Whether this solution will include VMware Horizon View or Dell's vWorkspace (Quest) has not been officially announced.

Functional Advantage Level (none-low-medium-high): medium


It was kind of a surprise when NetApp announced that they are going to provide EVO:RAIL solutions as well. NetApp bundles their FAS solution (based on ONTAP) with the EVO:RAIL solution and offers new storage capabilities to the vSphere environment. As with Nexenta, you will be able to extend the EVO:RAIL functionality with features like NFS, SMB, deduplication, etc. But unlike Nexenta, NetApp integrates a dedicated FAS unit into the EVO:RAIL solution.

The big question is: will it bring real benefit if we now need to manage two storage systems (OK, vSAN doesn't need to be managed that often), while at the same time the price for the FAS solution will most certainly be passed on to the customer in the end? Without any concrete use cases, I am not sure customers are willing to pay a higher price for the NetApp EVOs.

Functional Advantage Level (none-low-medium-high): medium


The EVO:RAIL solution of EMC will be based on EMC's Phoenix Foundation (didn't MacGyver work for them?). To differentiate themselves, EMC is planning to integrate/bundle their own data protection software: vSphere Data Protection Advanced (with an integration into EMC's Data Domain?) and/or RecoverPoint as a disaster recovery technology.

Functional Advantage Level (none-low-medium-high): low


Fujitsu's EVO:RAIL is based on the CX400 S2 nodes, which advertise a higher temperature tolerance. According to the official notes, the CX400 S2 supports an ambient temperature between 10 and 35 degrees Celsius, a value that might increase to 40 degrees Celsius with the next EVO:RAIL generation. This is definitely an interesting approach, since reduced cooling costs within a datacenter are always welcome.

So we see Fujitsu trying to differentiate themselves with an improvement of their x86 nodes. I am honestly not sure how much money a company will save in the end by using Fujitsu hardware. Even if I knew that my hardware tolerates higher temperatures, I am not sure I would actually raise the overall temperature in my datacenter.

Functional Advantage Level (none-low-medium-high): none


HP is attacking the hyper-converged market with two solutions called HP ConvergedSystem 200. Based on the same components, one version is an EVO:RAIL solution, while the other version is based on HP's own scale-out storage solution, the Virtual Storage Appliance (VSA).

HP tries to differentiate itself from the other vendors by integrating EVO:RAIL into its OneView management solution. As a consequence, EVO:RAILs can be managed in the same way as the other HP components.

Functional Advantage Level (none-low-medium-high): low-medium

For the following companies I have not found or received further information so far. I will update the post as soon as I have new information.


Inspur is the first partner for VMware EVO:RAIL in China. Until now I have not been able to gather any further information about the way Inspur will differentiate itself in the existing ecosystem. But since the market in China is more regulated than others, differentiation may not even be that necessary.


More information is hopefully coming.

net one

More information is hopefully coming.


More information is hopefully coming.


Comparing just the functional benefit of each EVO:RAIL vendor, I don't believe those functionalities will convince customers to pay extra bucks. IMO vendors will need to work harder on bringing functional benefit to their own EVO:RAIL solutions.

In the end I believe that the existing vendor relationships and the pricing will be the most important factors. And 2015 is exactly the year that will show us how the market reacts.


IMO: #VMworld 2014 recap Automation & Orchestration (part 5)

Sitting here at the airport in Bucharest, I thought I could finally write down my IMO thoughts on the whole automation/orchestration topic.

As I had more fun writing about automation than about vSAN/vVols, I did it like George Lucas and mixed up the order of my parts/episodes 😉

IMO: #VMworld 2014 recap on VMware NSX (part 1)

IMO: #VMworld 2014 recap VMware EVO:RAIL (part 2)

IMO: #VMworld 2014 recap VMware vCloud Air (part 3)

IMO: #VMworld 2014 recap vSAN and vVol (part 4)

IMO: #VMworld 2014 recap Automation & Orchestration (part 5)

I visited a lot of breakout sessions on automation and scripting. Some of them were really, really good, with some great core messages; for other sessions my scripting and programming skills were honestly not good enough to get it all ;-).

2014 was kind of a PowerCLI year for me. I automated a lot of stuff in a huge project with PowerCLI. I did not just use PowerCLI for interacting with or automating vSphere objects (VMs, clusters, datastores, …), but also to automate and optimize operational and implementation tasks (vCenter/SQL installation, automatic setup, …). There are just so many amazing things you can do with PowerShell/PowerCLI.
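To give an idea of what such operational automation can look like, here is a minimal, hedged PowerCLI sketch. The vCenter name and output path are made-up placeholders, and the pipeline requires an active PowerCLI session; it simply reports all VM snapshots older than seven days.

```powershell
# Hypothetical example: report all VM snapshots older than 7 days.
# 'vcenter.lab.local' and the CSV path are placeholders, not real names.
Connect-VIServer -Server vcenter.lab.local

Get-VM |
    Get-Snapshot |
    Where-Object { $_.Created -lt (Get-Date).AddDays(-7) } |
    Select-Object VM, Name, Created, SizeGB |
    Export-Csv -Path .\old-snapshots.csv -NoTypeInformation
```

A report like this is the kind of small operational task that used to be a manual click-through in the vSphere Client and becomes a scheduled script instead.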

So IMO whoever is going to read this (if you are one of my students you will know this message):

Don't be afraid of learning automation via scripts just because it is related to programming.

In my opinion (and I meet/teach around 100 people a year from all kinds of IT infrastructure backgrounds), many people are afraid because they have never been good at programming. That might well be true, but there is no need to worry. I am definitely not a programmer, and to be honest I don't consider myself a PowerShell/PowerCLI professional either. Nevertheless, PowerShell/PowerCLI makes it really easy to get started, because …

  • … the community is so f***** great.
  • … you get a sense of achievement pretty soon (I mean, an output of 'hello world' never really made me proud, but creating 50 VMs from a template with a one-liner in two minutes is a really cool thing).
  • … the community is so f***** great ;-).
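The 50-VMs claim from the list above is no exaggeration. A sketch of such a one-liner could look like this (the template name, host and naming scheme are my own assumptions, and it needs a connected PowerCLI session):

```powershell
# Hypothetical one-liner: clone 50 VMs from a template, asynchronously.
1..50 | ForEach-Object { New-VM -Name ("web{0:D2}" -f $_) -Template "Template-Web" -VMHost "esx01.lab.local" -RunAsync }
```

-RunAsync hands each clone task over to vCenter without waiting for it to finish, which is what makes the two-minute figure realistic.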

Automation is the future of IT infrastructure, especially now that we are heading step by step towards the software-defined datacenter. Each component in the infrastructure is opening itself up via an API that we can run our code against. So what is the next step for me personally? Evolving from scripting to orchestrating.

During VMworld, the session MGT2525 'Chasing the White Rabbit all the Way to Wonderland: Extending vCloud Automation Center Requests with vCenter Orchestrator' had a great take-away about which order of automation is best.

Policy-driven things (think of vVols/vSAN) are probably not what I will implement in the near future (I'm not a developer ……. yet :P). Anyway, I might be able to get much deeper into the whole orchestration (vCenter/vRealize Orchestrator) topic.

Working a lot in the automation field with scripting languages like PowerShell, I have realized the benefits and weaknesses of purely scripted solutions. If you want an automation engine built in a scripting language (e.g. PowerShell/PowerCLI), it works pretty well. But among other things, you have to reinvent the wheel all the time. How can an object within a workflow be stored persistently? How can a workflow be paused and resumed? How do you extend functionality via standardized plug-ins? How can I scale such an automation engine up? A lot of things come up during development that have to be dealt with. These topics are the reason I believe professional orchestration solutions are a much better choice. I will try to find out and be more specific within the next months ;-).
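To illustrate the reinvent-the-wheel point: even something as basic as persisting and resuming workflow state has to be hand-rolled in plain PowerShell. A minimal sketch (the file path and the shape of the state object are my own assumptions) could look like this:

```powershell
# Hand-rolled checkpointing for a scripted 'workflow':
# resume from the last saved state if the script was interrupted.
$stateFile = "$env:TEMP\workflow-state.xml"

$state = if (Test-Path $stateFile) {
    Import-Clixml $stateFile            # resume from the last checkpoint
} else {
    [pscustomobject]@{ Step = 0; Results = @() }
}

while ($state.Step -lt 5) {
    $state.Step++
    # ... the real work for step $($state.Step) would happen here ...
    $state.Results += "result-of-step-$($state.Step)"
    $state | Export-Clixml $stateFile   # checkpoint after every step
}
```

An orchestrator like vCenter/vRealize Orchestrator gives you exactly this (persistent workflow state, pause/resume, standardized plug-ins, scaling) out of the box, which is the point of the paragraph above.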

So how do we start learning this stuff? Having some chats after a #vBrownbag session with Joerg Lew (@joerglew, who was introduced to me and is obviously the orchestration guru), I got some good advice on how to start when I want to learn about vRealize/vCenter Orchestrator.

That's exactly what I am going to do in the coming months … If 2014 was my year of PowerCLI, 2015 will be my year of orchestration.

So you want to see how my learning goes? I will try to keep you informed right here on this blog … stay tuned …

(And if I have not made any progress on automation at the end of next year…feel free to kick my ass if you see me 😉 )

IMO: #VMworld 2014 recap VMware vCloud Air (part 3)

This is part 3 (and the first as a VCAP-DCD 🙂 ) of my IMO #VMworld wrap-up. Read about my thoughts on a new product called vCloud Air.

IMO: #VMworld 2014 recap on VMware NSX (part 1)

IMO: #VMworld 2014 recap VMware EVO:RAIL (part 2)

IMO: #VMworld 2014 recap VMware vCloud Air (part 3)

IMO: #VMworld 2014 recap vSAN and vVol (part 4)

IMO: #VMworld 2014 recap Automation & Orchestration (part 5)


A big part of the keynote during VMworld was about vCloud Air and the progress VMware is making in creating new datacenters all over the world to offer public cloud services. From a top-level approach, the idea of having a hybrid cloud is a really good one: IT services need to be delivered more quickly, with changing workloads and so on. Instead of increasing capex and risking investment in unused resources, we can transform capex into opex by ordering infrastructure resources on demand from a public cloud provider and paying as we go. A great thing from a management perspective, and looking at the use cases it logically makes sense to transform into a hybrid solution in the long term.

So what do we need for a hybrid solution? An integration between our local datacenter and a public provider. Using the same technologies within both datacenters, ours and vCloud Air, we are able to connect them seamlessly. vCloud Automation Center … pardon, I meant vRealize Automation, NSX and long-distance vMotion are hybrid-cloud enablers from a technological perspective. With all those technologies the hybrid cloud is starting to become reality (honestly, how many of you have heard of or been involved in a fully functional hybrid-cloud integration project?)

Buuuuuuut IMO I honestly doubt that VMware's vision will be successful in the short/mid term here in Germany (maybe in Europe as a whole). With all the potential $$$ signs in their eyes (I know, I know … server virtualization is becoming a commodity as well, and VMware needs to find and grow into new markets to stay successful), there is one thing that was forgotten, or at least not communicated well (honestly, not communicated at all during VMworld).

What about data privacy?

And I am not talking about securing the borders of your datacenter against unauthorized access. I am talking about authorized access by US organizations like the NSA and so on.

As a trainer and consultant, I think I have a good feeling for the mood 'on the streets'. You get to know and discuss a lot with many people from different companies and backgrounds, with all kinds of use cases. The big reason for not using public services is the following: the data and information we have in our datacenter are our capital. They are the driver and enabler of our business, and we need to protect them.

I don't want to get into any conspiracy theories, but organizations like the NSA have a reputation of also being involved in economic espionage. Whether this is a fact or not, it is a general belief in IT organizations nowadays. So the general opinion is: "We are not giving another organization a key directly to our valuable data."

Sanjay Poonen mentioned during the keynote that VMware is proud to be building a datacenter for vCloud Air in Germany in compliance with our (Germany's) pretty strict data privacy rules. This is only a valid argument as long as our data privacy rules cannot be overridden by certain US rules and laws.

Microsoft is currently fighting in US courts to make sure that data physically located in a non-US country MUST NOT be handed over to specific US organizations.

The result of this process will accelerate or slow down (I deliberately don't say enable or disable; the transformation to public services will happen anyway) the adoption of public/hybrid cloud solutions in our region.

IMO it's a funny thing that Microsoft (as a competitor of VMware) will be partly responsible for the success of vCloud Air (of course Microsoft is doing this to enable/accelerate its own Azure business). What I would have liked is a statement by VMware about this specific situation and how they are going to deal with it. Not talking about things like data privacy is something that won't work in a 'conservative' market like Germany. And since I heard sooo many German-speaking guys at VMworld, I can't imagine that this market can be ignored by VMware.

Microsoft vs. US law:

vCloud Air overview:

vCloud Air elearning:

IMO: #VMworld 2014 recap VMware EVO:RAIL (part 2)

This is part 2 of my IMO #VMworld wrap-up. Read about my thoughts on a new product called EVO:RAIL.

IMO: #VMworld 2014 recap on VMware NSX (part 1)

IMO: #VMworld 2014 recap VMware EVO:RAIL (part 2)

IMO: #VMworld 2014 recap VMware vCloud Air (part 3)

IMO: #VMworld 2014 recap vSAN and vVol (part 4)

IMO: #VMworld 2014 recap Automation & Orchestration (part 5)



EVO:RAIL is a pretty cool so-called hyper-converged solution provided by VMware and partner vendors (Dell, EMC, Fujitsu, Inspur, net one, Supermicro, HP, Hitachi). Summarized: EVO:RAIL delivers a complete vSphere suite (including vCenter, vSAN, Enterprise Plus and the vRealize suite) bundled with four computing nodes, which from a technical perspective is ready for production in less than 30 minutes (the record at the EVO:RAIL challenge was under 16 minutes).

Such a solution is something I thought about a long time ago (it was one of the outcomes of my master's thesis on the software-defined datacenter in small and medium-sized enterprises), especially for small environments where the admins want to focus on operating the running systems (or better: delivering an IT service) rather than implementing, installing and configuring basic infrastructure. (Yeah, I know this is going to be a shift for me as a trainer who delivers a lot of install, configure, manage classes and did installations as part of my consulting/implementation jobs.)

IMO VMware made a very smart move by not taking on the role of a hardware vendor and instead cooperating with existing, well-known partners to deliver the solution specified and managed via VMware's EVO:RAIL engine. The established sales channels to customers and companies can be used. Especially small and medium-sized businesses will be attracted by this solution, as long as the pricing/capex is affordable for them. From a business perspective this means the following: VMware delivers the software (vSphere, vRealize and the EVO engine) and the vendor delivers the hardware and support. The business-management (#beersherpa) guy inside me says: perfect. Everyone sticks to their core competencies, and together they bundle their strengths to bring a much better solution to the customer (one contact point for support, a completely integrated and supported virtualization stack, shortest implementation times).

I believe that for the big x86 vendors this solution is just the next step in becoming a commodity. Isn't the whole software-defined datacenter idea about decoupling software from hardware, creating a smart, VMware-controlled control plane and a commodity data plane that is responsible for the concrete data processing based on the control plane logic? We don't, or soon won't, care anymore whether the hardware (switches, storage, computing nodes) is HP, Cisco, Juniper, IBM, etc. We will care about the control plane.

With EVO:RAIL it will get even tougher for the hardware vendors to differentiate themselves from each other, and in the end the competition can only be won on price (in the small/medium-sized market). I want to add that I missed the chance in the EVO:RAIL demo room to discuss this topic from a vendor perspective (damn you, VEEAM party 😉 ), so if you have done anything similar or have your own opinions, please comment on this post or contact me directly.

The use cases for EVO:RAIL can vary a lot (management clusters, DMZ, VDI, small production environments), and I believe this product is a pretty good solution that will be driven from a bottom-up perspective within companies (I am referring to my bottom-up/top-down approach to bringing innovation into companies in the NSX post (link)). Administrators will love reducing the setup time of a complete vSphere environment.

Especially for VDI solutions I can imagine a brilliant use case for EVO:RAIL, which means: next step, VMware, please bundle the VMware Horizon View licence into EVO:RAIL and integrate the View setup into the EVO engine :-).

Useful links around EVO:RAIL:

IMO: #VMworld 2014 recap on VMware NSX (part 1)

It has been a really long time since I put any content on this blog, but the number of discussions during VMworld Europe this year has led to a situation where I somehow need to get my thoughts and opinions (IMO) on all these new trending VMware topics out. Feel free to confront me with my statements; I would love to defend or readjust them. (That's how knowledge expansion works, doesn't it?!)

While writing the several parts, I realized there was suddenly much more content than I had in mind in the first place, so I separated the article into several parts. All articles reflect my personal opinion (IMO) and differ a little from the other posts we have published so far on

IMO: #VMworld 2014 recap on VMware NSX (part 1)

IMO: #VMworld 2014 recap VMware EVO:RAIL (part 2)

IMO: #VMworld 2014 recap VMware vCloud Air (part 3)

IMO: #VMworld 2014 recap vSAN and vVol (part 4)

IMO: #VMworld 2014 recap Automation & Orchestration (part 5)

VMware NSX

NSX is the newest technology by VMware, trying to enable the software-defined network (and be a part of the software-defined datacenter). I have put a lot of effort into NSX over the last days and must admit: this is a really cool concept and solution. We create a logical switch within and across all of our datacenters. You can define rule-based networks (who can communicate with whom: DMZ, multi-tier applications) and have it integrated inside the VMkernel (e.g. IP traffic routed inside the ESXi host instead of touching the physical devices).

Pat Gelsinger described it very well during his keynote: "The datacenter today is like a hard-boiled egg: hard on the outside, soft on the inside." NSX will make it possible to deliver security mechanisms within the virtualized datacenter as well, integrated into the VMkernel of ESXi.

NSX will offer us great flexibility, managed at a central point (NSX Manager) via a UI or API that can be used by orchestration engines like vCO.

From a technological perspective this is definitely awesome, but will we see NSX develop the same way the x86 virtualization products did? IMO, not in the short to mid term.

The advantages of NSX come into play in very large environments with high flexibility and security requirements (financial services, IT providers, etc.), which means I don't see a mass market out there in the next few years. This does not mean it won't be a financial benefit for VMware (good things never come for free), but only a few of us will be confronted with an NSX installation or with customers who are going to implement it.

The second thing I see is that those large enterprises will face organizational challenges when implementing NSX. From my experience and the chats I had during VMworld, large enterprises typically have separate organizational units for networking and virtualization. Technologies like NSX will have a huge impact on the network team, and from my personal feeling (I know a lot of network guys and have had chats around those topics) I doubt that the network guys want this product out of their own conviction.

This leads to the fact that with the implementation of a software-defined network, an organizational transformation within the companies will be mandatory. The network and virtualization teams (and of course storage and programmers as well) would need to be reorganized into a single … (yes, I hate buzzwords, but I think this describes it best) software-defined datacenter unit.

This means that the (software-defined) evolution inside the datacenter needs to be driven top-down by management, which might lead to high resistance in the current organization and time-intensive process changes (network processes have matured a lot over the years). VMware will need to convince its customers on a much higher (organizational) level than for vSAN/EVO:RAIL, which are IMO products wanted by the admins.

That does not mean I don't believe in NSX. I believe this is a great technology, but we should be aware that the transformation to a software-defined network is not only a technical thing we implement and that will automatically be adopted by the network admins (which would be something like bottom-up innovation). Adoption on both the technical and the organizational level will be crucial for the success of NSX.

I wish VMware good luck on this task, since I would love to get involved in some NSX projects in 2015.

Useful links around NSX:

© 2020 v(e)Xpertise
