Taghomelab

Enhance your #homelab by re-enabling transparent page sharing

“Sharing is caring!”

I decided to quickly write down a few last words on a widely discussed topic before my 4-week journey to Brazil begins.

Transparent page sharing (TPS)

The concept of transparent page sharing has been widely explained and its impact discussed over the years (whitepaper). Short version: multiple identical virtual memory pages point to one and the same page within the host memory.

The general behaviour has changed multiple times with enhancements in AMD’s and Intel’s newer CPU generations, where large pages are used intensively (2 MB instead of 4 KB regions –> increasing the chance of a TLB hit). As a result, the benefits of TPS couldn’t be used until the ESXi host came under memory pressure and changed its memory state to break up with his girlfri… I mean, to break the large pages into small ones.
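As a toy illustration of the idea (not VMware’s actual implementation, which hashes page contents and bit-compares candidates before collapsing them copy-on-write), content-based page sharing boils down to backing identical guest pages with a single host page:

```python
def share_pages(guest_pages):
    """Toy model of transparent page sharing: identical guest pages
    end up backed by one shared host page."""
    host_pages = []   # unique page contents, i.e. "host memory"
    index = {}        # page content -> host page number
    mapping = []      # per guest page: which host page backs it
    for page in guest_pages:
        if page not in index:
            # first time we see this content: allocate a new host page
            index[page] = len(host_pages)
            host_pages.append(page)
        mapping.append(index[page])
    return mapping, host_pages

# Three guest pages, two of them identical: only two host pages are needed.
mapping, host = share_pages([b"\x00" * 4096, b"\x00" * 4096, b"OS code"])
print(len(host))  # 2
```

This also shows why large pages hurt sharing: the chance that two 2 MB pages are bit-identical is far smaller than for two 4 KB pages.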

Last year some security concerns came up around this feature, and VMware proactively deactivated TPS in all newer ESXi versions and updates (KB).

I don’t want to talk about the impacts and design decisions for production systems, but for homelab environments instead. It is nearly a physical constant that a homelab always lacks memory.

By deactivating large pages and the new salting security mechanism, you can save a nice and predictable amount of consumed/host-backed memory.

And especially in homelab environments, the risk of lower performance (caused by higher memory access times due to a higher probability of TLB misses) and the security concerns might be acceptable.

What to do?

Change the following advanced system settings on each ESXi host:

Mem.AllocGuestLargePage=0

advancedsetting1

Mem.ShareForceSalting = 0

advancedsetting2

and wait a couple of minutes until TPS kicks in.
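If you prefer the ESXi shell over the vSphere client, the same two settings can be changed with esxcli (a sketch; run it on each host):

```shell
# Disable guest large pages so small (4 KB) pages are used and can be shared
esxcli system settings advanced set -o /Mem/AllocGuestLargePage -i 0

# Disable inter-VM salting so pages can again be shared across all VMs
esxcli system settings advanced set -o /Mem/ShareForceSalting -i 0

# Verify the current values
esxcli system settings advanced list -o /Mem/AllocGuestLargePage
esxcli system settings advanced list -o /Mem/ShareForceSalting
```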

Effect on my homelab

60 minutes after the settings were applied, the amount of consumed RAM in my cluster (a 96 GB setup) decreased from 55 GB to 44.8 GB, which means around 18.5% of the consumed memory was saved in my VM constellation (multiple identical Windows 2012 R2 VMs and nested ESXi hosts, which have a high ‘shared’ value).

vCenter_TPS_consumed_memory
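A quick back-of-the-envelope check of those numbers:

```python
# Consumed memory before and after re-enabling TPS, from the cluster view above
consumed_before_gb = 55.0
consumed_after_gb = 44.8

saved_gb = consumed_before_gb - consumed_after_gb
saved_pct = saved_gb / consumed_before_gb * 100

print(f"{saved_gb:.1f} GB saved ({saved_pct:.1f} %)")  # 10.2 GB saved (18.5 %)
```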

So if you need a very quick way to work around the memory pressure in your homelab and you can live with the potential performance loss –> re-activate transparent page sharing as a first step to optimize your environment. Sure, you could also skip the deactivation of large pages and hope that, during a change of the memory state, the large-page breakup process is quick enough so that TPS works again. But I preferred a permanent and auditable approach of monitoring the amount of shared memory in my lab.

The 2nd step to optimize memory consumption –> size your VMs right… but this is nothing I will tell you about, since my plane is about to depart… my substitute vRealize Operations will do the job 😉

vLenzker #Homelab: quiet, small, scalable and powerful(!?)

A few months ago a simple thought came into my mind, and it didn’t leave for several months:

‘With new and cool software like vSAN, PernixData FVP, vROps, vRAC, vSphere 6.0, … you need a new #Homelab to test this stuff’

Yeah… I somehow felt inception-ized ;-).

At the end of last year I had a phone call with Manfred Hofer from vbrain.info about his great #homelab posts and design decisions on his blog. Even though I did not choose one of his proposed designs, I really want to thank Fred for his efforts and great summary.

Since I was asked by multiple people to document my new hardware, I quickly summarized it here:

I had the following requirements for my #homelab:

  • min. of 3 nodes (for getting vSAN up and running)
  • min. of 96GB RAM
  • low-power
  • low-noise (currently it’s standing close to my office-desk)
  • small
  • min. 2 NICs per node

I didn’t really care about ECC support, IPMI, etc. Nothing productive will run there… I just need suitable performance and capacity (ca. 10–16 cores / 2000–3000 4K 70/30 random IOPS / 2–4 TB disk) to do some quick’n’dirty testing and customer environment simulations. Intel NUCs would have been a perfect choice, but their lack of 32 GB RAM support disqualified them ;-/

In the end I decided to go for the following setup.

Computing

  1. Shuttle SH87R6
  2. Intel Core i5-4440S
  3. 4×8 GB DDR3 memory
  4. Intel Pro PT 1000 Dual
  5. 1x Crucial CT256MX
  6. 1x Crucial CT512MX

Network

  1. Cisco SG-300 – 20 ports
  2. Huawei WS311 Wifi-bridge

Storage

  1. Synology DS414 Slim
  2. 1x Crucial CT512MX
  3. Western Digital RED 1 TB

Currently I have vSphere 6.0 running with a vSAN 6.0 datastore. I have also decided to set up a dedicated NFS share on the Synology for maintenance/testing reasons, so I can easily demote/recreate the vSAN datastore. Having nearly everything on SSD gives me a performance that is suitable for me and lets me work efficiently with new products (even if the local SATA controllers are limited in their capabilities, but hey… it’s non-productive ;-).
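Mounting such an NFS share on each host is a one-liner with esxcli; a sketch with placeholder values (the Synology IP, export path and datastore label below are assumptions, not my actual setup):

```shell
# Mount the Synology NFS export as a datastore (placeholder address/path/label)
esxcli storage nfs add -H 192.168.1.50 -s /volume1/nfs-lab -v synology-nfs

# List mounted NFS datastores to verify
esxcli storage nfs list
```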

After optimizing some of my cabling and replacing the fan of the Shuttle barebone, I really like the solution on my desk. It’s powerful, small and scalable enough for the next things I am planning to do. Even if my hardware requirements increase, I can scale up the solution quickly and easily.

homelab_lenzker

So far I have not been able to get the embedded Realtek NIC up and running with vSphere 6.0. But to be honest, I haven’t spent much time on it ;-). Once I have an update here, I will let you know.

© 2017 v(e)Xpertise
