A very interesting chestnut came across my desk this week in relation to the above.
So from the top:
ESXi 5.5 cluster (running a mix of U1 and U2 – I know, bad boy, hosts should all be on the same revision)
Citrix XenApp 6.5 (HRP06 + recommended patches) published desktops
Standard spec: 32GB RAM (8GB PVS write cache in RAM with overflow to disk)
Got various users complaining of intermittent performance – draggy at times, then quick – with all the usual suspects like network contention/their WiFi setup etc. in the frame, but EdgeSight/HDX Monitor/HDX Watcher checked them out: they were coming in at between 1–80ms latency (most at 1ms in fact!) with no bandwidth issues.
Checking a random bunch of servers against them showed little or nothing other than some spiky CPU. Memory was coming in at between 25–50% (mainly chewed up by write cache).
Hypervisor memory – as in we were *slightly* over-committed on memory – probably to a margin of 10GB per host, so roughly 265GB consumed against 256GB installed.
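To put that margin in context, a back-of-the-envelope sketch using the figures above (plain arithmetic, nothing here is a vSphere API):

```python
# Rough overcommit arithmetic from the host figures above (illustrative only).
installed_gb = 256  # physical RAM per host
consumed_gb = 265   # total VM consumed memory per host

overcommit_gb = consumed_gb - installed_gb
overcommit_pct = overcommit_gb / installed_gb * 100

print(f"Over-committed by {overcommit_gb}GB ({overcommit_pct:.1f}%)")
```

So only a few percent over – annoying, not catastrophic.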
Could see in the resource tab that swapping, and at times even compression, was going on – for more detail on ESXi memory management techniques see: https://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-55-resource-management-guide.pdf
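For reference, ESXi works through its reclamation techniques in order as host free memory drops; a minimal sketch of the 5.x state thresholds as I read them from the guide linked above (the 6/4/2/1% figures are approximate – check against your build):

```python
def mem_state(free_pct):
    """Map host free-memory percentage to an ESXi 5.x reclamation state.

    Thresholds and actions are a sketch based on the vSphere 5.5 resource
    management guide -- approximate, not an official API.
    """
    if free_pct >= 6:
        return ("high", "no reclamation")
    if free_pct >= 4:
        return ("soft", "ballooning")
    if free_pct >= 2:
        return ("hard", "ballooning + compression + swapping")
    return ("low", "swapping continues; VM allocations may be blocked")

print(mem_state(3))  # a host in the 'hard' state, as we were seeing
```

Seeing swapping and compression therefore means hosts were dipping into the hard state, not just a cosmetic overcommit.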
Sure, VMware wouldn’t be the company it is today if it couldn’t deal with a little over-commitment – and remember, this was only VM-on-host memory consumption, not actual memory being throttled inside each VM.
So, aside from the different build numbers within the cluster, the other things that stood out were:
i) Reservations – a number of VMs had memory reservations set, which was a really bad idea here – set back to default.
ii) VMware Tools – some of the PVS images the VMs use are running out-of-date VMware Tools, which has a knock-on effect on the balloon driver. We noticed some very large memory consumption within VMs made up entirely of balloon-driver lock-up (memory claimed during the last vMotion not being released).
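A quick way to spot that balloon-driver lock-up pattern is to compare each VM's ballooned (vmmemctl) memory against its configured memory. A sketch against made-up stats – the dict shape, VM names, and 10% threshold are my own illustration, not a vSphere API:

```python
def flag_ballooned(vms, threshold_pct=10):
    """Return names of VMs whose ballooned memory exceeds threshold_pct of
    configured memory -- candidates for a VMware Tools refresh in the PVS
    image. Input shape is illustrative: {name: {configured_mb, ballooned_mb}}.
    """
    flagged = []
    for name, stats in vms.items():
        if stats["ballooned_mb"] > stats["configured_mb"] * threshold_pct / 100:
            flagged.append(name)
    return flagged

# Example: one healthy VM, one stuck holding a large balloon post-vMotion.
sample = {
    "XA65-01": {"configured_mb": 32768, "ballooned_mb": 0},
    "XA65-02": {"configured_mb": 32768, "ballooned_mb": 8192},
}
print(flag_ballooned(sample))  # ['XA65-02']
```

In practice you'd feed this from whatever stats collection you already have; the point is simply that a big, persistent balloon on an otherwise healthy VM is the tell.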