There are some lesser-known features enabled as part of vSphere 6’s VM hardware version 11 that I haven’t seen many people talking about, so I thought I would share some details.
Introduced with vSphere 6 in VM hardware version 11 (HW11) is a new USB controller that is properly compatible with USB 3.0. I say “properly” because vSphere 5.5 did have the xHCI virtual controller, but it wasn’t enabled by default (and was therefore unsupported). With vSphere 6, the included xHCI controller has been updated from v0.96 to v1.0 and is available for use with VMs that are at HW11.
By default, HW11 VMs are configured with the new xHCI controller. You still have the ability to add legacy USB controllers to virtual machines, and they can happily co-exist with the new xHCI controller, but since the new controller is backwards compatible with USB 2.0 I don’t see much of a use case for this. The vSphere 6 xHCI controller supports up to 8 devices or “ports”: four are reserved for USB 3.0 and four for USB 2.0. If you require more USB ports, you can add multiple USB controllers concurrently. The vSphere 6 Configuration Maximums advises the following:
“USB 1.x, 2.x and 3.x supported. One USB host controller of each version 1.x, 2.x, or 3.x can be added at the same time.”
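For the curious, the controllers are also visible in the VM’s .vmx configuration file. The sketch below shows the entries as I understand them, with the new xHCI controller and an optional legacy controller side by side; treat the exact key names as illustrative and verify against a real HW11 VM before relying on them:

```
virtualHW.version = "11"
usb_xhci.present = "TRUE"
usb.present = "TRUE"
```

The first line is the hardware version, the second enables the xHCI (USB 3.0) controller, and the third adds a legacy USB controller alongside it.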
The VMXNET3 driver for Windows-based OSs (Win8/2012 and later) now supports Large Receive Offload (LRO). This is a technique that reduces the CPU overhead of processing many small incoming network packets by coalescing them into fewer, larger packets before they are handed up the network stack. Microsoft calls this Receive Segment Coalescing (RSC), but the technology is the same. RSC is enabled by default within Windows, but you can change the setting with some simple PowerShell commands:
Set-NetOffloadGlobalSetting -ReceiveSegmentCoalescing Disabled
Set-NetOffloadGlobalSetting -ReceiveSegmentCoalescing Enabled
You need to keep this in mind when building your new Windows VMs. LRO / RSC can have a small impact on applications that require or depend on network traffic hitting the VM as a constant stream of small packets. An example of this might be a trading platform, where milliseconds and microseconds count.
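The global setting above affects all adapters at once. If you’d rather check the current state or toggle RSC on a single NIC, Windows (8/2012 and later) also ships per-adapter cmdlets. A quick sketch; the adapter name “Ethernet0” is just an example, so substitute your own:

```powershell
# Show the global RSC setting
Get-NetOffloadGlobalSetting

# Show per-adapter RSC state (IPv4 and IPv6 are tracked separately)
Get-NetAdapterRsc

# Disable or re-enable RSC on one adapter ("Ethernet0" is an example name)
Disable-NetAdapterRsc -Name "Ethernet0"
Enable-NetAdapterRsc -Name "Ethernet0"
```

Run these in an elevated PowerShell session; the per-adapter state only matters while the global setting is enabled.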
The VMCI (Virtual Machine Communication Interface) allows VMs to communicate with each other or with the host without traversing the network. If you’ve seen VMware shared folders in VMware Fusion or Workstation, then you’ve seen this in action. NSX also uses VMCI to update the configuration of control VMs and Edge devices. Why would anyone want to use this? Well, if you want to get data in or out of a VM without the overhead of a network stack, VMCI can achieve nearly 10 Gbps!
When using VMCI (which is not enabled by default) you may notice a virtual hardware device labeled “VMCI device” within the VM’s hardware settings. With HW11 there’s a new “filter” option within this device, which allows you to create firewall rules on a per-VM basis. By default, VMCI allows all traffic, but you can add rules to restrict how you want VMCI traffic to flow. The VM’s .vmx configuration file holds not only the VMCI PCI device information but also the filter configuration, which allows the configuration to move with the VM as it migrates from host to host. You can add, delete, edit and re-order the VMCI rules, and they are applied top-down.
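To make the portability point concrete, here’s a rough sketch of what this looks like in the .vmx file. The vmci0.present key is the real setting that enables the device; the filter lines below are hypothetical placeholders (I haven’t confirmed the exact key names vSphere 6 writes), included only to illustrate that an ordered set of per-VM rules travels with the VM in its configuration file:

```
vmci0.present = "TRUE"
vmci0.filter0.rule = "..."
vmci0.filter1.rule = "..."
```

Because the rules live in the .vmx rather than on the host, a vMotion carries the filter policy along with the VM, and the numbered entries reflect the top-down order in which rules are evaluated.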