
Modifying the Host OS

Just as expensive virtualization overheads can sometimes be removed by modifying the guest rather than the VMM, some bumps in the hosted architecture's handling of networking are best smoothed by modifying the host. One promising change is to expand the ways in which the Linux networking stack allocates and handles sk_buffs. Each time the VMApp sends a packet via the VMNet driver, the driver allocates an sk_buff and copies the packet data from the VMApp into it. The Linux kernel profiler shows that this copy accounts for a very significant portion of the time spent in the host kernel while running the network transmit workload.
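To make the cost concrete, the following is a minimal sketch of the copy path described above, written against standard Linux networking interfaces. The function and variable names (vmnet_xmit_from_vmapp and so on) are illustrative and do not come from the actual VMNet driver source; the point is the dev_alloc_skb() allocation followed by a copy_from_user() of the entire packet.

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/etherdevice.h>
#include <asm/uaccess.h>

/* Illustrative sketch only; names are hypothetical, not VMware source. */
static int vmnet_xmit_from_vmapp(struct net_device *dev,
                                 const char *ubuf, size_t len)
{
    struct sk_buff *skb;

    /* Allocate an sk_buff whose data area comes from kmalloc(). */
    skb = dev_alloc_skb(len + 2);
    if (skb == NULL)
        return -ENOMEM;
    skb_reserve(skb, 2);                 /* align the IP header */

    /* The copy that dominates the host-kernel profile: the packet
     * is copied out of the VMApp's user-space buffer. */
    if (copy_from_user(skb_put(skb, len), ubuf, len)) {
        kfree_skb(skb);
        return -EFAULT;
    }

    skb->dev = dev;
    skb->protocol = eth_type_trans(skb, dev);

    netif_rx(skb);                       /* hand off to the host stack */
    return 0;
}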

In Linux, sk_buff creation uses kmalloc() to allocate the data area. If a driver could instead supply its own data region, packets could be transmitted via the VMNet driver without the copy. The driver would then be responsible for ensuring that its sk_buffs are neither leaked nor freed too early. For the VMNet driver, however, this responsibility is easy to meet: the backing for the sk_buff data area would come from the memory representing the virtual machine's physical memory. That memory is at least as persistent as the virtual machine itself, and any packets transmitted via a VMNet are only of interest as long as their corresponding virtual machine exists.
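A minimal sketch of the zero-copy transmit this change would permit appears below. It assumes a hypothetical helper, alloc_skb_external(), that a modified host kernel would export: it builds an sk_buff whose data area points at caller-owned memory and invokes a destructor once the stack has released the packet. Neither the helper nor the surrounding names exist in stock Linux; they are stand-ins for the interface change being proposed.

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/etherdevice.h>

/* Hypothetical interface a modified host kernel would provide: build an
 * sk_buff around caller-owned memory instead of kmalloc()ing a data area,
 * and call 'done' once the stack has released the packet. */
extern struct sk_buff *alloc_skb_external(void *data, unsigned int len,
                                          void (*done)(struct sk_buff *));

static void vmnet_pkt_done(struct sk_buff *skb)
{
    /* The stack is finished with the packet.  Nothing is freed here:
     * the data area lives in the memory backing the virtual machine's
     * physical memory, which outlives any individual packet. */
}

/* guest_mem points into the VM's physical-memory backing store. */
static int vmnet_xmit_zero_copy(struct net_device *dev,
                                void *guest_mem, unsigned int len)
{
    struct sk_buff *skb;

    /* No copy: the sk_buff simply wraps bytes already in VM memory. */
    skb = alloc_skb_external(guest_mem, len, vmnet_pkt_done);
    if (skb == NULL)
        return -ENOMEM;

    skb->dev = dev;
    skb->protocol = eth_type_trans(skb, dev);

    netif_rx(skb);
    return 0;
}

The destructor is the piece that discharges the driver's bookkeeping obligation noted above: it tells the VMNet driver exactly when the host stack no longer references a region of guest memory, so the corresponding guest buffer can be reused without being freed prematurely or leaked.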

The primary disadvantage of modifying the host OS is that it requires the cooperation of OS vendors or, in the case of Linux, the active support of the Linux kernel maintainers. Without such changes, the optimization will not be available on off-the-shelf host OSes.

