
Bypassing the Host OS

As long as actual transmits to and receives from the physical network require a world switch back to the host operating system and the VMApp, an unavoidable latency and CPU overhead will remain. Additionally, the VMM will have to take native IRQs while running, world switch back to the host to deliver them, and wait for incoming packets to work their way through the host and the VMApp before they reach the guest. This fundamentally limits the I/O performance of a hosted virtual machine architecture. To truly maximize I/O bandwidth, the VMM must drive the I/O device directly. The guest OS could potentially drive the device directly as well, but doing so requires either hardware support or memory access restrictions to preserve safety and isolation.
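
To make the cost difference concrete, the following is a minimal C sketch, not code from VMware Workstation, that contrasts the per-packet transmit path of the hosted architecture with a VMM that drives the NIC itself. Every function name is a hypothetical stub that only prints the step it stands for.

#include <stdio.h>

/* Hypothetical stubs standing in for the real mechanisms. */
static void world_switch_to_host(void) { puts("  world switch -> host"); }
static void world_switch_to_vmm(void)  { puts("  world switch -> VMM"); }
static void vmapp_write_vmnet(void)    { puts("  VMApp: write() frame to VMNet"); }
static void host_nic_send(void)        { puts("  host NIC driver: send frame"); }
static void vmm_nic_send(void)         { puts("  VMM NIC driver: send frame"); }

/* Hosted architecture: every physical transmit crosses worlds twice. */
static void hosted_transmit(void)
{
    world_switch_to_host();   /* leave the VMM world                 */
    vmapp_write_vmnet();      /* VMApp forwards the frame to VMNet   */
    host_nic_send();          /* host OS driver reaches the hardware */
    world_switch_to_vmm();    /* return and resume the guest         */
}

/* VMM with its own driver: no world switch on the data path. */
static void direct_transmit(void)
{
    vmm_nic_send();
}

int main(void)
{
    puts("hosted path:");
    hosted_transmit();
    puts("direct path:");
    direct_transmit();
    return 0;
}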

With its own device drivers, the VMM can send and receive packets without any mandatory world switches and can relay receive IRQs to the guest almost immediately. There would also be no need for a separate VMNet driver. However, adding device drivers to the VMM represents a major trade-off. Recall that VMware Workstation supports a wide variety of hardware devices because of the hosted architecture: it automatically gains support for new I/O devices, and bug fixes in existing drivers, as soon as the host OS does. A VMM with its own NIC drivers would require an ongoing investment of resources in developing, testing, and updating its hardware support.
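
The receive side is where the immediacy matters most. The fragment below is an illustrative sketch, assuming the VMM owns the physical NIC driver: its receive interrupt handler can copy the frame into the guest's virtual NIC buffer and raise the virtual IRQ at once, with no world switch back to the host. The types and names are invented for this example.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define GUEST_RX_BUF_SIZE 2048

struct guest_vnic {
    uint8_t rx_buf[GUEST_RX_BUF_SIZE];  /* guest-visible receive buffer  */
    size_t  rx_len;
    int     irq_pending;                /* state of the virtual IRQ line */
};

/* Called from the VMM's physical NIC interrupt handler. */
static void vmm_nic_rx_isr(struct guest_vnic *vnic,
                           const uint8_t *frame, size_t len)
{
    if (len > GUEST_RX_BUF_SIZE)
        return;                         /* drop oversized frames             */
    memcpy(vnic->rx_buf, frame, len);   /* copy frame into guest memory      */
    vnic->rx_len = len;
    vnic->irq_pending = 1;              /* raise the virtual IRQ immediately */
}

int main(void)
{
    struct guest_vnic vnic = {0};
    const uint8_t frame[] = { 0xff, 0xff, 0xff };   /* dummy frame bytes */

    vmm_nic_rx_isr(&vnic, frame, sizeof frame);
    printf("guest sees %zu bytes, irq_pending=%d\n",
           vnic.rx_len, vnic.irq_pending);
    return 0;
}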

As described, each VMM is associated with a single virtual machine. In order to share an I/O device among several virtual machines, the VMM would have to be extended to include a global component that recognizes the individual virtual machines and their VMMs. The global component would effectively be a kernel that is specifically designed for managing VMM worlds. In addition to driving the device, the global component would have to provide software to multiplex more than one VMM onto a single I/O device. This technique is used in VMware ESX Server(TM), where achieving native I/O performance for high-speed devices is an important requirement.
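
One part of that multiplexing is demultiplexing inbound traffic among the virtual machines. The following sketch is not ESX Server code; it merely illustrates, under invented structures and names, how a global component owning the physical NIC could pick the destination VMM world for each received frame by matching the frame's destination MAC address against the registered virtual NICs.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_VMMS 8

struct vmm_world {
    uint8_t vmac[6];                    /* MAC address of this VM's virtual NIC */
    int     in_use;
};

static struct vmm_world worlds[MAX_VMMS];

/* Register a VMM world and the MAC address of its virtual NIC. */
static int register_vmm(const uint8_t vmac[6])
{
    for (int i = 0; i < MAX_VMMS; i++) {
        if (!worlds[i].in_use) {
            memcpy(worlds[i].vmac, vmac, 6);
            worlds[i].in_use = 1;
            return i;
        }
    }
    return -1;                          /* no free slot */
}

/* Called for each inbound frame: choose the VMM whose virtual MAC
 * matches the frame's destination address. */
static int demux_frame(const uint8_t *frame, size_t len)
{
    if (len < 6)
        return -1;
    for (int i = 0; i < MAX_VMMS; i++)
        if (worlds[i].in_use && memcmp(frame, worlds[i].vmac, 6) == 0)
            return i;                   /* deliver to this VMM world      */
    return -1;                          /* not for any VM: drop or flood  */
}

int main(void)
{
    const uint8_t mac_a[6] = { 0x00, 0x50, 0x56, 0x00, 0x00, 0x01 };
    const uint8_t mac_b[6] = { 0x00, 0x50, 0x56, 0x00, 0x00, 0x02 };
    register_vmm(mac_a);
    register_vmm(mac_b);

    uint8_t frame[64] = {0};
    memcpy(frame, mac_b, 6);            /* frame addressed to the second VM */
    printf("frame delivered to VMM slot %d\n",
           demux_frame(frame, sizeof frame));
    return 0;
}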

