Next: Isolation Kernel Design Principles Up: Scale and Performance in Previous: Contributions

The Case for Isolation Kernels

Many applications and services would benefit from the ability to push untrusted code into the Internet infrastructure. We outline some of these below, and use them to motivate the properties required by an isolation kernel.

Supporting dynamic content in content delivery systems: a progression of content delivery systems has been introduced in recent years, including CDNs, proxy caches [34], and peer-to-peer networks []. All suffer from the limitation that only static content is supported, whereas a large and increasing fraction of content is dynamically generated [34]. Dynamic content distribution requires the ability to execute and isolate untrusted content generation code.

Pushing Internet services into virtual hosting infrastructure: a "virtual" hosting center would allow new Internet services to be uploaded into managed data centers. In addition to supporting commercial services, we believe virtual hosting centers would encourage the emergence of a grassroots development community for Internet services, similar to the shareware community that exists for desktop applications.

Internet measurement and experimentation infrastructure: NIMI [25] and CAIRN [2] have sought to deploy wide-area testbeds to support network measurement research. Recent projects such as Chord [30] would benefit from the ability to deploy research prototypes at scale across the Internet. Whether for measurement or prototyping, the infrastructure must be able to multiplex and isolate mutually distrusting experiments.

Mobile code: deploying mobile code in routers and servers has been proposed by both active networks and mobile agent systems [19].

All of these services and applications share several properties. For the sake of cost-efficiency, multiple services must be multiplexed on shared infrastructure. As a result, software infrastructure must exist to isolate multiplexed services from each other: a service must not be able to corrupt another service or the underlying protection system. Additionally, performance isolation is required to bound each service's resource consumption. Finally, the degree of information sharing between these multiplexed services will be small or entirely non-existent; it is therefore reasonable to strengthen isolation even at the cost of high sharing overhead.

As we will argue in detail, no existing software system has the correct set of properties to support this emerging class of Internet services. Existing software protection systems (including operating systems, language-based protection techniques, and virtual machine monitors) suffer from some combination of security vulnerabilities, complexity, insufficient scalability, poor performance, or resource management difficulties. We believe that a new software architecture called an isolation kernel is required to address the challenges of hosting untrusted services.

Andrew Whitaker 2002-10-07