
Related Work

 

Operating system researchers and vendors have devoted much effort to improving Internet server performance. One early experience that led to published results was the 1994 California election server [14, 15]; another early study was performed at NCSA [12]. Operating system vendors responded to complaints of performance problems by improving various kernel mechanisms, especially by replacing BSD's linear-time PCB lookup algorithm [13, 21], and by changing certain kernel parameter values. Vendors also provided tuning guides for systems being used as Web servers [6].

In response to observations about the large context-switching overhead of process-per-connection servers, recent servers [5, 16, 22, 24, 25] have used event-driven architectures. Measurements of these servers under laboratory conditions indicate an order of magnitude performance improvement [5, 20].

Maltzahn et al. [11] reported the poor performance of Squid under real conditions. Fox et al. [7], in describing the Inktomi system, also briefly mention that their event-driven front-ends spend 70% of their time in the kernel, and attribute this to the state-management overhead of a large number of simultaneous connections. However, neither paper analyzed the cause of this phenomenon in any detail.



Gaurav Banga
Mon Apr 27 13:10:55 CDT 1998