

A Revisitation of Kernel Synchronization Schemes

By Christopher Small and Stephen Manley, Harvard University

Summary by Peter Collinson

Having been presented with two filesystem talks in the first session, attendees of this talk were given a discussion of the kernel instead. The talk was given by Chris, with his co-author doing the arduous slide-turning work.

The traditional UNIX single-processor kernel contains essentially two layers, or what we would now think of as threads of execution. The lower level is driven by hardware interrupts and handles the movement of data to and from peripherals. The top level provides an interface between user-level processes and the hardware level. Because lists or other data structures are used to move data between the levels, the top-level code often contains critical sections where the bottom layer must be prevented from interfering with the manipulation of these shared structures.
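To make the hazard concrete, here is a minimal C sketch of the kind of race involved; the names (struct buf, io_queue_head, b_next) are illustrative, not taken from the talk.

    /* A queue shared between the two layers: the interrupt handler
     * enqueues completed buffers, the top level dequeues them. */
    struct buf {
        struct buf *b_next;
        /* ... payload ... */
    };

    struct buf *io_queue_head;

    /* Top level, with no protection: an interrupt arriving between
     * the two statements below sees a half-updated list and can
     * corrupt it by enqueueing onto the stale head. */
    struct buf *
    dequeue_io(void)
    {
        struct buf *bp = io_queue_head;
        io_queue_head = bp->b_next;     /* the critical window */
        return bp;
    }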

The early PDP-11-based UNIX systems used the interrupt priority mechanism. Devices were allowed to interrupt only if the processor priority was lower than the level at which the device was set; if the processor priority was higher, the interrupt was held pending until the priority was dropped. Critical sections in the UNIX kernel were protected by a call to a routine whose name started with spl, for "set priority level." The spl routine would raise the processor priority temporarily, blocking any interrupts that might cause problems. At the end of the critical section, the kernel would lower the processor priority again, allowing pending interrupts to be serviced. Incidentally, it's worth noting that priority interrupt schemes also mean that critical sections can occur in the interrupt code itself, because a handler can be interrupted by a device at a higher priority.
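In code, the spl scheme follows a standard pattern. The sketch below reuses the illustrative queue from above with BSD-style names: splbio() raises the priority enough to block disk interrupts, and splx() restores the saved level.

    struct buf *
    dequeue_io(void)
    {
        struct buf *bp;
        int s;

        s = splbio();           /* raise priority; disk interrupts held off */
        bp = io_queue_head;
        io_queue_head = bp->b_next;
        splx(s);                /* restore priority; pending interrupts run */
        return bp;
    }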

In recent PC architectures, it has been possible to turn off all interrupts in the CPU chip itself using the CLI instruction, which maps directly onto a cli() routine. Prioritized interrupts are available only by programming a chip external to the CPU. The time taken to set the external chip is large (typically 300 machine cycles) and can dominate the execution time of the critical section. However, simply using cli() can be bad because interrupts can be missed, which causes problems with slow, data-intensive devices such as network connections or IP connections run over serial lines.
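The same critical section protected on-chip might look like this; again a sketch, reusing the queue above, and assuming a sti() wrapper for the STI re-enable instruction alongside cli().

    struct buf *
    dequeue_io(void)
    {
        struct buf *bp;

        cli();                  /* one instruction: all interrupts off */
        bp = io_queue_head;
        io_queue_head = bp->b_next;
        sti();                  /* re-enable; no 300-cycle trip to the
                                 * external interrupt controller */
        return bp;
    }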

A further revision of both schemes is what the authors call "optimistic" use. The idea is that an interrupt is unlikely to arrive during a critical section, so the section is not protected by hardware at all. Instead, a flag is set on entry to the section and tested when an interrupt occurs; if the flag is set, the interrupt handler queues its work, which is serviced when the critical section exits.
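A minimal sketch of the optimistic scheme, reusing the queue declarations from the first sketch and with hypothetical names throughout (in_critical, intr_deferred, service_disk()); on a uniprocessor the interrupt handler is the only source of concurrency, so plain volatile flags suffice.

    volatile int in_critical;     /* set while the top level is in a section */
    volatile int intr_deferred;   /* set by an interrupt arriving during one */

    void service_disk(void);      /* hypothetical device-service routine */

    struct buf *
    dequeue_io(void)
    {
        struct buf *bp;

        in_critical = 1;           /* cheap: a single store, no hardware access */
        bp = io_queue_head;
        io_queue_head = bp->b_next;
        in_critical = 0;
        if (intr_deferred) {       /* rare case: an interrupt arrived meanwhile */
            intr_deferred = 0;
            service_disk();        /* run the deferred work now */
        }
        return bp;
    }

    void
    disk_intr(void)
    {
        if (in_critical) {         /* top level is inside a critical section */
            intr_deferred = 1;     /* queue the work and return at once */
            return;
        }
        service_disk();
    }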

The authors tested the four possible variants of these methods: "raw" spl, optimistic spl, "raw" cli, and optimistic cli. Their tests showed that the optimistic algorithm was a win and that the best combination for PC architectures was optimistic cli. However, this approach did result in a significant loss of interrupts, meaning that a typical machine would see, for example, buffer overruns on serial lines.

Their view was that none of the four schemes was entirely satisfactory, so they proposed that kernels use the traditional spl scheme to protect critical sections in the top kernel layer and cli to protect sections in the interrupt layer.

Further information can be obtained from https://www.eecs.harvard.edu/~chris.

Originally published in ;login: Vol. 22, No. 2, April 1997.

