

Temporal Concurrency Control

Since RMax includes the time the reader is preempted by higher-priority tasks, it determines the maximum time the writer process may interfere with the reader within the reader's period without the reader missing its deadline. RMax is calculated as follows:


\begin{displaymath}
R_{Max} = P_R - (C - C_R).
\end{displaymath}
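As a quick illustration with purely hypothetical task parameters (none of these values come from the paper), suppose PR = 10 ms, C = 3 ms, and CR = 1 ms. The formula then gives

\begin{displaymath}
R_{Max} = 10 - (3 - 1) = 8\;\mbox{ms},
\end{displaymath}

i.e., under these assumed values the writer may interfere with the reader for at most 8 ms within the reader's period.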

Assuming that all deadlines are met, Figure 2 illustrates the worst-case scenario in terms of the maximum number of preemptions of the reader by the writer task. This occurs when the first interfering-write happens as late as possible within the writer's period (first vertical dotted line -- just before the writer's deadline) and the last interfering-write happens as early as possible within the writer's period (second vertical dotted line -- just after the writer is released).

Let NMax denote the maximum number of times the writer might interfere with the reader process during a read operation. NMax can be calculated as:


\begin{displaymath}
N_{Max} = \max\left(2, \left\lceil\frac{R_{Max} - (P_W - D_W)}{P_W}\right\rceil + 1\right).
\end{displaymath}
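Continuing the hypothetical example above, and additionally assuming a writer with PW = 4 ms and DW = 4 ms (deadline equal to period), we get

\begin{displaymath}
N_{Max} = \max\left(2, \left\lceil\frac{8 - (4 - 4)}{4}\right\rceil + 1\right) = \max(2, 2 + 1) = 3,
\end{displaymath}

so a four-deep, i.e., (NMax + 1)-deep, circular buffer would suffice for this assumed task set.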

Therefore, if we use an (NMax + 1)-deep circular buffer instead of a single message buffer, the writer can post messages cyclically without ever interfering with the reader process, assuming that the real-time constraints are met. This allows the reader and writer to access the message area independently of each other without blocking, using only the temporal characteristics guaranteed by real-time scheduling and a sufficiently deep circular buffer to manage concurrency. With multiple readers, we simply choose an NMax value large enough to work for all readers, i.e., we compute it using the task with the largest RMax. Finally, we keep a pointer to the most recently written message; it is updated by the writer and subsequently used by the readers to retrieve the latest message. This concept was first introduced in [16] and later implemented in the Non-Blocking Write (NBW) protocol [17].
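The following C sketch shows how such a single-writer circular buffer might look. It is only an illustration of the idea described above, not the paper's implementation: the names (msg_t, post_message, read_latest), the payload type, and N_MAX = 3 are invented for this example, and memory-ordering details (compiler/CPU barriers that real hardware may require) are deliberately omitted.

#include <stdint.h>
#include <stdio.h>

#define N_MAX  3                    /* hypothetical value, computed offline  */
#define SLOTS  (N_MAX + 1)          /* (NMax + 1)-deep circular buffer       */

typedef struct { int32_t data[4]; } msg_t;   /* invented example payload     */

static msg_t        buf[SLOTS];     /* shared message area                   */
static volatile int latest = 0;     /* index of the most recently written    */
                                    /* message; updated only by the writer   */

/* Writer: fill the next slot, then publish its index.  The schedulability
 * analysis guarantees the writer interferes with a read at most N_MAX
 * times, so with N_MAX + 1 slots it never overwrites the slot a reader
 * is still copying from. */
void post_message(const msg_t *m)
{
    int next = (latest + 1) % SLOTS;
    buf[next] = *m;                 /* write the message first ...           */
    latest = next;                  /* ... then publish its index            */
}

/* Reader: snapshot the index of the latest message and copy it out.  No
 * locks and no retry loop are needed; temporal concurrency control ensures
 * the slot remains valid for the duration of the copy. */
void read_latest(msg_t *out)
{
    int idx = latest;
    *out = buf[idx];
}

int main(void)
{
    msg_t m = { { 1, 2, 3, 4 } };
    post_message(&m);               /* writer side                           */

    msg_t r;
    read_latest(&r);                /* reader side                           */
    printf("latest message starts with %d\n", (int)r.data[0]);
    return 0;
}

In an actual system, post_message would run in the writer task and read_latest in each reader task, with N_MAX recomputed whenever the task set's periods or deadlines change.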

This algorithm is very efficient in terms of execution time: it is almost as fast as using global variables with no protection. The only overhead is the cost of maintaining the pointer to the most recently written message, so its timing behavior is optimal among wait-free algorithms.

