Figure 3 shows a highly simplified example of how one might use the new API to write parts of an event-driven server. We omit important details such as error-handling, multi-threading, and many procedure definitions.
The main_loop() procedure is the central event dispatcher. Each iteration starts by attempting to dequeue a batch of events (here, up to 64 per batch) using get_next_event() at line 9. If the system call times out, the application performs its timeout-related processing. Otherwise, it loops over the batch and dispatches a handler for each event. At line 16, there is a special case for the socket(s) on which the application is listening for new connections; these are handled differently from data-carrying sockets.
We show only one handler, for these special listen-sockets. In initialization code not shown here, the listen-sockets have been set to use the non-blocking option. Therefore, the accept() call at line 30 will never block, even if a race with the get_next_event() call somehow causes this code to run too often. (For example, a remote client might close a new connection before we have a chance to accept it.) If accept() does successfully return the socket for a new connection, line 31 sets it to use non-blocking I/O. At line 32, declare_interest() tells the kernel that the application wants to know about future read and write events on this socket. Line 34 tests whether any data became available before we called declare_interest(); if so, we read it immediately.