
Implementation issues

 

We implemented the Bro event engine and script interpreter in C++, currently about 22,000 lines. In this section we discuss some of the significant implementation decisions and tradeoffs. We defer discussion of how Bro defends against attacks on the monitoring system to § 5, and postpone application-specific issues until § 6, as that discussion benefits from notions developed in § 5.

Single-threaded design. Since event handling lies at the heart of the system, it is natural to consider a multi-threaded design, with one thread per active event handler. We have so far resisted this approach, because of concerns that it could lead to subtle race conditions in Bro scripts.

An important consequence of a single-threaded design is that the system must take care before initiating any activity that might block waiting for a resource, since stalling leads to packet filter drops as the engine fails to consume incoming traffic. A particular concern is performing Domain Name System (DNS) lookups, which can take many seconds to complete or time out. Currently, Bro only performs such lookups when parsing its input file, but in the future we want to be able to translate addresses and hostnames on the fly, both to generate clearer messages and to detect certain types of attacks. Consequently, Bro includes customized non-blocking DNS routines that perform lookups asynchronously.
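To convey the flavor of such asynchronous lookups, the following C++ fragment (an illustrative sketch only, not Bro's actual DNS routines) issues an A-record query on a non-blocking UDP socket and then polls for the reply with a zero-timeout select(), so the main packet-processing loop never waits on the resolver:

    /* Illustrative sketch only -- not Bro's DNS code.  Send a DNS
     * A-record query on a non-blocking UDP socket; the main loop
     * polls reply_ready() with a zero timeout, so it never blocks. */
    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <cstring>
    #include <string>

    int send_query(const std::string& hostname, const char* server_ip) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        fcntl(fd, F_SETFL, O_NONBLOCK);       // never block on this socket

        unsigned char pkt[512];
        std::memset(pkt, 0, sizeof pkt);
        pkt[0] = 0x12; pkt[1] = 0x34;         // query ID
        pkt[2] = 0x01;                        // RD: recursion desired
        pkt[5] = 0x01;                        // QDCOUNT = 1

        int off = 12;                         // QNAME: length-prefixed labels
        std::string::size_type start = 0, dot;
        do {
            dot = hostname.find('.', start);
            std::string::size_type len =
                (dot == std::string::npos ? hostname.size() : dot) - start;
            pkt[off++] = (unsigned char) len;
            std::memcpy(pkt + off, hostname.data() + start, len);
            off += (int) len;
            start = dot + 1;
        } while (dot != std::string::npos);

        pkt[off++] = 0;                       // root label terminates QNAME
        pkt[off++] = 0; pkt[off++] = 1;       // QTYPE = A
        pkt[off++] = 0; pkt[off++] = 1;       // QCLASS = IN

        sockaddr_in srv;
        std::memset(&srv, 0, sizeof srv);
        srv.sin_family = AF_INET;
        srv.sin_port = htons(53);
        inet_pton(AF_INET, server_ip, &srv.sin_addr);
        sendto(fd, pkt, off, 0, (sockaddr*) &srv, sizeof srv);
        return fd;                            // caller polls this descriptor
    }

    bool reply_ready(int fd) {                // zero-timeout poll from main loop
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        timeval tv = { 0, 0 };
        return select(fd + 1, &rfds, 0, 0, &tv) > 0;
    }

Because each pending lookup is just a descriptor to poll, many lookups can proceed in parallel, and the engine decides for itself when to give up on one.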

We may yet adopt a multi-threaded design. A more likely possibility is evolving Bro towards a distributed design, in which multiple, loosely coupled Bros on separate processors monitor the same network link. Each Bro would watch a different type of traffic (e.g., HTTP or NFS) and communicate only at a high level, to convey current threat information.

Managing timers. Bro uses numerous timers internally for operations such as timing out a connection establishment attempt. It sometimes has thousands of timers pending at a given moment. Consequently, it is important that timers be very lightweight: quick to set and to expire. Our initial implementation used a single priority heap, which we found attractive since insert and delete operations both require only O(log N) time if the heap contains N elements. However, we found that when the heap grows quite large--such as during a hostile port scan that creates hundreds of new connections each second--this overhead becomes significant. Consequently, we perceived a need to redesign timers to bring the overhead closer to O(1). To achieve this, we are now converting Bro to use ``calendar queues'' instead [Br88].
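The basic structure of a calendar queue is shown by the following minimal, fixed-size sketch after [Br88] (not Bro's actual implementation, which must also resize its buckets as the queue grows and shrinks):

    // Minimal fixed-size calendar queue, after [Br88].  Each bucket holds
    // the timers whose expiry times fall (modulo the calendar "year") into
    // its interval, making insertion O(1); expiration scans buckets in
    // time order.
    #include <cstddef>
    #include <functional>
    #include <list>
    #include <vector>

    class CalendarQueue {
    public:
        CalendarQueue(int num_buckets, double bucket_width, double start_time)
            : buckets(num_buckets), width(bucket_width), cur_time(start_time) {}

        void insert(double when, std::function<void()> handler) {
            std::size_t b = std::size_t(when / width) % buckets.size();
            buckets[b].push_back(Timer{when, std::move(handler)});
        }

        // Expire every timer due at or before 'now'.
        void advance(double now) {
            while (cur_time <= now) {
                std::size_t b = std::size_t(cur_time / width) % buckets.size();
                for (auto it = buckets[b].begin(); it != buckets[b].end(); )
                    // Fire only timers belonging to this bucket's current
                    // interval, not ones that wrapped around the calendar.
                    if (it->when <= now && it->when < cur_time + width) {
                        it->handler();
                        it = buckets[b].erase(it);
                    } else
                        ++it;
                cur_time += width;      // step to the next bucket's interval
            }
        }

    private:
        struct Timer { double when; std::function<void()> handler; };
        std::vector<std::list<Timer>> buckets;
        double width;
        double cur_time;
    };

Setting a timer is thus a hash and a list append, rather than a heap percolation, which is what makes the amortized cost constant.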

A related issue with managing timers concerns exactly when to expire them. Bro derives its notion of time from the timestamps libpcap provides with each packet it delivers. Whenever this clock advances past the expiry time of the first element on the timer queue, Bro removes timers from the queue and processes their expiration, continuing until the queue is empty or its first element has a timestamp later than the current time. This approach is flawed, however: after a lull in incoming traffic, the clock can advance by a large amount all at once, and the event engine may then find--particularly during a port scan--that hundreds of timers have suddenly become due. Clearly, what we should do instead is (again) sacrifice exactness as to when timers expire, and (1) expire at most k of them for any single advance of the clock, and (2) also expire timers during a processing lull (precisely the time when we have excess CPU cycles available), rather than waiting for a packet to finally arrive and end the lull. These changes are also part of our current revisions to Bro's timer management.
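The first of these changes reduces to a bounded drain loop; a minimal sketch (the names here are illustrative, not Bro's interface):

    // Drain at most max_expire due timers per clock advance, so a large
    // timestamp jump after a traffic lull cannot stall packet processing
    // while hundreds of timers fire at once.
    #include <deque>
    #include <functional>

    struct PendingTimer { double when; std::function<void()> handler; };

    void expire_timers(std::deque<PendingTimer>& timers,  // sorted by 'when'
                       double network_time, int max_expire) {
        int n = 0;
        while (!timers.empty() &&
               timers.front().when <= network_time &&
               n++ < max_expire) {
            timers.front().handler();   // run the expiration handler
            timers.pop_front();
        }
    }

Any timers left over are simply picked up on the next advance of the clock.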

Interpreting vs. compiling. Presently, Bro interprets the policy script: that is, it parses the script into a tree of C++ objects that reflect an abstract syntax tree (AST), and then executes portions of the tree as needed by invoking a virtual evaluation method at the root of a given subtree. This method in turn recursively invokes evaluation methods on its children.
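In outline, this is the classic recursive-evaluation pattern; an illustrative fragment (not Bro's actual class hierarchy) might look like:

    // Each AST node overrides a virtual Eval() method; evaluating the
    // root of a subtree recursively evaluates its children.
    #include <memory>

    struct Expr {
        virtual ~Expr() { }
        virtual double Eval() const = 0;     // interpret this subtree
    };

    struct ConstExpr : Expr {
        double val;
        explicit ConstExpr(double v) : val(v) { }
        double Eval() const override { return val; }
    };

    struct AddExpr : Expr {
        std::unique_ptr<Expr> lhs, rhs;
        AddExpr(std::unique_ptr<Expr> l, std::unique_ptr<Expr> r)
            : lhs(std::move(l)), rhs(std::move(r)) { }
        double Eval() const override
            { return lhs->Eval() + rhs->Eval(); }   // recursive dispatch
    };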

Such a design has the virtues of simplicity and ease of debugging, but comes at the cost of considerable overhead. From its inception, we intended Bro to readily admit compilation to a low-level virtual machine. Execution profiles of the current implementation indicate that the interpretive overhead is indeed significant, so we anticipate developing a compiler and optimizer. (The current interpreter does some simple constant folding and peephole optimization when building the AST, but no more.)
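Extending the fragment above, such build-time folding amounts to a ``smart constructor'' that collapses constant operands when the AST is built, so the interpreter never revisits them at run time (again a sketch under our own naming, not Bro's code):

    std::unique_ptr<Expr> make_add(std::unique_ptr<Expr> l,
                                   std::unique_ptr<Expr> r) {
        auto* cl = dynamic_cast<ConstExpr*>(l.get());
        auto* cr = dynamic_cast<ConstExpr*>(r.get());
        if (cl && cr)   // both operands constant: fold at parse time
            return std::make_unique<ConstExpr>(cl->val + cr->val);
        return std::make_unique<AddExpr>(std::move(l), std::move(r));
    }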

Using an interpreter also inadvertently introduced an implementation problem. By structuring the interpreter so that it recursively invokes virtual evaluation methods on the AST, we wind up intricately tying the Bro evaluation stack to the C++ run-time stack. Consequently, we cannot easily bundle up a Bro function's execution state into a closure to execute at some later point in time. Yet we would like to have this functionality, so that Bro scripts can have timers available to them; the semantics of these timers are to execute a block of statements when a timer expires, including access to the local variables of the function or event handler that scheduled the timer. Therefore, adding timers to Bro will require, at a minimum, implementing an execution stack for Bro scripts separate from that of the interpreter.
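One possible direction, sketched below with hypothetical types (this is not an existing Bro interface), is to give scripts explicit activation records, which a scheduled timer can then hold onto as a closure that outlives the C++ call that created it:

    #include <functional>
    #include <map>
    #include <memory>
    #include <string>

    struct Frame {
        std::map<std::string, double> locals;   // script-level local variables
    };

    struct ScheduledTimer {
        double when;                            // expiry time
        std::shared_ptr<Frame> frame;           // keeps locals alive past return
        std::function<void(Frame&)> body;       // statements to run on expiry
        void fire() { body(*frame); }
    };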

Checkpointing. We run Bro continuously to monitor our DMZ network. However, we need to periodically checkpoint its operation, both to reclaim memory tied up in remembering state for long-dormant connections (because we don't yet have timers in the scripting language; see above), and to collect a snapshot for archiving and off-line analysis (discussed below).

Checkpointing is currently a three-stage process. First, we run a new instance of Bro that parses the policy script and resolves all of the DNS names in it. Because we have non-blocking DNS routines, Bro can perform a large number of lookups in parallel, as well as timing out lookup attempts whenever it chooses. For each lookup, it compares the results with any it may have previously cached and generates corresponding events (mapping valid, mapping unverified if it had to time out the lookup, or mapping changed). It then updates the DNS cache file and exits.
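The cache comparison reduces to a simple classification per lookup; a hedged sketch (the names here are illustrative, not Bro's internal interface):

    // Classify a fresh lookup against the cached mapping to pick which
    // event to generate.
    #include <map>
    #include <string>

    enum class MappingEvent { valid, unverified, changed };

    MappingEvent check_mapping(const std::string& host,
                               const std::map<std::string, std::string>& cache,
                               const std::string& fresh_addr,
                               bool lookup_timed_out) {
        if (lookup_timed_out)
            return MappingEvent::unverified;    // could not re-resolve in time
        auto it = cache.find(host);
        if (it != cache.end() && it->second != fresh_addr)
            return MappingEvent::changed;       // differs from cached address
        return MappingEvent::valid;
    }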

In the second stage, we run another instance of Bro, this time specifying that it should only consult the DNS cache and not perform lookups. Because it works directly out of the cache, it starts very quickly. After waiting a short interval, we then send a signal to the long-running Bro telling it to terminate. When it exits, the checkpointing is complete.

We find the checkpointing deficient in two ways. First, it would be simpler to coordinate a checkpoint if a new instance of Bro could directly signal an old instance to announce that it is ready to take over monitoring. Second, and more important, currently no state survives the checkpointing. In particular, if the older Bro has identified some suspect activity and is watching it particularly closely (say, by recording all of its packets), this information is lost when the new Bro takes over. Clearly, we need to fix this.

Off-line analysis. As mentioned above, one reason for checkpointing the system is to facilitate off-line analysis. The first step of this analysis is to copy the libpcap save file and any files generated by the policy script to an analysis machine. Our policy script generates six such files: a summary of all connection activity, including starting time, duration, size in each direction, protocol, endpoints (IP addresses), connection state, and any additional information (such as username, when identified); a summary of the network interface and packet filter statistics; a list of all generated log messages; summaries of Finger and FTP commands; and a list of all unusual networking events.

Regarding this last, the event engine identifies more than 50 different types of unusual behavior, such as incorrect connection initiations and terminations, checksum errors, packet length mismatches, and protocol violations. For each, it generates a conn_weird or net_weird event, identifying the behavior with a predefined string. Our policy script uses a table[string] of count to map these strings to three different values, ``ignore,'' ``file,'' and ``log,'' meaning ignore the behavior entirely, record it to the anomaly file, or log it (real-time notification) and record it to the file. Some anomalies prove surprisingly common, and on a typical day the anomaly file contains on the order of 1,000 entries, even though our script suppresses duplicate messages.
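In C++ terms (an analogous sketch, not the script itself; the entries and the default below are illustrative, not our site's actual policy), the table is simply a dictionary from behavior names to actions:

    #include <map>
    #include <string>

    enum Action { IGNORE_IT, FILE_IT, LOG_IT };  // ignore / record / log+record

    std::map<std::string, Action> weird_action = {
        { "bad_TCP_checksum",        FILE_IT },
        { "data_before_established", LOG_IT },
        { "connection_reused",       IGNORE_IT },
    };

    Action lookup_action(const std::string& name) {
        auto it = weird_action.find(name);
        return it == weird_action.end() ? LOG_IT : it->second;  // default: notify
    }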

All of the copied files thus form an archival record of the day's traffic. We keep these files indefinitely. They can prove invaluable when we discover a break-in that first occurred weeks or months in the past. In addition, once we have identified an attacking site, we can run it through the archive to find any other hosts it may have attacked that the monitoring failed to detect (quite common, for example, when the attacker has obtained a list of passwords using a password-sniffer).

In addition, after each checkpoint the analysis machine further studies the traffic logs, looking for possible attacks, the most significant being port scans and address sweeps. We intend to eventually move this analysis into the real-time portion of the system; for now, it awaits the addition of timers to Bro, so that we can time out connection state and avoid consuming huge amounts of memory trying to remember every distinct port and address to which each host has connected.

Finally, the off-line analysis generates a traffic summary highlighting the busiest hosts and giving the volume (number of connections and bytes transferred) due to different applications. As of this writing, on a typical day our site engages in about 600,000 connections transferring 20 GB of data. The great majority (75-80%) of the connections are HTTP; the highest byte volume comes from HTTP, FTP data, NNTP (network news), and, sometimes, X11, with the ordering among them variable.


