
Introduction

This paper addresses a largely ignored aspect of computer security--the automated response problem. Previously, computer security research has focused almost entirely on prevention (e.g., cryptography, firewalls, and protocol design) and detection (e.g., virus and intrusion detection). Response has been an afterthought, generally restricted to increased logging and administrator email. Commercial intrusion detection systems (IDSs) are capable of terminating connections, killing processes, and even blocking messages from entire networks [3,12,22]; in practice, though, these mechanisms cannot be widely deployed because the risk of an inappropriate response (e.g., removing a legitimate user's computer from the network) is too high. Thus, IDSs become burdens, requiring administrators to analyze and respond to almost every detected anomaly. In an era of expanding connectivity and ubiquitous computing, we must seek solutions that reduce the system administrator's workload rather than increase it. That is, our computers must respond to attacks autonomously.

In earlier work, we and others have demonstrated several methods of anomaly detection by which large classes of intrusions can be detected, e.g., [1,27,17,16]. Good anomaly detection, however, comes at the price of persistent false positives. Although more sophisticated methods will no doubt continue to be developed, we believe that it is infeasible to eliminate false positives completely. There are several reasons for this. First, computers live in rich dynamic environments, where inevitably there are new patterns of legitimate activity not previously seen by the system--a phenomenon known as perpetual novelty (see Hofmeyr [21] for an empirical model of the rate at which new patterns appear in a local area network). Second, profiles of legitimate activity change continually, as computers and users are added or deleted, new software packages or patches are installed, and so forth. Thus, the normal state of the system evolves over time. Finally, there is inherent ambiguity in the distinction between normal and intrusive (or abnormal) activities. For example, changes to system configuration files are legitimate if performed by a system administrator; however, the very same actions are a security violation when performed by a non-privileged user or an outside attacker. Thus, any automated response system must be designed to account for persistent false positives, evolving definitions of normal, and ambiguity about what constitutes an anomaly.

We have chosen to focus on automated response mechanisms that allow a computer to preserve its own integrity (i.e., stay ``alive'' and uncompromised), rather than ones that help discover the source or method of an intrusion. Within this context, we believe that the best way to approach the automated response problem is by designing a system in which a computer autonomously monitors its own activities, routinely making small corrections to maintain itself in a ``normal'' state. In biology, the maintenance of a stable (normal) internal environment is known as homeostasis. All living systems employ a wide range of homeostatic mechanisms in order to survive under fluctuating environmental conditions. We propose that computer systems should similarly have mechanisms that strive to maintain a stable environment inside the computer, even in the face of wide variations in inputs. Under this view, automated response is recast from a monolithic all-or-nothing action (which if incorrect can have dire consequences) to a set of small, continually occurring changes to the state of the system. Occasional false alarms are then not problematic, because each has only a small impact. In earlier papers, we have advocated a view of computer security based on ideas from immunology [16,34,20]. This paper naturally extends that view by recognizing that immune systems are more properly thought of as homeostatic mechanisms than pure defense mechanisms [26].

In the following sections, we describe a working implementation of these ideas--a set of extensions to a Linux kernel that do not interfere with normal operation but can successfully stop attacks as they occur. We call the system pH (short for process homeostasis). To create pH, we extended our earlier intrusion-detection work using system calls [16] by adding feedback mechanisms that either delay or abort anomalous system calls.
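The underlying detection idea from [16] can be illustrated with a toy sketch: record every short window of system calls observed during normal operation, then count windows in new traces that never appeared in training. This is only a simplified illustration--the window length, data structures, and parameters here are hypothetical choices, not pH's actual implementation.

```python
K = 3  # window length (an illustrative choice, not pH's actual parameter)

def windows(trace, k=K):
    """Return all length-k sliding windows of a system-call trace."""
    return [tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)]

def train(traces):
    """Build a profile: the set of windows observed in normal traces."""
    profile = set()
    for trace in traces:
        profile.update(windows(trace))
    return profile

def count_anomalies(profile, trace):
    """Count windows in a new trace that never appeared during training."""
    return sum(1 for w in windows(trace) if w not in profile)

# Example: train on one normal trace, then monitor a tampered variant.
normal = ["open", "read", "write", "close"]
profile = train([normal])
print(count_anomalies(profile, normal))                              # 0
print(count_anomalies(profile, ["open", "execve", "write", "close"]))  # 2
```

In the toy example, substituting a single call changes every window that contains it, so even small deviations from the profile produce a nonzero anomaly count--the signal that pH's response mechanisms act on.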

Delays form a natural basis for interfering with program behavior: small delays are typically imperceptible to a program, and are minor annoyances to a user. Longer delays, however, can trigger timeouts at the application and network levels, effectively terminating the delayed program. By implementing the delays as an increasing function of the number of recent anomalous sequences, pH can smoothly transition between normal execution and program termination.
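The delay schedule described above can be sketched as a simple function of the recent anomaly count. The base delay and cap below are illustrative values chosen for the sketch, not pH's actual parameters.

```python
def delay_seconds(recent_anomalies, base=0.01, cap=60.0):
    """Exponentially increasing delay: imperceptible when behavior is
    normal, long enough to trigger timeouts under sustained anomaly.
    (base and cap are hypothetical values for illustration.)"""
    return min(base * (2 ** recent_anomalies), cap)

print(delay_seconds(0))   # 0.01 s: imperceptible to the program
print(delay_seconds(10))  # 10.24 s: a user-visible slowdown
print(delay_seconds(20))  # capped at 60.0 s: likely to trigger timeouts
```

Because the delay doubles with each recent anomalous sequence, a handful of false positives costs only fractions of a second, while a sustained attack quickly accumulates delays long enough to stall or effectively terminate the offending process.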

This paper makes two principal contributions. First, it demonstrates the feasibility of monitoring every active process at the system-call level in real-time, with minimal impact on overall performance. Second, it introduces a practical, relatively non-intrusive method for automatically responding to anomalous program behavior.

The paper proceeds as follows. First, we review our system call monitoring and anomaly detection method. Next, we explain the design and implementation of pH. We then demonstrate pH's effectiveness at stopping attacks, show through benchmarks that it runs with low overhead, and describe what it is like to actually use pH on a workstation. After a review of related work, we conclude with a discussion of limitations and future work.


Anil B. Somayaji 2000-06-14