

Detecting Targeted Attacks Using Shadow Honeypots

K. G. Anagnostakis (1), S. Sidiroglou (2), P. Akritidis (3), K. Xinidis (3), E. Markatos (3), A. D. Keromytis (2)
   
(1) CIS Department, Univ. of Pennsylvania (3) Institute of Computer Science - FORTH
anagnost@dsl.cis.upenn.edu {akritid,xinidis,markatos}@ics.forth.gr


(2) Department of Computer Science, Columbia University
{stelios,angelos}@cs.columbia.edu

Abstract:

We present Shadow Honeypots, a novel hybrid architecture that combines the best features of honeypots and anomaly detection. At a high level, we use a variety of anomaly detectors to monitor all traffic to a protected network/service. Traffic that is considered anomalous is processed by a ``shadow honeypot'' to determine the accuracy of the anomaly prediction. The shadow is an instance of the protected software that shares all internal state with a regular (``production'') instance of the application, and is instrumented to detect potential attacks. Attacks against the shadow are caught, and any incurred state changes are discarded. Legitimate traffic that was misclassified will be validated by the shadow and will be handled correctly by the system transparently to the end user. The outcome of processing a request by the shadow is used to filter future attack instances and could be used to update the anomaly detector.

Our architecture allows system designers to fine-tune systems for performance, since false positives will be filtered by the shadow. Contrary to regular honeypots, our architecture can be used both for server and client applications. We demonstrate the feasibility of our approach in a proof-of-concept implementation of the Shadow Honeypot architecture for the Apache web server and the Mozilla Firefox browser. We show that despite a considerable overhead in the instrumentation of the shadow honeypot (up to 20% for Apache), the overall impact on the system is diminished by the ability to minimize the rate of false-positives.

1 Introduction

Due to the increasing level of malicious activity seen on today's Internet, organizations are beginning to deploy mechanisms for detecting and responding to new attacks or suspicious activity, called Intrusion Prevention Systems (IPS). Since current IPS's use rule-based intrusion detection systems (IDS) such as Snort [32] to detect attacks, they are limited to protecting, for the most part, against already known attacks. As a result, new detection mechanisms are being developed for use in more powerful reactive-defense systems. The two primary such mechanisms are honeypots [28,13,58,40,20,9] and anomaly detection systems (ADS) [49,53,48,10,19]. In contrast with IDS's, honeypots and ADS's offer the possibility of detecting (and thus responding to) previously unknown attacks, also referred to as zero-day attacks.

Honeypots and anomaly detection systems offer different tradeoffs between accuracy and scope of attacks that can be detected, as shown in Figure 1. Honeypots can be heavily instrumented to accurately detect attacks, but depend on an attacker attempting to exploit a vulnerability against them. This makes them good for detecting scanning worms [3,5,13], but ineffective against manual directed attacks or topological and hit-list worms [43,42]. Furthermore, honeypots can typically only be used for server-type applications. Anomaly detection systems can theoretically detect both types of attacks, but are usually much less accurate. Most such systems offer a tradeoff between false positive (FP) and false negative (FN) rates. For example, it is often possible to tune the system to detect more potential attacks, at an increased risk of misclassifying legitimate traffic (low FN, high FP); alternatively, it is possible to make an anomaly detection system less sensitive to attacks, at the risk of missing some real attacks (high FN, low FP). Because an ADS-based IPS can adversely affect legitimate traffic (e.g., drop a legitimate request), system designers often tune the system for low false positive rates, potentially misclassifying attacks as legitimate traffic.

Figure 1: A simple classification of honeypots and anomaly detection systems, based on attack detection accuracy and scope of detected attacks. Targeted attacks may use lists of known (potentially) vulnerable servers, while scan-based attacks will target any system that is believed to run a vulnerable service. AD systems can detect both types of attacks, but with lower accuracy than a specially instrumented system (honeypot). However, honeypots are blind to targeted attacks, and may not see a scanning attack until after it has succeeded against the real server.

We propose a novel hybrid approach that combines the best features of honeypots and anomaly detection, named Shadow Honeypots. At a high level, we use a variety of anomaly detectors to monitor all traffic to a protected network. Traffic that is considered anomalous is processed by a shadow honeypot. The shadow version is an instance of the protected application (e.g., a web server or client) that shares all internal state with a ``normal'' instance of the application, but is instrumented to detect potential attacks. Attacks against the shadow honeypot are caught and any incurred state changes are discarded. Legitimate traffic that was misclassified by the anomaly detector will be validated by the shadow honeypot and will be transparently handled correctly by the system (i.e., an HTTP request that was mistakenly flagged as suspicious will be served correctly). Our approach offers several advantages over stand-alone ADS's or honeypots:

We apply the concept of shadow honeypots to a proof-of-concept prototype implementation tailored against memory-violation attacks. Specifically, we developed a tool that allows for automatic transformation of existing code into its ``shadow version''. The resulting code allows for traffic handling to happen through the regular or shadow version of the code, contingent on input derived from an array of anomaly detection sensors. When an attack is detected by the shadow version of the code, state changes effected by the malicious request are rolled back. Legitimate traffic handled by the shadow is processed successfully, albeit at higher latency.

In addition to the server-side scenario, we also investigate a client-targeting attack-detection scenario, unique to shadow honeypots, where we apply the detection heuristics to content retrieved by protected clients and feed any positives to shadow honeypots for further analysis. Unlike traditional honeypots, which are idle whilst waiting for active attackers to probe them, this scenario enables the detection of passive attacks, where the attacker lures a victim user to download malicious data. We use the recent libpng vulnerability of Mozilla [7] (which is similar to the buffer overflow vulnerability in Internet Explorer's JPEG-handling logic) to demonstrate the ability of our system to protect client-side applications.

Our shadow honeypot prototype consists of several components. At the front-end of our system, we use a high-performance intrusion-prevention system based on the Intel IXP network processor and a set of modified snort sensors running on normal PCs. The network processor is used as a smart load-balancer, distributing the workload to the sensors. The sensors are responsible for testing the traffic against a variety of anomaly detection heuristics, and coordinating with the IXP to tag traffic that needs to be inspected by shadow honeypots. This design leads to the scalability needed in high-end environments such as web server farms, as only a fraction of the servers need to incur the penalty of providing shadow honeypot functionality.

In our implementation, we have used a variety of anomaly detection techniques, including Abstract Payload Execution (APE) [48], and the Earlybird algorithm [36]. The feasibility of our approach is demonstrated by examining both false-positive and true attack scenarios. We show that our system has the capacity to process all false-positives generated by APE and EarlyBird and successfully detect attacks. We also show that when the anomaly detection techniques are tuned to increase detection accuracy, the resulting additional false positives are still within the processing budget of our system. More specifically, our benchmarks show that although instrumentation is expensive (20-50% overhead), the shadow version of the Apache Web server can process around 1300 requests per second, while the shadow version of the Mozilla Firefox client can process between 1 and 4 requests per second. At the same time, the front-end and anomaly detection algorithms can process a fully-loaded Gbit/s link, producing 0.3-0.5 false positives per minute when tuned for high sensitivity, which is well within the processing budget of our shadow honeypot implementation.

1.0.0.1 Paper Organization

The remainder of this paper is organized as follows. Section 2 discusses the shadow honeypot architecture in greater detail. We describe our implementation in Section 3, and our experimental and performance results in Section 4. Some of the limitations of our approach are briefly discussed in Section 5. We give an overview of related work in Section 6, and conclude the paper with a summary of our work and plans for future work in Section 7.


2 Architecture

The Shadow Honeypot architecture is a systems approach to handling network-based attacks, combining filtering, anomaly detection systems and honeypots in a way that exploits the best features of these mechanisms, while shielding their limitations. We focus on transactional applications, i.e., those that handle a series of discrete requests. Our architecture is not limited to server applications, but can be used for client-side applications such as web browsers, P2P clients, etc. As illustrated in Figure 2, the architecture is composed of three main components: a filtering engine, an array of anomaly detection sensors and the shadow honeypot, which validates the predictions of the anomaly detectors. The processing logic of the system is shown graphically in Figure 3.

The filtering component blocks known attacks. Such filtering is done based either on payload content [52,2] or on the source of the attack, if it can be identified with reasonable confidence (e.g., confirmed traffic bi-directionality). Effectively, the filtering component short-circuits the detection heuristics or shadow testing results by immediately dropping specific types of requests before any further processing is done.

Traffic passing the first stage is processed by one or more anomaly detectors. There are several types of anomaly detectors that may be used in our system, including payload analysis [53,36,17,48] and network behavior [15,56]. Although we do not impose any particular requirements on the AD component of our system, it is preferable to tune such detectors towards high sensitivity (at the cost of increased false positives). The anomaly detectors, in turn, signal to the protected application whether a request is potentially dangerous.

Figure 2: Shadow Honeypot architecture.
Figure 3: System workflow.

Depending on this prediction by the anomaly detectors, the system invokes either the regular instance of the application or its shadow. The shadow is an instrumented instance of the application that can detect specific types of failures and rollback any state changes to a known (or presumed) good state, e.g., before the malicious request was processed. Because the shadow is (or should be) invoked relatively infrequently, we can employ computationally expensive instrumentation to detect attacks. The shadow and the regular application fully share state, to avoid attacks that exploit differences between the two; we assume that an attacker can only interact with the application through the filtering and AD stages, i.e., there are no side-channels. The level of instrumentation used in the shadow depends on the amount of latency we are willing to impose on suspicious traffic (whether truly malicious or misclassified legitimate traffic). In our implementation, described in Section 3, we focus on memory-violation attacks, but any attack that can be determined algorithmically can be detected and recovered from, at the cost of increased complexity and potentially higher latency.

If the shadow detects an actual attack, we notify the filtering component to block further attacks. If no attack is detected, we update the prediction models used by the anomaly detectors. Thus, our system could in fact self-train and fine-tune itself using verifiably bad traffic and known mis-predictions, although this aspect of the approach is outside the scope of the present paper.
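
To summarize the workflow of Figures 2 and 3 in code form, the following C sketch outlines the per-request logic; all helper functions and types are illustrative placeholders, not part of the actual implementation.

struct request;                                  /* opaque application request      */
extern int  known_bad(struct request *r);        /* stage 1: filtering              */
extern int  looks_anomalous(struct request *r);  /* stage 2: anomaly detectors      */
extern int  handle_request_regular(struct request *r);
extern int  handle_request_shadow(struct request *r);  /* nonzero if an attack is caught */
extern void drop(struct request *r);
extern void begin_transaction(void);
extern void commit_transaction(void);
extern void rollback_transaction(void);
extern void add_filter_rule(struct request *r);
extern void update_anomaly_model(struct request *r);

void process_request(struct request *req)
{
    if (known_bad(req)) {                 /* known attack: short-circuit        */
        drop(req);
        return;
    }
    if (!looks_anomalous(req)) {          /* normal traffic: fast path          */
        handle_request_regular(req);
        return;
    }
    begin_transaction();                  /* suspicious: run on the shadow      */
    if (handle_request_shadow(req)) {
        rollback_transaction();           /* attack: discard state changes      */
        add_filter_rule(req);             /* and block future instances         */
    } else {
        commit_transaction();             /* false positive: keep the results   */
        update_anomaly_model(req);        /* optionally refine the detectors    */
    }
}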

As we mentioned above, shadow honeypots can be integrated with servers as well as clients. In this paper, we consider tight coupling with both server and client applications, where the shadow resides in the same address space as the protected application.

Tight coupling assumes that the application can be modified. The advantage of this configuration is that attacks that exploit differences in the state of the shadow vs. the application itself become impossible. However, it is also possible to deploy shadow honeypots in a loosely coupled configuration, where the shadow resides on a different system and does not share state with the protected application. The advantage of this configuration is that management of the shadows can be ``outsourced'' to a third entity.

Note that the filtering and anomaly detection components can also be tightly coupled with the protected application, or may be centralized at a natural aggregation point in the network topology (e.g., at the firewall).

Finally, it is worth considering how our system would behave against different types of attacks. For most attacks we have seen thus far, once the AD component has identified an anomaly and the shadow has validated it, the filtering component will block all future instances of it from getting to the application. However, we cannot depend on the filtering component to prevent polymorphic or metamorphic [46] attacks. For low-volume events, the cost of invoking the shadow for each attack may be acceptable. For high-volume events, such as a Slammer-like outbreak, the system will detect a large number of correct AD predictions (verified by the shadow) in a short period of time; should a configurable threshold be exceeded, the system can enable filtering at the second stage, based on the unverified verdict of the anomaly detectors. Although this will cause some legitimate requests to be dropped, this could be acceptable for the duration of the incident. Once the number of (perceived) attacks seen by the ADS drops below a threshold, the system can revert to normal operation.
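
A sketch of this rate-based fallback is shown below; the counters, thresholds, and function names are illustrative assumptions rather than part of the implementation.

#define ENABLE_THRESHOLD  100  /* shadow-confirmed attacks per interval (assumption) */
#define DISABLE_THRESHOLD  10  /* rate below which normal operation resumes          */

static unsigned confirmed_attacks;   /* attacks verified by the shadow this interval */
static int unverified_filtering;     /* nonzero: drop on the AD verdict alone        */

void on_shadow_confirmed_attack(void)
{
    if (++confirmed_attacks > ENABLE_THRESHOLD)
        unverified_filtering = 1;    /* outbreak mode: filter at the second stage */
}

void on_interval_tick(void)
{
    if (confirmed_attacks < DISABLE_THRESHOLD)
        unverified_filtering = 0;    /* outbreak has subsided: revert to normal   */
    confirmed_attacks = 0;
}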


3 Implementation

3.1 Filtering and anomaly detection

Figure 4: High-level diagram of prototype shadow honeypot implementation.

During the composition of our system, we were faced with numerous design issues with respect to performance and extensibility. When considering the deployment of the shadow honeypot architecture in a high-performance environment, such as a Web server farm, where speeds of at least 1 Gbit/s are common and we cannot afford to misclassify traffic, the choice of off-the-shelf components becomes very limited. To the best of our knowledge, current solutions, both standalone PCs and network-processor-based network intrusion detection systems (NIDSes), are well under the 1 Gbit/s mark [11,33].

Faced with these limitations, we considered a distributed design, similar in principle to [47,18]: we use a network processor (NP) as a scalable, custom load balancer, and implement all detection heuristics on an array of (modified) snort sensors running on standard PCs that are connected to the network processor board. We chose not to implement any of the detection heuristics on the NP for two reasons. First, currently available NPs are designed primarily for simple forwarding and lack the processing capacity required for speeds in excess of 1 Gbit/s. Second, they remain harder to program and debug than standard general purpose processors. For our implementation, we used the IXP1200 network processor. A high-level view of our implementation is shown in Figure 4.

A primary function of the anomaly detection sensor is the ability to divert potentially malicious requests to the shadow honeypot. For web servers in particular, a reasonable definition of the attack context is the HTTP request. For this purpose, the sensor must construct a request, run the detection heuristics, and forward the request depending on the outcome. This processing must be performed at the HTTP level; thus, an HTTP proxy-like function is needed. We implemented the anomaly detection sensors for the tightly-coupled shadow server case by augmenting an HTTP proxy with the ability to apply the APE detection heuristic on incoming requests and route them according to its outcome.
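
As a rough sketch of this proxy function (an illustration only, not the actual sensor code), the routine below reads an HTTP request, applies the APE check, and routes the request to either the production instance or the shadow-enabled instance; the port numbers, the MEL value, and all helper functions are assumptions. A sketch of the APE check itself (ape_suspicious()) appears at the end of this subsection.

#include <stddef.h>

#define REGULAR_PORT 8080   /* production server instance (assumption)     */
#define SHADOW_PORT  8081   /* shadow-enabled server instance (assumption) */
#define MEL          35     /* assumed APE threshold; see Section 4.2      */

extern size_t read_http_request(int client_fd, unsigned char *buf, size_t cap);
extern void   forward_request(int client_fd, int server_port,
                              const unsigned char *buf, size_t len);
extern int    ape_suspicious(const unsigned char *buf, size_t len, size_t mel);

void sensor_handle_connection(int client_fd)
{
    unsigned char req[64 * 1024];
    size_t len = read_http_request(client_fd, req, sizeof(req));

    /* Suspicious requests are diverted to the shadow honeypot;
       everything else takes the fast path to the regular server. */
    int port = ape_suspicious(req, len, MEL) ? SHADOW_PORT : REGULAR_PORT;
    forward_request(client_fd, port, req, len);
}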

For the shadow client scenario, we use an alternative solution based on passive monitoring. Employing the proxy approach in this situation would be prohibitively expensive, in terms of latency, since we only require detection capabilities. For this scenario, we reconstruct the TCP streams of HTTP connections and decode the HTTP protocol to extract suspicious objects.

As part of our proof-of-concept implementation we have used two anomaly detection heuristics: payload sifting and abstract payload execution. Payload sifting as developed in [36] derives fingerprints of rapidly spreading worms by identifying popular substrings in network traffic. It is a prime example of an anomaly detection based system that is able to detect novel attacks at the expense of false positives. However, if used in isolation (e.g., outside our shadow honeypot environment) by the time it has reliably detected a worm epidemic, it is very likely that many systems would have already been compromised. This may reduce its usage potential in the tightly-coupled server protection scenario without external help. Nevertheless, if fingerprints generated by a distributed payload sifting system are disseminated to interested parties that run shadow honeypots locally, matching traffic against such fingerprints can be of use as a detection heuristic in the shadow honeypot system. Of further interest is the ability to use this technique in the loosely-coupled shadow server scenario, although we do not further consider this scenario here.
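
The core of the heuristic can be sketched as a counting table over content fingerprints, as in the illustrative fragment below; the window size, table size, and window_hash() helper are assumptions, and a complete implementation (as in [36]) would also track the dispersion of source and destination addresses before reporting a signature.

#include <stddef.h>
#include <stdint.h>

#define WINDOW      40            /* bytes per content window (assumption)       */
#define TABLE_SIZE  (1u << 20)    /* fingerprint counter table size (assumption) */

static uint32_t prevalence[TABLE_SIZE];   /* how often each fingerprint was seen */

/* Hypothetical rolling hash over a WINDOW-byte substring. */
extern uint32_t window_hash(const unsigned char *p);

/* Returns nonzero if some substring of the payload has become "popular",
   i.e. it looks like rapidly repeating, worm-like content. */
int sift_payload(const unsigned char *payload, size_t len, uint32_t threshold)
{
    if (len < WINDOW)
        return 0;
    for (size_t i = 0; i + WINDOW <= len; i++) {
        uint32_t h = window_hash(payload + i) & (TABLE_SIZE - 1);
        if (++prevalence[h] >= threshold)
            return 1;   /* candidate fingerprint; dispersion checks omitted */
    }
    return 0;
}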

The second heuristic we have implemented is buffer overflow detection via abstract payload execution (APE), as proposed in [48]. The heuristic detects buffer overflow attacks by searching for sufficiently long sequences of valid instructions in network traffic. Long sequences of valid instructions can appear in non-malicious data, and this is where the shadow honeypot fits in. Such detection mechanisms are particularly attractive because they are applied to individual attacks and will trigger detection upon encountering the first instance of an attack, unlike many anomaly detection mechanisms that must witness multiple attacks before flagging them as anomalous.
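
A minimal sketch of this heuristic is shown below, assuming a hypothetical instruction-decoding helper insn_length(); it flags a payload once it finds a run of at least mel valid instructions. This is an illustration of the idea in [48], not the sensor implementation itself.

#include <stddef.h>

/* Hypothetical decoder helper: length of a valid instruction at buf, or 0. */
extern size_t insn_length(const unsigned char *buf, size_t avail);

int ape_suspicious(const unsigned char *payload, size_t len, size_t mel)
{
    size_t run = 0;
    size_t i = 0;

    while (i < len) {
        size_t ilen = insn_length(payload + i, len - i);
        if (ilen > 0) {
            if (++run >= mel)
                return 1;    /* long run of valid instructions: likely a sled */
            i += ilen;
        } else {
            run = 0;         /* sequence broken: restart the count */
            i++;
        }
    }
    return 0;
}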


3.2 Shadow Honeypot Creation

To create shadow honeypots, we use a code-transformation tool that takes as input the original application source code and ``weaves'' into it the shadow honeypot code. In this paper, we focus on memory-violation errors and show source-code transformations that detect buffer overflows, although other types of failures can be caught (e.g., input that causes illegal memory dereferences) with the appropriate instrumentation, but at the cost of higher complexity and a larger performance penalty. For the code transformations we use TXL [22], a hybrid functional and rule-based language, which is well-suited for performing source-to-source transformation and for rapidly prototyping new languages and language processors. The grammar responsible for parsing the source input is specified in a notation similar to Extended Backus-Naur Form (BNF). In our prototype, called DYBOC, we use TXL for transformations with the GCC front-end.

Figure 5: Example of pmalloc()-based memory allocation: the trailer and edge regions (above and below the write-protected pages) indicate ``waste'' memory. This is needed to ensure that mprotect() is applied on complete memory pages.

The instrumentation itself is conceptually straightforward: we move all static buffers to the heap, by dynamically allocating the buffer upon entering the function in which it was previously declared; we de-allocate these buffers upon exiting the function, whether implicitly (by reaching the end of the function body) or explicitly (through a return statement). We take care to properly handle the sizeof construct, a fairly straightforward task with TXL. Pointer aliasing is not a problem with our system, since we instrument the allocated memory regions; any illegal accesses to these will be caught.
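
As an illustration of this transformation (a simplified sketch, not actual DYBOC output), a function with a fixed-size local buffer would be rewritten roughly as follows; pfree() stands in for the corresponding de-allocation routine.

#include <string.h>

extern void *pmalloc(size_t size);   /* guarded allocator (described below)      */
extern void  pfree(void *p);         /* assumed matching de-allocation routine   */
extern int   process(const char *buf);

/* Before the transformation:
 *
 *   int serve(char *uri)
 *   {
 *       char buf[100];
 *       strcpy(buf, uri);     // a long URI overflows buf silently
 *       return process(buf);
 *   }
 */

/* After the transformation (simplified): */
int serve(char *uri)
{
    char *buf = pmalloc(100);   /* heap buffer bracketed by guard pages       */
    int ret;

    strcpy(buf, uri);           /* an overflow now faults on a guard page     */
    ret = process(buf);
    pfree(buf);                 /* released on every exit from the function   */
    return ret;
}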

Figure 6: Transforming a function to its shadow-supporting version. The shadow_enable() macro simply checks the status of a shared-memory variable (controlled by the anomaly detection system) on whether the shadow honeypot should be executing instead of the regular code.

For memory allocation, we use our own version of malloc(), called pmalloc(), that allocates two additional zero-filled, write-protected pages that bracket the requested buffer, as shown in Figure 5. The guard pages are mmap()'ed from /dev/zero as read-only. As mmap() operates at memory page granularity, every memory request is rounded up to the nearest page. The pointer that is returned by pmalloc() can be adjusted to immediately catch any buffer overflow or underflow depending on where attention is focused. This functionality is similar to that offered by the ElectricFence memory-debugging library, the difference being that pmalloc() catches both buffer overflow and underflow attacks. Because we mmap() pages from /dev/zero, we do not waste physical memory for the guards (just page-table entries). Memory is wasted, however, for each allocated buffer, since we allocate to the next closest page. While this can lead to considerable memory waste, we note that this is only incurred when executing in shadow mode, and in practice has proven easily manageable.
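
A minimal sketch of such a guarded allocator is shown below; it illustrates the page layout of Figure 5 but is not the actual pmalloc() code, and for simplicity it opens /dev/zero on every call rather than keeping the single per-process descriptor described above.

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

void *pmalloc(size_t size)
{
    long   page    = sysconf(_SC_PAGESIZE);
    size_t rounded = ((size + page - 1) / page) * page;  /* round up to a page   */
    size_t total   = rounded + 2 * page;                 /* plus two guard pages */

    int fd = open("/dev/zero", O_RDONLY);                /* zero-filled backing  */
    if (fd < 0)
        return NULL;

    /* Map the whole region read/write, then write-protect the two edge pages. */
    char *base = mmap(NULL, total, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    close(fd);
    if (base == MAP_FAILED)
        return NULL;

    mprotect(base, page, PROT_READ);                     /* leading guard page  */
    mprotect(base + page + rounded, page, PROT_READ);    /* trailing guard page */

    /* Returning a pointer just past the leading guard catches underflows
       immediately; aligning the buffer against the trailing guard instead
       would catch overflows immediately. */
    return base + page;
}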

Figure 6 shows an example of such a translation. Buffers that are already allocated via malloc() are simply switched to pmalloc(). This is achieved by examining declarations in the source and transforming them to pointers where the size is allocated with a malloc() function call. Furthermore, we adjust the grammar to free the variables before the function returns. After making changes to the standard ANSI grammar that allow entries such as malloc() to be inserted between declarations and statements, the transformation step is trivial. For single-threaded, non-reentrant code, it is possible to only use pmalloc() once for each previously-static buffer. Generally, however, this allocation needs to be done each time the function is invoked.

Any overflow (or underflow) on a buffer allocated via pmalloc() will cause the process to receive a Segmentation Violation (SEGV) signal, which is caught by a signal handler we have added to the source code in main(). The signal handler simply notifies the operating system to abort all state changes made by the process while processing this request. To do this, we added a new system call to the operating system, transaction(). This is conditionally (as directed by the shadow_enable() macro) invoked at three locations in the code:

Although we have not implemented this, a similar mechanism can be built around the filesystem by using a private copy of the buffer cache for the process executing in shadow mode. The only difficulty arises when the process must itself communicate with another process while servicing a request; unless the second process is also included in the transaction definition (which may be impossible, if it is a remote process on another system), overall system state may change without the ability to roll it back. For example, this may happen when a web server communicates with a remote back-end database. Our system does not currently address this, i.e., we assume that any such state changes are benign or irrelevant (e.g., a DNS query). In the specific case of a back-end database, however, transaction rollback is inherently supported, so it is possible to undo any changes.

The signal handler may also notify external logic to indicate that an attack associated with a particular input from a specific source has been detected. The external logic may then instantiate a filter, either based on the network source of the request or the contents of the payload [52].
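
Putting these pieces together, the following sketch illustrates the recovery path; the signature of transaction(), the TX_ABORT code, and the notification helpers are assumptions for illustration, and the sigsetjmp() point that the handler returns to (established at the start of each request) is not shown.

#include <setjmp.h>
#include <signal.h>
#include <stdlib.h>
#include <string.h>

extern int   shadow_enable(void);        /* shared-memory flag (see Figure 6)        */
extern int   transaction(int op);        /* the added system call; signature assumed */
extern void  notify_filter(void *req);   /* hypothetical: push a new filter rule     */
extern void *current_request;            /* hypothetical per-request context         */
#define TX_ABORT 1                       /* assumed "abort" operation code           */

/* Set with sigsetjmp() at the start of each request (not shown). */
static sigjmp_buf request_entry;

static void segv_handler(int signo)
{
    (void)signo;
    if (shadow_enable()) {
        transaction(TX_ABORT);           /* discard this request's state changes */
        notify_filter(current_request);  /* report the confirmed attack          */
        siglongjmp(request_entry, 1);    /* resume at a known-good point         */
    }
    abort();                             /* regular mode: do not mask the fault  */
}

void install_shadow_handler(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = segv_handler;
    sigaction(SIGSEGV, &sa, NULL);
}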


4 Experimental Evaluation

We have tested our shadow honeypot implementation against a number of exploits, including a recent Mozilla PNG bug and several Apache-specific exploits. In this section, we report on performance benchmarks that illustrate the efficacy of our implementation.

First, we measure the cost of instantiating and operating shadow instances of specific services using the Apache web server and the Mozilla Firefox web browser. Second, we evaluate the filtering and anomaly detection components, and determine the throughput of the IXP1200-based load balancer as well as the cost of running the detection heuristics. Third, we look at the false positive rates and the trade-offs associated with detection performance. Based on these results, we determine how to tune the anomaly detection heuristics in order to increase detection performance while not exceeding the budget allotted by the shadow services.

Figure 7: Apache benchmark results.
Figure 8: Normalized Mozilla Firefox benchmark results using modified version of i-Bench.

4.1 Performance of shadow services

4.1.0.1 Apache

To determine the workload capacity of the shadow honeypot environment, we used DYBOC on the Apache web server, version 2.0.49. Apache was chosen due to its popularity and source-code availability. Basic Apache functionality was tested, omitting additional modules. The tests were conducted on a PC with a 2GHz Intel P4 processor and 1GB of RAM, running Debian Linux (2.6.5-1 kernel).

We used ApacheBench [4], a complete benchmarking and regression testing suite. Examination of application response is preferable to explicit measurements in the case of complex systems, as we seek to understand the effect on overall system performance.

Figure 7 illustrates the requests per second that Apache can handle. There is a 20.1% overhead for the patched version of Apache over the original, which is expected since the majority of the patched buffers belong to utility functions that are not heavily used. This result is an indication of the worst-case analysis, since all the protection flags were enabled; although the performance penalty is high, it is not outright prohibitive for some applications. For the instrumentation of a single buffer and a vulnerable function that is invoked once per HTTP transaction, the overhead is 1.18%.

Of further interest is the increase in memory requirements for the patched version. A naive implementation of pmalloc() would require two additional memory pages for each transformed buffer. Full transformation of Apache translates into 297 buffers that are allocated with pmalloc(), adding an overhead of 2.3MB if all of these buffers are active simultaneously during program execution. When protecting malloc()'ed buffers, the amount of required memory can skyrocket.
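
To make the quoted figure concrete (assuming 4 KB memory pages): 297 buffers x 2 guard pages x 4 KB per page = 2376 KB, or roughly 2.3MB.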

To avoid this overhead, we use an mmap() based allocator. The two guard pages are mmap'ed write-protected from /dev/zero, without requiring additional physical memory to be allocated. Instead, the overhead of our mechanism is 2 page-table entries (PTEs) per allocated buffer, plus one file descriptor (for /dev/zero) per program. As most modern processors use an MMU cache for frequently used PTEs, and since the guard pages are only accessed when a fault occurs, we expect their impact on performance to be small.

4.1.0.2 Mozilla Firefox

For the evaluation of the client case, we used the Mozilla Firefox browser. For the initial validation tests, we back-ported the recently reported libpng vulnerability [7] that enables arbitrary code execution if Firefox (or any application using libpng) attempts to display a specially crafted PNG image. Interestingly, this example mirrors a recent vulnerability in Internet Explorer's JPEG image handling [6], which also enabled arbitrary code execution when displaying specially crafted images.

In the tightly-coupled scenario, the protected version of the application shares the address space with the unmodified version. This is achieved by transforming the original source code with our DYBOC tool. Suspicious requests are tagged by the ADS so that they are processed by the protected version of the code as discussed in Section 3.2.

For the loosely-coupled case, when the AD component marks a request for processing on the shadow honeypot, we launch the instrumented version of Firefox to replay the request. The browser is configured to use a null X server as provided by Xvfb. All requests are handled by a transparent proxy that redirects these requests to an internal Web server. The Web server then responds with the objects served by the original server, as captured in the original session. The workload that the shadow honeypot can process in the case of Firefox is determined by how many responses per second a browser can process and how many different browser versions can be checked.

Our measurements show that a single instance of Firefox can handle about one request per second when restarting after processing each response. Doing this only after detecting a successful attack improves the result to about four requests per second. By restarting, we avoid the accumulation of various pop-ups and other side-effects. Unlike the server scenario, instrumenting the browser does not seem to have any significant impact on performance. If that were the case, we could have used the rollback mechanism discussed previously to reduce the cost of launching new instances of the browser.

We further evaluate the performance implications of fully instrumenting a web browser. These observations apply to both loosely-coupled and tightly-coupled shadow honeypots. Web browsing performance was measured using a Mozilla Firefox 1.0 browser to run a benchmark based on the i-Bench benchmark suite [1]. i-Bench is a comprehensive, cross-platform benchmark that tests the performance and capability of Web clients. Specifically, we use a variant of the benchmark that allows for scrolling of a web page and uses cookies to store the load times for each page. Scrolling is performed in order to render the whole page, providing a pessimistic emulation of a typical attack. The benchmark consists of a sequence of 10 web pages containing a mix of text and graphics; the benchmark was run using both the scrolling option and the standard page load mechanisms. For the standard page load configuration, the performance degradation for instrumentation was 35%. For the scrolling configuration, where in addition to the page load time, the time taken to scroll through the page is recorded, the overhead was 50%. The results follow our intuition, as more calls to malloc are required to fully render the page. Figure 8 illustrates the normalized performance results. It should be noted that, depending on the browser implementation (whether the entire page is rendered on page load), mechanisms such as automatic scrolling need to be implemented in order to protect against targeted attacks. Attackers may hide malicious code in unrendered parts of a page or in javascript code activated by user-guided pointer movement.

Figure 9: Popularity of different Mozilla versions, as measured in the logs of the CIS Department Web server at the University of Pennsylvania.

How many different browser versions would have to be checked by the system? Figure 9 presents some statistics concerning different browser versions of Mozilla. The browser statistics were collected over a 5-week period from the CIS Department web server at the University of Pennsylvania. As evidenced by the figure, one can expect to check up to 6 versions of a particular client. We expect this distribution to stabilize around final release versions, allowing the number of different versions that need to be checked to be minimized based on their popularity.

4.2 Filtering and anomaly detection

4.2.0.1 IXP1200-based firewall/load-balancer.

We first determine the performance of the IXP1200-based firewall/load-balancer. The IXP1200 evaluation board we use has two Gigabit Ethernet interfaces and eight Fast Ethernet interfaces. The Gigabit Ethernet interfaces are used to connect to the internal and external network and the Fast Ethernet interfaces to communicate with the sensors. A set of client workstations is used to generate traffic through the firewall. The firewall forwards traffic to the sensors for processing and the sensors determine if the traffic should be dropped, redirected to the shadow honeypot, or forwarded to the internal network.

Previous studies [38] have reported forwarding rates of at least 1600 Mbit/s for the IXP1200, when used as a simple forwarder/router, which is sufficient to saturate a Gigabit Ethernet interface. Our measurements show that despite the added cost of load balancing, filtering and coordinating with the sensors, the firewall can still handle the Gigabit Ethernet interface at line rate.

To gain insight into the actual overhead of our implementation we carry out a second experiment, using Intel's cycle-accurate IXP1200 simulator. We assume a clock frequency of 232 MHz for the IXP1200, and an IX bus configured to be 64-bit wide with a clock frequency of 104 MHz. In the simulated environment, we obtain detailed utilization measurements for the microengines of the IXP1200. The results are shown in Figure 10. The results show that even at line rate and worst-case traffic the implementation is quite efficient, as the microengines operate at 50.9%-71.5% of their processing capacity. These results provide further insight into the scalability of our design.

Figure 10: Utilization(%) of the IXP1200 Microengines, for forwarding-only (FWD), load-balancing-only (LB), both (LB+FWD), and full implementation (FULL), in stress-tests with 800 Mbit/s worst-case 64-byte-packet traffic.

Table 1: PC Sensor throughput for different detection mechanisms.

  Detection method     Throughput/sensor
  Content matching     225 Mbit/s
  APE                  190 Mbit/s
  Payload Sifting      268 Mbit/s


4.2.0.2 PC-based sensor performance.

We also measure the throughput of the PC-based sensors that cooperate with the IXP1200 for analyzing traffic and performing anomaly detection. For this experiment, we use a 2.66 GHz Pentium IV Xeon processor with hyper-threading disabled. The PC has 512 Mbytes of DDR DRAM at 266 MHz. The PCI bus is 64-bit wide clocked at 66 MHz. The host operating system is Linux (kernel version 2.4.22, Red-Hat 9.0).

We use LAN traces to stress-test a single sensor running a modified version of snort that, in addition to basic signature matching, provides the hooks needed to coordinate with the IXP1200 as well as the APE and payload sifting heuristics. We replay the traces from a remote system through the IXP1200 at different rates to determine the maximum loss-free rate (MLFR) of the sensor. For the purpose of this experiment, we connected a sensor to the second Gigabit Ethernet interface of the IXP1200 board.

The measured throughput of the sensor for content matching, APE, and payload sifting is shown in Table 1. The throughput per sensor ranges between 190 Mbit/s (APE) and 268 Mbit/s (payload sifting), while standard signature matching can be performed at 225 Mbit/s. This means that we need at least 4-5 sensors behind the IXP1200 for each of these mechanisms. Note, however, that these results are rather conservative and based on unoptimized code, and thus only serve the purpose of providing a ballpark figure on the cost of anomaly detection.

Figure 11: FPs for payload sifting
Figure 12: FPs for APE

4.2.0.3 False positive vs. detection rate trade-offs

We determine the workload that is generated by the AD heuristics, by measuring the false positive rate. We also consider the trade-off between false positives and detection rate, to demonstrate how the AD heuristics could be tuned to increase detection rate in our shadow honeypot environment. We use the payload sifting implementation from [8], and the APE algorithm from [48]. The APE experiment corresponds to a tightly-coupled shadow server scenario, while the payload sifting experiment examines a loosely-coupled shadow honeypot scenario that can be used for worm detection.

We run the modified snort sensor implementing APE and payload sifting on packet-level traces captured on an enterprise LAN with roughly 150 hosts. The traces also contain several instances of the Welchia worm. APE was applied on the URIs contained in roughly one billion HTTP requests gathered by monitoring the same LAN.

Figure 11 demonstrates the effects of varying the distinct destinations threshold of the content sifting AD on the false positives (measured in requests to the shadow services per minute) and the (Welchia worm) detection delay (measured in ratio of hosts in the monitored LAN infected by the time of the detection).

Increasing the threshold means more attack instances are required for triggering detection, and therefore increases the detection delay and reduces the false positives. It is evident that to achieve a zero false-positive rate without shadow honeypots we must operate the system with parameters that yield a suboptimal detection delay.

The detection capability of APE is characterized by the minimum sled length it can detect, which depends on the sampling factor and the MEL parameter (the number of valid instructions that trigger detection). A high MEL value means fewer false positives due to random valid sequences, but also makes the heuristic blind to shorter sleds.

Figure 12 shows the effects of MEL threshold on the false positives. APE can be used in a tightly coupled scenario, where the suspect requests are redirected to the instrumented server instances. The false positives (measured in requests to the shadow services per minute by each of the normal services under maximum load) can be handled easily by a shadow honeypot. APE alone has false positives for the entire range of acceptable operational parameters; it is the combination with shadow honeypots that removes the problem.


5 Limitations

There are three limitations of the shadow honeypot design presented in this paper that we are aware of. First, the effectiveness of the rollback mechanism depends on the proper placement of calls to transaction() for committing state changes, and the latency of the detector. The detector used in this paper can instantly detect attempts to overwrite a buffer, and therefore the system cannot be corrupted. Other detectors, however, may have higher latency, and the placement of commit calls is critical to recovering from the attack. Depending on the detector latency and how it relates to the cost of implementing rollback, one may have to consider different approaches. The trade-offs involved in designing such mechanisms are thoroughly examined in the fault-tolerance literature (c.f. [14]).

Second, the loosely coupled client shadow honeypot is limited to protecting against relatively static attacks. The honeypot cannot effectively emulate user behavior that may be involved in triggering the attack, for example, through DHTML or Javascript. The loosely coupled version is also weak against attacks that depend on local system state on the user's host that is difficult to replicate. This is not a problem with tightly coupled shadows, because we accurately mirror the state of the real system. In some cases, it may be possible to mirror state on loosely coupled shadows as well, but we have not considered this case in the experiments presented in this paper.

Finally, we have not explored in depth the use of feedback from the shadow honeypot to tune the anomaly detection components. Although this is likely to lead to substantial performance benefits, we need to be careful so that an attacker cannot launch blinding attacks, e.g., ``softening'' the anomaly detection component through a barrage of false positives before launching a real attack.


6 Related Work

Much of the work in automated attack reaction has focused on the problem of network worms, which has taken truly epidemic dimensions (pun intended). For example, the system described in [56] detects worms by monitoring probes to unassigned IP addresses (``dark space'') or inactive ports and computing statistics on scan traffic, such as the number of source/destination addresses and the volume of the captured traffic. By measuring the increase in the number of source addresses seen in a unit of time, it is possible to infer the existence of a new worm when as little as 4% of the vulnerable machines have been infected. A similar approach for isolating infected nodes inside an enterprise network [41] is taken in [15], where it was shown that as few as 4 probes may be sufficient to detect a new port-scanning worm. [54] describes an approximating algorithm for quickly detecting scanning activity that can be efficiently implemented in hardware. [34] describes a combination of reverse sequential hypothesis testing and credit-based connection throttling to quickly detect and quarantine local infected hosts. These systems are effective only against scanning worms (not topological, or ``hit-list'' worms), and rely on the assumption that most scans will result in non-connections. As such, they are susceptible to false positives, either accidentally (e.g., when a host is joining a peer-to-peer network such as Gnutella, or during a temporary network outage) or on purpose (e.g., a malicious web page with many links to images at random/unused IP addresses). Furthermore, it may be possible for several instances of a worm to collaborate in providing the illusion of several successful connections, or to use a list of known repliers to blind the anomaly detector. Another algorithm for finding fast-spreading worms using 2-level filtering based on sampling from the set of distinct source-destination pairs is described in [50].

[55] correlates DNS queries/replies with outgoing connections from an enterprise network to detect anomalous behavior. The main intuition is that connections due to random-scanning (and, to a degree, hit-list) worms will not be preceded by DNS transactions. This approach can be used to detect other types of malicious behavior, such as mass-mailing worms and network reconnaissance.

[17] describes an algorithm for correlating packet payloads from different traffic flows, towards deriving a worm signature that can then be filtered [23]. The technique is promising, although further improvements are required to allow it to operate in real time. Earlybird [36] presents a more practical algorithm for doing payload sifting, and correlates these with a range of unique sources generating infections and destinations being targeted. However, polymorphic and metamorphic worms [46] remain a challenge; Spinellis [39] shows that it is an NP-hard problem. Buttercup [25] attempts to detect polymorphic buffer overflow attacks by identifying the ranges of the possible return memory addresses for existing buffer overflow vulnerabilities. Unfortunately, this heuristic cannot be employed against some of the more sophisticated overflow attack techniques [26]. Furthermore, the false positive rate is very high, ranging from 0.01% to 1.13%. Vigna et al. [51] discuss a method for testing detection signatures against mutations of known vulnerabilities to determine the quality of the detection model and mechanism. In [52], the authors describe a mechanism for pushing to workstations vulnerability-specific, application-aware filters expressed as programs in a simple language. These programs roughly mirror the state of the protected service, allowing for more intelligent application of content filters, as opposed to simplistic payload string matching.

HoneyStat [13] runs sacrificial services inside a virtual machine, and monitors memory, disk, and network events to detect abnormal behavior. For some classes of attacks (e.g., buffer overflows), this can produce highly accurate alerts with relatively few false positives, and can detect zero-day worms. Although the system only protects against scanning worms, ``active honeypot'' techniques [58] may be used to make it more difficult for an automated attacker to differentiate between HoneyStats and real servers. The Internet Motion Sensor [9] is a distributed blackhole monitoring system aimed at measuring, characterizing, and tracking Internet-based threats, including worms. [12] explores the various options in locating honeypots and correlating their findings, and their impact on the speed and accuracy in detecting worms and other attacks.

Reference [35] proposes the use of honeypots with instrumented versions of software services to be protected, coupled with an automated patch-generation facility. This allows for quick ( < 1 minute) fixing of buffer overflow vulnerabilities, even against zero-day worms, but depends on scanning behavior on the part of worms. Toth and Kruegel [48] propose to detect buffer overflow payloads (including previously unseen ones) by treating inputs received over the network as code fragments; they show that legitimate requests will appear to contain relatively short sequences of valid x86 instruction opcodes, compared to attacks that will contain long sequences. They integrate this mechanism into the Apache web server, resulting in a small performance degradation.

The HACQIT architecture [16,31,29,30] uses various sensors to detect new types of attacks against secure servers, access to which is limited to small numbers of users at a time. Any deviation from expected or known behavior results in the possibly subverted server being taken off-line. A sandboxed instance of the server is used to conduct ``clean room'' analysis, comparing the outputs from two different implementations of the service (in their prototype, the Microsoft IIS and Apache web servers were used to provide application diversity). Machine-learning techniques are used to generalize attack features from observed instances of the attack. Content-based filtering is then used, either at the firewall or the end host, to block inputs that may have resulted in attacks, and the infected servers are restarted. Due to the feature-generalization approach, trivial variants of the attack will also be caught by the filter. [49] takes a roughly similar approach, although filtering is done based on port numbers, which can affect service availability. Cisco's Network-Based Application Recognition (NBAR) [2] allows routers to block TCP sessions based on the presence of specific strings in the TCP stream. This feature was used to block CodeRed probes, without affecting regular web-server access. Porras et al. [27] argue that hybrid defenses using complementary techniques (in their case, connection throttling at the domain gateway and a peer-based coordination mechanism), can be much more effective against a wide variety of worms.

DOMINO [57] is an overlay system for cooperative intrusion detection. The system is organized in two layers, with a small core of trusted nodes and a larger collection of nodes connected to the core. The experimental analysis demonstrates that a coordinated approach has the potential of providing early warning for large-scale attacks while reducing potential false alarms. Reference [59] describes an architecture and models for an early warning system, where the participating nodes/routers propagate alarm reports towards a centralized site for analysis. The question of how to respond to alerts is not addressed, and, similar to DOMINO, the use of a centralized collection and analysis facility is weak against worms attacking the early warning infrastructure.

Suh et al. [44] propose a hardware-based solution that can be used to thwart control-transfer attacks and restrict executable instructions by monitoring ``tainted'' input data. In order to identify ``tainted'' data, they rely on the operating system. If the processor detects the use of this tainted data as a jump address or an executed instruction, it raises an exception that can be handled by the operating system. The authors do not address the issue of recovering program execution and suggest the immediate termination of the offending process. DIRA [37] is a technique for automatic detection, identification and repair of control-hijacking attacks. This solution is implemented as a GCC compiler extension that transforms a program's source code adding heavy instrumentation so that the resulting program can perform these tasks. The use of checkpoints throughout the program ensures that corruption of state can be detected if control-sensitive data structures are overwritten. Unfortunately, the performance implications of the system make it unusable as a front line defense mechanism. Newsome and Song [24] propose dynamic taint analysis for automatic detection of overwrite attacks. Tainted data is monitored throughout the program execution and modified buffers with tainted information will result in protection faults. Once an attack has been identified, signatures are generated using automatic semantic analysis. The technique is implemented as an extension to Valgrind and does not require any modifications to the program's source code but suffers from severe performance degradation.

The Safe Execution Environment (SEE) [45] allows users to deploy and test untrusted software without fear of damaging their system. This is done by creating a virtual environment where the software has read access to the real data; all writes are local to this virtual environment. The user can inspect these changes and decide whether to commit them or not. We envision use of this technique for unrolling the effects of filesystem changes in our system, as part of our future work plans. A similar proposal is presented in [21] for executing untrusted Java applets in a safe ``playground'' that is isolated from the user's environment.


7 Conclusion

We have described a novel approach to dealing with zero-day attacks by combining features found today in honeypots and anomaly detection systems. The main advantage of this architecture is that it provides system designers with the ability to fine-tune systems with impunity, since any false positives (legitimate traffic) will be filtered by the underlying components.

We have implemented this approach in an architecture called Shadow Honeypots. In this approach, we employ an array of anomaly detectors to monitor and classify all traffic to a protected network; traffic deemed anomalous is processed by a shadow honeypot, an instrumented instance of the application being protected. Attacks against the shadow honeypot are detected and caught before they infect the state of the protected application. This enables the system to implement policies that trade off between performance and risk, retaining the capability to re-evaluate this trade-off effortlessly.

Finally, the preliminary performance experiments indicate that despite the considerable cost of processing suspicious traffic on our Shadow Honeypots and overhead imposed by instrumentation, our system is capable of sustaining the overall workload of protecting services such as a Web server farm, as well as vulnerable Web browsers. In the future, we expect that the impact on performance can be minimized by reducing the rate of false positives and tuning the AD heuristics using a feedback loop with the shadow honeypot. Our plans for future work also include evaluating different components and extending the performance evaluation.

Acknowledgments

The work of Kostas Anagnostakis is supported by OSD/ONR CIP/SW URI under grants N00014-01-1-0795 and N00014-04-1-0725. The work of P. Akritidis, K. Xinidis and E. Markatos was supported in part by the GSRT project EAR (USA-022) funded by the Greek Secretariat for Research and Technology and by the IST project NoAH (011923) funded by the European Union. P. Akritidis, K. Xinidis and E. Markatos are also with the University of Crete.

Bibliography

1
i-Bench.
https://www.veritest.com/benchmarks/i-bench/default.asp.

2
Using Network-Based Application Recognition and Access Control Lists for Blocking the "Code Red" Worm at Network Ingress Points.
Technical report, Cisco Systems, Inc.

3
CERT Advisory CA-2001-19: `Code Red' Worm Exploiting Buffer Overflow in IIS Indexing Service DLL.
https://www.cert.org/advisories/CA-2001-19.html, July 2001.

4
ApacheBench: A complete benchmarking and regression testing suite.
https://freshmeat.net/projects/apachebench/, July 2003.

5
Cert Advisory CA-2003-04: MS-SQL Server Worm.
https://www.cert.org/advisories/CA-2003-04.html, January 2003.

6
Microsoft Security Bulletin MS04-028: Buffer Overrun in JPEG Processing Could Allow Code Execution.
https://www.microsoft.com/technet/security/bulletin/MS04-028.mspx, September 2004.

7
US-CERT Technical Cyber Security Alert TA04-217A: Multiple Vulnerabilities in libpng.
https://www.us-cert.gov/cas/techalerts/TA04-217A.html, August 2004.

8
P. Akritidis, K. Anagnostakis, and E. P. Markatos.
Efficient content-based fingerprinting of zero-day worms.
In Proceedings of the IEEE International Conference on Communications (ICC), May 2005.

9
M. Bailey, E. Cooke, F. Jahanian, J. Nazario, and D. Watson.
The Internet Motion Sensor: A Distributed Blackhole Monitoring System.
In Proceedings of the 12th ISOC Symposium on Network and Distributed Systems Security (SNDSS), pages 167-179, February 2005.

10
M. Bhattacharyya, M. G. Schultz, E. Eskin, S. Hershkop, and S. J. Stolfo.
MET: An Experimental System for Malicious Email Tracking.
In Proceedings of the New Security Paradigms Workshop (NSPW), pages 1-12, September 2002.

11
C. Clark, W. Lee, D. Schimmel, D. Contis, M. Kone, and A. Thomas.
A Hardware Platform for Network Intrusion Detection and Prevention.
In Proceedings of the 3rd Workshop on Network Processors and Applications (NP3), February 2004.

12
E. Cook, M. Bailey, Z. M. Mao, and D. McPherson.
Toward Understanding Distributed Blackhole Placement.
In Proceedings of the ACM Workshop on Rapid Malcode (WORM), pages 54-64, October 2004.

13
D. Dagon, X. Qin, G. Gu, W. Lee, J. Grizzard, J. Levine, and H. Owen.
HoneyStat: Local Worm Detection Using Honeypots.
In Proceedings of the 7th International Symposium on Recent Advances in Intrusion Detection (RAID), pages 39-58, October 2004.

14
E. N. Elnozahy, L. Alvisi, Y.-M. Wang, and D. B. Johnson.
A survey of rollback-recovery protocols in message-passing systems.
ACM Comput. Surv., 34(3):375-408, 2002.

15
J. Jung, V. Paxson, A. W. Berger, and H. Balakrishnan.
Fast Portscan Detection Using Sequential Hypothesis Testing.
In Proceedings of the IEEE Symposium on Security and Privacy, May 2004.

16
J. E. Just, L. A. Clough, M. Danforth, K. N. Levitt, R. Maglich, J. C. Reynolds, and J. Rowe.
Learning Unknown Attacks - A Start.
In Proceedings of the 5th International Symposium on Recent Advances in Intrusion Detection (RAID), October 2002.

17
H. Kim and B. Karp.
Autograph: Toward Automated, Distributed Worm Signature Detection.
In Proceedings of the 13th USENIX Security Symposium, pages 271-286, August 2004.

18
C. Kruegel, F. Valeur, G. Vigna, and R. Kemmerer.
Stateful Intrusion Detection for High-Speed Networks.
In Proceedings of the IEEE Symposium on Security and Privacy, pages 285-294, May 2002.

19
C. Kruegel and G. Vigna.
Anomaly Detection of Web-based Attacks.
In Proceedings of the 10th ACM Conference on Computer and Communications Security (CCS), pages 251-261, October 2003.

20
J. G. Levine, J. B. Grizzard, and H. L. Owen.
Using Honeynets to Protect Large Enterprise Networks.
IEEE Security & Privacy, 2(6):73-75, November/December 2004.

21
D. Malkhi and M. K. Reiter.
Secure Execution of Java Applets Using a Remote Playground.
IEEE Transactions on Software Engineering, 26(12):1197-1209, 2000.

22
A. J. Malton.
The Denotational Semantics of a Functional Tree-Manipulation Language.
Computer Languages, 19(3):157-168, 1993.

23
D. Moore, C. Shannon, G. Voelker, and S. Savage.
Internet Quarantine: Requirements for Containing Self-Propagating Code.
In Proceedings of the IEEE Infocom Conference, April 2003.

24
J. Newsome and D. Song.
Dynamic Taint Analysis for Automatic Detection, Analysis, and Signature Generation of Exploits on Commodity Software.
In Proceedings of the 12th ISOC Symposium on Network and Distributed System Security (SNDSS), pages 221-237, February 2005.

25
A. Pasupulati, J. Coit, K. Levitt, S. F. Wu, S. H. Li, J. C. Kuo, and K. P. Fan.
Buttercup: On Network-based Detection of Polymorphic Buffer Overflow Vulnerabilities.
In Proceedings of the Network Operations and Management Symposium (NOMS), volume 1, pages 235-248, April 2004.

26
J. Pincus and B. Baker.
Beyond Stack Smashing: Recent Advances in Exploiting Buffer Overflows.
IEEE Security & Privacy, 2(4):20-27, July/August 2004.

27
P. Porras, L. Briesemeister, K. Levitt, J. Rowe, and Y.-C. A. Ting.
A Hybrid Quarantine Defense.
In Proceedings of the ACM Workshop on Rapid Malcode (WORM), pages 73-82, October 2004.

28
N. Provos.
A Virtual Honeypot Framework.
In Proceedings of the 13th USENIX Security Symposium, pages 1-14, August 2004.

29
J. Reynolds, J. Just, E. Lawson, L. Clough, and R. Maglich.
On-line Intrusion Protection by Detecting Attacks with Diversity.
In Proceedings of the 16th Annual IFIP 11.3 Working Conference on Data and Application Security, April 2002.

30
J. C. Reynolds, J. Just, L. Clough, and R. Maglich.
On-Line Intrusion Detection and Attack Prevention Using Diversity, Generate-and-Test, and Generalization.
In Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS), January 2003.

31
J. C. Reynolds, J. Just, E. Lawson, L. Clough, and R. Maglich.
The Design and Implementation of an Intrusion Tolerant System.
In Proceedings of the International Conference on Dependable Systems and Networks (DSN), June 2002.

32
M. Roesch.
Snort: Lightweight Intrusion Detection for Networks.
In Proceedings of USENIX LISA, November 1999.
(software available from https://www.snort.org/).

33
L. Schaelicke, T. Slabach, B. Moore, and C. Freeland.
Characterizing the Performance of Network Intrusion Detection Sensors.
In Proceedings of Recent Advances in Intrusion Detection (RAID), September 2003.

34
S. E. Schechter, J. Jung, and A. W. Berger.
Fast Detection of Scanning Worm Infections.
In Proceedings of the 7th International Symposium on Recent Advances in Intrusion Detection (RAID), pages 59-81, October 2004.

35
S. Sidiroglou and A. D. Keromytis.
A Network Worm Vaccine Architecture.
In Proceedings of the IEEE Workshop on Enterprise Technologies: Infrastructure for Collaborative Enterprises (WETICE), Workshop on Enterprise Security, pages 220-225, June 2003.

36
S. Singh, C. Estan, G. Varghese, and S. Savage.
Automated Worm Fingerprinting.
In Proceedings of the 6th Symposium on Operating Systems Design & Implementation (OSDI), December 2004.

37
A. Smirnov and T. Chiueh.
DIRA: Automatic Detection, Identification, and Repair of Control-Hijacking Attacks.
In Proceedings of the 12th ISOC Symposium on Network and Distributed System Security (SNDSS), February 2005.

38
T. Spalink, S. Karlin, L. Peterson, and Y. Gottlieb.
Building a Robust Software-Based Router Using Network Processors.
In Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP), pages 216-229, Chateau Lake Louise, Banff, Alberta, Canada, October 2001.

39
D. Spinellis.
Reliable Identification of Bounded-Length Viruses is NP-Complete.
IEEE Transactions on Information Theory, 49(1):280-284, January 2003.

40
L. Spitzner.
Honeypots: Tracking Hackers.
Addison-Wesley, 2003.

41
S. Staniford.
Containment of Scanning Worms in Enterprise Networks.
Journal of Computer Security, 2005.

42
S. Staniford, D. Moore, V. Paxson, and N. Weaver.
The Top Speed of Flash Worms.
In Proceedings of the ACM Workshop on Rapid Malcode (WORM), pages 33-42, October 2004.

43
S. Staniford, V. Paxson, and N. Weaver.
How to Own the Internet in Your Spare Time.
In Proceedings of the 11th USENIX Security Symposium, pages 149-167, August 2002.

44
G. E. Suh, J. W. Lee, D. Zhang, and S. Devadas.
Secure Program Execution via Dynamic Information Flow Tracking.
SIGOPS Operating Systems Review, 38(5):85-96, 2004.

45
W. Sun, Z. Liang, R. Sekar, and V. N. Venkatakrishnan.
One-way Isolation: An Effective Approach for Realizing Safe Execution Environments.
In Proceedings of the 12th ISOC Symposium on Network and Distributed Systems Security (SNDSS), pages 265-278, February 2005.

46
P. Ször and P. Ferrie.
Hunting for Metamorphic.
Technical report, Symantec Corporation, June 2003.

47
Top Layer Networks.
https://www.toplayer.com.

48
T. Toth and C. Kruegel.
Accurate Buffer Overflow Detection via Abstract Payload Execution.
In Proceedings of the 5th Symposium on Recent Advances in Intrusion Detection (RAID), October 2002.

49
T. Toth and C. Kruegel.
Connection-history Based Anomaly Detection.
In Proceedings of the IEEE Workshop on Information Assurance and Security, June 2002.

50
S. Venkataraman, D. Song, P. B. Gibbons, and A. Blum.
New Streaming Algorithms for Fast Detection of Superspreaders.
In Proceedings of the 12th ISOC Symposium on Network and Distributed Systems Security (SNDSS), pages 149-166, February 2005.

51
G. Vigna, W. Robertson, and D. Balzarotti.
Testing Network-based Intrusion Detection Signatures Using Mutant Exploits.
In Proceedings of the 11th ACM Conference on Computer and Communications Security (CCS), pages 21-30, October 2004.

52
H. J. Wang, C. Guo, D. R. Simon, and A. Zugenmaier.
Shield: Vulnerability-Driven Network Filters for Preventing Known Vulnerability Exploits.
In Proceedings of the ACM SIGCOMM Conference, pages 193-204, August 2004.

53
K. Wang and S. J. Stolfo.
Anomalous Payload-based Network Intrusion Detection.
In Proceedings of the 7th International Symposium on Recent Advances in Intrusion Detection (RAID), pages 201-222, September 2004.

54
N. Weaver, S. Staniford, and V. Paxson.
Very Fast Containment of Scanning Worms.
In Proceedings of the 13th USENIX Security Symposium, pages 29-44, August 2004.

55
D. Whyte, E. Kranakis, and P. van Oorschot.
DNS-based Detection of Scanning Worms in an Enterprise Network.
In Proceedings of the 12th ISOC Symposium on Network and Distributed Systems Security (SNDSS), pages 181-195, February 2005.

56
J. Wu, S. Vangala, L. Gao, and K. Kwiat.
An Effective Architecture and Algorithm for Detecting Worms with Various Scan Techniques.
In Proceedings of the ISOC Symposium on Network and Distributed System Security (SNDSS), pages 143-156, February 2004.

57
V. Yegneswaran, P. Barford, and S. Jha.
Global Intrusion Detection in the DOMINO Overlay System.
In Proceedings of the ISOC Symposium on Network and Distributed System Security (SNDSS), February 2004.

58
V. Yegneswaran, P. Barford, and D. Plonka.
On the Design and Use of Internet Sinks for Network Abuse Monitoring.
In Proceedings of the 7th International Symposium on Recent Advances in Intrusion Detection (RAID), pages 146-165, October 2004.

59
C. C. Zou, L. Gao, W. Gong, and D. Towsley.
Monitoring and Early Warning for Internet Worms.
In Proceedings of the 10th ACM Conference on Computer and Communications Security (CCS), pages 190-199, October 2003.
