
Introduction

The Internet today is highly vulnerable to Internet epidemics: events in which a particularly virulent Internet pathogen, such as a worm or email virus, compromises a large number of hosts. Starting with the Code Red worm in 2001, which infected over 360,000 hosts in 14 hours [27], such pathogens have become increasingly virulent in terms of speed, extent, and sophistication. Sapphire scanned most IP addresses in less than 10 minutes [25], Nimda reportedly infected millions of hosts, and Witty exploited vulnerabilities in firewall software explicitly designed to defend hosts from such pathogens [26]. We call such epidemics Internet catastrophes because they result in extensive, widespread damage costing billions of dollars [27]. Such damage ranges from overwhelming networks with epidemic traffic [25,27], to providing zombies for spam relays [30] and denial of service attacks [35], to deleting disk blocks [26]. Given the current ease with which such pathogens can be created and launched, further Internet catastrophes are inevitable in the near future.

Defending hosts and the systems that run on them is therefore a critical problem, and one that has received considerable attention recently. Approaches to defend against Internet pathogens generally fall into three categories. Prevention reduces the size of the vulnerable host population [38,41,42]. Treatment reduces the rate of infection [9,33]. Finally, containment techniques block infectious communication and reduce the contact rate of a spreading pathogen [28,44,45].

Such approaches can mitigate the impact of an Internet catastrophe, reducing the number of vulnerable and compromised hosts. However, they are unlikely to protect all vulnerable hosts or to eliminate the risk of future epidemics and catastrophes. For example, fast-scanning worms like Sapphire can quickly probe most hosts on the Internet, making it challenging for worm defenses to detect and react to them at Internet scale [28]. The recent Witty worm embodies a so-called zero-day worm, exploiting a vulnerability very soon after patches were announced. Such pathogens make it increasingly difficult for organizations to patch vulnerabilities before a catastrophe occurs. As a result, we argue that such defenses are necessary, but not sufficient, for protecting distributed systems and data on Internet hosts from catastrophes.

In this paper, we propose a new approach for designing distributed systems to survive Internet catastrophes called informed replication. The key observation that makes informed replication both feasible and practical is that Internet epidemics exploit shared vulnerabilities. By replicating a system service on hosts that do not have the same vulnerabilities, a pathogen that exploits one or more vulnerabilities cannot cause all replicas to fail. For example, to prevent a distributed system from failing due to a pathogen that exploits vulnerabilities in Web servers, the system can place replicas on hosts running different Web server software.

The software of every system is inherently a shared vulnerability that poses a risk to using the system, and systems designed to use informed replication are no different. Substantial effort has gone into making systems themselves more secure, and our design approach can certainly benefit from this effort. However, with the dramatic rise of worm epidemics, such systems are now increasingly at risk of large-scale failures due to vulnerabilities in unrelated software running on the host. Informed replication reduces this new source of risk.

This paper makes four contributions. First, we develop a system model using the core abstraction [15] to represent failure correlation in distributed systems. A core is a reliable minimal subset of components such that the probability that all hosts in the core fail is negligible. To reason about the correlation of failures among hosts, we associate attributes with hosts. Attributes represent characteristics of a host that can make it prone to failure, such as its operating system and network services. Since a host often has many characteristics that make it vulnerable to failure, we group its attributes together into a configuration to represent the set of vulnerabilities of that host. A system can use the configurations of all hosts in the system to determine how many replicas are needed, and on which hosts those replicas should be placed, to survive a worm epidemic.
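To make the core abstraction concrete, the following minimal Python sketch (the host names, attribute labels, and data layout are our own illustration, not the paper's implementation) represents each host's configuration as a set of attributes and checks whether a candidate replica set can survive a pathogen exploiting any single attribute, i.e., whether no single attribute is shared by every host in the set.

```python
# Illustrative sketch only: a host's configuration is the set of attributes
# (operating system, network services) that expose it to correlated failure.
configs = {
    "hostA": {"linux", "apache"},
    "hostB": {"windows", "iis"},
    "hostC": {"solaris", "apache"},
}

def survives_single_exploit(core, configs):
    """True if no single attribute is shared by every host in the core, so a
    pathogen exploiting one attribute cannot fail all replicas at once."""
    shared = set.intersection(*(configs[h] for h in core))
    return len(shared) == 0

print(survives_single_exploit({"hostA", "hostB"}, configs))  # True
print(survives_single_exploit({"hostA", "hostC"}, configs))  # False: both run apache
```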

Second, the efficiency of informed replication fundamentally depends upon the degree of software diversity among the hosts in the system, as more homogeneous host populations result in a larger storage burden for particular hosts. To evaluate the degree of software heterogeneity found in an Internet setting, we measure and characterize the diversity of the operating systems and network services of hosts in the UCSD network. The operating system is important because it is the primary attribute differentiating hosts, and network services represent the targets for exploit by worms. The results of this study indicate that such networks have sufficient diversity to make informed replication feasible.

Third, we develop heuristics for computing cores that have a number of attractive features. They provide excellent reliability guarantees, ensuring that user data survives attacks by single- and double-exploit pathogens with probability greater than $0.99$. They have low overhead, requiring fewer than 3 copies to cope with single-exploit pathogens and fewer than 5 copies to cope with double-exploit pathogens. They bound the number of replica copies stored by any host, limiting the storage burden on any single host. Finally, the heuristics lend themselves to a fully distributed implementation for scalability: any host can determine its replica set (its core) by contacting a constant number of other hosts in the system, independent of system size.
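As a rough illustration of how such a computation might proceed (this greedy sketch is our own and is not the paper's heuristics, which are described in Section 5 and additionally balance the storage load across hosts):

```python
# Hypothetical greedy construction of a core against single-exploit pathogens:
# every attribute of the originating host must be absent from at least one
# chosen replica, and the number of copies is bounded.

def greedy_core(me, candidates, configs, max_copies=3):
    core = [me]
    uncovered = set(configs[me])          # attributes not yet covered by a replica
    while uncovered and len(core) < max_copies:
        # pick the candidate lacking the most still-uncovered attributes
        best = max((h for h in candidates if h not in core),
                   key=lambda h: len(uncovered - configs[h]), default=None)
        if best is None or not (uncovered - configs[best]):
            break                         # no remaining candidate makes progress
        core.append(best)
        uncovered &= configs[best]        # only attributes `best` also has remain
    return core, not uncovered            # (replica set, whether it forms a core)

configs = {"me": {"linux", "apache"},
           "hostB": {"windows", "iis"},
           "hostC": {"solaris", "apache"}}
print(greedy_core("me", ["hostB", "hostC"], configs))
# -> (['me', 'hostB'], True): hostB shares no attribute with `me`
```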

Finally, to demonstrate the feasibility and utility of our approach, we apply informed replication to the design and implementation of Phoenix, a cooperative, distributed remote backup system that protects stored data against Internet catastrophes that cause data loss [26]. The usage model of Phoenix is straightforward: users set aside an amount $F$ of bytes of their disk space for management by the system, and the system protects a proportionally smaller amount $F/k$ of their data using storage provided by other hosts, for some value of $k$. We implement Phoenix as a service layered on the Pastry DHT [32] in the Macedon framework [31], and evaluate its ability to survive emulated catastrophes on the PlanetLab testbed.
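As a back-of-the-envelope illustration of the usage model (the numbers below are made up for the example and do not come from the paper):

```python
# Hypothetical example of the Phoenix storage budget: a user hands F bytes of
# local disk to the system and, for some factor k determined by the required
# redundancy, roughly F/k bytes of their own data are protected elsewhere.
F = 1 * 1024**3          # contribute 1 GiB of local disk (example value)
k = 4                    # example redundancy factor
protected = F // k       # about 256 MiB of the user's data can be protected
print(f"contributed {F} bytes, protected about {protected} bytes")
```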

The rest of this paper is organized as follows. Section 2 discusses related work. Section 3 describes our system model for representing correlated failures. Section 4 describes our measurement study of the software diversity of hosts in a large network, and Section 5 describes and evaluates heuristics for computing cores. Section 6 describes the design and implementation of Phoenix, and Section 7 describes the evaluation of Phoenix. Finally, Section 8 concludes.

