

Previous Work

Several research groups have recognized the need for a latency service on the Internet. Unfortunately, many current proposals for latency services make a poor trade-off between accuracy and overhead, are not widely deployed, require changes to the network, or have scalability problems. In this section, we survey existing latency services and assess how well each meets our design goals.

The simplest latency service gathers all-pairs latency information and makes this data available to all nodes via a centralized location, as exemplified by the all-pairs ping service [18] on PlanetLab. Such an approach causes a large amount of measurement traffic because every node measures latencies to every other node.
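The scaling problem with all-pairs probing can be made concrete with a back-of-the-envelope sketch: with n nodes each probing every other node, the per-round probe count grows quadratically.

```python
# Sketch of why all-pairs measurement scales poorly: every node probes
# every other node, so probe traffic per round grows quadratically in n.
def probe_count(n: int) -> int:
    """Number of directed latency probes per measurement round."""
    return n * (n - 1)

for n in (10, 100, 1000):
    print(n, probe_count(n))
```

Going from 100 nodes to 1000 nodes multiplies the probe traffic by roughly a hundred, which is why the approaches below try to measure only a subset of pairs.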

IDMaps [5] is a latency service that attempts to minimize measurement traffic. It uses a network of tracers that proactively measure distances among themselves and to representative nodes from each address prefix. This information is used to create a virtual distance map of the Internet. Since only tracers measure latency, the overhead is kept low, but the prediction error is determined by the distribution of tracer locations. Achieving a good distribution is hard because the physical network topology is not known in practice.
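A hedged sketch of how such a virtual distance map can answer a query: each host is associated with its nearest tracer, and the host-to-host latency is approximated by composing host-to-tracer and tracer-to-tracer distances. The hosts, tracers, and RTT values below are illustrative, not taken from the IDMaps paper.

```python
# Hedged sketch of IDMaps-style estimation: latency(A, B) is approximated
# as d(A, tracer_A) + d(tracer_A, tracer_B) + d(tracer_B, B).
tracer_dist = {("T1", "T2"): 40.0}                      # measured tracer RTTs (ms)
host_to_tracer = {"A": ("T1", 5.0), "B": ("T2", 8.0)}   # host -> (nearest tracer, RTT)

def estimate(a: str, b: str) -> float:
    ta, da = host_to_tracer[a]
    tb, db = host_to_tracer[b]
    if ta == tb:                       # both hosts share a tracer
        return da + db
    key = (ta, tb) if (ta, tb) in tracer_dist else (tb, ta)
    return da + tracer_dist[key] + db

print(estimate("A", "B"))  # 5 + 40 + 8 = 53.0
```

The composed path through the tracers is only an approximation of the direct path, which is why tracer placement dominates the prediction error.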

The Internet Iso-bar [2] system attempts to remove the requirement of topology knowledge by dividing network nodes into clusters based on inter-node latencies. A node from each cluster is then selected to monitor intra- and inter-cluster latencies and to respond to latency queries. However, the accuracy of the system depends on how amenable the network is to clustering, and the cluster size determines the measurement overhead.
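The clustering step can be sketched roughly as follows: nodes with similar latency vectors (here, RTTs to a few shared probe targets) are grouped, and one member per cluster would then measure and answer queries on behalf of the rest. The vectors, distance metric, and threshold below are illustrative assumptions, not details from the Iso-bar paper.

```python
# Hedged sketch of latency-based clustering in the spirit of Internet
# Iso-bar: greedily group nodes whose latency vectors are close, so one
# representative per cluster can stand in for its members.
import math

latency_vectors = {                 # node -> RTTs (ms) to shared probe targets
    "a": (10.0, 80.0),
    "b": (12.0, 78.0),
    "c": (90.0, 15.0),
}

def vec_dist(u: str, v: str) -> float:
    return math.dist(latency_vectors[u], latency_vectors[v])

clusters = []                       # each cluster is a list; element 0 is its head
for node in latency_vectors:
    for cluster in clusters:
        if vec_dist(node, cluster[0]) < 20.0:   # join if close to the head
            cluster.append(node)
            break
    else:
        clusters.append([node])     # otherwise start a new cluster

print(clusters)  # [['a', 'b'], ['c']]
```

The choice of threshold plays the role the paper assigns to cluster size: larger clusters mean fewer monitors and less traffic, but coarser answers.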

Ratnasamy et al. propose a latency service that attempts to reduce the number of network measurements even further [14]. Nodes measure their network distance only to a small number of landmark nodes and use the results to partition themselves into bins. Nodes that fall within the same bin are deemed to be close. Although this scheme vastly reduces the measurement overhead compared to other systems, it also exhibits high error due to the coarse-grained assignment to a fixed number of bins.
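A minimal sketch of the binning idea: a node's bin is derived from the ordering of landmarks by measured RTT, so nodes that see the landmarks in the same order land in the same bin. (The original scheme also augments the ordering with coarse latency levels, omitted here for brevity; the landmark names and RTTs are illustrative.)

```python
# Hedged sketch of landmark binning: each node measures RTT to a small set
# of landmarks, and its bin is the landmarks sorted by increasing RTT.
# Nodes with identical bins are treated as close to each other.
def bin_of(rtts: dict) -> tuple:
    """rtts maps landmark name -> measured RTT in ms."""
    return tuple(sorted(rtts, key=rtts.get))

n1 = bin_of({"L1": 20.0, "L2": 90.0, "L3": 45.0})
n2 = bin_of({"L1": 25.0, "L2": 110.0, "L3": 50.0})
print(n1, n2, n1 == n2)  # both order as (L1, L3, L2), so they share a bin
```

With k landmarks there are only k! orderings, which makes the assignment cheap but coarse: nodes in the same bin may still be far apart, matching the high error the text describes.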

Nakao et al. observe that much of the network information that applications are interested in is already collected by lower network layers. They propose to exploit this through a routing underlay [11], which provides a standardized interface for applications to inspect the state and structure of the network. Although an underlay would provide efficient access to network information already gathered by routers, it would require changes to routers.
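To make the idea of a standardized interface concrete, the sketch below shows what such a query surface might look like to an overlay application. The class and method names are purely hypothetical illustrations, not the primitives defined by Nakao et al.

```python
# Purely hypothetical sketch of a routing-underlay query interface: a
# read-only view of network state that routers already gather, exposed to
# overlay applications through a fixed set of methods.
from abc import ABC, abstractmethod

class RoutingUnderlay(ABC):
    @abstractmethod
    def nearby_nodes(self, node: str, k: int) -> list:
        """Return up to k nodes topologically close to `node`."""

    @abstractmethod
    def hop_count(self, src: str, dst: str) -> int:
        """Return the router-level hop count between src and dst."""

class StaticUnderlay(RoutingUnderlay):
    """Toy implementation backed by a fixed hop-count table."""

    def __init__(self, hops):
        self._hops = hops            # (src, dst) -> hop count

    def nearby_nodes(self, node, k):
        peers = {d for s, d in self._hops if s == node}
        return sorted(peers, key=lambda d: self._hops[(node, d)])[:k]

    def hop_count(self, src, dst):
        return self._hops[(src, dst)]

underlay = StaticUnderlay({("a", "b"): 3, ("a", "c"): 7})
print(underlay.nearby_nodes("a", 1))  # ['b']
```

The appeal of this design is that applications query state the network already has rather than generating new probe traffic; the cost, as noted above, is that routers must be changed to export that state.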

Jonathan Ledlie 2005-10-18