This paper presents an open-source virtual machine for Java,
intended as a research framework for the efficient
execution of Java bytecode.
The framework consists essentially
of an extensible bytecode interpreter employing state-of-the-art
and innovative techniques.
Written in the C programming language and assuming
minimal system dependencies, the interpreter emphasizes high-level
techniques to support efficient execution.
In particular, we introduce a bidirectional layout for object
instances that groups reference fields sequentially to allow
efficient garbage collection. We also introduce
a sparse interface virtual table layout that reduces the cost
of interface method calls to that of normal virtual calls.
Finally, we present a technique to improve synchronization performance
by eliminating busy-waiting in the presence of contention.
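The bidirectional layout described above can be made concrete with a small sketch. This is an illustrative model, not the VM's actual implementation: reference fields are assigned negative slot indices before the object header and primitive fields positive indices after it, so the garbage collector only needs a single count to scan all references as one contiguous run.

```python
# Hypothetical sketch of a bidirectional object layout: reference fields
# are grouped at negative offsets from the header so a collector can scan
# them as one contiguous run, without per-field type maps.

class BidirectionalLayout:
    def __init__(self, fields):
        # fields: list of (name, is_reference) pairs in declaration order
        refs = [name for name, is_ref in fields if is_ref]
        prims = [name for name, is_ref in fields if not is_ref]
        self.offsets = {}
        # references grow "downward" before the header ...
        for i, name in enumerate(reversed(refs), start=1):
            self.offsets[name] = -i
        # ... primitives grow "upward" after it
        for i, name in enumerate(prims, start=1):
            self.offsets[name] = +i
        self.num_refs = len(refs)

    def reference_slots(self):
        # the GC only needs the count: slots -num_refs .. -1 hold references
        return list(range(-self.num_refs, 0))

layout = BidirectionalLayout([("next", True), ("value", False), ("prev", True)])
```

With this layout, a collector visits `layout.reference_slots()` directly instead of consulting a reference map for every field.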

A user-level operating system (OS) can be implemented as a regular user
process on top of another host operating system.
Conventional user-level OSes, such as User Mode Linux, view the
underlying host operating system as a specific hardware architecture.
Therefore, the implementation of a user-level OS often requires porting
of an existing kernel to a new hardware architecture.
This paper proposes a new method for implementing user-level OSes by
using partial emulation of hardware and static rewriting of machine
instructions.
In this method, privileged instructions and their related non-privileged
instructions in a native operating system are statically translated
into subroutine calls that perform emulation.
The translated instructions of the user-level OS are executed by both
the real CPU and the partial emulator.
This method does not require detailed knowledge about kernel internals.
By using the proposed method, NetBSD and FreeBSD are executed as user
processes on NetBSD and Linux.
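The static-rewriting step can be sketched in miniature. This is an illustrative model only: the opcode names and the `emulate_*` subroutine naming are invented for the example, not taken from any real ISA or from the paper's toolchain.

```python
# Hypothetical sketch of static binary rewriting: privileged instructions
# in the guest kernel are replaced by calls into an emulator, while
# ordinary instructions are left for the real CPU to execute directly.

PRIVILEGED = {"cli", "sti", "lidt", "mov_cr3"}  # illustrative opcode names

def rewrite(instructions):
    """Translate privileged instructions into emulator subroutine calls."""
    out = []
    for ins in instructions:
        if ins in PRIVILEGED:
            out.append(f"call emulate_{ins}")  # handled by the partial emulator
        else:
            out.append(ins)                    # executed by the real CPU
    return out

print(rewrite(["add", "cli", "mov", "sti"]))
# prints ['add', 'call emulate_cli', 'mov', 'call emulate_sti']
```

The rewritten stream is exactly the split the abstract describes: non-privileged work runs natively, and only the translated privileged operations trap into emulation.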

This paper describes a programming framework for the clean construction of disk
maintenance applications. These applications use it to expose the disk activity
to be done, and then process completed requests as they are reported.
The system ensures that these applications make steady forward progress
without competing for disk access with a system's primary applications.
It opportunistically completes maintenance requests by using disk idle
time and freeblock scheduling. In this paper, three disk maintenance
applications (backup, write-back cache destaging, and disk layout
reorganization) are adapted to the system support and evaluated on a
FreeBSD implementation. All are shown to successfully execute in busy
systems with minimal (e.g., less than 2%) impact on foreground disk
performance. In fact, by modifying FreeBSD's cache to write dirty
blocks for free, the average read cache miss response time is decreased
by 12-30%. For non-volatile caches, the reduction is almost 50%.
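The scheduling idea behind this framework can be sketched as a two-queue dispatcher. This is a minimal model of the stated policy, assuming a hypothetical `DiskScheduler` interface; the real system additionally exploits freeblock scheduling within rotational gaps, which this sketch omits.

```python
import collections

# Minimal sketch of idle-time dispatch: maintenance requests are queued
# separately and serviced only when no foreground request is pending, so
# background work consumes otherwise-idle disk time and never competes
# with a system's primary applications.

class DiskScheduler:
    def __init__(self):
        self.foreground = collections.deque()
        self.maintenance = collections.deque()

    def submit(self, request, background=False):
        (self.maintenance if background else self.foreground).append(request)

    def next_request(self):
        # foreground always wins; maintenance fills idle time
        if self.foreground:
            return self.foreground.popleft()
        if self.maintenance:
            return self.maintenance.popleft()
        return None
```

Under this policy a backlog of maintenance work makes steady progress whenever the foreground queue drains, matching the "steady forward progress without competing for disk access" guarantee above.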

Track-aligned extents (traxtents) utilize disk-specific
knowledge to match access patterns to the strengths of
modern disks. By allocating and accessing related data
on disk track boundaries, a system can avoid most rotational
latency and track crossing overheads. Avoiding
these overheads can increase disk access efficiency by up
to 50% for mid-sized requests (100–500 KB). This paper
describes traxtents, algorithms for detecting track boundaries,
and some uses of traxtents in file systems and video
servers. For large-file workloads, a version of FreeBSD's
FFS implementation that exploits traxtents reduces application
run times by up to 20% compared to the original
version. A video server using traxtent-based requests can
support 56% more concurrent streams at the same startup
latency and buffer space. For LFS, a 44% lower overall
write cost can be achieved for track-sized segments.
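Track-aligned allocation can be sketched as a rounding computation. This example assumes, for simplicity, a constant sectors-per-track figure; real disks have zoned geometry, which is why the paper needs empirical boundary-detection algorithms rather than arithmetic alone.

```python
# Hypothetical sketch of track-aligned extent allocation: trim a candidate
# extent so it begins and ends on track boundaries, ensuring requests
# never straddle a track (avoiding track-crossing overhead) and can use
# zero-latency full-track access (avoiding rotational latency).

def track_aligned_extent(start_sector, length, sectors_per_track):
    """Return (first_sector, length) of the track-aligned portion, or None."""
    # round the start up to the next track boundary
    first = -(-start_sector // sectors_per_track) * sectors_per_track
    # round the end down to the previous track boundary
    last = ((start_sector + length) // sectors_per_track) * sectors_per_track
    if last <= first:
        return None  # extent too small to hold a whole track
    return (first, last - first)
```

For example, an extent starting at sector 150 with length 400 on a (hypothetical) 100-sector track yields the aligned region (200, 300): three full tracks.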

This paper presents the design, implementation, and evaluation of D-GRAID, a
gracefully-degrading and quickly-recovering RAID storage array. D-GRAID
ensures that most files within the file system remain available even
when an unexpectedly high number of faults occur. D-GRAID also recovers
from failures quickly, restoring only live file system data to a hot
spare. Both graceful degradation and live-block recovery are implemented
in a prototype SCSI-based storage system underneath unmodified file
systems, demonstrating that powerful "file-system like" functionality
can be implemented behind a narrow block-based interface.
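The placement idea behind graceful degradation can be shown with a toy mapping. This is an illustrative sketch, not D-GRAID's actual layout policy: all blocks of one file are placed on a single disk, so a disk failure makes some files unavailable in their entirety rather than damaging a little of every file.

```python
# Hypothetical sketch of fault-isolated placement: map each file to one
# disk, so the failure of any single disk leaves every file on the other
# disks fully intact and available.

def place_file(file_id, num_disks):
    # deterministic file-to-disk mapping (illustrative modular placement)
    return file_id % num_disks

def available_files(file_ids, failed_disk, num_disks):
    """Files that survive, whole, after one disk fails."""
    return [f for f in file_ids if place_file(f, num_disks) != failed_disk]
```

With striping, one failure beyond the redundancy level typically corrupts a fragment of most files; with fault-isolated placement, roughly `(num_disks - 1) / num_disks` of the files remain completely usable.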

BGP, the current inter-domain routing protocol, assumes that the routing
information propagated by authenticated routers is correct. This assumption
renders the current infrastructure vulnerable to both accidental misconfigurations
and deliberate attacks. To reduce this vulnerability, we present a combination
of two mechanisms: Listen and Whisper. Listen passively
probes the data plane and checks whether the underlying routes to different
destinations work. Whisper uses cryptographic functions along with routing
redundancy to detect bogus route advertisements in the control plane. These
mechanisms are easily deployable, and do not rely on either a public key
infrastructure or a central authority like ICANN.
The combination of Listen and Whisper eliminates a large number of problems
due to router misconfigurations, and restricts (though does not eliminate) the
damage that deliberate attackers can cause. Moreover, these mechanisms can
detect and contain isolated adversaries that propagate even a few invalid
route announcements. Colluding adversaries pose a more stringent challenge,
and we propose simple changes to the BGP policy mechanism to limit the damage
colluding adversaries can cause. We demonstrate the utility of Listen and
Whisper through real-world deployment, measurements and empirical analysis.
For example, a randomly placed isolated adversary can, in the worst case,
affect reachability to only 10% of the nodes.
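The hash-chain idea underlying Whisper-style consistency checks can be sketched briefly. This is a simplified illustration under assumed semantics, not the paper's exact construction: the origin derives a value from a secret, each hop applies a hash, and two advertisements for the same prefix are consistent only if repeatedly hashing the shorter path's value reproduces the longer path's value.

```python
import hashlib

# Illustrative sketch of hash-chain route consistency: a forged
# advertisement cannot produce a value on the origin's hash chain, so it
# fails the check against any legitimate advertisement.

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def advertise(secret: bytes, hops: int) -> bytes:
    """Value carried by an advertisement after traversing `hops` ASes."""
    v = secret
    for _ in range(hops):
        v = H(v)
    return v

def consistent(v_short, hops_short, v_long, hops_long):
    """Check two advertisements for the same prefix against each other."""
    v = v_short
    for _ in range(hops_long - hops_short):
        v = H(v)
    return v == v_long
```

Note that the check needs no public key infrastructure: any pair of received advertisements can be cross-validated locally, which matches the deployability claim above.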

On-line games are a rapidly growing Internet application. Because of the cost of
supporting on-line games and the unpredictable load on servers,
companies are moving toward sharing infrastructure for game hosting. To
efficiently provision on-line games, it is important to understand game
workloads and the behavior of game players. In this paper, we present a
comprehensive analysis of a collection of on-line game players and game
workloads using data from several sources, including a 13-month trace of
an extremely busy game server containing over 2.8 million connections, a
two-year trace of the aggregate game populations of over 500 on-line
games, and a 4-month trace of a content-distribution network used to
deliver games. The key findings from our measurement study are: (1)
these gamers are an extremely difficult set of users to satisfy, and
unless game servers are properly set up and provisioned, gamers quickly
choose to go elsewhere, (2) the popularity of these games follows a
power law making games difficult to provision at launch time, (3) game
workloads are predictable only over short-term intervals, (4) there are
significant challenges in hosting games on shared infrastructure due to
temporal and geographic synchronization across different games and other
interactive applications, and (5) game software updates are a
significant burden on game hosting that must be planned for. Our results
have implications for both game publishers and infrastructure providers.
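Finding (2), the power-law popularity distribution, can be illustrated with a small Zipf model. The exponent below is an arbitrary example value, not a parameter measured in the study; the point is only that a handful of top-ranked games capture most of the population, which is what makes per-game load at launch hard to predict.

```python
# Illustrative Zipf-like popularity model: the share of players on the
# game at a given popularity rank falls off as a power of the rank.

def zipf_share(rank, num_games, alpha=1.0):
    """Fraction of the total population on the game at `rank` (1 = most popular)."""
    weights = [1.0 / (r ** alpha) for r in range(1, num_games + 1)]
    return (1.0 / (rank ** alpha)) / sum(weights)
```

Under this model the most popular of 100 games draws over five times the players of the tenth-ranked game, so provisioning uniformly across titles either wastes capacity or under-serves the hits.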

A fundamental problem with many peer-to-peer systems is the tendency for
users to “free ride,” consuming resources without contributing to the
system. The popular file distribution tool BitTorrent was explicitly
designed to address this problem, using a tit-for-tat reciprocity
strategy to provide positive incentives for nodes to contribute
resources to the swarm. While BitTorrent has been extremely successful,
we show that its incentive mechanism is not robust to strategic clients.
Through performance modeling parameterized by real world traces, we
demonstrate that all peers contribute resources that do not directly
improve their performance. We use these results to drive the design and
implementation of BitTyrant, a strategic BitTorrent client that provides
a median 70% performance gain for a 1 Mbit client on live Internet
swarms. We further show that when applied universally, strategic clients
can hurt average per-swarm performance compared to today’s BitTorrent clients.
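The tit-for-tat mechanism that BitTyrant exploits can be sketched in a few lines. This is a simplified model of the unchoking rule, with invented names and without BitTorrent's optimistic-unchoke slot: a client reciprocates upload bandwidth to the peers that recently uploaded the most to it.

```python
# Simplified sketch of tit-for-tat unchoking: upload only to the peers
# that contributed the most download bandwidth to us recently. A strategic
# client (as BitTyrant shows) can game this rule by tuning how much it
# uploads to each peer, rather than contributing uniformly.

def select_unchoked(upload_rates_to_us, slots=4):
    """Pick the peers we reciprocate to: the top recent contributors."""
    ranked = sorted(upload_rates_to_us, key=upload_rates_to_us.get, reverse=True)
    return set(ranked[:slots])
```

The vulnerability is visible in the rule itself: a peer only needs to rank inside the top `slots` at each neighbor, so uploading any more than the marginal amount required to hold that rank buys no additional download rate.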

Smartcards, capable of sophisticated cryptography, provide a high
assurance of tamper resistance and are thus commonly used in payment applications.
Although extracting secrets out of smartcards requires resources beyond
the means of many would-be thieves, the manner in which they are used
can be exploited for fraud.
Cardholders authorize financial transactions by presenting the card and
disclosing a PIN to a terminal without any assurance as to the amount
being charged or who is to be paid, and have no means of discerning
whether the terminal is authentic or not.
Even the most advanced smartcards cannot protect customers from being
defrauded by the simple relaying of data from one location to another.
We describe the development of such an attack, and show results from
live experiments on the UK's EMV implementation, Chip & PIN.
We discuss previously proposed defences, and show that these cannot
provide the required security assurances.
A new defence based on a distance bounding protocol is described and
implemented, which requires only modest alterations to current hardware.
As far as we are aware, this is the first complete design and
implementation of a secure distance bounding protocol.
Future smartcard generations could use this design to provide
cost-effective resistance to relay attacks, which are a genuine threat
to deployed applications.
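The core check of a distance bounding protocol can be sketched abstractly. This is an illustrative model of the timing decision only, with invented names, not the protocol implemented in the paper: the terminal times rapid single-bit challenge/response rounds, and a relay attack necessarily adds forwarding delay to every round, pushing it past the bound.

```python
# Illustrative sketch of the distance-bounding decision: accept the card
# only if every challenge/response round completed within the round-trip
# time achievable by a legitimate card physically close to the terminal.
# Relayed responses carry extra network/forwarding latency on every round.

def verify_distance(round_trip_times_ns, max_rtt_ns):
    """Accept only if every challenge-response round met the time bound."""
    return all(t <= max_rtt_ns for t in round_trip_times_ns)
```

Because electromagnetic signals travel about 30 cm per nanosecond, the bound `max_rtt_ns` directly limits how far away the responding card can physically be.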
We also discuss the security-economics impact to customers of enhanced

An important threat to reliable storage of data is silent data corruption. In order to develop suitable protection mechanisms against data corruption, it is essential to understand its characteristics. In this paper, we present the first large-scale study of data corruption. We analyze corruption instances recorded in production storage systems containing a total of 1.53 million disk drives, over a period of 41 months. We study three classes of corruption: checksum mismatches, identity discrepancies, and parity inconsistencies. We focus on checksum mismatches since they occur most frequently.
We find more than 400,000 instances of checksum mismatches over the 41-month period. We find many interesting trends among these instances including: (i) nearline disks (and their adapters) develop checksum mismatches an order of magnitude more often than enterprise class disk drives, (ii) checksum mismatches within the same disk are not independent events and they show high spatial and temporal locality, and (iii) checksum mismatches across different disks in the same storage system are not independent. We use our observations to derive lessons for corruption-proof system design.
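The checksum-mismatch class studied above can be illustrated with a minimal sketch. This is a toy model, assuming CRC32 as the checksum and invented function names; production systems store stronger checksums in dedicated on-disk metadata.

```python
import zlib

# Minimal sketch of checksum-mismatch detection: a checksum stored
# alongside each block at write time is recomputed on read; a mismatch
# signals silent corruption introduced somewhere in the storage stack.

def write_block(data: bytes):
    """Store a block together with its checksum."""
    return (data, zlib.crc32(data))

def read_block(stored):
    """Verify the stored block on read; raise on a checksum mismatch."""
    data, checksum = stored
    if zlib.crc32(data) != checksum:
        raise IOError("checksum mismatch: silent data corruption detected")
    return data
```

A mismatch only tells the system that the block is wrong, not why; distinguishing the causes (lost writes, misdirected writes, bit rot) is what the identity-discrepancy and parity-inconsistency classes in the study address.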