

3.1 Overview

Snapdragon clients run two kernel modules: a standard NFS lock daemon and a module that contains the core filesystem functionality, including requesting and caching capabilities and (partial) blockmaps. The second module exports the new filesystem type ``snapfs'' through Linux's Virtual File System (VFS) interface.
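
To make the client-side caching concrete, the sketch below shows one plausible shape for the per-file state a snapfs client could keep: a capability plus a partial blockmap. The structure and field names are illustrative assumptions, not the actual Snapdragon definitions.

/*
 * Hypothetical sketch (not the actual snapfs definitions) of the per-file
 * state a client might cache: a capability authorizing access to remote
 * blocks plus a partial blockmap from file blocks to disk blocks.
 */
#include <stdint.h>

#define CAP_TOKEN_LEN   20    /* size of the opaque capability token */
#define BLOCKMAP_SLOTS  256   /* only part of the blockmap is cached */

struct snapfs_capability {
    uint64_t disk_id;                /* which NAD this capability is for */
    uint64_t expiry;                 /* server-assigned expiration time  */
    uint8_t  token[CAP_TOKEN_LEN];   /* opaque proof checked by the NAD  */
};

struct snapfs_blockmap_entry {
    uint64_t file_block;             /* logical block within the file    */
    uint64_t disk_block;             /* corresponding block on the NAD   */
};

struct snapfs_file_cache {
    struct snapfs_capability     cap;                  /* cached capability */
    struct snapfs_blockmap_entry map[BLOCKMAP_SLOTS];  /* partial blockmap  */
    unsigned                     nmapped;              /* entries in use    */
};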

The Snapdragon metadata server consists of a filesystem kernel module (snapd), a device driver (rddev), and a lock daemon (lockd). Lockd is identical to NFS's. Snapd and rddev are shown in Figure 6. Snapd translates client requests into filesystem-independent operations. In Linux, such operations are handled by the VFS layer, which invokes filesystem-specific code (in our case ext2) that implements the operation by issuing low-level block requests to the device driver, rddev in this case. Rddev translates these block requests into messages to the disk controller (NAD), using the same protocol as the clients but with ``allow-all'' capabilities. This architecture makes Snapdragon independent of the underlying file system and allows the data layout on remote disks to be exactly the same as if the disks were local. This has nice implications for deployment, as we explain in Section 4.3.

Figure 6: Snapdragon server and NAD. When the Snapdragon server (snapd) receives a client request, it passes it to Linux's VFS, which invokes the underlying filesystem (ext2 in our case), which in turn issues block requests to the device driver rddev. The latter translates these requests to NAD requests containing ``allow-all'' capabilities.
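
The following sketch illustrates rddev's translation step: wrapping a low-level block request in a NAD request that carries an ``allow-all'' capability. The message layout, the allow-all encoding (a capability covering the whole disk), and the nad_send() helper are assumptions made for illustration, not the actual Snapdragon wire protocol.

/*
 * Illustrative sketch of rddev's translation step.  All names and the
 * message format are hypothetical.
 */
#include <stdint.h>
#include <string.h>

enum nad_op { NAD_READ = 1, NAD_WRITE = 2 };

struct nad_capability {
    uint64_t first_block;   /* start of the permitted block range            */
    uint64_t nblocks;       /* length of the permitted block range           */
    /* A real capability would also carry a cryptographic token; omitted. */
};

struct nad_request {
    enum nad_op           op;
    uint64_t              block;    /* starting disk block                   */
    uint32_t              count;    /* number of blocks                      */
    struct nad_capability cap;      /* capability authorizing the access     */
};

/* Stub transport: the real driver would marshal req and send it to the
 * disk controller over the network. */
static int nad_send(const struct nad_request *req, void *buf)
{
    (void)req;
    (void)buf;
    return 0;
}

/* Translate a kernel block request into a NAD request that carries an
 * allow-all capability, i.e., one covering the entire disk. */
static int rddev_issue(enum nad_op op, uint64_t block, uint32_t count,
                       void *buf)
{
    struct nad_request req;

    memset(&req, 0, sizeof(req));
    req.op    = op;
    req.block = block;
    req.count = count;

    req.cap.first_block = 0;           /* ``allow-all'': whole disk */
    req.cap.nblocks     = UINT64_MAX;

    return nad_send(&req, buf);
}

Because the server accesses the disks through ordinary ext2 code, the allow-all capability simply lets its requests pass the controller's checks; clients, by contrast, present the restricted capabilities they obtained from snapd.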

The Snapdragon disk controller is implemented as a PC connected to the network. The PC runs a small multi-threaded user-level program that listens for, checks, and executes block requests.
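
As a rough illustration of the check-and-execute step, the sketch below shows what one controller thread might do with a single parsed request: verify that the requested block range falls within the presented capability and then issue a read or write against the local disk. The simple range test stands in for whatever verification the real controller performs, and all names are hypothetical.

/*
 * Sketch of per-request handling in a controller thread.  fd is the local
 * disk (or disk-image file) being served; buf holds the data to write or
 * the space to read into.
 */
#define _XOPEN_SOURCE 500
#include <errno.h>
#include <stdint.h>
#include <unistd.h>

#define NAD_BLOCK_SIZE 4096

enum nad_op { NAD_READ = 1, NAD_WRITE = 2 };

struct nad_capability { uint64_t first_block, nblocks; };

struct nad_request {
    enum nad_op           op;
    uint64_t              block;    /* starting disk block                */
    uint32_t              count;    /* number of blocks                   */
    struct nad_capability cap;      /* capability presented by the client */
};

/* Check the capability, then execute the request against the local disk. */
static int handle_request(int fd, const struct nad_request *req, void *buf)
{
    /* Check: the requested range must lie inside the capability's range
     * (written to avoid unsigned overflow). */
    if (req->block < req->cap.first_block ||
        req->count > req->cap.nblocks ||
        req->block - req->cap.first_block > req->cap.nblocks - req->count)
        return -EPERM;

    off_t   off = (off_t)req->block * NAD_BLOCK_SIZE;
    size_t  len = (size_t)req->count * NAD_BLOCK_SIZE;
    ssize_t n   = (req->op == NAD_READ) ? pread(fd, buf, len, off)
                                        : pwrite(fd, buf, len, off);

    return (n == (ssize_t)len) ? 0 : -EIO;
}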

