
Caching

Caching is a fundamental technique for hiding I/O latency and is widely used in storage controllers (IBM Shark, EMC Symmetrix, Hitachi Lightning), databases (IBM DB2, Oracle, SQL Server), file systems (NTFS, EXT3, NFS, CIFS), and operating systems (UNIX variants and Windows). SNIA (www.snia.org) defines a cache as "a high speed memory or storage device used to reduce the effective time required to read data from or write data to a lower speed memory or device." We shall study cache algorithms for a storage controller wherein fast, but relatively expensive, random access memory is used as a cache for slow, but relatively inexpensive, disks. A modern storage controller's cache typically contains volatile memory used as a read cache and non-volatile memory used as a write cache.

The effectiveness of a read cache depends upon its hit ratio, that is, the fraction of requests that are served from the cache without necessitating a disk trip (a miss). We shall focus on improving the performance of the read cache, that is, increasing the hit ratio or, equivalently, minimizing the miss ratio. Typically, the cache is managed in uniformly sized units called pages.
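To make the notions of pages, hits, and hit ratio concrete, the following sketch replays a trace of page requests through a small fixed-size cache and reports the hit ratio. LRU replacement is used here purely for illustration; it is an assumption, not the algorithm studied in this paper, and the function and trace are hypothetical.

```python
from collections import OrderedDict

def hit_ratio(trace, cache_pages):
    """Replay a trace of page requests through a fixed-size cache
    and return the fraction served without a disk trip.
    (Illustrative sketch; LRU is just one possible replacement policy.)"""
    cache = OrderedDict()  # page -> None, ordered from least to most recent
    hits = 0
    for page in trace:
        if page in cache:
            hits += 1
            cache.move_to_end(page)        # refresh recency on a hit
        else:
            if len(cache) >= cache_pages:
                cache.popitem(last=False)  # evict least-recently-used page
            cache[page] = None             # fetch from disk into the cache
    return hits / len(trace)

# A 2-page cache on the trace 1,2,1,3,1 hits on the 3rd and 5th requests:
print(hit_ratio([1, 2, 1, 3, 1], cache_pages=2))  # 0.4
```

The miss ratio is simply one minus this value; the cache-algorithm design problem is choosing which page to evict so that this fraction is as high as possible for real workloads.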


Binny Gill 2005-02-14