

Related Work

Related work comprises previous efforts to measure a system, either to improve its own integrity or to enable remote integrity verification. Two issues are key in prior work: (1) the distinction between secure boot and authenticated boot, and (2) the semantic value of the resulting integrity measurements.

Secure boot enables a system to measure its own integrity and terminate the boot process if an action compromises this integrity. The AEGIS system by Arbaugh [1] provides a practical architecture for implementing secure boot on a PC system. It uses signed hash values to identify and validate each layer in the boot process. It will abort booting the system if the hashes cannot be validated. Secure boot does not enable a challenging party to verify the integrity of a boot process (i.e., authenticated boot) because it simply measures and checks the boot process, but does not generate attestations of the integrity of the process.
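The core of a secure boot chain of this kind can be sketched as follows. This is an illustrative toy, not the actual AEGIS code: the image contents and expected-hash list are made up, and AEGIS protects its expected values with digital signatures, which the sketch elides by comparing against pre-trusted hashes directly.

```python
import hashlib

def measure(image: bytes) -> str:
    """Hash a boot-layer image (SHA-1, as in early secure-boot designs)."""
    return hashlib.sha1(image).hexdigest()

def secure_boot(layers, expected):
    """Validate each layer before handing control to it; abort on mismatch."""
    for i, (image, good_hash) in enumerate(zip(layers, expected)):
        if measure(image) != good_hash:
            raise RuntimeError(f"secure boot aborted at layer {i}")
    return "booted"

# Toy images standing in for firmware, boot loader, and kernel.
images = [b"firmware-v1", b"bootloader-v1", b"kernel-v1"]
golden = [measure(img) for img in images]
print(secure_boot(images, golden))  # booted
```

The point of the sketch is the failure mode: a tampered layer halts the boot locally, but nothing in this process produces evidence that a remote party could later verify.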

The IBM 4758 secure coprocessor [10] implements both secure boot and authenticated boot, albeit in a restricted environment. It provides secure boot guarantees by verifying (flash) partitions before activating them and by enforcing valid signatures before loading executables into the system. A mechanism called outgoing authentication [5] enables attestation that links each subsequent layer to its predecessor. The predecessor attests to the subsequent layer by generating a signed message that includes the cryptographic hash and the public key of the subsequent layer. To protect an application from flaws in other applications, only one application is allowed to run at a time. Thus, the integrity of the application depends on hashes of the code and manual verification of the application's installation data. This data is only accessible to trusted code after installation. Our web server example runs in a much more dynamic environment where multiple processes may access the same data and may interact. Further, the security requirements of the challenging party and the attesting party may differ such that secure boot based on the challenging party's requirements is impractical.
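The shape of an outgoing authentication record can be sketched as below. This is a simplified illustration, not the 4758 interface: HMAC stands in for the asymmetric digital signature the real mechanism uses, and the key and image names are invented.

```python
import hashlib
import hmac

def attest_next_layer(signing_key: bytes, next_image: bytes,
                      next_pubkey: bytes) -> dict:
    """Layer k emits a record binding the hash and public key of layer k+1."""
    digest = hashlib.sha1(next_image).digest()
    payload = digest + next_pubkey
    return {
        "hash_next": digest.hex(),
        "pubkey_next": next_pubkey.hex(),
        # HMAC as a stand-in for the predecessor layer's signature.
        "sig": hmac.new(signing_key, payload, hashlib.sha1).hexdigest(),
    }

def verify_record(record: dict, signing_key: bytes) -> bool:
    payload = bytes.fromhex(record["hash_next"]) + bytes.fromhex(record["pubkey_next"])
    expected = hmac.new(signing_key, payload, hashlib.sha1).hexdigest()
    return hmac.compare_digest(record["sig"], expected)

rec = attest_next_layer(b"layer0-key", b"layer1-code", b"layer1-pubkey")
print(verify_record(rec, b"layer0-key"))  # True
```

Because each record also covers the next layer's public key, a challenger who trusts the bottom layer's key can walk the chain of records upward and attribute every layer to its predecessor.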

The Trusted Computing Group [11] is a consortium of companies that has developed an open interface for a Trusted Platform Module, a hardware extension to systems that provides cryptographic functionality and protected storage. By default, the TPM enables the verification of static platform configurations, both in terms of content and order, by collecting a sequence of hashes over target code. For example, researchers have examined how a TPM can be used to prove that a system has booted a valid operating system [12]. The integrity of applications running on the operating system is outside the scope of this work and is exactly where we look to expand the application of the TPM.
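The way a TPM accumulates an ordered sequence of hashes is the extend operation on a Platform Configuration Register (PCR): PCR_new = SHA1(PCR_old || SHA1(data)) in the TPM 1.2 design. A minimal sketch, with made-up stage names:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM 1.2-style extend: PCR_new = SHA1(PCR_old || SHA1(data))."""
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

pcr = b"\x00" * 20  # PCRs are zeroed at platform reset
for stage in (b"bios", b"bootloader", b"kernel"):
    pcr = extend(pcr, stage)

# Replaying the same measurements in a different order yields a different
# PCR value, so the accumulated hash reflects both content and order.
pcr_reordered = b"\x00" * 20
for stage in (b"kernel", b"bootloader", b"bios"):
    pcr_reordered = extend(pcr_reordered, stage)
assert pcr != pcr_reordered
```

This is why the text above says the TPM verifies configurations "both in terms of content and order": a single 20-byte register commits to the entire measured sequence.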

Marchesini et al. [13] describe an approach that uses signed trustworthy configurations to protect a system's integrity. Such a configuration stores signatures of sensitive configuration files. A so-called Enforcer checks each signed file in the configuration against its on-disk counterpart every time that file is opened. The approach enforces integrity through TPM-sealing of long-lived server certificates and binding of the unsealing to a correct configuration. In this respect the work is related to the platform configurations described in [12]. None of the known existing work extends the measurement of a software stack from the static boot configuration seamlessly into the application level.
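The sealing semantics that this approach relies on can be illustrated with a toy model. This is not the TPM API: a real TPM encrypts the secret under a key bound to PCR state, whereas the sketch below only models the access rule (unsealing succeeds only in the configuration the secret was sealed to), and the file names are invented.

```python
import hashlib

def config_pcr(config_files: list) -> bytes:
    """Fold measured configuration files into a single PCR-like value."""
    pcr = b"\x00" * 20
    for contents in config_files:
        pcr = hashlib.sha1(pcr + hashlib.sha1(contents).digest()).digest()
    return pcr

def seal(secret: bytes, pcr: bytes) -> dict:
    return {"pcr": pcr, "secret": secret}  # a real TPM encrypts the secret

def unseal(blob: dict, pcr: bytes) -> bytes:
    if blob["pcr"] != pcr:
        raise PermissionError("configuration changed; unseal refused")
    return blob["secret"]

good_config = [b"httpd.conf v1", b"ssl.conf v1"]
blob = seal(b"server-private-key", config_pcr(good_config))
print(unseal(blob, config_pcr(good_config)))  # b'server-private-key'
```

Under this model, a server whose certificate key is sealed this way simply cannot present its long-lived identity after its configuration has been tampered with.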

Terra [14] and Microsoft's Next Generation Secure Computing Base (NGSCB [7]) are based on the same hardware security architecture (TCG/TPM) and are similar in providing a "whole system solution" to authenticated boot. NGSCB partitions a platform into a trusted and an untrusted part, each of which runs its own operating system. Only the trusted portion is measured, which limits the flexibility of the approach (not all programs of interest should be fully trusted); the approach also depends on hardware and base software that are not yet available.

Terra is a trusted computing architecture that is built around a trusted virtual machine monitor which, among other things, authenticates the software running in a VM for challenging parties. Terra tries to resolve the conflict between trusted, customized, closed-box run-time environments (e.g., the IBM 4758) and open systems that offer rich functionality and significant economies of scale but are difficult to trust because of their flexibility. As such, Terra tries to solve the same problem as we do, but in a very different way. Terra measures the trusted virtual machine monitor at the partition-block level. On the one hand, Terra thus produces about 20 megabytes of measurement values (i.e., hashes) when attesting an exemplary 4-gigabyte VM partition. On the other hand, because those measurements represent disk blocks, it is difficult to interpret varying measurement values. Our system instead selectively measures those parts of the system that contribute to the dynamic runtime; it does so at a high level that is rich in semantics and enables remote parties to interpret varying measurements at file granularity.
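The 20-megabyte figure is consistent with straightforward arithmetic, assuming 4 KiB disk blocks and 20-byte SHA-1 digests (the block size is our assumption; the paper reports roughly 20 MB of hashes for a 4 GB partition):

```python
# One SHA-1 digest per disk block of an attested VM partition.
partition_bytes = 4 * 2**30          # 4 GiB partition
block_bytes = 4 * 2**10              # assumed 4 KiB block size
digest_bytes = 20                    # SHA-1 digest length

blocks = partition_bytes // block_bytes   # 1,048,576 blocks
hash_bytes = blocks * digest_bytes        # total measurement volume
print(hash_bytes // 2**20)                # 20 (MiB)
```

The volume itself is manageable; the deeper problem the text identifies is that a changed block hash says nothing about which file, or which semantic component, changed.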


sailer 2004-05-18