The following paper was originally published in the Proceedings of the
First USENIX Workshop on Electronic Commerce, New York, New York, July 1995.

For more information about the USENIX Association, contact:
1. Phone: 510 528-8649
2. FAX: 510 548-5738
3. Email: office@usenix.org
4. WWW URL: https://www.usenix.org

Secure Coprocessors in Electronic Commerce Applications

NB: This ASCII version of our paper is not an accurate transcription of our paper. Footnote text may be misplaced. Figures are missing. Symbols and mathematics necessary to read our paper are missing. All citations and bibliography information is missing.

FOOTNOTE: This work was supported in part by ARPA contracts F33615-90-C-1465 and F33615-93-1-1330, NSF Presidential Young Investigator Award CCR-9958087, matching funds from Motorola and TRW, a contract from the US Postal Service, and by an equipment grant from IBM. This work was done while the first author was at Carnegie Mellon University. This work is the opinion of the authors and does not necessarily represent the view of their employers, funding sponsors, or the US Government.

Bennet Yee                    J. D. Tygar
Microsoft Corporation         Carnegie Mellon University
Redmond, WA 98052             Pittsburgh, PA 15213
bsy@microsoft.com             tygar@cs.cmu.edu

Abstract

Many researchers believe electronic wallets (secure storage devices that maintain account balances) are the solution to electronic commerce challenges. This paper argues for a more powerful model --- a secure coprocessor --- that can run a small operating system, run application programs, and also keep secure storage for cryptographic keys and balance information.
We have built a system called Dyad, on top of a port of the Mach 3.0 microkernel to the IBM Citadel secure coprocessor. This paper describes the abstract architecture of Dyad and a general discussion of secure coprocessor implementations of a variety of electronic commerce applications:

- Copy protection for software
- Electronic cash (including a critique of proposed solutions for point-of-sale electronic wallet systems)
- Electronic contracts
- Secure postage

Introduction

Many researchers believe electronic wallets (secure storage devices that maintain account balances) are the solution to electronic commerce challenges [CITATION]. This paper argues for a more powerful model --- a secure coprocessor --- that can run a small operating system, run application programs, and also keep secure storage for cryptographic keys and balance information.

Secure coprocessors are tamper-proof sealed devices that have a processor, memory storage, and (optional) fast crypto support. They are protected in that any attempt to penetrate them will result in all critical memory being erased. Secure coprocessors have a number of advantages over electronic wallets. They have more powerful processors and larger amounts of memory, so they can do more: they can negotiate (and enforce) contracts that involve renting and redistributing intellectual property. If they incorporate a secure display, they can provide much greater protection for customers in point-of-sale applications. They can provide software copy protection, permitting much more general applications. We have built a system called Dyad, on top of a port of the Mach 3.0 microkernel [CITATION] to the IBM Citadel secure coprocessor [CITATION].
This paper describes the abstract architecture of Dyad and a general discussion of secure coprocessor implementations of a variety of electronic commerce applications:

- Copy protection for software
- Electronic cash (including a critique of proposed solutions for point-of-sale electronic wallet systems)
- Electronic contracts
- Secure postage

A few years ago, only experimental prototypes of secure coprocessors existed, but today the market is filling up. Manufacturers such as Cylink, IBM, National Semiconductor, Spyrus, Telequip, and others have announced secure coprocessor products. Several other major manufacturers will announce significant new products in the near future. There is a new FIPS standard for cryptographic modules (including secure coprocessors) [CITATION] and a new service to perform those evaluations [CITATION]. This paper attempts to lay out the intellectual issues in the use of these new secure coprocessor products. Because of length considerations, this paper does not discuss our Dyad implementation (see [CITATION]) or additional applications of secure coprocessors arising from distributed computation (see [CITATION]).

Secure Coprocessor Model

A secure coprocessor is a hardware module containing (1) a CPU, (2) bootstrap ROM, and (3) secure non-volatile memory. This hardware module is physically shielded from penetration, and the I/O interface to the module is the only way to access its internal state. (If the shield is somehow penetrated, the secure coprocessor erases all critical memory.) This hardware module can store cryptographic keys without risk of release. More generally, the CPU can perform arbitrary computations (under control of the operating system); thus the hardware module, when added to a computer, becomes a true coprocessor. Often, the secure coprocessor will contain special-purpose hardware in addition to the CPU and memory; for example, high-speed encryption/decryption hardware may be used.
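The model above (secure NVRAM, an I/O-only access path, and erase-on-penetration) can be sketched as a toy state machine. All class and method names here are our own illustrative inventions; a real device enforces this behavior in hardware.

```python
import os

class SecureCoprocessor:
    """Toy model of a tamper-responding coprocessor (names are illustrative)."""

    def __init__(self):
        self.nvram = {}            # secure non-volatile memory
        self.registers = [0] * 16  # CPU registers
        self.zeroized = False

    def store_key(self, name, length=16):
        # Keys live only inside the shielded module.
        self.nvram[name] = os.urandom(length)

    def tamper_detected(self):
        # Penetrating the shield resets all critical state.
        self.nvram.clear()
        self.registers = [0] * 16
        self.zeroized = True

    def io_request(self, name):
        # The I/O interface is the only access path; raw keys never leave.
        return "present" if name in self.nvram else "absent"

cpu = SecureCoprocessor()
cpu.store_key("k1")
assert cpu.io_request("k1") == "present"
cpu.tamper_detected()
assert cpu.io_request("k1") == "absent" and cpu.zeroized
```

Note that even the legitimate I/O interface returns only what the internal software chooses to reveal; the raw key material never crosses the module boundary.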
Security Properties of Secure Coprocessors

All security systems rely on a nucleus of assumptions. For example, it is often assumed that encryption systems are resistant to cryptanalysis. Similarly, we take as axiomatic that secure coprocessors provide private and tamper-proof memory and processing. These assumptions may be falsified: for example, attackers may exhaustively search cryptographic key spaces. Similarly, it may be possible to falsify our physical security axiom by expending enormous resources (possibly feasible for very large corporations or government agencies). We rely on a physical work-factor argument to justify our axiom, similar in spirit to the intractability assumptions of cryptography.

Our secure coprocessor model does not depend on the particular technology used to satisfy the work-factor assumption. Just as cryptographic schemes may be scaled or changed to increase the resources required to penetrate a cryptographic system, current security packaging techniques may be scaled or changed to increase the work-factor necessary to successfully bypass the secure coprocessor protections. Secure coprocessors must be packaged so that physical attempts to gain access to the internal state of the coprocessor will result in resetting the state of the secure coprocessor (i.e., erasure of the secure non-volatile memory contents and CPU registers). An intruder might be able to break into a secure coprocessor and see how it is constructed; the intruder cannot, however, learn or change the internal state of the secure coprocessor except through normal I/O channels or by forcibly resetting the entire secure coprocessor. The guarantees about the privacy and integrity of the secure non-volatile memory provide the foundations needed to build distributed security systems.

Potential Platforms

Several physically secure processors exist, and more are forthcoming. Some of these are very secure, satisfying the highest security level specified by FIPS PUB 140-1 [CITATION].
(This publication gives four security levels for cryptographic modules, including secure coprocessors.) Announced secure coprocessors include the micro-ABYSS [CITATION] and Citadel [CITATION] systems from IBM, the iPower [CITATION] encryption card by National Semiconductor, some extended implementations of the Clipper and Capstone systems [CITATION] proposed by the NSA as DES replacements, the Crypta Plus [CITATION] encryption card by Telequip, the CY512i [CITATION] chip from Cylink, and various smartcard systems such as some GEMPlus or Mondex cards [CITATION]. There will be additional announcements of systems with increased processing power from major vendors in the next few months. For a fuller description of potential platforms, see [CITATION].

Applications

Because secure coprocessors can process secrets as well as store them, they can do much more than just keep secrets confidential. We describe how to use secure coprocessors to realize exemplar electronic commerce applications: (1) copy protection, (2) electronic currency, (3) electronic contracts, and (4) secure postage meters. None of these are possible on physically exposed systems. These applications are discussed briefly below.

Copy Protection

Software is often charged on a per-CPU, per-site, or per-use basis. Software licenses usually prohibit making copies for use on unlicensed machines. This injunction against copying is technically unenforceable without a secure coprocessor. If the user can execute code on a physically accessible workstation, the user can also read that code. Even if attackers cannot read the workstation memory while it is running, we are implicitly depending on the assumption that the workstation was booted correctly --- verifying this property, as discussed in [CITATION], requires the use of a secure coprocessor. Software copy protection is complementary to electronic commerce: without copy protection, rental or per-use charging of software is not possible.
Here we discuss tradeoffs in using a secure coprocessor to implement software copy protection. (Dyad includes a software protection mechanism [CITATION].)

Copy Protection with Secure Coprocessors

Secure coprocessors can protect executables from being copied and illegally used. The proprietary code to be protected --- or at least some critical portion of it --- is distributed and stored in encrypted form, so copying without the code decryption key is futile, [FOOTNOTE: Allowing the encrypted form of the code to be copied means that we can back up the workstation against disk failures. Even giving attackers access to the backup tapes will not release any of the proprietary code. (Note that our encryption function should be resistant to known-plaintext attacks, since executable binaries typically have standardized formats.) A more interesting question arises if the secure coprocessor may fail. Secure coprocessors may be used in a fault-tolerant fashion; see [CITATION].] and this protected code runs only inside the secure coprocessor.

Either public key or private key cryptography may be used to encrypt protected software. If private key cryptography is used, key management is still handled by public key cryptography. In particular, when a user pays for the use of a program, he sends the certificate of his secure coprocessor public key to the software vendor. This certificate is digitally signed by a key management center and is prima facie evidence that the public key is valid. The corresponding private key is stored only within the secure non-volatile memory of the secure coprocessor; thus, only the secure coprocessor will have full access to the proprietary software.

What if the code size is larger than the memory capacity of the secure coprocessor? We have two alternatives: we can crypto-page or we can split the code into protected and unprotected segments.
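The certificate-based key delivery just described (a program key wrapped under the coprocessor's public key) can be sketched with textbook RSA. The parameters below are classic toy values, far too small for real use, and the XOR "cipher" is only a placeholder for the encryption hardware the paper assumes.

```python
# Toy RSA keypair standing in for the coprocessor's certified key pair.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent (in the signed certificate)
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent, held only in secure NVRAM

def xor_cipher(data: bytes, key: int) -> bytes:
    # Placeholder symmetric cipher; a real system would use DES/AES hardware.
    return bytes(b ^ ((key >> (8 * (i % 4))) & 0xFF) for i, b in enumerate(data))

# Vendor side: encrypt the program under a fresh symmetric key,
# then wrap that key with the coprocessor's certified public key.
program = b"proprietary executable image"
program_key = 2999                   # would be random; chosen < n for this toy
ciphertext = xor_cipher(program, program_key)
wrapped_key = pow(program_key, e, n) # RSA-encrypt the program key

# Coprocessor side: unwrap with the private key and decrypt internally.
unwrapped = pow(wrapped_key, d, n)
assert xor_cipher(ciphertext, unwrapped) == program
```

Only `ciphertext` and `wrapped_key` ever travel over the network or host disk; the private exponent, and hence the program plaintext, never leaves the module.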
Section [REFERENCE] discusses crypto-paging in greater detail, but the basic idea is to encrypt and decrypt virtual memory contents as they are copied between secure memory and external storage. When we run out of memory space on the coprocessor, we encrypt the data before it is flushed to unsecure external storage, maintaining privacy. Since good encryption chips are fast, we can encrypt and decrypt on the fly with little performance penalty.

Splitting the code is an alternative to crypto-paging. We can divide the code into a security-critical section and an unprotected section. The security-critical section is encrypted and runs only on the secure coprocessor. The unprotected section runs concurrently on the host. An adversary can copy the unprotected section, but if the division is done well, he or she will not be able to run the code without the secure portion. In micro-ABYSS [CITATION], White and Comerford show how such a partitioning should be done to maximize the difficulty of reverse engineering the secure portion of the application.

FOOTNOTE: We also examined a real application, gnu-emacs 19.22 [CITATION], to show how it could be partitioned to run partially within a secure coprocessor. The X Windows display code should remain within the host for performance. Most of the emacs lisp interpreter (e.g., bytecode.c, callint.c, eval.c, lread.c, marker.c, etc.) could be moved into the secure coprocessor and accessed as remote procedures. Any manipulation of host-side data --- text buffer manipulation, lisp object traversal --- required during remote procedure calls can be provided by a simple read-write interface (with caching) between the coprocessor and the host, with interpreter-private data such as catch/throw frames residing entirely within the secure coprocessor. Garbage collection does become a problem, since the garbage collector must be able to determine if a Lisp object is accessible from the call stack, a portion of which is inside the coprocessor.
If we chose to hide the actions of the evaluator and keep the stack within the secure coprocessor hidden, this would require that the garbage collector code (Fgarbage_collect and its utilities) be moved within the secure coprocessor as well.

Whether the proprietary code is split or not, the secure coprocessor runs a small security kernel. It provides the basic support necessary to communicate with the host or the host's I/O devices. With separate address spaces and a few communication primitives, the complexity of a security kernel can be kept low, providing greater assurance that a particular implementation is correct.

Previous Work

A more primitive version of the copy protection application for secure coprocessors appeared in [CITATION]; a secure-CPU approach using oblivious memory references (i.e., apparently random patterns of memory accesses), giving a poly-logarithmic slowdown, appears in [CITATION] and [CITATION].

Electronic Currency

We have shown how to keep licensed proprietary software encrypted and allow only execute access. A natural application is to allow charging on a pay-per-use or metered basis. In addition to controlling access to the software according to the terms of a license, some mechanism must perform cost accounting, whether it tracks the number of times a program has run or tracks dollars in a user's account. More generally, this accounting software provides an electronic currency abstraction. Correctly implementing electronic currency requires that account data be protected against tampering --- if we cannot guarantee integrity, attackers might be able to create electronic money at will. Privacy, while perhaps less important here, is a property that users expect for their bank balance and wallet contents; similarly, electronic money account balances should also be private.
We argue that secure coprocessors can not only support electronic wallet functionality, but that they also offer stronger guarantees than existing and proposed electronic wallets. In particular, secure coprocessors offer consumer protection unavailable with existing electronic wallets. (We have built an electronic currency mechanism on top of Dyad; see [CITATION].)

Electronic Money Models

Several models can be adopted for handling electronic funds. Any implementation of these models should follow the standard transactional model, i.e., group together operations in a transaction having these three properties [CITATION]:

- Failure atomicity. If a transaction's work is interrupted by a failure, any partially completed results will be undone.
- Permanence. If a transaction completes successfully, the result of its work will never be lost, except due to a catastrophic failure.
- Serializability. Concurrent transactions may occur, but the results must be the same as if they executed serially. This means that temporary inconsistencies that occur inside a transaction are never visible to other transactions.

These transactional properties are requirements for the safe operation of any database, and they are absolutely necessary for any electronic money system. In the following, we discuss various electronic money models, their security properties, and how they can be implemented using present day technology.

The first electronic money model is based on the cash analogy. In this model, electronic cash has similar properties to cash:

- Exchanges of cash can be effectively anonymous.
- Cash cannot be created or destroyed except by national treasuries.
- Cash transfers require no online central authority.

(Note that these properties are actually stronger than those provided by real currency --- serial numbers can be recorded to trace transactions. Similarly, currency can be destroyed.)
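The failure-atomicity property above can be illustrated with a minimal undo-log sketch. The names are hypothetical, and a real transaction system would write its log to stable storage before touching the balances.

```python
class Accounts:
    """Toy balances with an undo log giving failure atomicity."""

    def __init__(self, balances):
        self.balances = dict(balances)
        self.undo = []               # (account, previous value) pairs

    def begin(self):
        self.undo = []

    def debit(self, acct, amount):
        self.undo.append((acct, self.balances[acct]))
        if self.balances[acct] < amount:
            raise ValueError("insufficient funds")
        self.balances[acct] -= amount

    def credit(self, acct, amount):
        self.undo.append((acct, self.balances[acct]))
        self.balances[acct] += amount

    def abort(self):
        # Undo partial work in reverse order.
        for acct, old in reversed(self.undo):
            self.balances[acct] = old

    def transfer(self, src, dst, amount):
        self.begin()
        try:
            self.debit(src, amount)
            self.credit(dst, amount)
        except ValueError:
            self.abort()             # a failed transfer leaves no trace
            raise

db = Accounts({"alice": 10, "bob": 0})
db.transfer("alice", "bob", 4)
assert db.balances == {"alice": 6, "bob": 4}
try:
    db.transfer("alice", "bob", 100)
except ValueError:
    pass
assert db.balances == {"alice": 6, "bob": 4}   # money conserved after abort
```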
The second electronic money model is based on the credit card/check analogy. Electronic funds are not transferred directly; rather, promises of payment, cryptographically signed to prove authenticity, are transferred instead. A straightforward implementation of the credit card model fails to exhibit any of the three properties above. However, by applying cryptographic techniques, anonymity can be achieved in a cashier's-check-like scheme (e.g., Chaum's Digicash model [CITATION], which lacks transactional properties such as failure atomicity --- see section [REFERENCE]), but the latter two requirements (conservation of cash and no online central authority) remain insurmountable. Electronic checks must be signed and validated at central authorities (banks), and checks/credit payments en route ``create'' temporary money. Furthermore, potential reuse of cryptographically signed checks requires that the recipient be able to validate the check with the central authority prior to committing to a transaction.

The third electronic money model is based on the bank rendezvous analogy. This model uses a centralized authority to authenticate all transactions and is poorly suited to large distributed applications. The bank is the sole arbiter of account balance information and can implement the access controls needed to ensure privacy and integrity of the data. Electronic Funds Transfer (EFT) services use this model --- there are no access restrictions on deposits into accounts, so only the person who controls the source account needs to be authenticated.

We examine these models one by one. With electronic currency, integrity of accounting data is crucial. We can establish a secure communication channel between two secure coprocessors by using a key exchange cryptographic protocol and thus use cryptography to maintain privacy when transferring funds.
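One way to establish such a channel is a Diffie-Hellman key exchange, sketched below. The modulus is illustratively small, and a real deployment would additionally authenticate the exchanged values (e.g., with the coprocessors' certified keys) to prevent a man-in-the-middle on the host network.

```python
import hashlib
import secrets

# Toy Diffie-Hellman key exchange between two coprocessors; the large
# groups used in practice are replaced here by a small 64-bit prime.
p = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59; far too small for real use
g = 5

a = secrets.randbelow(p - 2) + 1      # private value inside coprocessor A
b = secrets.randbelow(p - 2) + 1      # private value inside coprocessor B
A = pow(g, a, p)                      # exchanged over the insecure host network
B = pow(g, b, p)

# Each side derives the same channel key from the shared secret.
k_a = hashlib.sha256(str(pow(B, a, p)).encode()).digest()
k_b = hashlib.sha256(str(pow(A, b, p)).encode()).digest()
assert k_a == k_b
```

The private values a and b never leave the respective modules; an eavesdropper sees only g, p, A, and B.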
To ensure that electronic money is conserved (neither created nor destroyed), the transfer of funds should be failure atomic, i.e., the transaction must terminate in such a way as to either fail completely or fully succeed --- transfer transactions cannot terminate with the source balance decremented without having incremented the destination balance or vice versa. By running a transaction protocol such as two-phase commit [CITATION] on top of the secure channel, secure coprocessors can transfer electronic funds from one account to another in a safe manner, providing privacy and ensuring that money is conserved.

Most transaction protocols need stable storage for transaction logging to enable the system to roll back when a transaction aborts. On large transaction systems this typically has meant mirrored disks with uninterruptible power supplies. With the simple transactions needed for electronic currency, the per-transaction log typically is not that large, and the log can be truncated after transactions commit and further communications show all relevant parties have acknowledged the transaction. Because each secure coprocessor handles only a few users, small amounts of stable storage can satisfy logging needs. Furthermore, because secure coprocessors have secure non-volatile memory, we only need to reserve some of this memory for logging. The log, accounting data, and controlling code are all protected from modification by the secure coprocessor, so account data are safe from all attacks; their only threats are bugs and catastrophic failures. Of course, the system should be designed so that users have little or no incentive to destroy secure coprocessors that they can access. This is natural when one's own balances are stored on a secure coprocessor, much like the cash in one's wallet.
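The two-phase-commit transfer described above might be sketched as follows. The names are hypothetical; real participants write their logs to stable storage and exchange messages over the encrypted channel.

```python
class Participant:
    """Toy 2PC participant with a log kept in secure non-volatile memory."""

    def __init__(self, balance):
        self.balance = balance
        self.log = []                 # truncated after commit is acknowledged
        self.pending = None

    def prepare(self, delta):
        if self.balance + delta < 0:
            return False              # vote to abort
        self.pending = delta
        self.log.append(("prepared", delta))
        return True                   # vote to commit

    def commit(self):
        self.balance += self.pending
        self.log.append(("committed", self.pending))
        self.pending = None

    def abort(self):
        self.log.append(("aborted", self.pending))
        self.pending = None

def transfer(src, dst, amount):
    # Coordinator: phase 1 collects votes, phase 2 commits or aborts everywhere.
    if src.prepare(-amount) and dst.prepare(+amount):
        src.commit(); dst.commit()
        return True
    src.abort(); dst.abort()
    return False

a, b = Participant(10), Participant(0)
assert transfer(a, b, 4) and (a.balance, b.balance) == (6, 4)
assert not transfer(a, b, 100)
assert (a.balance, b.balance) == (6, 4)    # conservation preserved on abort
```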
If the secure coprocessor has insufficient memory to hold account data for all the users, the code and accounting database may be written to host memory or disk after obtaining a cryptographic checksum (see discussion of crypto-sealing in section [REFERENCE]). For the accounting data, encryption may alternatively be employed, since privacy is usually also desired.

Note that this type of decentralized electronic currency is not appropriate for smartcards unless they can be made physically secure from attacks by their owners. Smartcards are only quasi-physically secure in that their privacy guarantees stem solely from their portability. Secrets may be stored within smartcards because their users can provide the physical security necessary. Malicious users, however, can violate smartcard integrity and insert false data. FOOTNOTE: Newer smartcards such as GEMPlus or Mondex cards [CITATION] feature limited physical security protection, though the types of attacks these cards can withstand have not been published.

Secure coprocessor mediated electronic currency transfer is analogous to rights transfer (not to be confused with rights copying) in a capability-based protection system [CITATION]. Using the electronic money --- e.g., spending it when running a pay-per-use program --- is analogous to the revocation of a capability. This type of model relies on the idea of secure-coprocessor-protected unforgeable electronic tokens. In addition to electronic money, these unforgeable tokens are useful for many other applications. Electronic tokens can be created and destroyed by a few trusted programs. For pay-per-use applications, the token is created by the vendor's sales program and destroyed by executing the application --- the exact time of destruction of the token is a vendor design decision, since runs of application programs are not, in general, transactional in nature.
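The crypto-sealing idea mentioned above (pushing the accounting database out to untrusted host storage under a cryptographic checksum, with encryption for privacy) might look like the sketch below. The hash-derived keystream stands in for the encryption hardware, and all names are illustrative.

```python
import hashlib
import hmac
import os

seal_key = os.urandom(16)   # held only in secure non-volatile memory

def _pad(data, nonce):
    # Toy keystream; a real pager/sealer would use its cipher hardware.
    out = b""
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(seal_key + nonce
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:len(data)]

def seal(data: bytes):
    # Encrypt for privacy, MAC for integrity; result is safe on the host.
    nonce = os.urandom(16)
    ct = nonce + bytes(a ^ b for a, b in zip(data, _pad(data, nonce)))
    tag = hmac.new(seal_key, ct, hashlib.sha256).digest()
    return ct, tag

def unseal(ct: bytes, tag: bytes) -> bytes:
    expect = hmac.new(seal_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expect, tag):
        raise ValueError("host-side tampering detected")
    nonce, body = ct[:16], ct[16:]
    return bytes(a ^ b for a, b in zip(body, _pad(body, nonce)))

records = b"alice:6,bob:4"
ct, tag = seal(records)
assert unseal(ct, tag) == records
```

Any modification of the sealed blob by the host is caught when the coprocessor reloads it, since only the coprocessor holds `seal_key`.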
However, the trusted electronic currency manager running in the secure coprocessor can use distributed transactions to transfer money and other electronic tokens. Transaction messages are encrypted by the secure coprocessor's basic communication layer, providing privacy and integrity of communications. (Traffic analysis is beyond the scope of this work and is not addressed.)

What about the other models for handling electronic funds? With the credit card/check analogy, the authenticity of the promise of payment must be established. When the computer cannot keep secrets for users, there can be no authentication because nothing uniquely identifies users. Even if we assume that users can enter their passwords into a workstation without fear of their passwords being compromised, we are still faced with the problem of providing privacy and integrity guarantees for network communication. We have similar problems as in host-to-host authentication in that cryptographic keys need to be somehow exchanged. If communications are in plaintext, attackers may simply record a transfer of a promise of payment and replay it to temporarily create cash. While security systems such as Kerberos [CITATION], if properly implemented [CITATION], can help to authenticate entities and create session keys, they use a centralized server and have problems similar to those in the bank rendezvous model. While we can implement the credit card/check model using secure coprocessors, the inherent weaknesses of this model keep us from taking full advantage of the security properties provided by secure coprocessors; if we use the full power of the secure coprocessor model to properly authenticate users and verify their ability to pay (perhaps by locking funds into escrow), the resulting system would be equivalent to the cash model.

With the bank rendezvous model, a ``bank'' server supervises the transfer of funds.
While it is easy to enforce the access controls on account data, this model suffers from problems with non-scalability, loss of anonymity, and easy denial of service from excessive centralization. Because every transaction must contact the bank server, access to the bank service will be a performance bottleneck. Banks do not scale well to large user bases. When a bank system grows from a single computer to several machines, distributed transaction systems techniques must be brought to bear in any case, so this model has no real advantage over the use of secure coprocessors in ease of implementation. Furthermore, if a bank's host becomes inaccessible, either maliciously or as a result of normal hardware failures, no agent can make use of any bank transfers. This model does not exhibit graceful degradation with system failures.

Point-of-Sale Terminals

In addition to their use in networked computers, secure coprocessors can be used for commercial transactions at point-of-sale terminals. For this application, we would need portable secure coprocessor form factors, such as smartcards or PCMCIA cards. Unlike the networked PC scenario where users can be familiar with the particular PCs they use, customers at a point-of-sale terminal have no reason to trust its integrity. Point-of-sale use of secure coprocessors is vulnerable to a very important class of threats: communication spoofing between the secure coprocessor and the user. This problem arises because there is no private communications path [CITATION] between the user and the secure coprocessor. A secure display only displays data to the user originating from the secure coprocessor and guarantees that the displayed data cannot be tapped by a third party; such a display would provide secure one-way communication from the secure coprocessor to the user. Today's smartcards and PCMCIA cards do not incorporate secure displays.
Thus, for point-of-sale use, the user must rely on the display on the point-of-sale terminal to inform him of the total price. Unlike traditional paper credit-card-imprint slips, a secure coprocessor's digital signature is on a document that is never shown to the user --- whatever per-signature user authorization is required is performed blind, and the secure coprocessor might sign a purchase order for a $10,000 gold watch when the point-of-sale terminal is displaying ``$1.98 watch batteries.'' Furthermore, to prevent a user authorization replay attack, some method for securely transferring the user authentication/authorization input to the secure coprocessor is required.

To permit secure input of user passwords to a secure coprocessor and to display purchase information, a secure display suffices: we use the secure display as a one-way secure channel over which we transmit a one-time pad, i.e., a cryptographically random string. The user then uses the point-of-sale terminal's keyboard (perhaps via arrow keys) to modify the displayed string into the user's password. For example, if your password was ``SHOELACE'' and the displayed string was ``QZKNCFLX'', you would press the [SYMBOL] arrow twice to change the ``Q'' to an ``S'', and then press the [SYMBOL] arrow to advance to the next character, etc. (This is an idea adapted from [CITATION].) Price information can be shown on a secure display in the obvious way.

Without a secure display of purchase data and secure entry of passwords, point-of-sale use of secure coprocessors does not increase the security of point-of-sale commerce over existing credit card systems. One much-touted property of using smartcards in lieu of mag-stripe credit cards is customer non-repudiation and the elimination of merchant fraud. However, while the cryptographic signature keys may be secure, smartcards without some form of secure display cannot link the signature to the purchase due to the absence of customer review.
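The arrow-key password-entry scheme above can be made concrete with a short sketch that computes, for each position, how many up or down presses transform the displayed random string into the password. The untrusted terminal sees only the keystroke counts, which are meaningless without the string shown on the secure display.

```python
def presses(displayed: str, password: str):
    """Per position, the shortest up/down arrow count (alphabet wraps mod 26)."""
    plan = []
    for shown, want in zip(displayed, password):
        delta = (ord(want) - ord(shown)) % 26
        plan.append(delta if delta <= 13 else delta - 26)  # negative = down arrows
    return plan

plan = presses("QZKNCFLX", "SHOELACE")
assert plan[0] == 2   # two 'up' presses turn 'Q' into 'S', as in the text
```

Because the displayed string is a fresh one-time random pad, the keystroke sequence observed by the terminal is itself uniformly random and leaks nothing about the password.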
Thus customers are still vulnerable to merchant fraud --- rather than modifying the numbers on a credit card slip after the fact, the merchant can simply introduce a difference between the data presented to the user and the data presented to the user's secure coprocessor.

Previous Work

An alternative to secure coprocessor managed electronic currency is Chaum's Digicash protocol [CITATION]. In such systems, anonymity is paramount, and cryptographic techniques are used to preserve the secrecy of the users' identities. No physically secure hardware is used, except in the observers refinement to prevent double spending of electronic money (rather than detecting it after the fact). FOOTNOTE: The observers model employs a physically secure hardware module to detect and prevent double spending. Chaum's protocol limits information flow to the observer, so that the user need not trust it to maintain privacy; however, it must be trusted to not destroy money. Secure coprocessors achieve the same goals with greater flexibility.

Chaum-style electronic currency schemes are characterized by two key protocols. The first is a blind signature protocol between a user and a central bank. During a withdrawal, the user obtains a cryptographically signed check that is probabilistically proven to contain an encoding of the user's identity. The user keeps the values used in constructing the check secret; they are used later in the spending protocol. The second protocol is a randomized interactive protocol between a user and a merchant. The user sends the blind-signed check to the merchant and interactively proves that the check was constructed appropriately out of the secret values, revealing some, but not all, of those secrets. The merchant ``deposits'' to the central bank the blind-signed number and the protocol log as proof of payment.
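The blind signature protocol at the heart of such schemes can be sketched with toy RSA parameters; the identity-encoding and cut-and-choose machinery of Chaum's full scheme is omitted, and the key sizes here are far too small for real use.

```python
import math

# Toy RSA blind signature (classic small parameters, illustration only).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))      # bank's private signing key

check = 1234                           # would encode the user's (blinded) identity
r = 7                                  # user's secret blinding factor
assert math.gcd(r, n) == 1

blinded = (check * pow(r, e, n)) % n   # user sends this; bank never sees `check`
blind_sig = pow(blinded, d, n)         # bank signs blindly after debiting account
sig = (blind_sig * pow(r, -1, n)) % n  # user unblinds the signature

assert sig == pow(check, d, n)         # a valid signature on the original check
assert pow(sig, e, n) == check         # anyone can verify with the public key
```

The bank sees only `blinded`, so it cannot later link the deposited check back to the withdrawal; this is exactly why double spending must be handled by the separate spending protocol.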
This interactive spending protocol has a flavor similar to zero-knowledge protocols in that the answers to the merchant's queries, if answered for both values of the random coin flips, reveal the user's identity. When double spending occurs, the central bank gets two logs for the same check, and from this identifies the double spender.

There are a number of problems with this approach. First, any system that provides complete anonymity is currently illegal in the United States, since any monetary transfer exceeding $10,000 must be reported to the government [CITATION], employee payments must be reported similarly for tax purposes [CITATION], stock transfers must be reported to the Securities and Exchange Commission, etc. Second, in a real internetworked environment, network addresses are required to establish and maintain a communication channel, barring the use of trusted anonymous forwarders --- and such forwarding agents are still subject to traffic analysis. Providing real anonymity in the high level protocol is useless without taking network realities into account. Third, Chaum's cryptographic protocols do not handle failures, and any systems based on them cannot simultaneously have transactional properties and also maintain anonymity and security. A transaction abort in the blind signature protocol leaves the user either with a debited account and no electronic check or with a free check. A transaction abort in the spending protocol either permits the user to falsify electronic cash if the random coin flips are reused when the transaction is reattempted (e.g., after the network partition heals), or reveals identifying information to the merchant if new random coin flips are generated when the transaction is reattempted. Clearly, to provide a realistic distributed electronic currency system, transactional properties must be provided.
Unfortunately, the safety provided by transactions and the anonymity provided by cryptographic techniques appear to be inherently at odds with each other, and the tradeoff made by Chaum-style electronic cash systems --- anonymity instead of safety --- is inappropriate for real systems. Another class of electronic money system is server-based; NetBill [CITATION] is one such system. NetBill implements the credit card model of electronic currency. A central server acts as a credit provider for users, who can place a spending limit on each authorized transaction, and it provides billing services to the service providers. No true anonymity is achieved: the central server has a complete record of every user's purchases, and the records for the current billing period are sent to users as part of their bills. Some scaling may be achieved through replication, but in that case providing hard credit limits requires either distributed transactions or assigning every user to a particular server, making the system non-fault-tolerant. Other approaches include anonymous credit cards [CITATION] or anonymous message forwarders to protect against traffic analysis, at the cost of adding centralized servers back to the system.

Electronic Contracts

One of the most exciting applications of secure coprocessors is electronic contracts. Electronic contracts are a natural extension of the ``basic'' electronic commerce approach. Where existing electronic commerce systems provide a basic two-party contract offering money for goods, a full electronic contract approach permits multi-party contracts, delegation, and a richer set of contractual primitives. Electronic contracts provide enabling technology for creating electronic marketplaces [CITATION]. Applications include the superdistribution of software [CITATION] and the creation of electronic futures markets.
In superdistribution, the traditional software distribution channel is replaced: a software buyer is allowed to resell the software on the manufacturer's behalf. From the electronic contracts viewpoint, the customer enters into a contract with the software manufacturer whereby the customer not only obtains the rights to use that software, but also the rights to make the same contract with other potential customers on the manufacturer's behalf. Such a self-replicating contract is a relatively simple three-party contract, where all of the contractual terms --- electronic money transfer, rights to run a program, and making more electronic contracts --- are enforceable by a secure coprocessor. See Figure [REFERENCE] for an example superdistribution contract. An expressive electronic contract language also enables the creation of electronic markets not previously possible. For example, air travel requirements --- travel destination and approximate times --- may be written up as an electronic contract containing the maximum price the user is willing to pay, and this contract may be put up for auction. Travel agents bid for and buy the right (and obligation) to fulfill such contracts, increasing the efficiency of the travel market; additionally, travel agents may speculate on airline pricing and offer a higher bid in anticipation of fare reductions. Note, furthermore, that airline tickets may also be objects handled by the electronic contract system: they may simply be electronic documents signed by the airline giving the customer the right to travel on a particular flight, or tokens of a specific type permitting travel on a certain flight. In full generality, the objects referred to within electronic contracts will not always be objects managed by secure coprocessors, and this necessarily implies that external adjudication will be required when breaches of contract occur.
Furthermore, the user may not be able to satisfy the contractual demands, e.g., a broker who (speculatively) sells run-time on a mainframe may find all the cycles already allocated.

FIGURE:

software_distributor(signatory id_t manuf, signatory id_t distributor,
                     key_t sw_key, int manuf_profit,
                     id_t prev_distr, int prev_distr_cut, time_t expire)
{
    int price;

    terminates when date() >= expire;

    access(none):
    super_buy(id_t buyer, money_t cash @ buyer)
    {
        int profit;  /* profit for this distributor; no Amway tree */

        if (cash->amount < price)
            reject;  /* cannot sell at a loss */
        xfer(cash, manuf->in_register, manuf_profit);
        xfer(cash, prev_distr->in_register, prev_distr_cut);
        profit = price - manuf_profit - prev_distr_cut;
        xfer(cash, distributor->in_register, profit);
        xfer(sw_key, buyer, 1);
        software_distributor(manuf, buyer, sw_key, manuf_profit,
                             distributor, profit, expire);
        /* to do Amway, we would pass profit up the
         * distr chain rather than all at once here */
    }

    access(distributor):
    set_price(int new_price)
    {
        /* pricing must at least pay for manufacturer profit */
        if (new_price < manuf_profit + prev_distr_cut)
            reject;
        price = new_price;
        enable_access(super_buy, all);
    }
}

Software Superdistribution Contract

In this example, the software retail distributor enters into a contract with a software manufacturer, which enables the distributor to sell the software to customers for customer use and, at the same time, permits each customer to redistribute the software under the same contractual terms. For the duration of the contract, the distributor gains the power to make new contracts on the manufacturer's behalf. Our electronic contract model is built on the following two secure coprocessor-provided primitive objects: (1) unforgeable tokens and (2) computer-enforced contracts. Unforgeable tokens are protected objects conserved by secure coprocessors; they are freely transferable, but can be created and destroyed only by the agents that issued them (or their designees).
Furthermore, the transfer of tokens occurs in a transactional manner, so that the number of tokens is a conserved quantity (excepting explicit action by their issuer). Tokens are useful for representing electronic currency and execute-only rights to a piece of software (much as in capability systems). In the case of rights such as execute-only rights, the token provides access to cryptographic keys that may be used (only) within the secure coprocessors to run code. Electronic currency and execution rights are subtypes of tokens and inherit the transactional transfer property from tokens. Contracts are another class of protected objects. They are created when two parties agree on a contract draft. Contracts contain binding clauses specifying actions that each of the parties must perform or actions that the secure coprocessors will enforce, along with ``method'' clauses that may be invoked by certain parties (not necessarily restricted to just the parties who agreed on the contract). Time-based clauses and other event-based clauses may also exist. Contractual obligations may force the transfer of tokens between parties. Contract drafts are typically instantiated from a contract template. We can think of a contract template as a standardized contract with blanks that are filled in by the two parties involved, though ``custom'' contracts are certainly possible. Contract negotiation consists of an offerer sending a contract template, along with bindings (values with which to fill in the blanks), to the offeree. The offeree either accepts or rejects the draft. If it is accepted, a contract instance is created in which the contract bindings are permanent, and any immediate clauses are executed. If the draft is rejected, the offeree may take the contract template and re-instantiate a new draft with different bindings to create a counter-offer, whereupon the roles of offerer and offeree are reversed.
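The template/draft/instance negotiation flow just described can be sketched as follows. This is a minimal model under stated assumptions: the type names, the air-travel blanks, and the accept/counter interface are all hypothetical, and a real implementation would run inside the secure coprocessors with transactional (all-or-nothing) effects.

```python
# Sketch of the offer / accept / counter-offer negotiation flow.
# All names here are illustrative, not from the Dyad implementation.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ContractTemplate:
    name: str
    blanks: tuple               # names of the blanks both parties fill in

@dataclass(frozen=True)
class Draft:
    template: ContractTemplate
    bindings: dict = field(default_factory=dict)

def instantiate(template, **bindings):
    """Offerer fills in the template's blanks to produce a draft."""
    missing = set(template.blanks) - set(bindings)
    if missing:
        raise ValueError(f"unbound blanks: {missing}")
    return Draft(template, dict(bindings))

def respond(draft, accept, counter_bindings=None):
    """Offeree accepts (bindings become permanent) or rejects and
    counters with new bindings, reversing the offerer/offeree roles."""
    if accept:
        return ("instance", dict(draft.bindings))   # immutable from here on
    return ("counter", instantiate(draft.template, **counter_bindings))

# The air-travel auction from the text, as a two-round negotiation:
airfare = ContractTemplate("air_travel", ("destination", "max_price"))
offer = instantiate(airfare, destination="PIT", max_price=400)
status, counter = respond(offer, accept=False,
                          counter_bindings={"destination": "PIT",
                                            "max_price": 350})
# `counter` is now a fresh draft held by the original offerer to accept
# or counter in turn.
```

Once a draft is accepted, the frozen instance models the paper's requirement that bindings become permanent; token transfers triggered by immediate clauses would then execute transactionally.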