
Third USENIX Workshop on Electronic Commerce

BOSTON, MASSACHUSETTS August 31-September 3, 1998

Session: Advances in Payment Technology

Summary by Matthew Hohlfeld

Electronic Commerce and the Street Performer Protocol

John Kelsey and Bruce Schneier, Counterpane Systems

Bruce Schneier began his presentation by letting the audience know that, through his own error, the version of the paper in the proceedings was not the correct one; the correct version is available at <> and at <>.

The premise of this work is that none of the currently available systems for protecting intellectual property (IP) will be a full solution in a world where the duplication of a work is simple and nearly cost-free. As technologies for copying become cheaper, and we begin to rely more on digital delivery systems for intellectual property, the notion of copyright itself becomes invalid. Without copyright protection, the creation of works will diminish, and ultimately the public will suffer. As a possible solution to this problem, Schneier suggested that new works could be funded on an escrow system. This would help restore the incentive for the creation of works, even when the works are then released as part of the "public good." He provided an exhaustive list of the currently used (or designed) methods for protection of IP, along with the perceived flaws of each. The list included both secure perimeter schemes and traitor tracing schemes; law-based solutions; and alternatives such as advertising, product placement, and government funding.

By suggesting an analogy between "consumers" of IP and the audience of a street performer, Schneier provided the basis for a new payment method, dubbed the Street Performer Protocol. Using a bank or publisher as a trusted third party, the Street Performer Protocol simply has the creator of a work request a specific minimum amount of donations, promising to release the work into the public domain once that level is reached. After the work is released, the third party transfers the donated money to the creator.
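The escrow mechanism can be sketched in a few lines (a minimal illustration; the class name, threshold handling, and payout logic here are assumptions, not details from the paper):

```python
class StreetPerformerEscrow:
    """Trusted third party (bank or publisher) holding donations in escrow."""

    def __init__(self, creator, threshold):
        self.creator = creator
        self.threshold = threshold      # minimum total donations requested
        self.donations = {}             # donor -> amount held in escrow
        self.work_released = False

    def donate(self, donor, amount):
        self.donations[donor] = self.donations.get(donor, 0) + amount

    def total(self):
        return sum(self.donations.values())

    def try_release(self):
        # Once pledges reach the threshold, the creator releases the work
        # into the public domain and the escrow pays out the donations.
        if self.total() >= self.threshold and not self.work_released:
            self.work_released = True
            payout = self.total()
            self.donations.clear()
            return payout  # transferred to the creator after release
        return None
```

If the threshold is never reached, the donations simply remain in escrow (or could be refunded, a policy the sketch leaves out).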

After discussing some of the motivations that individuals would have to become donors, and the fact that the whole notion of IP is relatively new, Schneier opened the floor to questions. Perry Metzger suggested that this protocol/economic model would not result in the same volume of goods as the market can sustain, and pointed out that Penguin Classics makes money as a value-added reseller of works in the public domain. Max Tsvetovat asked about the applicability of this protocol to open-source software; Schneier responded that the closest match would be to use proposed feature lists as the description of the "work." Stuart Feldman then pointed out that Victorian subscription publishing incorporated a very similar economic model, in which the contributors' names appeared on the first page of each work. Nicko Van Someren pointed out that widespread use of this protocol could result in IP production focusing on form rather than substance, and that production could change course back to forms, such as books, whose reproduction is more difficult. Juan Garay asked whether it is a problem that the trusted third party will become quite large when dealing with works that are reviewed prior to publication; Schneier responded that this already occurs.

VarietyCash: A Multi-purpose Electronic Payment System

M. Bellare, University of California, San Diego; J. Garay, Bell Laboratories; C. Jutla, IBM T.J. Watson Research Center; M. Yung, CertCo

Charanjit Jutla presented the culmination of some six years of cooperative research by the authors into a form of digital cash intended to replace physical cash. They explicitly consider only the single-issuer situation but assert that the scheme can easily be extended to the multiple-issuer case. The main problem addressed by their approach is the need to incorporate network-based (online) and smartcard-based (offline) solutions in a single interoperable system.

After discussing some other electronic payment schemes and how they fare with regard to anonymity, atomicity, and network vs. smartcard issues, Jutla introduced the features of VarietyCash. These include an Issuer that maintains a list of all issued but unspent coins and a truly account-less system for ease of maintenance. The main cost of the system is incurred in verifying each transaction. The Issuer in this system is trusted to preserve anonymity (which can be removed if required, e.g., by law).
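The unspent-coin list amounts to a simple online double-spend check at the Issuer, which might be sketched as follows (the class and method names are hypothetical; the real system additionally signs coins and stores them encrypted):

```python
class Issuer:
    """Sketch of an issuer keeping a list of issued-but-unspent coins."""

    def __init__(self):
        self.unspent = set()  # identifiers of issued but unspent coins

    def issue(self, coin_id):
        # Record a newly issued coin as unspent.
        self.unspent.add(coin_id)

    def redeem(self, coin_id):
        """Online verification: a coin is valid iff it was issued
        and has not been spent. Removing it on redemption is what
        blocks double-spending."""
        if coin_id in self.unspent:
            self.unspent.remove(coin_id)
            return True
        return False  # double-spent or forged coin is rejected
```

Because validity is a property of the coin list rather than of any customer account, the system stays account-less while every transaction is still verified online.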

Some of the security goals addressed by VarietyCash are protocol security, internal security, user security, and network security. It uses tamper-proof hardware at the Issuer site to prevent insider attacks. Storing an encrypted version of each issued coin in a database at the Issuer site is also required as a defense against insider attacks. The main goal in user security is to prevent the loss of the user's anonymity.

Other features of the system include using the issuer as a certification authority and a global requirement for "conservation of cash." Jutla also gave a brief trace through the Coin Purchase Protocol, emphasizing the speed of the Issuer in this protocol, and then showed the Payment Protocol at a high level. He said that a prototype system has been built and that anyone interested in a demo should contact him.

Raphael Yahalom asked about the difference between anonymity in this system and the trust in credit card purchases; the answer was that the receipt is the basis for accountability in credit card purchases, and there is no such receipt in this system. Nevin Heintze suggested that some requirements were missing from the formalization, including guarantees of coin value and coin delivery upon purchase. Mark Seiden questioned the value of using VarietyCash versus a ledger-based system; the benefit is in the efficiency of the VarietyCash approach. Seiden also wanted to know what the result of a breach of the issuer's private key would be; the answer was that the consequences are large, and the system is built to ensure that the private key is kept private. An offline conversation with Jutla provided this more complete response: the Issuer has many keys - one for signing coins, one for encrypting coins in its database, one public-private key pair for communications, and one public-private key pair for certificates. Compromise of any single key does not break the whole system, but it may enable small-scale attacks: if the signing key is compromised, insider attacks become possible; if the database encryption key is compromised, insider or intruder attacks again become possible; if the communication key is compromised, users' coins could be stolen.

NetCents: A Lightweight Protocol for Secure Micropayments

Tomi Poutanen, University of Toronto; Heather Hinton, Ryerson University; Michael Stumm, University of Toronto

Tomi Poutanen discussed NetCents, which was originally designed to be a micropayment protocol for use on the Internet. The protocol has since been extended to support a wider range of values while maintaining efficiency and robustness.

Part of the motivation for this work is the fact that credit cards are used as the primary payment method on the Internet, even though they have several flaws. Poutanen listed the desired features in a payment protocol for the Internet. It must: be secure (by preventing double-spending), be goods atomic, support anonymity, and support a full range of values. Importantly, it must be implementable; thus, it must be scalable, interoperable, and low-cost. He also noted that to gain consumer support, it should be "as easy to use as real cash."

The design of NetCents considers the tradeoff between security and cost that is inherent in the online versus offline decision. Its heritage traces to Millicent, but NetCents is more feature-rich, more secure, and cheaper. NetCents makes use of the public/private split in several ways, including separating the scrip and using public-key cryptography to verify each electronic payment order.

Batching of payments in NetCents is used to increase efficiency and remove some load from the bank. This feature directly influences the need to be able to transfer scrip from one vendor to another; this is included as a sub-protocol of the purchase protocol. After describing these features, Poutanen discussed fraud control in their system. The protocol ensures that scrip transfers are atomic and that, though a criminal vendor can allow customers to double-spend, a probabilistic verification scheme makes it economically undesirable to do so.
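The economic logic of probabilistic verification can be sketched as follows (the policy and parameter names are illustrative assumptions, not the NetCents specification):

```python
import random

def should_verify_online(amount, p_base=0.05, scale=1.0):
    """Probabilistically decide whether to verify a payment with the bank.

    Higher-value payments are verified more often, so the expected loss
    from undetected double-spending stays bounded while most small
    payments avoid the cost of an online round trip.
    """
    p = min(1.0, p_base + scale * amount)  # verification probability
    return random.random() < p

def expected_fraud_gain(amount, p_verify, penalty):
    """Expected gain to a double-spender: the fraud succeeds with
    probability (1 - p_verify) and costs `penalty` when caught.
    Fraud is deterred whenever this expectation is <= 0."""
    return (1 - p_verify) * amount - p_verify * penalty
```

Tuning the verification probability and penalty so that the expected gain is negative is what makes cheating economically undesirable even though any single double-spend may go undetected.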

Poutanen then described how NetCents handles the crossover between online and offline protocols. Online arbitration is performed by means of a trusted third party, and the issuer maintains a certain minimal balance so that large purchases can go through, while smaller balances are stored at vendors in support of cheap (offline) micropayments. The performance of an individual vendor depends heavily upon the scrip co-location hit rate.

Bill Frantz wanted to know if the need for secure hardware at the issuer's site indicated that we also need trusted code at each vendor's site; the answer was that NetCents doesn't require tamper-proof hardware at each vendor, but can then only catch cheaters rather than prevent cheating. Manoj Kumar asked how to detect which of possibly many vendors in a scrip transfer chain was the malicious one; the answer was that a full audit path is required to determine which was the malicious vendor. Kumar then pointed out that this would require each vendor to keep track of all outbound scrip transfers each day (to be flushed when transferred to real monetary value). Tracy Mullen asked what happens when a vendor fails during a purchase; the answer was that the value is rolled back into the scrip, as if the purchase had not occurred.

Session: Auction Markets

Summary by Matthew Hohlfeld

The Auction Manager: Market Middleware for Large-Scale Electronic Commerce

Tracy Mullen and Michael P. Wellman, University of Michigan

To introduce the motivation for their work, Tracy Mullen described an ongoing project in which the University of Michigan Digital Library is modelled as an information economy. Within this environment, different types of organizations have different market policies (e.g., corporate information policies vs. grade school ones). Participating individuals change over time and have needs that vary with current projects and interests. The authors' work addresses how to build flexible infrastructures that can support such dynamic, diverse economies. Mullen described how formal description languages for goods, services, and auctions (where auctions are really just services for setting a price) are used within their system interaction protocols and middleware components to automate and simplify the electronic commerce process.

The talk focused on a particular component, the Auction Manager, which generates and tracks markets and provides other market management services. Commerce on the Internet often faces different economics than those of more traditional commerce channels, including lower transaction and distribution costs and marginal costs near zero for information goods, which tends to promote product bundling and unbundling. The Auction Manager must be able to match buyers and sellers in this more complex environment.

By extending product descriptions to include bundling and unbundling operators, Mullen showed how logical inference rules could be used to locate appropriate markets for buyers and sellers, as well as find potential arbitrage opportunities across related markets. In addition, the concept of buyer's and seller's choice bundling was introduced. Buyer's choice means that instead of buyers buying an entire bundle of goods or services, they can buy the option to choose one or more of these products; seller's choice is similar. For example, a newspaper is really a bundle of articles sold as seller's choice; this same bundle of articles could be sold as buyer's choice, where the buyers decide which they want to read.

In addition to matching agents with markets, the Auction Manager also serves as the focal point for market creation and selection policies. Since infrastructure costs and agent decision complexity costs exist for each market created, unlimited market creation has the potential to overwhelm the system. The Auction Manager is being used to explore enforcing various market-creation policies, such as having explicit organizational policies or using auction fees to provide agents with the right incentives not to create unnecessary markets.

Finally, the Auction Manager can also serve as a repository of market and organization-specific knowledge about selecting the best kinds of markets for exchanging different types of goods or services. While agents can always specify exactly the type of market they desire, they also have the option of allowing the Auction Manager to fill in reasonable defaults. Future work includes extending both the market creation and market selection policies.

The only question came from Max Tsvetovat, who wanted to know if the Auction Manager is a central, trusted component of the system; the answer was that though it can be distributed, it is centralized and trusted for simplicity.

Internet Auctions

Manoj Kumar, Stuart I. Feldman, IBM T. J. Watson Research Center

Manoj Kumar started his presentation with an anecdotal note about the history of auctions on the Internet: the most expensive T-shirt sold from the Nagano Olympics Web site in a series of daily sealed-bid auctions went for $15,000. He went on to present a list of the auction issues he would not be addressing: legal issues, cheating and/or sabotage, social issues, and multi-piece or continuous auctions.

With all of that out of the way, Kumar launched into predictions about the future of auctions on the Internet. The authors believe that there will be an increase in the popularity of auctions for several reasons, including the integration of auctions with existing business processes and an increase in trust in the auction model. Kumar noted that though there are already many traditional uses of auctions in business-to-business transactions, there is room for new forms and uses of auctions in business-to-consumer transactions.

Kumar then provided a classification of auctions and introduced some of the common variations on these types. He then introduced the system that the authors built on top of Net.Commerce (an existing IBM product) to allow for the integrated use of auctions in business-to-consumer transactions. They built this in a consumer-oriented fashion and made it as close to fixed-price shopping as possible. By choosing to make this a consumer-oriented project, they became aware that the user interface is very important to its success. Kumar then described the user interface and its similarities to the "shopping cart" metaphor in current Web sites.

The Auctioneer is the commercial site in this setup, and the user interface for the Auctioneer has also undergone extensive work, including "experts" that provide pre-defined auction types. This is all built using a generic infrastructure that supports a wide variety of auctions, a prototype of which is available at <>.

Doug Tygar expressed his concern about the ability of this model to address the latency of bids, especially in double auctions; Kumar acknowledged that this problem is currently being ignored by the system. Tygar was also curious about how "accidental" bids are prevented; the answer was that a registration mechanism helps to prevent this (a full description is in the paper). Giray Pultar then asked about the use of agents for bidding in this system; the answer was that agents aren't yet allowed.

Electronic Auctions with Private Bids

Michael Harkavy and J. D. Tygar, Carnegie Mellon University; Hiroaki Kikuchi, Tokai University

Michael Harkavy, winner of the workshop's Best Student Paper award for this work, began his presentation by skipping his prepared description of traditional auctions, since the previous two speakers had already thoroughly covered that topic. He then presented a double-barreled argument for the use of sealed-bid Vickrey auctions in an electronic setting: sealed-bid auctions are beneficial since they hide the preferences of the bidders, and Vickrey auctions have nice economic and efficiency properties.

Harkavy then discussed some of the problems in situations where bidding preferences are leaked, including signalling and cooperation between bidders. He claimed that simply having anonymous bidding is insufficient to protect the privacy of a bid.

Harkavy described how to leverage verifiable secret sharing and secret computation to implement an electronic auction that preserves the privacy of bids. This includes addition and multiplication on shared secrets, where the computation and the intermediate results do not reveal the secrets. Using this technique, more complex functions (like the maximum function) can be computed on secrets.

One restriction that the desire for efficiency forces on this system is the need to encode all bids into a given range. The bidders encode their bids into a secret that is distributed among auctioneers (where only t of the n auctioneers can be malicious). The auctioneers then compute the maximum function digit by digit on their inputs, eliminating bidders without revealing their bid or their identity. The winning bidder is revealed at the end of the computation.
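The underlying primitive, computing on shares without revealing the secrets, can be illustrated with simplified additive secret sharing (the paper uses verifiable secret sharing with a threshold t of n auctioneers; this sketch omits verifiability and thresholds, and the modulus is arbitrary):

```python
import random

P = 2**31 - 1  # a public prime modulus, chosen only for illustration

def share(secret, n):
    """Split `secret` into n additive shares mod P; any n-1 shares
    alone are uniformly random and reveal nothing."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Only the sum of all shares recovers the secret."""
    return sum(shares) % P

def add_shared(shares_a, shares_b):
    """Each auctioneer locally adds its shares of two secrets;
    the result is a valid sharing of the sum, computed without
    any auctioneer learning either secret."""
    return [(a + b) % P for a, b in zip(shares_a, shares_b)]
```

Multiplication on shares, and from it comparisons and the digit-by-digit maximum used in the auction, requires interaction among the auctioneers and is substantially more involved than this addition example.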

Harkavy concluded his presentation by describing some of the lower-level details and optimization techniques used to attain some measure of efficiency. There is an inherent tradeoff in this system between the efficiency of the computation and the security of the individual bids.

Session: Secure Systems -- What It Takes

Summary by Kevin Fu

A Resilient Access Control Scheme for Secure Electronic Transactions

Jong-Hyeon Lee, University of Cambridge

Jong-Hyeon Lee presented a way to authenticate customers without disclosing customer secrets to a merchant. (Lee, a student of Ross Anderson, is also capable of security in another dimension -- Aikido.)

Despite the vulnerability to copying, passwords and Personal Identification Numbers (PINs) commonly authenticate customers to service providers. Lee sought a simple and secure electronic transaction model that does not have to explicitly transfer customer secrets or use public-key cryptography. A scheme by Needham to control PINs is simple, provides for privacy, separates capabilities, and is customer-oriented; however, it is susceptible to replay attacks and bogus ATMs.

Inspired by Needham's scheme, Lee developed a customer-oriented transaction model in which the customer generates and maintains personal secrets. The model enables a transaction procedure among three principals: a customer, a merchant, and a bank. Principals can participate in registration, transaction, or secret-revocation procedures. A somewhat lengthy protocol specifies the communication among the principals. By using only hash functions, Lee's model enhances privacy for the customer and ensures nonrepudiation.

The registration procedure mimics that of Needham's scheme, and the transaction procedure uses a technique from KryptoKnight. In Lee's online scheme, the customer is involved with all procedures. An offline scheme works in a similar manner, but there is some extra communication between the merchant and customer.

Asked whether there exists an implementation, Lee explained that there is no implementation for this scheme yet, but there is for Needham's scheme.

See <> for more information.

Trusting Trusted Hardware: Towards a Formal Model for Programmable Secure Coprocessors

Sean W. Smith and Vernon Austel, IBM T.J. Watson Research Center

Sean Smith presented his findings on proving the security of secure coprocessors with respect to Federal Information Processing Standard (FIPS) 140-1 level 4 certification. His group worked on three goals: achieving level 4 certification as a research project, verifying the soundness or finding the holes in the coprocessor, and formally describing the coprocessor.

FIPS 140-1 specifies security requirements for cryptographic modules. The most stringent level in the standard, FIPS 140-1 level 4, requires a formal model of a system and formal proof of security. As of this writing, level 4 is an unachieved grail.

A secure coprocessor is hardware that must survive in a hostile environment. It must guarantee that memory contents will be zeroized upon any foreseeable attack, and it needs to defend against threats such as changes in voltage, temperature, and radiation. Such a programmable device is useful for e-commerce.

A mechanical theorem prover was applied to a logical abstraction of the coprocessor. First, a formal model was made from a finite state machine. Then a specification was written in LISP to prove simple properties of security. The proof must show that the coprocessor maintains its security guarantees despite hardware failures and hardware attacks. Guarantees for security fall into three categories: safe execution, safe access, and safe zeroization. Other assertions include authenticated execution, recoverability, and fault tolerance. The proof involves 2000 lines of C, 75 execution states, and 7500 lines of mechanical proof.

Right now, only the hardware and bootstrap are being submitted for level 4 certification. IBM's plans for actual certification are still undecided. In this research, IBM went through a lot of the legwork for the bootstrap layer as an exercise; Smith notes that it would be "really cool" to go all the way with it. In the future, Smith hopes to evaluate the programs on the coprocessor. However, he expects complications, since the hardware can interrupt the software, and software can in turn interrupt other software.

Pointing out that FIPS is aging, an audience member asked Smith to share hints on where FIPS is falling short and where it goes too far. Smith replied that on the too-stringent side, FIPS requires the use of DSA for signatures. Everyone wants to use RSA, but to be FIPS-compliant, the coprocessor must contain algorithms no one wants to use. On the other hand, FIPS does not address security requirements of the manufacturing process.

Another audience member brought up differential power analysis, which exploits current fluctuations. Many security attacks result from crossing levels of abstraction (power analysis, buffer overruns, etc.). Smith was ambivalent about whether good proof techniques can capture these attacks.

For more information, see <> and the IBM 4758 product brochure G325-1118.

On Secure and Pseudonymous Client-Relationships with Multiple Servers

Daniel Bleichenbacher, Eran Gabber, Phil Gibbons, Yossi Matias, and Alain Mayer, Lucent Technologies, Bell Laboratories

Alain Mayer talked about Janus, a cryptographic engine to establish and maintain pseudonymous relationships. Mayer enjoys hacking JavaScript and having fun on the Web. Coincidentally, he used the same Microsoft clip art in his presentation as does the Crowds project.

Janus facilitates relative pseudonymity. That is, a client is anonymous with respect to the client population (e.g., an ISP customer base). The server knows a message came from a particular client population, but it does not know which member of the population. Janus also allows for persistent relationships between clients and servers. Weak or strong authentication by means of passwords or keys allows for repeat visits.

Absolute anonymity is hard to achieve without a penalty in ease of use and performance. The work on Janus is complementary to other anonymizing efforts and can be combined with other techniques.

There is a distinction between data anonymity and connection anonymity. In data anonymity, data flowing over a connection does not reveal an identity. In this case the adversary would attack server endpoints. In connection anonymity, the connection itself does not reveal an identity, and the vulnerability is traffic analysis.

There are several candidate Janus functions. Mayer has three requirements for the function. First, it must ensure uniqueness of aliases among clients and resist impersonation; in other words, it must be hard to find an input that results in the same alias. Second, the function must not reveal information about clients. Third, there must be forward secrecy and statelessness for client mobility. Mayer described one such function involving a password-keyed hash of a client identifier, server identifier, and a usage tag. Mayer finds the CBC-MAC approach more promising than a simple hash because secrecy under a chosen-message attack implies secrecy of passwords. The CBC-MAC approach fulfills all three requirements.
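A function with this shape might look like the following sketch, which substitutes HMAC-SHA256 for the paper's CBC-MAC construction (the function name, field separator, and alias length are assumptions):

```python
import hashlib
import hmac

def janus_alias(password, client_id, server_id, tag):
    """Derive a per-server pseudonym from the user's password.

    The alias is stateless (recomputable anywhere from the password,
    so it supports client mobility), differs for every server and
    usage tag, and without the password reveals nothing about the
    client. HMAC-SHA256 stands in here for the CBC-MAC keyed
    function described in the talk.
    """
    msg = b"|".join(s.encode() for s in (client_id, server_id, tag))
    return hmac.new(password.encode(), msg, hashlib.sha256).hexdigest()[:16]
```

Because the same password yields unrelated aliases at different servers, two servers cannot link their views of the same client by comparing aliases.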

Janus works with email aliases. Aliased email can also help filter junk mail. A client may have a different mailbox for each server, so filtering, even by a third party, can be done simply by ignoring mail to a particular alias.

Mayer indicated several places to house a Janus engine. In a local approach, the Janus engine lives in the client. Aliases would be routed through a proxy. This minimizes outside trust and cooperates with mobile code and Platform for Privacy Preferences (P3P) repositories. In a gateway approach, a client need not download software. This allows easy upgrades and maintenance. In a third-party approach, the Janus engine would exist in the outside world. The third party preserves subnet anonymity. Mayer pointed out that in the gateway and local approaches, the domain name or IP address does not reveal the alias or real address. A vendor could ask for a credit card for identity validation.

An audience participant asked whether anonymity is really useful in the real world, beyond research. Mayer responded that according to surveys on electronic commerce, end users worry about privacy. A high percentage of users leave sites that present fill-out forms. To demonstrate practicality, Mayer offered the example of personalized Web pages. A user no longer needs to remember passwords for services such as My Yahoo or NYT. Janus can be a tool to make personalized sites as easy to visit as regular sites.

The Lucent Personalized Web Assistant uses a Janus engine. See <> for more information.

Secure WWW Transactions Using Standard HTTP and Java Applets

F. Bergadano, Università di Torino, Italy; B. Crispo, University of Cambridge and Università di Torino; M. Eccettuato, Università di Torino.

Francesco Bergadano presented an alternative for securing HTTP transactions. This solution uses local Java applets on the client side to establish a secure link with the server.

Existing solutions include modifications to the application protocol (e.g., SHTTP), a secure transport below the browser (e.g., SSL/TLS, DCE-Web transport APIs), proxy-based services, and network layer changes (e.g., IPsec). Bergadano's group wanted to achieve privacy, authentication, and possibly non-repudiation. However, they did not want to implement a new browser or modify existing browsers. Moreover, they wanted to provide strong cryptography and make the source code freely available.

The proposed architecture uses normal HTTP, TCP, and a Java-capable browser. Essentially, the client runs an applet from the server. This applet triggers a local applet that communicates with a local application on the client. This application in turn creates an encrypted channel with the server.

This approach requires relatively few changes. More important, Bergadano claims it does not require trusting the browser. It is desirable to separate security routines from the browser. This approach is similar to a proxy-based approach; however, a proxy must intercept all communication, whereas Bergadano's approach becomes active only when an HTTP transaction is explicitly requested to be secure.

Launching several questions, Avi Rubin asked Bergadano to answer just one: Where did you put the security? Is it better than SSL? Why can't you run a simple proxy? Are you assuming you can change a firewall configuration? Taking a deep breath, Bergadano jokingly asked when dinner was scheduled. He chose to answer the SSL and firewall questions. In the case of SSL, one needs a trusted browser that supports SSL. In Europe, one cannot easily obtain a standard browser with strong cryptography. As for the firewall, Bergadano reported that the implementation was run on an open network. He was unsure about interactions with a firewall, since a secondary channel must be established between the client and server. Another attendee commented that if this approach works and becomes widely used, it would likely be absorbed into browsers.

For more information and the source code, see <>.

SWAPEROO: A Simple Wallet Architecture for Payments, Exchanges, Refunds, and Other Operations

Neil Daswani, Dan Boneh, Hector Garcia-Molina, Steven Ketchpel, and Andreas Paepcke, Stanford University

Neil Daswani presented the SWAPEROO digital wallet project. Started in September 1997, this project aimed to identify desirable wallet properties and features, define a wallet interaction model, define clean APIs for a wallet and its components, and build a prototype.

Daswani's group decided that: (1) A wallet architecture should be extensible. Rather than being completely proprietary, it should support multiple instruments and protocols. (2) It should not rely on a Web interface as the sole common interface. The basic architecture should be written once to be run anywhere. This enables the use of alternative devices such as Personal Digital Assistants (PDAs). (3) Symmetry allows for common services across commerce applications. Current wallet implementations are often non-symmetric; little infrastructure is shared between the client and server sides. (4) A wallet architecture should be client-driven. The user should initiate all transactions. Vendors should not be capable of automatically invoking a client's digital wallet. After all, would you want a vendor reaching for your wallet as soon as you enter a store?

Daswani described a wallet interaction model. After starting a transaction, wallets can negotiate on a protocol. Because of symmetry, the user and vendor have similar wallets.
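The design goals and interaction model above might be sketched as a minimal interface (all names here are hypothetical, not the SWAPEROO API):

```python
class Wallet:
    """Sketch of a symmetric, client-driven, extensible wallet.

    Symmetry: both user and vendor run the same Wallet abstraction.
    Client-driven: only the initiating wallet starts a transaction.
    Extensible: instruments and protocols are pluggable, not fixed.
    """

    def __init__(self, owner):
        self.owner = owner
        self.protocols = {}    # protocol name -> handler (extensible)
        self.instruments = {}  # instrument name -> balance/credential

    def register_protocol(self, name, handler):
        self.protocols[name] = handler

    def negotiate(self, other):
        """Pick a payment protocol both wallets support; choosing the
        alphabetically first common one keeps the sketch deterministic."""
        common = set(self.protocols) & set(other.protocols)
        return min(common) if common else None

    def initiate(self, other, protocol, amount):
        # Client-driven: the vendor's wallet never calls this on its own.
        return self.protocols[protocol](self, other, amount)
```

Because the two sides share the same abstraction, nothing in this sketch prevents two clients from transacting with each other, which matches the answer Daswani gave in the Q&A.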

SWAPEROO has been implemented in C++ (PalmOS) and Java (Windows). Future work includes populating the wallet, experimenting with other devices (e.g., smart cards), working on the architecture, and abstracting out the data manager.

One question was asked about symmetry. Since everyone would have wallets of a similar design, is there any reason clients would not want to communicate with each other? Daswani responded that there are no restrictions. Another question involved tamper resistance. Given that the wallet must be in some tamper-resistant memory, how are these things initialized? Daswani answered that for PalmPilots, this is a problem. However, in the future, with JVM access control, a policy could potentially be downloaded directly into the wallet from a trusted site.

A related paper on the PalmPilot implementation, which lets a user buy a food item from a particular vending machine at Stanford, will appear in the future. For more information, see <>.

The Eternal Resource Locator: An Alternative Means of Establishing Trust on the World Wide Web

Ross Anderson, Vaclav Matyas, and Fabien A.P. Petitcolas, University of Cambridge

Vaclav Matyas presented an alternative means of managing trust in electronic publishing. He spoke about WAX, a proprietary hypertext system for medical publishing. WAX uses hashes in combination with HTML links as an Eternal Resource Locator (ERL). Matyas is also the co-editor of the Global Trust Register, a massive directory of "top-level" PGP keys and X.509 certificates with its own rating scheme.

In the hierarchical WAX system, there are shelves owned by publishers, books owned by editors, and chapters owned by authors. WAX must protect against several threats: book contents could be altered, an incorrect book source could be claimed, or a publisher or author could deny past content. Matyas stressed that there are no confidentiality or audit requirements, only integrity and authenticity.

The WAX system originally used RSA for digital signatures. However, problems cropped up. In particular, RSA digital signatures require a Public Key Infrastructure (PKI), expiring keys cause problems for long-lasting information, compromised keys are difficult to address, and RSA-DSI royalties were expensive. As a result, WAX uses one-time signatures as an intermediate solution.

New HTML elements allow hashes and public keys to be embedded in documents. In addition to the standard linking information, the A element also includes a HASHVALUE parameter. When a browser follows a link, it can hash the appropriate contents and verify whether the document is authentic. For instance, a link may appear as <a HREF="examresults" HASHMETHOD="TIGER" HASHVALUE="12345..." HASHPARENT="">link</a>. The exam results page would contain further information to reconstruct the hash.
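The verification step a browser would perform can be sketched briefly. This is a minimal sketch: the paper's example uses the Tiger hash, which is not in the Python standard library, so SHA-256 stands in here.

```python
import hashlib

def verify_erl(document: bytes, expected_hash: str) -> bool:
    """Check fetched content against the HASHVALUE carried in the link.

    SHA-256 is used as a stand-in for Tiger (the hash named in the paper's
    example), since Tiger is not available in Python's standard library.
    """
    return hashlib.sha256(document).hexdigest() == expected_hash

# At link-creation time, the author embeds the hash of the target page.
page = b"Exam results: candidate 1234 passed"
link_hash = hashlib.sha256(page).hexdigest()

assert verify_erl(page, link_hash)                      # unmodified: accepted
assert not verify_erl(page + b" (edited)", link_hash)   # tampered: rejected
```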

Pure ERLs apply easily to static texts (e.g., health care, law and contracting, banking). One can also store hashes with bookmarks for change control. Additionally, this system can interact with public key mechanisms. Work progresses on medical applications (WAX, British National Formulary), incorporation of XML discussed with industrial partners, and formalization of the ERL logic extended by public key parameters.

For more information, email <> or visit <> and <>.

Detecting Hit Shaving in Click-Through Payment Schemes

Michael Reiter, AT&T Labs - Research; Vinod Anupam and Alain Mayer, Lucent Technologies, Bell Laboratories

Mike Reiter, presenting the winner of the workshop's Best Paper award, analyzed several mechanisms to calculate upper and lower bounds on referrals to another Web site. This is particularly useful in Web advertising schemes where a Web publisher receives a payment directly proportional to the number of "click-throughs" generated.

A user U "clicks through" site A to site B if A serves a page to U, and then U clicks on a link in A's page to reach B. Here A is the referrer and B is the target. In a click-through payment scheme, B pays A for each referral that A gives to B. There are two common forms of fraud in click-through payment schemes. Hit shaving results when site B fails to credit site A for referrals. Hit inflation results when site A causes bogus referrals to site B.

Reiter described two classes of practical and immediately useful techniques for detecting hit shaving. In a heuristic approach, the target site need not cooperate or even have knowledge of the process. In a cooperative approach, one can achieve better accuracy and non-repudiation of click-throughs. For both classes, the detection techniques are mostly invisible to the user.

The detection process must enable site A to monitor how often site B receives a request from any user U with a referrer field indicating A. This leads to the question of how to calculate upper and lower bounds on hit counts. Site A can record an upper bound on its referrals to site B with no cooperation from B. When user U clicks on a link to site B, A is told about the click. Then user U continues to B. One can implement this using HTTP redirection or a CGI script. A second approach uses JavaScript and an invisible frame to notify site A of the intent to follow a link. These techniques produce an upper bound because one cannot be sure whether B actually receives the hit. The notification represents the intention to visit site B, but not a guarantee to visit site B.
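The redirection-based upper-bound technique can be sketched as follows. This is a simplified, self-contained model of the idea (in a real deployment the redirector would be a CGI script or handler returning an HTTP 302); all names are illustrative.

```python
from urllib.parse import urlencode, parse_qs, urlparse

# Upper-bound counter kept by the referrer, site A. Illustrative sketch only.
click_counts = {}

def tracked_href(target: str) -> str:
    """Rewrite an outbound link so the click first hits A's redirector."""
    return "/redirect?" + urlencode({"to": target})

def handle_redirect(request_path: str) -> str:
    """A's redirector: log the intended visit, then send the browser on to B.

    Counting here yields an upper bound: A learns the user's intent to
    visit B, but cannot be sure B actually received the subsequent request.
    """
    target = parse_qs(urlparse(request_path).query)["to"][0]
    click_counts[target] = click_counts.get(target, 0) + 1
    return "Location: " + target   # stands in for an HTTP 302 response

loc = handle_redirect(tracked_href("http://b.example/ad"))
assert loc == "Location: http://b.example/ad"
assert click_counts["http://b.example/ad"] == 1
```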

Techniques to calculate a lower bound are not so clean or simple. After a user follows the link on site A to reach site B, the user notifies site A. A receives notification only if the connection to B worked. Reiter described a rather complicated procedure which spawned a new browser window and used JavaScript. Since one window cannot access another window's namespace, there are a few hoops to jump through. A detection window probes the namespace of the window attempting to contact site B. When the detection window is no longer allowed to probe the other window, it knows the connection to site B was successful. The detection window then notifies site A by requesting a particular URL.

The lower bound technique has a few caveats. The user might close the window before A is notified. Additionally, this only detects that some page is loaded. The user may have stopped the request to site B and traveled elsewhere. A few tricks (e.g., hiding the toolbar) can make it hard for the user to bypass the notification process, but it also can cause annoyances to the user.

Reiter suggests using both lower and upper bound detection on referrals. The two measurements should be fairly similar.

In the cooperative approaches, site B acknowledges each referral as the referral happens. In a naive solution, B would open a connection to A for each hit. In a distributed approach, B's page would make the user request another page from site A as an acknowledgment. It is also possible to provide for nonrepudiation with digital signatures. B includes a digital signature while serving a page. However, this could easily become prohibitively costly. Hash chaining can alleviate some of the performance problems.
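The hash-chaining idea can be sketched briefly: B signs only the anchor of a precomputed chain, then releases successive preimages as cheap, non-repudiable acknowledgments. This is a hedged sketch under assumed names; the signing step itself is elided.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_chain(seed: bytes, n: int) -> list:
    """chain[i] = H^i(seed); B signs the anchor chain[n] once, up front."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain

def ack_for(chain: list, i: int) -> bytes:
    """B's acknowledgment of the i-th referral: the i-th preimage of the anchor."""
    return chain[len(chain) - 1 - i]

def verify_ack(ack: bytes, i: int, anchor: bytes) -> bool:
    """A re-hashes i times; matching the signed anchor proves B released ack."""
    v = ack
    for _ in range(i):
        v = H(v)
    return v == anchor

chain = build_chain(b"secret-seed", 100)   # supports 100 acknowledgments
anchor = chain[-1]                         # the one value B must sign
assert verify_ack(ack_for(chain, 3), 3, anchor)
assert not verify_ack(b"forged-ack", 3, anchor)
```

One signature thus amortizes over the whole chain, which is what alleviates the per-hit signing cost mentioned above.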

Reiter revealed a few disadvantages of hit shaving detection. There is a negative impact on user privacy. Web sites can discover your browsing habits. The schemes are also incompatible with anonymizing services such as Crowds or LPWA.

Questions began on a humorous note. How did Reiter become involved with this project? The saga began when Reiter placed his email address on a Web page. A spammer sent an email about click-through payments saying that a 1998 Corvette would be awarded for the highest number of click-throughs. Thinking something must be fishy, Reiter began to analyze click-through payment schemes. A few questions about ethics and morality popped up. All concerned impediments to the user (e.g., awkward windows popping up) and pornography. Reiter cleverly escaped the questions with witty remarks. However, he made it clear that improving the porn industry is not his goal. Click-through payment schemes are relevant for all types of Web advertising. Finally one attendee pointed out that these schemes act like a poor man's Remote Procedure Call via URLs. Asked whether he was on to something bigger, Reiter replied that there might be overlap or some related opportunities.

Session: Consumer Service

Summary by Matthew Hohlfeld

Sales Promotions on the Internet

Manoj Kumar, Anand Rangachari, Anant Jhingran and Rakesh Mohan, IBM T.J. Watson Research Center

Manoj Kumar presented his group's work on translating another set of real-world practices to the world of electronic commerce. Their specific goal is to provide the types of sales promotions that are not currently available at commerce sites on the Internet, namely coupons.

Kumar gave a list of fictional examples to illustrate the variety of purposes coupons can serve, and introduced the concept of eCoupons. The Internet disrupts several real-world assumptions and also enables things that are not possible offline. One of the economic forces created by coupons is price discrimination; however, if coupon "trading" online is allowed to be as simple as it would appear to be, price discrimination no longer follows its understood rules.

One feature of real-world coupons that issuers rely on, perhaps unexpectedly, is that they are "hard" to use. If the straightforward methods for creating coupons were carried over to electronic commerce, with the expected simple user interface, that would no longer be true. Kumar surprised me by discussing the similarities between eCoupons and digital cash systems with respect to susceptibility to fraud, without stating that eCoupons are a type of digital cash (with a more complex "value" attached).

The eCoupon definitions are being refined and extended in an attempt to cover every variation on promotion that is desirable on the Internet. Kumar's group is also investigating different methods for delivering coupons and for storing them after delivery (such as digital cash "wallets").

Perry Metzger stated that he thought that the marketing analysis in the presentation was better than the technical analysis, and that affinity schemes seem to be doing most of what is presented as eCoupons; many of the ideas are similar, just broader in scope. Avi Rubin wondered why eCoupons are needed when immediate discounts have all of the needed features listed; the only response was that businesses seem to want to continue to use what they "know." Max Tsvetovat suggested that the use of auctions and other variable or negotiated prices is common and handles most of these needs; the response was that they solve orthogonal problems.

General-purpose Digital Ticket Framework

Ko Fujimura and Yoshiaki Nakajima, NTT Information and Communication Systems Labs

Ko Fujimura presented a flexible digital ticket project whose main purpose is to develop a generic value-circulation medium that prevents double-spending. In this context, a ticket is a digital medium that guarantees certain rights to the owner of the ticket. Describing tickets this generally allows the tickets to contain many different values and types of values in a single ticket (or group thereof).

Fujimura claims that a general ticket framework will reduce implementation cost in many cases because a single design can be reused in many places. Because the tickets are general, they can be composed arbitrarily, allowing bundling and similar features. He claims that the creation of new businesses to run this framework, such as issuing/revocation services and deposit-box services, would be a further benefit.

A general-purpose digital ticket framework must meet most of the requirements of digital cash. Additional requirements are: (1) A ticket can control its anonymity, divisibility, and transferability depending on the application; (2) The individual specifications of a ticket need to be "machine understandable" to allow for the redemption of goods or services; (3) Ticket properties whose values change while the ticket circulates (e.g., payment or reservation status) must be updated securely; (4) A ticket that comprises more than one sub-ticket must be supported.

To implement such a framework, the authors created a Ticket Definition Language that allows for the specification of ticket properties. The tickets themselves are hypertext-based, allowing automation of the state-transitions and composability features. The tickets can also include dynamic information that is up-to-date when the ticket itself is used. Another feature (of less obvious utility) is that the tickets can contain very large data such as images and sounds.

The tickets themselves are inherently online (because of their hypertext basis and dynamic nature), but can also be circulated offline using smartcards. In either case, the system uses signed URIs to test the currency of the ticket. The meaning and constraints of the properties in the tickets are defined using the Resource Description Framework. Schemas for tickets can thus be controlled by the issuers of the tickets, and various restrictions can be contained in these schemas.
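The currency check on a signed URI can be sketched as follows. This is a hedged sketch: the paper does not specify the mechanism in detail, and an HMAC stands in here for the issuer's public-key signature purely to keep the example self-contained; all names are assumptions.

```python
import hmac
import hashlib

ISSUER_KEY = b"demo-issuer-key"  # stands in for the issuer's signing key

def sign_uri(uri: str) -> str:
    """Attach an authentication tag to a ticket URI.

    A real issuer would use a public-key signature so anyone can verify;
    HMAC is used here only to make the sketch runnable without extra
    libraries.
    """
    tag = hmac.new(ISSUER_KEY, uri.encode(), hashlib.sha256).hexdigest()
    return uri + "#sig=" + tag

def check_uri(signed: str) -> bool:
    """Verify that the URI has not been altered since the issuer signed it."""
    uri, _, tag = signed.partition("#sig=")
    expect = hmac.new(ISSUER_KEY, uri.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expect)

ticket = sign_uri("https://issuer.example/ticket/42")
assert check_uri(ticket)                                   # genuine ticket
assert not check_uri(ticket.replace("/42", "/43"))         # altered URI fails
```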

Fujimura outlined the ticket trust model. The issuer certificate, user certificate, and examiner certificate, which are required to issue, transfer, consume, or examine a ticket, are specified in the ticket itself using the Ticket Definition Language. Thus any ticket carrying a public key, such as a driver's license, can be used as a PK certificate if another ticket names it as a required certificate. In other words, any ticket can play a role in the PKI for other tickets.

They are drafting specifications for the implementation and intend to submit them to standards organizations. The goal of their project is to "Transform any Web terminal into a ticketing machine for any ticket in the world."

Paul Syverson spoke from the audience to indicate briefly that many related issues and solutions were addressed by Unlinkable Serial Transactions.

Towards a Framework for Handling Disputes in Payment Systems

N. Asokan, Els Van Herreweghen, Michael Steiner, IBM Research Laboratory

Els Van Herreweghen spoke about dispute handling in a digital marketplace. While many designs assume that some offline dispute-arbitration system exists, the designs do not include specifications for addressing this. Systems need to obtain evidence in order to resolve disputes, to show exactly what the evidence means, and to provide tools for the analysis of this evidence (inside and outside of arbitration). This evidence may be useful in situations other than litigation, such as local verification, friendly settlement, or showing a receipt to a third party. It should be possible to leverage the high-level interface of E-Commerce applications to address possible after-the-fact disputes. Van Herreweghen proceeded to give some clear examples of what sorts of disputes these might be.

She introduced a language for dispute claims in first-order logic with who, what, modifiers, and attributes. All possible disputes in a given digital market system should be describable in this type of language. By representing protocols as a sequence of states where players in the protocol cause transitions between these states, she showed that a properly produced transcript of the protocol would provide evidence for claims. A claim verifier can take such evidence to determine which states the protocol reached, and use knowledge of what those states "mean" in the protocol to determine whether or not the claim is valid.
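The state-based evidence check described above can be sketched simply. This is a hypothetical toy protocol, not the paper's iKP or SET formalization; the transitions, states, and claim strings are all invented for illustration.

```python
# Sketch of claim verification by transcript replay: determine the last
# protocol state the evidence proves was reached, then read off which
# claims that state supports. The protocol below is a toy example.
TRANSITIONS = {                       # (state, message) -> next state
    ("start", "order"): "ordered",
    ("ordered", "payment"): "paid",
    ("paid", "receipt"): "done",
}
STATE_CLAIMS = {                      # what reaching each state is evidence for
    "ordered": {"buyer ordered goods"},
    "paid": {"buyer ordered goods", "buyer paid"},
    "done": {"buyer ordered goods", "buyer paid", "seller issued receipt"},
}

def verify_claim(transcript, claim):
    """Replay the transcript; accept the claim only if a reached state supports it."""
    state = "start"
    for msg in transcript:
        nxt = TRANSITIONS.get((state, msg))
        if nxt is None:
            break                     # transcript diverges; no further states reached
        state = nxt
    return claim in STATE_CLAIMS.get(state, set())

assert verify_claim(["order", "payment"], "buyer paid")
assert not verify_claim(["order"], "buyer paid")   # evidence stops at "ordered"
```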

To provide proof that this method will work for automatic dispute handling, the authors are working toward building implementations for iKP and SET. They suggest a three-party dispute resolution system in which the verifier interacts with the claimant and the respondent, accepting evidence and making a decision based upon the claim and the evidence. Van Herreweghen admitted that different scenarios will require the use of different parties in the arbitration, and gave a brief overview of how to add dispute handling capabilities to a given payment system.

Max Tsvetovat wanted to know about the applicability of this work to leveled-commitment contract systems (contractual protocols); Van Herreweghen answered that if the claims and contracts are expressed correctly, these techniques should work. Paul Syverson asked where the formalism used originated; the answer was that the authors were inspired by generic payment system interfaces, but since no generic, payment-system-independent claim language existed, they developed a new language for payment disputes. Syverson also wanted to know about the overlap between this and other provable logic systems; the answer was that they want a usable system rather than an unused (or unusable) formalism. (According to a later discussion with the speaker, the focus of their work has been mainly on the generic claim language and framework for payment disputes. They were concerned primarily with the usability of their approach for existing, nonideal payment systems. They are involved in research into using other approaches and other logics for dispute resolution systems.)

Session: Short Talks/Works-in-Progress Reports (WIPs)

Summary by Matthew Hohlfeld

Robert Hettinga is working on a full flow diagram of commerce in "Cypherspace" (or encrypted Cyberspace). He claimed that one of the most important features is a peer-to-peer design, allowing anyone in the system to play the role of "merchant" or "purchaser." A white paper and the completed diagram will be forthcoming and available on his Web site.

John de Pre Gauntt presented a non-US-central point of view on electronic commerce. He pointed out that market penetration of mobile phones is much higher than that of personal computers in other parts of the world, and proceeded to show how they could be (and are being) used as a vehicle for electronic commerce.

Vinod Anupam suggested that even though there are many well-known attacks against the implementation of JavaScript in currently available commercial browsers, it is possible to fix this problem in coming versions. His group has been working on an implementation that uses a Safe Interpreter with controllable security policies by building on the source code provided for Mozilla. They are attempting to get this feature rolled into the official Mozilla 5.0 when it is released.

Otto Koppius presented the audience with a possible view of how multidimensional auctions could work. He noted that the auction itself will be dependent upon the evaluation function of the individual selling the item. Koppius is currently involved in examining what will occur with different scenarios that are made possible by multidimensional auctions.

Max Tsvetovat is looking at the issues involved in using agents to negotiate and execute contracts. The situations that his group is considering include multiple-supplier, single-customer arrangements. They are looking at a "one-shot, leveled-commitment" protocol to perform the negotiation, as well as other new topics.

Bob Carter described his company's efforts in deploying smartcards within a closed public key infrastructure, delivering the cards to a bank's account holders.
