Next: Conclusions Up: The Phoenix Recovery System: Previous: Simulation Results


System Design Issues

The previous section gives an idea of how much replication and storage Phoenix requires. We close by briefly discussing a number of design issues that an implementation of Phoenix must also address.

The heuristics used for core identification need an index that maps hosts to the attributes they possess. Phoenix therefore has to maintain this index, which we intend to implement using a distributed hash table (DHT). Once Phoenix has identified a core, it stores copies of the data on the hosts in that core. To protect the integrity of the data, we plan to use an encryption mechanism: data is encrypted before being released to the hosts of a core. As observed in the previous section, it is also necessary to ensure fairness of storage allocation across users; to this end, our heuristic for finding cores will have to be modified to take storage load into account. Finally, we need to model the set of vulnerabilities more carefully and allow attributes and values to be added and removed dynamically.
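The combination of attribute diversity and storage load mentioned above could be sketched as a greedy heuristic along the following lines. This is only an illustration, not the paper's actual algorithm: the host/attribute representation, the `max_load` threshold, and the load-ordered greedy pass are all assumptions made for the example.

```python
def find_core(hosts, my_attrs, max_load=0.9):
    """Hypothetical greedy core-finding sketch.

    `hosts` maps a host id to {"attrs": {attribute: value}, "load": float}.
    For each of our own attribute/value pairs, we try to pick a host whose
    value for that attribute differs, so a catastrophe exploiting any one
    of our attributes leaves at least one core member standing.  Hosts are
    visited in order of increasing storage load, and heavily loaded hosts
    are skipped, as a crude stand-in for fairness of storage allocation.
    """
    uncovered = set(my_attrs.items())   # attribute/value pairs still to diversify
    core = []
    for hid, info in sorted(hosts.items(), key=lambda h: h[1]["load"]):
        if info["load"] > max_load:
            continue                    # skip hosts already storing too much
        # This host "covers" one of our pairs if it runs a different value
        # for that attribute than we do.
        covers = {(a, v) for a, v in uncovered if info["attrs"].get(a) != v}
        if covers:
            core.append(hid)
            uncovered -= covers
        if not uncovered:
            break
    return core
```

A real implementation would draw the host/attribute mapping from the DHT-maintained index rather than an in-memory dictionary.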

In the wake of an Internet catastrophe, Phoenix itself must continue to function satisfactorily. Since we intend to use a DHT as a platform, the DHT will need to survive a scenario in which a large number of hosts suddenly leave the system [3]. Moreover, after a catastrophe many users may try to recover their files at the same time, potentially overloading the system; since recovery time is not critical, a distributed scheduler using randomized exponential wait times can spread out recovery demand.
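The randomized exponential wait idea can be sketched as follows. The base interval, the cap, and the uniform draw over the interval are assumed parameters chosen for the example, not values taken from Phoenix.

```python
import random

def recovery_delay(attempt, base=60.0, cap=3600.0, rng=random):
    """Sketch of a randomized exponential wait for recovery requests.

    Each failed or deferred recovery attempt doubles the mean wait
    (capped at `cap` seconds), and the actual wait is drawn uniformly
    from [0, mean] so that clients that start recovery at the same
    moment quickly desynchronize instead of overloading the system.
    """
    mean = min(cap, base * (2 ** attempt))
    return rng.uniform(0.0, mean)
```

A client would sleep for `recovery_delay(attempt)` seconds before retrying, incrementing `attempt` each time the system appears overloaded.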

We are currently working on addressing these issues in a prototype design and implementation of Phoenix.



2003-06-20