

Experimental Evaluation

In this section, we evaluate the performance of PRESTO using our prototype and simulations. The testbed for our experiments comprises one Stargate proxy and twenty Telos Mote sensor nodes. One of the Telos motes is connected to a Stargate node running a sensor network emulator in Emstar [8]. This emulator enables us to introduce additional virtual sensor nodes into our large-scale experiments; the virtual nodes share a single Telos mote radio as the transceiver for sending and receiving messages. In addition to the testbed, we use numerical simulations in Matlab to evaluate the performance of the data processing algorithms in PRESTO.

Our experiments involve both replays of previously gathered sensor data and a live deployment. The first set of experiments is trace-driven and uses a seven-day temperature dataset from the James Reserve [22]. The first two days of this trace are used to train the model. In our experiments, sensors use the values from the remainder of the trace--which are stored in flash memory--as a substitute for live data gathering. This setup ensures repeatable experiments and enables comparison of results across experiments, which were conducted over a period of several weeks. We also experiment with a live, four-day outdoor deployment of PRESTO at UMass to demonstrate that our results are representative of the ``real world''.
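The training/replay split of the trace can be summarized with a short sketch. This is illustrative only: it assumes one reading per minute and a hypothetical trace file name, neither of which is specified in this section.

    import numpy as np

    # Hypothetical sampling rate; the section does not state one.
    SAMPLES_PER_DAY = 24 * 60  # one temperature reading per minute (assumed)

    # Hypothetical file: one temperature reading per line.
    trace = np.loadtxt("james_reserve_temps.txt")

    train = trace[:2 * SAMPLES_PER_DAY]   # first two days train the model
    replay = trace[2 * SAMPLES_PER_DAY:]  # remaining five days replayed from flash

    print(f"training samples: {len(train)}, replay samples: {len(replay)}")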

To evaluate the query processing performance of PRESTO, we generate queries as a Poisson arrival process. Each query requests the temperature value at a point in time chosen uniformly at random between the start of the experiment and the current time. The confidence interval requested by each query is drawn from a normal distribution.
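To make the query workload concrete, the following sketch shows one way to generate such a query stream. The numeric parameters (arrival rate, experiment duration, and the mean and standard deviation of the confidence-interval distribution) are hypothetical assumptions; this section does not fix their values.

    import itertools
    import random

    QUERY_RATE = 1.0 / 60.0      # assumed: one query per minute on average
    DURATION = 5 * 24 * 3600     # assumed: five-day replay period, in seconds
    CI_MEAN, CI_STD = 1.0, 0.25  # assumed confidence-interval distribution (degrees C)

    def generate_queries():
        """Yield (arrival_time, queried_time, confidence_interval) tuples."""
        now = 0.0
        while now < DURATION:
            # Poisson arrivals: exponential inter-arrival times.
            now += random.expovariate(QUERY_RATE)
            # Queried timestamp is uniform from the experiment start to now.
            queried_time = random.uniform(0.0, now)
            # Confidence interval drawn from a normal distribution,
            # clamped to be non-negative.
            ci = max(0.0, random.gauss(CI_MEAN, CI_STD))
            yield (now, queried_time, ci)

    for arrival, t, ci in itertools.islice(generate_queries(), 3):
        print(f"query at {arrival:.1f}s asks for time {t:.1f}s within +/-{ci:.2f} C")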


