We use several tools to automatically maintain the traffic repository; the details of these tools are described later in this section. New trace data is collected from the sampling points to the repository during the night, and a web page for each new trace is created automatically.
At a sampling node, a script invoked from cron runs tcpdump and compresses the resulting trace. The compressed raw trace file is placed under a designated directory.
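The collection step can be sketched as a small shell script driven by cron. The interface name, paths, capture length, and schedule below are illustrative assumptions, not details from the paper; the tcpdump options shown require a reasonably recent tcpdump.

```shell
#!/bin/sh
# Sketch of a cron-driven collection script; the interface name, paths,
# and schedule are illustrative assumptions, not from the paper.
# Example crontab entry (capture starting at 2:00 every night):
#   0 2 * * * /usr/local/etc/collect-trace.sh

IFACE=${IFACE:-eth0}          # interface to sample
DURATION=${DURATION:-900}     # capture length in seconds
OUTDIR=${OUTDIR:-/var/traces} # where the raw trace is placed
TRACE="$OUTDIR/$(date +%Y%m%d).dump"

# DRYRUN=1 (default here) prints the commands instead of executing them,
# since tcpdump needs root; set DRYRUN=0 for a real capture.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# Capture packet headers only (-s 68) for DURATION seconds
# (-G rotates the file after DURATION seconds, -W 1 exits after the
# first file), then compress the raw trace in place.
run tcpdump -i "$IFACE" -s 68 -w "$TRACE" -G "$DURATION" -W 1
run gzip "$TRACE"
```

Capturing only packet headers keeps the trace size manageable and already discards most payload before the anonymization step at the repository.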
At the repository node, another script, also invoked from cron, fetches the raw trace and processes it. The script copies the compressed raw trace from the sampling point over a secure session using scp. It then uncompresses the trace and invokes tcpdpriv to strip privacy-sensitive information from the trace. The trace is fed into tcpdstat to obtain a summary of its contents. The script creates a web page for the trace and updates the index page to include the newly created page. Finally, the script re-compresses the trace data and places it in the directory served by the ftp service.
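The repository-side pipeline can be sketched in the same style. The host name, directories, the tcpdpriv scrambling level, and the make-page.sh helper are all hypothetical placeholders for illustration; consult the tcpdpriv and tcpdstat manual pages for actual option semantics.

```shell
#!/bin/sh
# Sketch of the repository-side processing script; host names, paths,
# option values, and the make-page.sh helper are illustrative assumptions.
# Example crontab entry:  0 4 * * * /usr/local/etc/process-trace.sh

SRC=${SRC:-sampler.example.org:/var/traces}  # sampling point (hypothetical)
WORK=${WORK:-/repo/work}                     # working directory
FTPDIR=${FTPDIR:-/repo/ftp}                  # directory served by ftp
NAME=$(date +%Y%m%d)

# DRYRUN=1 (default here) prints the commands instead of executing them.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# 1. Fetch the compressed raw trace over a secure session.
run scp "$SRC/$NAME.dump.gz" "$WORK/"
# 2. Uncompress, then scramble addresses with tcpdpriv
#    (-A50 is a commonly used scrambling level; see its man page).
run gunzip "$WORK/$NAME.dump.gz"
run tcpdpriv -A50 -r "$WORK/$NAME.dump" -w "$WORK/$NAME.anon"
# 3. Summarize the anonymized trace with tcpdstat.
run sh -c "tcpdstat $WORK/$NAME.anon > $WORK/$NAME.summary"
# 4. Build the per-trace web page and update the index
#    (make-page.sh is a hypothetical helper, not a real tool).
run make-page.sh "$WORK/$NAME.summary"
# 5. Re-compress the anonymized trace and publish it via ftp.
run gzip "$WORK/$NAME.anon"
run mv "$WORK/$NAME.anon.gz" "$FTPDIR/"
```

Note that only the anonymized trace ever reaches the public ftp directory; the un-scrambled raw trace stays in the working area and can be deleted once processing succeeds.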