
Analysis of Overheads

As the performance results show, both passive and active measurement still fall about 10-20% short of the optimal performance. Three key factors contribute to this gap: (1) the accuracy of the measurement techniques and, correspondingly, the accuracy of the provider choices, (2) the overhead of performing measurement, and (3) software overhead, specifically the cost of making frequent updates to the NAT table and applying NAT rules to a significant fraction of packets. In this section, we analyze the contribution of each of these factors to the eventual performance of the different schemes.


Table 2: Analysis of performance overheads. Here "penalty" is defined as the value of $\mathcal{R}-1$ in each case.

                                             Passive   Active        Active
                                                       (FreqCount)   (SlidingWin)
  Total performance penalty                    18%        14%           17%
  Penalty from inaccurate estimation only      16%        12%           14%
  Penalty from measurement and NAT only         2%         2%            3%


Our approach to quantifying the overhead of our implementation is to compare the performance derived from the choices made by the route control proxy with the performance obtained when the best ISP choice is made offline for each connection. Recall that in order to compute the performance metric $\mathcal{R}$, we evaluated the response time of each ISP for every transfer offline, so that the best ISP link for each connection was known independent of the route control mechanisms (these are the terms in the denominator of Equation 1). If we combine these offline response time values with the decisions actually made by the proxy, we can estimate the performance penalty due to incorrect choices alone, independent of the software overheads (i.e., factors (2) and (3) above). The difference between the resulting performance metric $\mathcal{R}$ and 1 gives the performance penalty excluding the overheads of the implementation.
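To make this computation concrete, the following is a minimal sketch of the offline analysis. The data layout and names (offline_rt, proxy_choice) are illustrative, and the sketch assumes $\mathcal{R}$ is the ratio of the aggregate response time under the proxy's choices to the aggregate response time under the offline-best choices, per Equation 1; the exact aggregation in the paper may differ.

  # Illustrative sketch (hypothetical names): penalty from ISP selection alone.
  # offline_rt[c][isp] holds the offline-measured response time of transfer c
  # over each candidate ISP link; proxy_choice[c] is the link the route
  # control proxy actually selected for that transfer.

  def selection_penalty(offline_rt, proxy_choice):
      chosen_total = 0.0  # aggregate response time under the proxy's choices
      best_total = 0.0    # aggregate response time under offline-best choices
      for conn, rts in offline_rt.items():
          chosen_total += rts[proxy_choice[conn]]
          best_total += min(rts.values())
      R = chosen_total / best_total  # performance metric R (cf. Equation 1)
      return R - 1.0                 # "penalty", as defined in Table 2

  # Example: the proxy picks isp1 for a transfer for which isp2 was best.
  offline_rt = {"conn0": {"isp1": 1.5, "isp2": 1.0, "isp3": 2.0}}
  proxy_choice = {"conn0": "isp1"}
  print(selection_penalty(offline_rt, proxy_choice))  # -> 0.5, a 50% penalty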

The penalties from this analysis for the three proposed schemes are shown in row 2 of Table 2. The client arrival rate is 13.3 requests/s and the sampling interval is 30 s. The numbers in row 1 of the table show the actual performance penalties suffered by the schemes in our implementation, with all overheads taken into account (from Figures 11(a) and (b)). Notice that a large portion of the overall penalty comes from inaccuracies in measurement and ISP selection: rows 1 and 2 are nearly identical. The measurement and software overheads themselves result in a performance penalty of only 2-3% (the difference between rows 1 and 2, shown in row 3).
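Row 3 is simply the difference between the first two rows. For the passive scheme, for example (using ad hoc subscripts to distinguish the metric computed with and without the implementation overheads):

$(\mathcal{R}_{\mathrm{impl}} - 1) - (\mathcal{R}_{\mathrm{offline}} - 1) = 18\% - 16\% = 2\%$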

