
Background: Existing Algorithms

Congestion control mechanisms in existing transmission protocols are composed of a congestion signal and a reaction policy. The congestion control algorithms in popular variants of TCP (Reno, NewReno, Tahoe, SACK) use packet loss as a congestion signal. In steady state, the reaction policy uses additive increase and multiplicative decrease (AIMD) in which the sending rate is controlled by a congestion window that is multiplicatively decreased by a factor of two upon a packet drop and is increased by one per window of data acknowledged. The AIMD framework is believed to be fundamental to the robustness of the Internet [12,30].
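
For concreteness, the steady-state AIMD reaction policy can be sketched as follows (an illustrative Python fragment, not drawn from any particular TCP implementation; the class name is ours, the window is counted in packets, and slow start, retransmission, and other mechanisms are omitted):

    class AIMDWindow:
        def __init__(self, initial_cwnd=1.0):
            self.cwnd = initial_cwnd  # congestion window, in packets

        def on_ack(self):
            # Additive increase: +1 packet per window of data acknowledged,
            # i.e., +1/cwnd per acknowledged packet.
            self.cwnd += 1.0 / self.cwnd

        def on_loss(self):
            # Multiplicative decrease: halve the window on a packet drop.
            self.cwnd = max(1.0, self.cwnd / 2.0)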

However, with respect to our goal of minimizing interference, this congestion signal, a packet loss, arrives too late to avoid damaging other flows. In particular, overflowing a buffer (or filling a RED router enough to cause it to start dropping packets) may trigger losses in other flows, forcing them to back off multiplicatively and lose throughput.

In order to detect incipient congestion due to interference, we monitor the round-trip delays of packets and use increasing round-trip delays as a signal of congestion. In this respect, we draw inspiration from TCP Vegas [7], a protocol that differs from TCP Reno in its congestion avoidance phase. By monitoring round-trip delays, each Vegas flow tries to keep between $\alpha$ (typically 1) and $\beta$ (typically 3) packets buffered at the bottleneck router. If fewer than $\alpha$ packets are queued, Vegas increases the window by one per window of data acknowledged. If more than $\beta$ packets are queued, the algorithm decreases the window by one per window of data acknowledged. Vegas adjusts the window $W$ once every round as follows ($minRTT$ is the minimum of all measured round-trip delays and $observedRTT$ is the round-trip delay experienced by a distinguished packet in the previous round):


    $E \leftarrow \frac{W}{minRTT}$            // Expected throughput
    $A \leftarrow \frac{W}{observedRTT}$       // Actual throughput
    $\textit{Diff} \leftarrow (E - A) \cdot minRTT$

    if ($\textit{Diff} < \alpha$)
        $W \leftarrow W + 1$
    else if ($\textit{Diff} > \beta$)
        $W \leftarrow W - 1$
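
The same per-round update can be written concretely as follows (a minimal Python sketch of the pseudocode above; the function name and defaults are ours, the RTT measurements are assumed to be in consistent units, and the window $W$ is in packets):

    def vegas_update(W, min_rtt, observed_rtt, alpha=1, beta=3):
        expected = W / min_rtt                 # E: expected throughput
        actual = W / observed_rtt              # A: actual throughput
        diff = (expected - actual) * min_rtt   # estimate of packets queued at the bottleneck
        if diff < alpha:
            return W + 1    # too little queued: linear increase
        elif diff > beta:
            return W - 1    # too much queued: linear decrease
        return W            # within [alpha, beta]: leave the window unchanged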

Bounding the difference between the actual and expected throughput translates to maintaining between $\alpha$ and $\beta$ packets in the bottleneck router: expanding the definitions gives $\textit{Diff} = A \cdot (observedRTT - minRTT)$, i.e., the actual throughput multiplied by the estimated queueing delay, which approximates the number of the flow's packets sitting in the bottleneck queue. Although Vegas seems a promising candidate protocol for background flows, it has some drawbacks: (i) Vegas is designed to compete for throughput approximately fairly with Reno, (ii) Vegas backs off when the number of its own queued packets increases, but it does not necessarily back off when the number of packets enqueued by other flows increases, and (iii) each Vegas flow tries to keep 1 to 3 packets in the bottleneck queue, so a collection of background flows could cause significant interference.

Note that even setting $\alpha$ and $\beta$ to very small values does not prevent Vegas from interfering with cross traffic. The linear decrease on the ``$\textit{Diff} > \beta$'' trigger is not responsive enough to keep from interfering with other flows: for example, shedding half of a $W$-packet window takes on the order of $W/2$ rounds when the window shrinks by one packet per round, versus a single round with a multiplicative decrease. We confirm this intuition using simulations and Internet experiments, and it also follows as a conclusion from the theoretical analysis.


