[e2e] Are Packet Trains / Packet Bursts a Problem in TCP?

John Heffner jheffner at psc.edu
Sat Sep 30 11:23:41 PDT 2006


Fred Baker wrote:
> 
> On Sep 25, 2006, at 3:42 PM, Ritesh Kumar wrote:
> 
>> The paper presents many details but the gist is that when TCP is 
>> paced, it takes very little time for the bottleneck queue to build up 
>> from nothing to the full queue size.
> 
> Actually, I'm not sure that ACK clocking that worked perfectly (all 
> traffic evenly paced throughout the RTT) and pacing that worked 
> perfectly would behave much differently from each other in any given 
> RTT. The big issue with pacing in every RTT is that it is every RTT - 
> you never step back to being ACK clocked, and as a result you are not 
> responsive to the millisecond-to-millisecond variations inherent in 
> the network.
> 
> My thought is that there are a few cases where you would like to use a 
> pacing algorithm to deal with momentary events, like when some 
> implementations get their input blocked and so throw away ACKs for a 
> while, and then suddenly get an ACK that appears to permit them to 
> send a large volume all at once. Using some appropriate trigger, such 
> as "the current combination of effective window and available data 
> permits me to send more than N segments in a burst", I would hope it 
> would send whatever it chose to send at approximately cwnd/srtt.


So there are multiple definitions of the term "pacing". :)  I agree 
rate-based pacing is most likely not a good idea (for TCP -- some 
protocols are inherently rate-based).
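To make the trigger Fred describes concrete, here is a minimal sketch 
of the decision (the structure, field names, and burst threshold are 
all hypothetical, not taken from any real stack).  A sending rate of 
cwnd/srtt corresponds to a per-segment spacing of srtt/cwnd:

#include <stdio.h>

#define BURST_LIMIT 4                /* "N segments" -- an assumption */

struct conn {
    unsigned cwnd;                   /* congestion window, in segments */
    double   srtt_ms;                /* smoothed RTT, in milliseconds */
    unsigned sendable;               /* segments the effective window
                                        plus available data permit */
};

/* Returns 0 to send immediately, else the per-segment spacing in ms. */
static double pace_interval_ms(const struct conn *c)
{
    if (c->sendable <= BURST_LIMIT)
        return 0.0;                  /* small burst: just send it */
    return c->srtt_ms / c->cwnd;     /* ~cwnd/srtt sending rate */
}

int main(void)
{
    struct conn c = { .cwnd = 100, .srtt_ms = 100.0, .sendable = 40 };
    printf("spacing: %.2f ms/segment\n", pace_interval_ms(&c));
    return 0;
}

With cwnd = 100 segments and srtt = 100 ms, a 40-segment burst would go 
out at 1 ms per segment rather than back-to-back.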

To expand on Fred's thoughts, I believe there are significant benefits 
to "pacing" in the sense of introducing an additional clock source, 
other than ACKs, that lets you spread out bursts.  There are a number 
of cases where this can definitely help (a rough sketch of the 
mechanism follows the list):

1) Big ACKs, due to ACK thinning or lost ACKs

2) Compressed ACKs

3) Big writes or reads (i.e., big window updates) from inherently bursty 
applications.  An example would be a filesystem-limited transfer, where 
you frequently have to stall a few ms for disk seeks.  A CPU-bound 
application on a time-slicing system would be another example.

Less clear:
4) Slow start
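
In all of these cases the mechanism is the same: rather than handing 
the whole burst to the interface at once, arm a timer and release one 
segment per spacing interval.  A rough sketch of that extra clock 
source (names hypothetical; a real stack would transmit a segment from 
a timer callback instead of printing a schedule):

#include <stdio.h>

/* Spread nsegs segments one interval apart instead of sending them
   back-to-back.  Here we only log the departure schedule. */
static void send_burst_paced(int nsegs, double interval_ms)
{
    double t = 0.0;
    int i;

    for (i = 0; i < nsegs; i++) {
        printf("segment %2d departs at t = %5.2f ms\n", i, t);
        t += interval_ms;
    }
}

int main(void)
{
    /* e.g. a compressed ACK suddenly opens the window by 16 segments;
       with cwnd = 100 segments and srtt = 100 ms, the spacing from the
       earlier sketch is 1 ms per segment */
    send_burst_paced(16, 1.0);
    return 0;
}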

The current state of the art in TCP is either to just send the burst 
(particularly in the case of compressed ACKs), or to reduce cwnd to 
avoid sending the burst.  Sending the burst may overflow short queues, 
with a detrimental effect on performance, and/or create significant 
jitter.  Reducing cwnd has a detrimental effect on performance, 
particularly at large window sizes, where the impact of a single 
reduction due to a transient event can be felt for hundreds of 
round-trip times.  (Issues with the responsiveness of congestion 
control obviously come into play here as well.)
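
As a rough back-of-the-envelope (numbers picked purely for 
illustration): with standard additive increase of one segment per RTT, 
halving a cwnd of 1000 segments costs 500 RTTs to climb back, and on a 
100 ms path that is nearly a minute of reduced throughput from a single 
transient burst.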

Renewed interest in delay-based congestion control may provide 
additional incentive for reducing bursts.

The 2000 Aggarwal paper ("Understanding the Performance of TCP Pacing," 
Aggarwal, Savage, and Anderson, INFOCOM 2000) does raise some 
legitimate issues with using pacing.  I think these issues are 
solvable...

   -John


