Hi,

The following paper (http://www.cs.washington.edu/homes/tom/pubs/pacing.html) makes a case against TCP pacing, arguing that pacing packets within an RTT can have adverse effects when TCP tries to infer congestion using its congestion avoidance algorithms. The paper presents many details, but the gist is that when TCP is paced, it takes very little time for the bottleneck queue to build up from empty to the full queue size. So even though lower queueing is definitely an advantage of TCP pacing, perhaps it also calls for a redesign of the congestion control algorithms...?
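To make the "empty to full very quickly" point concrete, here is a toy fluid-model sketch (my own illustrative numbers, not taken from the paper): because paced flows spread their windows evenly over the RTT, the bottleneck queue stays empty until their aggregate rate crosses link capacity, and once it does, the backlog grows by the full excess every RTT, so the first loss-based signal arrives only a few RTTs before the buffer overflows.

# Toy fluid model of N paced TCP flows sharing one bottleneck.
# All numbers below are illustrative assumptions, not from the paper.
N = 10                 # paced flows
capacity = 1000        # packets the link can serve per RTT
buffer_size = 200      # bottleneck buffer, in packets
cwnd = [95] * N        # per-flow window, in packets per RTT

queue = 0
for rtt in range(1, 21):
    offered = sum(cwnd)                 # paced => spread evenly over the RTT
    excess = offered - capacity         # what the link cannot serve this RTT
    queue = max(0, queue + excess)      # backlog appears only past capacity
    print(f"RTT {rtt:2d}: offered={offered:4d}  queue={queue:4d}")
    if queue > buffer_size:
        print("buffer overflow -> every flow sees loss in the same RTT")
        break
    cwnd = [w + 1 for w in cwnd]        # congestion avoidance: +1 per RTT

In this toy run the queue sits at zero for several RTTs (no early congestion signal) and then goes from empty to overflow in about half a dozen RTTs, which is the behaviour the paper worries about.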
Ritesh

On 9/25/06, Fred Baker <fred@cisco.com> wrote:
On Sep 25, 2006, at 7:08 AM, Detlef Bosau wrote:

> Isn't it a more fundamental question, whether burstiness may cause
> grief in a significant number of scenarios, so that it would be
> useful to avoid burstiness at all?
I think there is a fair bit we can say from mathematics. For example, there is a fairly dramatic difference between the delays experienced in an M/M/1 and an M/D/1 scenario. Queuing delays result from packets sitting in queues, and variability in queue depth results in variability in delay. Increased burstiness increases average delay and average jitter.
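One way to see the size of that difference is the standard Pollaczek-Khinchine result: at the same utilization, the mean queueing delay of M/D/1 is half that of M/M/1. A quick back-of-envelope, with illustrative numbers of my own:

# Mean waiting time in queue (excluding service), Pollaczek-Khinchine:
#   M/M/1: Wq = rho / (mu * (1 - rho))
#   M/D/1: Wq = rho / (2 * mu * (1 - rho))   (deterministic service)
# Illustrative assumption: mu = 1 packet per unit time.
mu = 1.0
for rho in (0.5, 0.7, 0.9, 0.95):
    wq_mm1 = rho / (mu * (1 - rho))
    wq_md1 = rho / (2 * mu * (1 - rho))
    print(f"rho={rho:.2f}  Wq(M/M/1)={wq_mm1:6.2f}  Wq(M/D/1)={wq_md1:6.2f}")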
That said, I will agree with Craig that burstiness in moderation doesn't itself cause major problems in the network. Short sessions, which are as you say very common in web and mail applications, are inherently bursty, and it's hard to imagine them being otherwise given slow-start and the small amounts of data they move. Longer sessions also occur in web and mail transactions, and are common in p2p applications, which are now pretty prominent as a traffic source. But when TCP runs in a longer session, I think you will find that burstiness levels out: the duration of a burst is stretched by the queues at bottlenecks in the path, reducing the rate of the burst as the traffic crosses the network, and the acks come back at a rate less than or equal to the bottleneck rate. I would expect ack clocking to spread the traffic of a longer TCP session so that it is less bursty.
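A toy sketch of that smoothing effect (all numbers made up for illustration): an initial back-to-back burst is served out of the bottleneck one packet per tick, the acks therefore return no faster than the bottleneck rate, and the packets they clock out are spaced the same way rather than forming another burst.

# Toy model of ack clocking smoothing an initial burst (illustrative numbers).
service = 1          # bottleneck rate: 1 packet per tick
ack_delay = 5        # ticks from bottleneck departure to ack arrival at sender
W = 8                # window: initial burst of 8 back-to-back packets

burst_sends = [0] * W                         # all sent at t=0 (maximally bursty)
departures = [i * service for i in range(W)]  # the queue stretches the burst out
acks = [d + ack_delay for d in departures]    # acks return at the bottleneck rate
next_sends = acks                             # one new packet sent per ack

print("initial sends (burst):  ", burst_sends)
print("bottleneck departures:  ", departures)
print("ack-clocked next sends: ", next_sends)  # spaced one tick apart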
Pacing attacks burstiness. AQM actually doesn't; it attacks average queue depth.

There are places where improving TCP burstiness can be of value, such as the cases where (usually Linux) TCPs decide to send their entire next window in a short period of time; it would be nice if they could be convinced to do so at a rate that doesn't exceed cwnd/srtt.
Beyond handling extreme cases like that, I'm not convinced it's worth the effort - it sounds like a lot of mechanism solving a problem that I'm not sure actually hurts us all that much.

I'm much more interested in TCPs that will handle high loss rates and variable delay (WiFi/WiMax) and long delays over a wide range of speeds, consistently delivering a good approximation of the available bandwidth (there's still a lot of dial in the world, and there are many places that sport fiber end to end) to applications that attempt to use it.