<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=Content-Type content="text/html; charset=iso-8859-1">
<META content="MSHTML 6.00.2900.2963" name=GENERATOR>
<STYLE></STYLE>
</HEAD>
<BODY bgColor=#ffffff background=""><FONT face=Arial
size=2>Hi,<BR><BR> We recently did a preliminary study on the effect
of pacing when used with Reno or with some loss-based high-speed TCPs over
DropTail routers. In our study we partially revisited the Infocom 2000 paper against
pacing by Aggarwal et al (<A
href="http://www.cs.washington.edu/homes/tom/pubs/pacing.html">http://www.cs.washington.edu/homes/tom/pubs/pacing.html</A>)
that Ritesh pointed to. It seems to us that the performance of pacing is quite a bit more
complicated.<BR><BR> We are not sure whether our understanding is correct
or helpful, but here is a summary with references to some other work; we
have a technical note at <A
href="http://www.cs.caltech.edu/~weixl/pacing/sync.pdf">www.cs.caltech.edu/~weixl/pacing/sync.pdf</A>
with more details. <BR><BR> Our understanding is that
there are at least two levels of burstiness in TCP (or, equivalently, burstiness on
two timescales). The performance of paced TCP that we observed is the combined
effect of these two levels of burstiness.<BR><BR> The first level is the
micro-burst (borrowing the term from Allman & Blanton's CCR 2005 paper, <A
href="http://www.icir.org/mallman/papers/burst-mitigate.ps">http://www.icir.org/mallman/papers/burst-mitigate.ps</A>).
Such burstiness takes the form of packet trains sent back-to-back by a
sender that is usually much faster than the bottleneck router. Such bursts
usually happen during slow-start, or in the presence of ack compression or
stretch acks. As Fred pointed out, micro-bursts lead to higher queue lengths and
jitter, but these level out in long flows through ack-clocking and cross traffic.
Micro-bursts can also be mitigated by pacing and/or the other mechanisms discussed in
Allman's paper above. For micro-bursts, we are quite sure that
pacing helps reduce queueing delay and jitter, supported by theory such as
Fred's analysis and as shown in Craig's 1999 paper.<BR><BR>
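As a toy sketch of the difference pacing makes at this level (illustrative only, not taken from any of the papers above; the line rate and window values are made up): instead of letting a window of cwnd packets leave back-to-back at the sender's line rate, a pacer spreads them evenly over one RTT, one packet every RTT/cwnd seconds:

```python
def send_times_burst(cwnd, line_rate_pps=10000.0, start=0.0):
    """Back-to-back micro-burst: all cwnd packets leave at the
    sender's line rate (here a hypothetical 10,000 packets/sec)."""
    return [start + i / line_rate_pps for i in range(cwnd)]

def send_times_paced(cwnd, rtt, start=0.0):
    """Paced: the cwnd packets are spread evenly over one RTT,
    i.e. one packet every rtt/cwnd seconds (rate = cwnd/RTT)."""
    gap = rtt / cwnd
    return [start + i * gap for i in range(cwnd)]

# Example: cwnd = 10 packets, RTT = 100 ms.
burst = send_times_burst(10)           # whole window leaves in under 1 ms
paced = send_times_paced(10, rtt=0.1)  # last packet leaves at 90 ms
```

The burst arrives at the bottleneck at the sender's line rate and must be absorbed by the queue, while the paced schedule never exceeds cwnd/RTT; that is why pacing reduces queue build-up and jitter for micro-bursts.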
The second level is the sub-RTT burst (borrowing the term from Jiang & Dovrolis's
SIGMETRICS 2005 paper, <A
href="http://www-static.cc.gatech.edu/~dovrolis/Papers/f11-dovrolis.pdf">http://www-static.cc.gatech.edu/~dovrolis/Papers/f11-dovrolis.pdf</A>).
Such bursts are packets sent at a rate no greater than the bottleneck capacity
(already smoothed by ack-clocking) but still sent within a small portion of the RTT,
at a rate higher than the flow's fair-share rate (cwnd/RTT). This happens when multiple
flows share the same bottleneck. Jiang & Dovrolis's paper showed with packet
traces that such bursts exist in Internet traffic (by showing the on-off
patterns). Again, pacing attacks such bursts, as it paces packets at a rate
equal to the flow's fair share. <BR> On sub-RTT
burstiness, we are quite sure that pacing helps improve the short-term
fairness among multiple flows, as it ensures that flows with higher rates
see higher packet loss rates. One observation we had is that pacing can greatly
improve the fairness of HS-TCP and Scalable TCP, two TCPs that have
been reported to be unfair in many studies.<BR> However, it is not clear whether
pacing really helps in terms of the flows' aggregate throughput. If sub-RTT
burstiness exists in a flow's packet transmission process, the flow is less
likely to see a packet loss than a flow whose packets are
transmitted evenly into the network (the synchronization effect of pacing). This
leads to several observations that also appear in Aggarwal's Infocom 2000 paper:
<BR> 1. Paced flows might have smaller aggregate throughput, as the
flows are likely to synchronize their losses. (With Reno, such aggregate throughput
loss is bounded by 25%: when synchronized flows halve their windows together, the
aggregate window oscillates between W/2 and W, averaging 75% of the peak. The loss
is much smaller with high-speed TCPs, which back off by less.)<BR>
2. Paced flows usually lose to bursty flows in competition, since the paced flows
are more likely to detect a loss event, their packets being evenly distributed
in time.<BR><BR> So, it seems to us that there is a
lot to understand in the future. The performance of paced versus bursty
TCP seems to depend on several questions:<BR>1. Is aggregate throughput the
most important metric?<BR>2. What is the packet loss pattern in the Internet?<BR>3.
How does TCP react to loss? (Besides Reno, there are many new algorithms.)<BR>4.
How do we implement and deploy pacing? (Are paced flows going to compete with
bursty flows? We could also tune the paced flows to let them compete... Are we
using AQM to generate less bursty packet losses?
etc.)<BR><BR>-David<BR>---------------------------------------------------------<BR>Xiaoliang
(David) Wei<BR>http://davidwei.org Graduate Student, Netlab,
Caltech<BR>======================================<BR><BR>----- Original Message
----- <BR>From: Ritesh Kumar <BR>To: Fred Baker <BR>Cc: Craig Partridge ;
end2end-interest@postel.org <BR>Sent: Monday, September 25, 2006 3:42
PM<BR>Subject: Re: [e2e] Are Packet Trains / Packet Bursts a Problem in
TCP?<BR><BR><BR>Hi,<BR> The following paper
(http://www.cs.washington.edu/homes/tom/pubs/pacing.html) makes a case against
TCP Pacing saying that pacing packets in a given RTT can have adverse effects
when TCP tries to infer congestion using its congestion avoidance algorithms.
The paper presents many details but the gist is that when TCP is paced, it takes
very little time for the bottleneck queue to build up from nothing to the full
queue size. So even though less queueing is definitely an advantage of TCP
pacing, it probably also calls for a redesign of the congestion control
algorithms...? <BR><BR>Ritesh<BR><BR><BR>On 9/25/06, Fred Baker
<fred@cisco.com> wrote:<BR><BR>On Sep 25, 2006, at 7:08 AM, Detlef Bosau
wrote:<BR><BR>> Isn't it a more fundamental question, whether burstiness may
cause<BR>> grief in a significant number of scenarios, so that it would
be<BR>> useful to avoid burstiness at all? <BR><BR>I think there is a fair
bit we can say from mathematics. For example,<BR>there is a fairly dramatic
difference between the delays experienced<BR>in an M/M/1 and an M/D/1 scenario
(at the same utilization, the mean queueing delay of M/D/1 is half that of
M/M/1). Queuing delays result from packets <BR>sitting in queues, and variability in
queue depth results in<BR>variability in delay. Increased burstiness increases
average delay<BR>and average jitter.<BR><BR>That said, I will agree with Craig
that burstiness in moderation <BR>doesn't itself cause major problems in the
network. Short sessions,<BR>which are as you say very common in web and mail
applications, are<BR>inherently bursty, and it's hard to imagine them being
otherwise when<BR>slow-start combined with the fact that they are moving small
amounts <BR>of data are brought into consideration. Longer sessions also occur
in<BR>web and mail transactions, and are common in p2p applications,
which<BR>are now pretty prominent as a traffic source. But when TCP runs in
a<BR>longer session, I think you will find that burstiness levels out, as<BR>the
duration of a burst is stretched by the queues of bottlenecks in<BR>the path,
resulting in a reduction of the rate of the burst as<BR>traffic crosses the
network, and the Acks come back at a rate less <BR>than or equal to the
bottleneck rate. I would expect to see ack<BR>clocking spread the traffic of a
longer TCP session so that it is<BR>less bursty.<BR><BR>Pacing attacks
burstiness. AQM actually doesn't; it attacks average <BR>queue
depth.<BR><BR>There are places where improving TCP burstiness can be of value,
such<BR>as in the cases where (usually Linux) TCPs decide to send
their<BR>entire next window in a short period of time; it would be nice if they
<BR>could be convinced to do so at a rate that doesn't exceed
cwnd/srtt.<BR>Beyond handling extreme cases like that, I'm not convinced it's
worth<BR>the effort - it sounds like a lot of mechanism solving a problem
that<BR>I'm not sure actually hurts us all that much.<BR><BR>I'm much more
interested in TCPs that will handle high loss rates and<BR>variable delay
(WiFi/WiMax) and long delays over a wide range of<BR>speeds, consistently
delivering a good approximation of the available <BR>bandwidth (there's still a
lot of dial in the world, and there are<BR>many places that sport fiber end to
end) to applications that attempt<BR>to use it.</FONT></BODY></HTML>