[e2e] Are Packet Trains / Packet Bursts a Problem in TCP?

Xiaoliang (David) Wei weixl at caltech.edu
Mon Sep 25 17:02:53 PDT 2006


Hi,

    We recently did a preliminary study on the effect of pacing when used with Reno or some loss-based high-speed TCPs over DropTail routers. In this study, we partially revisited the Infocom00 paper against pacing by Aggarwal et al (http://www.cs.washington.edu/homes/tom/pubs/pacing.html) that Ritesh pointed to. It seems to us that the performance of pacing is considerably more complicated.

    I am not sure whether our understanding is correct or helpful, but here is a summary with references to some other works; we have a technical note at www.cs.caltech.edu/~weixl/pacing/sync.pdf with more details.

     Our understanding is that there are at least two levels of burstiness in TCP (on two different timescales). The performance of paced TCP that we observed is the combined effect of these two levels of burstiness.

     The first level is the micro-burst (I borrow the term from Allman & Blanton's CCR05 paper http://www.icir.org/mallman/papers/burst-mitigate.ps). This burstiness takes the form of packet trains sent back-to-back by the sender, which is usually much faster than the bottleneck router. Such bursts usually happen during slow-start, or in the presence of ack compression/stretched acks. And as Fred pointed out, micro-bursts lead to higher queue lengths and jitter, but they level out in long flows through ack-clocking and cross traffic. Micro-bursts can also be mitigated by pacing and/or the other mechanisms discussed in Allman's paper above. When talking about micro-bursts, we are quite sure that pacing helps in reducing queueing delay and jitter, as shown by analyses such as Fred's and by Craig's 1999 paper.
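
     To make the micro-burst effect concrete, here is a rough sketch (in Python, with made-up window and link parameters, not taken from our study) of the peak bottleneck queue seen when a window of packets arrives back-to-back versus evenly paced over one RTT:

# Illustrative sketch only: peak bottleneck queue during one RTT when a
# window of packets arrives back-to-back vs. paced over the RTT.
# All parameter values below are made up for illustration.

def peak_queue(window_pkts, bottleneck_pkts_per_rtt, burst=True):
    """Return the peak queue (in packets) at the bottleneck during one RTT."""
    if burst:
        # Back-to-back arrival: the whole window lands before the link
        # can drain more than a negligible amount.
        return max(0, window_pkts - 1)
    # Paced arrival at window/RTT: the queue only grows if the sending
    # rate actually exceeds the bottleneck rate.
    return max(0, window_pkts - bottleneck_pkts_per_rtt)

if __name__ == "__main__":
    W, C = 100, 120          # window and bottleneck capacity, packets per RTT
    print("bursty peak queue:", peak_queue(W, C, burst=True))   # ~99 packets
    print("paced  peak queue:", peak_queue(W, C, burst=False))  # 0 packets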

     The second level is the sub-RTT burst (I borrow the term from Jiang & Dovrolis's SIGMETRICS05 paper http://www-static.cc.gatech.edu/~dovrolis/Papers/f11-dovrolis.pdf). Such bursts are packets sent at a rate no greater than the bottleneck capacity (already leveled by ack-clocking), but still sent within a small portion of the RTT at a rate higher than the flow's fair-share rate (cwnd/RTT). This happens when multiple flows share the same bottleneck. Jiang & Dovrolis showed with packet traces that such bursts exist in Internet traffic (by exhibiting on-off transmission patterns). Again, pacing attacks such bursts, since it paces packets at a rate equal to the flow's fair share.
    On sub-RTT burstiness, we are quite sure that pacing can help improve the short-term fairness among multiple flows, since it ensures that flows with higher rates see higher packet loss rates. One observation we had is that pacing can greatly improve the fairness of HS-TCP and Scalable TCP, the two TCPs that have been reported to be unfair in many studies.
    However, it is not clear whether pacing really helps in terms of the flows' aggregate throughput. If sub-RTT burstiness exists in a flow's packet transmission process, that flow is less likely to see a packet loss than a flow whose packets are evenly spread into the network (the synchronization effect of pacing). That leads to several observations that also appear in Aggarwal's Infocom00 paper (see the sketch after the list):
    1. Paced flows might have smaller aggregate throughput, as the flows are likely to synchronize (but such aggregate throughput loss is bounded by 25% with Reno, and is much smaller with high-speed TCPs).
    2. Paced flows usually lose to bursty flows in competition, since the paced flows are more likely to detect a loss event because their packets are evenly distributed in time.
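
    A quick back-of-the-envelope check of the 25% bound (my own rough sketch, with made-up numbers): if fully synchronized Reno flows all halve cwnd at the same time, each window oscillates linearly between W/2 and W, so the time-average window is (W/2 + W)/2 = 0.75*W and at most 25% of the bottleneck capacity goes unused:

# Rough check of the "bounded by 25%" claim for fully synchronized Reno
# flows: average a linear sawtooth from W/2 up to W and compare it to W.

def reno_sawtooth_utilization(w_max=100.0, steps=1000):
    """Average window over one additive-increase cycle, as a fraction of w_max."""
    total = 0.0
    for i in range(steps):
        # Window grows linearly from w_max/2 back up to w_max.
        total += w_max / 2 + (w_max / 2) * i / (steps - 1)
    return total / steps / w_max

if __name__ == "__main__":
    print("synchronized Reno utilization ~", round(reno_sawtooth_utilization(), 3))  # ~0.75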

     So, it seems to us that there is a lot to understand in the future. The performance of paced TCP versus bursty TCP seems to depend on several questions:
1. Is aggregate throughput the most important metric?
2. What is the packet loss pattern in the Internet?
3. How does TCP react to loss? (Besides Reno, there are many new algorithms.)
4. How do we implement and deploy pacing? (Are paced flows going to compete with bursty flows? We could also tune the paced flows to make them compete... Are we using AQM to generate less bursty packet loss? etc.)

-David
---------------------------------------------------------
Xiaoliang (David) Wei
http://davidwei.org    Graduate Student, Netlab, Caltech
======================================

----- Original Message ----- 
From: Ritesh Kumar 
To: Fred Baker 
Cc: Craig Partridge ; end2end-interest at postel.org 
Sent: Monday, September 25, 2006 3:42 PM
Subject: Re: [e2e] Are Packet Trains / Packet Bursts a Problem in TCP?


Hi,
    The following paper (http://www.cs.washington.edu/homes/tom/pubs/pacing.html) makes a case against TCP pacing, arguing that pacing packets within a given RTT can have adverse effects when TCP tries to infer congestion using its congestion avoidance algorithms. The paper presents many details, but the gist is that when TCP is paced, it takes very little time for the bottleneck queue to build up from nothing to the full queue size. So even though less queueing is definitely an advantage of TCP pacing, it probably also calls for a redesign of the congestion control algorithms...?
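
    To illustrate that intuition with made-up numbers (my own sketch, not from the paper): with paced flows the queue stays near empty until the aggregate sending rate crosses the link rate; after that the queue grows at the rate difference and can go from empty to overflow within roughly one RTT, so many flows observe the loss event at nearly the same moment:

# Illustration only (made-up numbers): once the aggregate paced rate
# exceeds the bottleneck rate, an empty queue fills at (aggregate - link)
# and overflows quickly, synchronizing the loss event across flows.

def time_to_fill(buffer_pkts, aggregate_rate, link_rate):
    """Seconds for an empty queue to overflow once aggregate_rate > link_rate."""
    assert aggregate_rate > link_rate
    return buffer_pkts / (aggregate_rate - link_rate)

if __name__ == "__main__":
    B = 250                    # buffer size, packets
    C = 10000.0                # link rate, packets/s
    agg = 12500.0              # aggregate paced sending rate, packets/s
    # With an RTT of ~100 ms, the buffer fills in about one RTT here.
    print("queue fills in %.2f s" % time_to_fill(B, agg, C))   # 0.10 s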

Ritesh


On 9/25/06, Fred Baker <fred at cisco.com> wrote:

On Sep 25, 2006, at 7:08 AM, Detlef Bosau wrote:

> Isn't it a more fundamental question whether burstiness may cause
> grief in a significant number of scenarios, so that it would be
> useful to avoid burstiness at all?

I think there is a fair bit we can say from mathematics. For example,
there is a fairly dramatic difference between the delays experienced
in an M/M/1 and an M/D/1 scenario. Queuing delays result from packets 
sitting in queues, and variability in queue depth results in
variability in delay. Increased burstiness increases average delay
and average jitter.
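
To put rough numbers on that difference, here is an illustrative sketch using the standard M/M/1 and M/D/1 mean-waiting-time formulas (the arrival and service rates below are made up):

# Mean time spent waiting in queue (excluding service), standard formulas:
#   M/M/1: Wq = rho / (mu * (1 - rho))
#   M/D/1: Wq = rho / (2 * mu * (1 - rho))   (Pollaczek-Khinchine, deterministic service)
# The deterministic (less bursty) case has exactly half the queueing delay.

def wq_mm1(lam, mu):
    rho = lam / mu
    return rho / (mu * (1 - rho))

def wq_md1(lam, mu):
    rho = lam / mu
    return rho / (2 * mu * (1 - rho))

if __name__ == "__main__":
    lam, mu = 800.0, 1000.0   # arrivals/s vs. packets/s the link can serve
    print("M/M/1 mean wait: %.2f ms" % (1000 * wq_mm1(lam, mu)))  # 4.00 ms
    print("M/D/1 mean wait: %.2f ms" % (1000 * wq_md1(lam, mu)))  # 2.00 ms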

That said, I will agree with Craig that burstiness in moderation 
doesn't itself cause major problems in the network. Short sessions,
which are as you say very common in web and mail applications, are
inherently bursty, and it's hard to imagine them being otherwise when
slow-start, combined with the fact that they are moving small amounts
of data, is taken into consideration. Longer sessions also occur in
web and mail transactions, and are common in p2p applications, which
are now pretty prominent as a traffic source. But when TCP runs in a
longer session, I think you will find that burstiness levels out, as
the duration of a burst is stretched by the queues of bottlenecks in
the path, resulting in a reduction of the rate of the burst as
traffic crosses the network, and the Acks come back at a rate less 
than or equal to the bottleneck rate. I would expect to see ack
clocking spread the traffic of a longer TCP session so that it is
less bursty.

Pacing attacks burstiness. AQM actually doesn't; it attacks average 
queue depth.

There are places where improving TCP burstiness can be of value, such
as in the cases where (usually Linux) TCPs decide to send their
entire next window in a short period of time; it would be nice if they
could be convinced to do so at a rate that doesn't exceed cwnd/srtt.
Beyond handling extreme cases like that, I'm not convinced it's worth
the effort - it sounds like a lot of mechanism solving a problem that
I'm not sure actually hurts us all that much.
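
As a rough illustration of what such pacing would amount to (function and parameter names below are mine, not from any particular stack), spacing the window over the smoothed RTT looks like this:

# Minimal sketch: instead of emitting the whole next window back-to-back,
# space the packets so the send rate never exceeds cwnd/srtt.
import time

def paced_send(packets, cwnd, srtt_s, send_one):
    """Send packets with an inter-packet gap of srtt/cwnd seconds."""
    gap = srtt_s / cwnd          # seconds between packet transmissions
    for pkt in packets:
        send_one(pkt)            # hand the packet to the network layer
        time.sleep(gap)          # crude pacing; a real stack would use timers

if __name__ == "__main__":
    window = ["pkt%d" % i for i in range(10)]
    paced_send(window, cwnd=10, srtt_s=0.1, send_one=lambda p: print("sent", p))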

I'm much more interested in TCPs that will handle high loss rates and
variable delay (WiFi/WiMax) and long delays over a wide range of
speeds, consistently delivering a good approximation of the available 
bandwidth (there's still a lot of dial in the world, and there are
many places that sport fiber end to end) to applications that attempt
to use it.