[e2e] Are Packet Trains / Packet Bursts a Problem in TCP?

Detlef Bosau detlef.bosau at web.de
Thu Sep 28 08:53:42 PDT 2006


Fred Baker wrote:
> On Sep 27, 2006, at 7:46 PM, Lachlan Andrew wrote:
>> If we pace packets out at a rate "window / RTT" then we transmit at 
>> the same average rate as if we used ACK-clocking.  The only 
>> difference is that the packets are sent with roughly equal spacing in 
>> the case of pacing, and sent irregularly with pure ACK-clocking.
>
> yes. But in my comments, we might do so at a time that we are not 
> receiving acks that would clock us.
>
> What Detlef appears to be proposing 
Oh my goodness ;-) No, that's not what I'm proposing.

> is that we pace all the time. Craig's paper and my comments are to the 
> effect that this is Really Hard to get right, and if it's not right, 
> it's Really Wrong.
That's what I think as well.

I only try to understand a few things.

1. Is burstiness always a problem? From what I've learned so far, it is not.

Extreme burstiness can overrun queues and lead to underutilization of the 
network, but a "little" burstiness is no problem at all.
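
Just to illustrate what I mean (a toy sketch, not a real TCP model; the 
queue size, service rate and burst pattern are numbers I made up for the 
example): the same average load overruns a short drop-tail queue when it 
arrives as one back-to-back burst, but causes no drops at all when it is 
spread over the round trip.

# Toy illustration (invented numbers, not a real TCP model): the same
# average load, offered as one back-to-back burst vs. spread out evenly,
# against a short FIFO queue drained at a fixed rate.

def run(arrivals_per_tick, queue_limit=8, service_per_tick=2):
    queue = 0
    drops = 0
    for arriving in arrivals_per_tick:
        queue += arriving
        if queue > queue_limit:
            drops += queue - queue_limit   # tail-drop the overflow
            queue = queue_limit
        queue = max(0, queue - service_per_tick)  # drain the link
    return drops

window = 20
ticks = 10
bursty = [window] + [0] * (ticks - 1)   # whole window sent at once
paced = [window // ticks] * ticks       # spread evenly over the RTT

print("bursty drops:", run(bursty))   # overruns the 8-packet queue
print("paced  drops:", run(paced))    # fits easily, no drops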

2. Where does burstiness stem from?

3. If something should be done about bursty flows, what can reasonably 
be done?

I don't want to pace all the time. Quite the contrary: at the moment I 
think the "burstiness" problem might be somewhat overestimated, 
particularly when it comes to the "chaotic nature" of traffic, 
"self-similarity", or other fine-sounding words that are sometimes 
mentioned in the academic world and always sound impressive.

Two things are important to me.

First: what do I actually know when I know that traffic is bursty, 
chaotic, or self-similar? What do I learn from that?

Second (and this is good advice from my statistics professor): when a 
behaviour appears to be stochastic, it is always crucial to understand 
where that stochastic behaviour comes from. It means nothing when, e.g., 
a time series passes numerous tests for "stochastic behaviour" as long 
as the reason for that behaviour is not understood. Therefore, I'm 
always sceptical when I read papers which "prove" the self-similarity of 
the Internet once more by using yet another GIGO (garbage in, garbage 
out) statistical program, or which deal with sophisticated 
transformations, wavelets and so on. The more mathematics in one's own 
paper one has not understood, the less the result is worth. (I have 
nothing against mathematics; I always enjoy well-done mathematics.)

And it's perhaps similar with burstiness. When there are sources of 
severe burstiness that lead to problems, it's beneficial to fix them. 
However, it's not worthwhile to spend maximum effort for minimum 
results. For example, what good is it to pace a TCP flow with a leaky 
bucket or something similar, when after two hops the traffic is as 
bursty as if nothing had been done?
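
Just to make concrete what I mean by pacing with a leaky bucket (a rough 
sketch with invented numbers, nothing more): instead of releasing the 
whole congestion window back-to-back, release one segment every rtt/cwnd 
seconds, i.e. at the average rate window/RTT.

# Rough sketch of leaky-bucket-style pacing: spread one window of
# segments evenly over a round-trip time. The rtt and cwnd values
# below are invented for illustration only.

def pacing_schedule(cwnd, rtt, start=0.0):
    """Return the send times for one window of cwnd segments."""
    interval = rtt / cwnd   # per-segment spacing; average rate = cwnd/rtt
    return [start + i * interval for i in range(cwnd)]

rtt = 0.2    # 200 ms round-trip time (assumed)
cwnd = 10    # 10 segments in flight (assumed)

for i, t in enumerate(pacing_schedule(cwnd, rtt)):
    print(f"segment {i} sent at t = {t * 1000:.0f} ms")

And it is exactly this careful spacing that downstream multiplexing and 
queueing can destroy again after a couple of hops, which is my point.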

When I read David's remarks on Little's theorem, I thought of those 
well-known throughput/window, RTT/window, and "utility"/window diagrams 
which can be found, e.g., in Raj Jain's paper on delay-based congestion 
control from 1989 (?). I think it is essentially a very similar message: 
even with unlimited queues, putting an arbitrarily large window on the 
path is not only useless, but too large a window may cause severe harm. 
It always made me think about the correct dimensioning of router queues 
when I read this.
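
A back-of-the-envelope calculation along those lines (the bandwidth and 
base RTT are example values I invented): once the window exceeds the 
bandwidth-delay product, throughput no longer increases; the excess 
packets only sit in the router queue and inflate the RTT, which is 
essentially Little's theorem applied to the path.

# Illustration of the window/throughput and window/RTT curves:
# beyond the bandwidth-delay product, extra window buys no throughput
# and only adds queueing delay. All values are assumed example numbers.

C = 1000            # path bandwidth in packets/second (assumed)
base_rtt = 0.1      # propagation RTT in seconds (assumed)
bdp = C * base_rtt  # bandwidth-delay product = 100 packets

for window in (25, 50, 100, 200, 400):
    throughput = min(window / base_rtt, C)   # cannot exceed the bottleneck
    queued = max(0, window - bdp)            # excess packets sit in the queue
    rtt = base_rtt + queued / C              # queueing delay inflates the RTT
    print(f"window={window:4d}  throughput={throughput:6.0f} pkt/s  "
          f"rtt={rtt * 1000:5.1f} ms  queued={queued:4.0f}")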

O.k., I'm writing too much and thinking too little :-)

Detlef




