[e2e] What's wrong with this picture?

Lachlan Andrew lachlan.andrew at gmail.com
Sun Sep 13 20:04:18 PDT 2009


2009/9/14 Detlef Bosau <detlef.bosau at web.de>:
>
>> The only reason I call it a hack is to counter the view that it is a
>> carefully engineered solution,
>
> What are you missing there for a "carefully engineered" solution?
>
> Would the congavoid algorithm be more compelling, if Van had added ten pages
> with formulae and greek symbols to his work? ;-)

It would have been more compelling as a carefully engineered solution
(to resource allocation, rather than congestion avoidance) if his
paper hadn't said
"While algorithms at the transport endpoints can insure the network
capacity isn’t exceeded, they cannot insure fair sharing of that
capacity".

This thread isn't really about TCP-friendliness, but that has been a
stumbling block to implementing any other form of congestion control
that might work on highly-buffered links.

> The idea was in fact shamelessly simple :-) The network tells us that it
> cannot carry the amount of data we put into it - and we simply halve the
> amount of data, which is put in the network.

Yep, I agree it was a great solution for the network at the time.

>>  and that networks should be designed to
>> show a particular undesirable symptom of congestion just because "TCP
>> needs it".
>
> I don't agree here.
>
> We do not intentionally introduce packet drops because we need it for TCP.

Then why is having a large buffer bad?

> And now, as you say, we've seen that a network may cry for help by dropping
> packets - and we made a virtue of necessity then and used these drops for
> congestion control.

I agree.  We should listen to drops.  We should also listen to delay.

> I forgot a third reason: We do not even design networks that way that they
> produce drops. The truth is: Packets are dropped - and we can't help it!
> (Except by use of a central scheduling and rate allocation, see Keshav's
> work.)

a) We can, with backpressure.
b) The issue is not whether we should *ever* drop packets.  David's
point was that we should drop them even if we can pay for enough
buffer space to keep them. (Given the current TCP, he is right.)

>> It works, except on...
>
> And that's the problem of "one size fits all".

We could have a one-size-fits-all solution which also responds to
excessive delay.
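As a sketch of what such a solution might look like, here is a toy
window-update rule that treats both loss and excessive queueing delay as
congestion signals.  This is not any deployed algorithm; the delay
threshold and decrease factor are hypothetical parameters chosen for
illustration:

```python
# Toy congestion controller that "listens" to both loss and delay.
# All parameter values are hypothetical, for illustration only.

def update_cwnd(cwnd, rtt, base_rtt, loss, delay_thresh=0.05, beta=0.5):
    """Return the new congestion window after one RTT of feedback.

    cwnd      -- current congestion window (packets)
    rtt       -- most recently measured round-trip time (seconds)
    base_rtt  -- minimum observed RTT, i.e. propagation delay estimate
    loss      -- True if a loss was detected this RTT
    """
    if loss:
        # Multiplicative decrease on loss, as in standard TCP.
        return max(1.0, cwnd * beta)
    if rtt - base_rtt > delay_thresh:
        # Queueing delay is building up: back off gently before
        # the buffer overflows and forces a drop.
        return max(1.0, cwnd - 1.0)
    # No congestion signal: additive increase.
    return cwnd + 1.0
```

The point of the sketch is only that a single rule can respond to drops
on shallow-buffered links and to delay on highly-buffered ones, without
requiring the network to produce drops as its sole signal.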

>> Someone has pointed out that simply the binary backoff of the RTO may
>> be enough to prevent congestion collapse.
>
> Binary backoff is a drastic measure.
>
> And sometimes _too_ drastic. If you encounter some transient link outage
> with your mobile, the RTO rapidly increases into ranges of minutes.

I agree.  We should have *something* else.  However, many things other
than AIMD are good enough for that "something", and safe enough to
try.  My point was that I don't think anyone has done a large-scale
trial of any other congestion control, and found that it doesn't
"work".

(For transient outage, binary backoff only over-estimates the duration
of the outage by a factor of 2.  It takes minutes to increase to the
range of minutes.  Binary backoff is more of a problem if we happen to
get a large number of "random" losses.)
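The timing claim in that parenthetical can be checked with a toy model
of binary backoff.  The initial RTO and the cap below are hypothetical
values, not taken from any particular TCP implementation:

```python
# Toy model of binary (exponential) RTO backoff: the RTO only reaches
# the range of minutes after the connection has already been retrying
# for minutes, so an outage's duration is over-estimated by at most
# roughly a factor of two.  Parameter values are hypothetical.

def backoff_schedule(initial_rto=1.0, max_rto=120.0, retries=8):
    """Return (elapsed time, RTO used) pairs for successive timeouts."""
    schedule = []
    elapsed, rto = 0.0, initial_rto
    for _ in range(retries):
        elapsed += rto                   # timer fires; we retransmit
        schedule.append((elapsed, rto))
        rto = min(rto * 2, max_rto)      # binary backoff, with a cap
    return schedule

for t, rto in backoff_schedule():
    print(f"retransmit at t={t:6.1f}s (RTO was {rto:6.1f}s)")
```

With these numbers the RTO used only passes one minute after roughly two
minutes of elapsed retrying, and an outage ending at time t is noticed
by about time 2t, matching the factor-of-two estimate above.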

>> Who knows what aspect of
>> VJ's algorithm is really responsible for making the internet "work",
>> and how much is simply that we don't see all the details?
>>
>
> Who _cares_?

Anyone who says "we use it rather than scheme A because it works"
should care, especially when looking at a case where it doesn't
work.

> In some buildings, there are compensators for this purpose. And even if they
> don't exactly match the building's Eigenfrequency, the main thing is that
> they kill energy.
>
> This may be not an elegant mathematical solution, but it protects life.

The issue of this thread wasn't whether modelling is good.
However, since you bring it up:  The reason people know that they need
damping at all is because they understand the mathematics behind the
dynamics.  Without that insight, people would say "Let's just build
stronger walls".

> And that's the same in computer networks. If they start oscillating or
> become unstable (i.e. the queues grow too large) - you kill energy, i.e.
> drop packets.

Alternatively, if queues grow too large, you can reduce the rate at
which you inject them into the network.  That is what congestion
control is all about.

>> I don't think TCP should assume that routers drop
>> packets instead of buffering them.  We can still use VJ's insight
>> (that we should look for symptoms of congestion, and then back off)
>> without that assumption.
>
> O.k., so you don't kill energy but tune the system ;-)

If packets are energy, the amount of energy removed by reducing the
send rate is much more than that removed by dropping any reasonable
fraction of packets.
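A back-of-the-envelope comparison makes the point; the numbers below are
purely illustrative:

```python
# Compare how many packets per RTT are removed from the network by
# (a) dropping a "reasonable" fraction at the bottleneck, versus
# (b) the sender halving its rate.  Illustrative numbers only.

rate = 1000           # packets per RTT before congestion (hypothetical)
drop_fraction = 0.05  # a generous drop rate at the bottleneck

removed_by_drops = rate * drop_fraction   # packets removed by dropping
removed_by_halving = rate / 2             # packets withheld after halving

print(removed_by_drops, removed_by_halving)
```

Even at a 5% drop rate, halving the send rate removes an order of
magnitude more "energy" from the system than the drops themselves do.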

I'm not saying how we should design buffers.  I'm just suggesting that
we should design TCP to listen to all available congestion signals,
rather than saying that the link is "bad" if it sends the packets that
*we* have sent it (including those of others using the same algorithm
as us).

Cheers,
Lachlan

-- 
Lachlan Andrew  Centre for Advanced Internet Architectures (CAIA)
Swinburne University of Technology, Melbourne, Australia
<http://caia.swin.edu.au/cv/landrew> <http://netlab.caltech.edu/lachlan>
Ph +61 3 9214 4837


