[e2e] What's wrong with this picture?
Detlef Bosau
detlef.bosau at web.de
Mon Sep 14 09:05:45 PDT 2009
Lachlan Andrew wrote:
> It would have been more compelling as a carefully engineered solution
> (to resource allocation, rather than congestion avoidance) if his
> paper hadn't said
> "While algorithms at the transport endpoints can insure the network
> capacity isn’t exceeded, they cannot insure fair sharing of that
> capacity".
>
>
Unfortunately, VJ is correct here.
As long as paths are lossy, whether due to congestion or due to
corruption, you cannot predict how much of the sent data is actually
carried through the network path. Two flows may have sent 10000 bytes
each, yet 1000 bytes from the first flow are dropped due to congestion
and 800 bytes from the second are lost to packet corruption, so the two
flows do not share the available resources equally. (BTW: IIRC, Kelly's
paper does not pay attention to this problem.)
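Just to put numbers on that (a toy sketch; the byte counts are the
hypothetical ones from above):

/* Toy illustration: equal amounts of *sent* data do not imply equal
 * shares of the *delivered* capacity once losses differ. */
#include <stdio.h>

int main(void)
{
    const double sent  = 10000.0;   /* bytes sent by each flow        */
    const double lost1 = 1000.0;    /* flow 1: congestion drops       */
    const double lost2 = 800.0;     /* flow 2: corruption losses      */

    double good1 = sent - lost1;    /* 9000 bytes actually delivered  */
    double good2 = sent - lost2;    /* 9200 bytes actually delivered  */

    printf("flow 1 share: %.3f\n", good1 / (good1 + good2));  /* ~0.495 */
    printf("flow 2 share: %.3f\n", good2 / (good1 + good2));  /* ~0.505 */
    return 0;
}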
>>> and that networks should be designed to
>>> show a particular undesirable symptom of congestion just because "TCP
>>> needs it".
>>>
>> I don't agree here.
>>
>> We do not intentionally introduce packet drops because we need it for TCP.
>>
>
> Then why is having a large buffer bad?
>
>
Large buffers may introduce long service times and sometimes packet
bursts, and some authors point to long-range dependence as the reason
for the self-similarity of network traffic.
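To put a number on the "long service times", a back-of-the-envelope
sketch with assumed figures (1 MB of queued data draining over a
1 Mbit/s link):

/* Back-of-the-envelope sketch, assumed numbers: the worst-case queueing
 * delay added by a full buffer is simply the queued bytes divided by
 * the link rate. */
#include <stdio.h>

int main(void)
{
    const double buffer_bytes = 1e6;   /* 1 MB of queued data (assumption) */
    const double link_bps     = 1e6;   /* 1 Mbit/s link rate (assumption)  */

    double delay_s = buffer_bytes * 8.0 / link_bps;
    printf("full-buffer queueing delay: %.1f s\n", delay_s);   /* 8.0 s */
    return 0;
}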
>> And now, as you say, we've seen that a network may cry for help by dropping
>> packets - and we made a virtue of necessity then and used these drops for
>> congestion control.
>>
>
> I agree. We should listen to drops. We should also listen to delay.
>
>
One problem with delay is that the observed delay is itself a stochastic
variable: the observed values spread around some expectation. It is
extremely hard to draw a significant conclusion from a single
observation, so an extreme outlier can easily be mistaken for a
congestion indication. In other words, you get "false positive" results
to some extent.
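A toy sketch of that effect (my own illustration with assumed numbers,
not anybody's actual detector):

/* Toy sketch: even on an uncongested path, RTT samples scatter around
 * their mean, so a rule "one sample above a threshold means congestion"
 * fires now and then although nothing is congested - the "false
 * positives" mentioned above. All numbers are assumptions. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const double base_rtt  = 100.0;    /* ms, assumed path RTT            */
    const double jitter    = 40.0;     /* ms, assumed max random jitter   */
    const double threshold = 130.0;    /* ms, assumed "congestion" cutoff */
    const int    samples   = 100000;

    srand(42);
    int false_positives = 0;
    for (int i = 0; i < samples; i++) {
        /* uncongested path: delay = base + uniform jitter, no queueing */
        double rtt = base_rtt + jitter * ((double)rand() / RAND_MAX);
        if (rtt > threshold)
            false_positives++;         /* outlier mistaken for congestion */
    }
    printf("false positive rate: %.2f %%\n",
           100.0 * false_positives / (double)samples);   /* roughly 25 % */
    return 0;
}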
>> I forgot a third reason: We do not even design networks that way that they
>> produce drops. The truth is: Packets are dropped - and we can't help it!
>> (Except by use of a central scheduling and rate allocation, see Keshav's
>> work.)
>>
>
> a) We can, with backpressure.
>
....and infinite backlog.
IIRC, in the BSD kernel there is a function tcp_quench() which is called
when a packet cannot be enqueued at an outgoing interface, so the
sending attempt is postponed and cwnd is reduced. This works at the
sender; unfortunately it doesn't work along the path, because a router
"in between" cannot postpone an already sent packet.
Actually, this mechanism ensures fairness between two TCP flows sharing
the same sender and receiver.
Without this mechanism, the fairness issue would be left to the OS's
scheduler, and you could not provide resource fairness for TCP flows on
single-tasking systems, e.g. MS-DOS (excuse me ;-)) and the KA9Q stack.
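For illustration, a much simplified sketch of that sender-local
backpressure (not the actual BSD code; the names and the exact cwnd
reaction are schematic):

/* Schematic sketch of sender-local backpressure, loosely modelled on
 * the behaviour described above: when the outgoing interface queue is
 * full (ENOBUFS), the send attempt is postponed and cwnd is reduced.
 * The exact reduction policy differs between implementations; this is
 * not the real kernel code. */
#include <stdio.h>

struct tcb {
    int cwnd;     /* congestion window, in segments */
    int maxseg;   /* minimum window: one segment    */
};

/* Stand-in for the interface output routine: returns 0 on success,
 * -1 when the outgoing queue is full (the ENOBUFS case). */
static int if_output(int queue_full)
{
    return queue_full ? -1 : 0;
}

/* Stand-in for what the text calls tcp_quench(): shrink cwnd when the
 * local queue pushes back; normal ACK clocking reopens it later. */
static void quench(struct tcb *tp)
{
    tp->cwnd /= 2;
    if (tp->cwnd < tp->maxseg)
        tp->cwnd = tp->maxseg;
}

int main(void)
{
    struct tcb tp = { .cwnd = 16, .maxseg = 1 };

    if (if_output(1 /* queue full */) < 0) {
        quench(&tp);              /* postpone sending, reduce cwnd */
        printf("local ENOBUFS: cwnd reduced to %d segments\n", tp.cwnd);
    }
    return 0;
}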
> b) The issue is not whether we should *ever* drop packets. David's
> point was that we should drop them even if we can pay for enough
> buffer space to keep them. (Given the current TCP, he is right.)
>
>
Yes, of course.
Or should we accept infinite head-of-line blocking for _all_ competing
flows when only _one_ listener, e.g. in a cellular network, has a problem?
> We could have a one-size-fits-all solution which also responds to
> excessive delay.
>
>
So we're looking for a "one-size-fits-all" significance test for
delays... ;-) _That's_ the very problem.
>> Binary backoff is a drastic measure.
>>
>> And sometimes _too_ drastic. If you encounter some transient link outage
>> with your mobile, the RTO rapidly increases into ranges of minutes.
>>
>
> I agree. We should have *something* else. However, many things other
> than AIMD are good enough for that "something", and safe enough to
> try. My point was that I don't think anyone has done a large-scale
> trial of any other congestion control, and found that it doesn't
> "work".
>
> (For transient outage, binary backoff only over-estimates the duration
> of the outage by a factor of 2.
Make that a factor of 2^n ;-) where n is the timeout counter - the
overestimate grows exponentially.
> It takes minutes to increase to the
> range of minutes. Binary backoff is more of a problem if we happen to
> get a large number of "random" losses.)
>
>
Absolutely. And that's the scenario where the exponential growth becomes
a problem.
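A small sketch of the numbers behind both statements (initial RTO and
the cap are assumptions, not any particular stack's values):

/* Sketch: after n consecutive timeouts the RTO has grown by 2^n, so a
 * transient outage that ends right after a timeout is overestimated by
 * up to the current RTO - yet the *cumulative* waiting time also grows,
 * so reaching "minutes" itself takes minutes. Assumed values below. */
#include <stdio.h>

int main(void)
{
    double rto   = 1.0;     /* assumed initial RTO: 1 s   */
    double cap   = 120.0;   /* assumed maximum RTO: 120 s */
    double total = 0.0;     /* time spent waiting so far  */

    for (int n = 1; n <= 8; n++) {
        total += rto;                   /* wait, time out, back off */
        rto *= 2.0;
        if (rto > cap)
            rto = cap;
        printf("after %d timeouts: RTO = %5.0f s, elapsed = %5.0f s\n",
               n, rto, total);
    }
    return 0;
}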
>>
>> Who _cares_?
>>
>
> Anyone who says "we use it rather than scheme A because it works"
> should care, especially when looking at a case where it doesn't
> work.
>
That's always true: we have to take a close look at scenarios where a
scheme fails.
>
>> In some buildings, there are compensators for this purpose. And even if they
>> don't exactly match the building's Eigenfrequency, the main thing is that
>> they kill energy.
>>
>> This may be not an elegant mathematical solution, but it protects life.
>>
>
> The issue of this thread wasn't whether modelling is good.
> However, since you bring it up: The reason people know that they need
> damping at all is because they understand the mathematics behind the
> dynamics. Without that insight, people would say "Let's just build
> stronger walls".
>
>
It's both. They understand the mathematics and see why stronger walls
alone wouldn't work - and they also see that there is a simple and
practical solution to the problem.
>> And that's the same in computer networks. If they start oscillating or are
>> getting instable (i.e. the queues grow too large) - you kill energy, i.e.
>> drop packets.
>>
>
> Alternatively, if queues grow too large, you can reduce the rate at
> which you inject them into the network. That is what congestion
> control is all about.
>
>
It's both. Of course, reducing the rate heals the _reason_ for the
problem, while dropping packets alleviates the _symptom_. It's a bit
like our secretary of the treasury after the Lehman crash: "When a house
is burning, we have to extinguish the fire, no matter whether it is
malicious arson or not."
> I'm not saying how we should design buffers. I'm just suggesting that
> we should design TCP to listen to all available congestion signals,
> rather than saying that the link is "bad" if it drops the packets that
> *we* have sent it (including others using the same algorithm as us).
>
>
I agree. However, the problem is not dealing with dedicated links but
with shared ones: on a shared link (e.g. one base station, eight mobiles
and therefore eight logical links), one bad link can usurp all the
resources in the cell, and head-of-line blocking on _one_ link can cause
severe harm to all the others.
Detlef
--
Detlef Bosau Galileistraße 30 70565 Stuttgart
phone: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau
ICQ: 566129673 http://detlef.bosau@web.de