[e2e] tcp connection timeout
David P. Reed
dpreed at reed.com
Wed Mar 1 19:57:59 PST 2006
I was describing a historical perspective. TCP is what it is, and no
one (not me) is suggesting it should be changed. Ideally it should be
thrown away someday, because it continues to acquire barnacles.
However, some comments regarding my points, below:
Vadim Antonov wrote:
> On Wed, 1 Mar 2006, David P. Reed wrote:
>
>
>> Actually, there is no reason why a TCP connection should EVER time out
>> merely because no one sends a packet over it.
>>
>
> The knowledge that connectivity is lost (i.e. that there's no *ability* to
> send information before the need arises) is valuable.
This cannot be well-defined. This is Jim Gettys' point above.
"Connectivity" is NOT lost. The end-to-end probability of packet
delivery just varies. Whether connectivity is lost depends on the
application's point of view. Does the refrigerator light remain
on when the door is closed? Why do you care? And more importantly, can
you describe precisely when you would care? This is a classic example
of the "end to end argument" - you can't define this function
"connectivity being lost" at the network layer, because connectivity
isn't lost, only packets are lost.
> A preemptive action
> can then be taken to either alert user, or to switch to an alternative.
> Just an example (with a somewhat militarized slant): it does make a lot of
> difference if you know that you don't know your enemy's position, or if
> you falsely think that you know where they are, and meanwhile they moved
> and you simply didn't hear about it because some wire is cut.
>
You never know your enemies' position unless you are God. You only know
*where they were*, not where they are now. You can waste all your time
sending radar pulses every microsecond, and you still won't know where
they are, and you'll never know where they will be when you decide to
act. At best, your information can be narrowed based on how much energy
you put into that. Better at some point to fire the missile based on
your best guess and see if it hits.
> There's also an issue of dead end-point detection and releasing the
> resources allocated to such a dead end-point (which may never come back).
> There is no way to discriminate between a dead end-point and an end-point
> which merely keeps quiet other than using connection loss detection.
>
> So, in practice, all useful applications end up with some kind of timeout
> (and keepalives!) - embedded in a zillion protocols, mostly designed
> improperly, or left to the user's manual intervention. It makes
> absolutely no sense - in a good design, shared functions must be located
> at the level below, so there's no replication of functionality.
>
> What is needed is an API which lets applications specify the maximal
> duration of loss of connectivity which is acceptable to them. This part
> is broken-as-designed in all BSD-ish stacks, so few people use it.
>
It's so easy to do this at the application level, and you can do it
exactly as you wish - so why implement a slightly general and
always-wrong model in the lower layer, especially since most users don't
even need it, and some, like Jim Gettys, end up having to patch around
its false alarms!
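To make that concrete: the whole policy fits in a few lines of ordinary
socket code, written exactly the way the application wants it. A minimal
sketch in C, assuming a connected TCP socket and POSIX poll(); the "PING"
message, the 30-second figure, and the peer_alive() name are illustrative
assumptions, not any standard:

#include <poll.h>
#include <unistd.h>

#define IDLE_LIMIT_MS (30 * 1000)   /* this application's own tolerance */

/* Send a hypothetical app-level probe and wait, at most, our own
   deadline for any reply.  Returns 1 if the peer answered in time,
   0 if we decided, by our own policy, to give up. */
static int peer_alive(int fd)
{
    static const char ping[] = "PING\n";   /* illustrative probe message */
    char buf[512];
    struct pollfd pfd = { .fd = fd, .events = POLLIN };

    if (write(fd, ping, sizeof ping - 1) < 0)
        return 0;                   /* could not even send: treat as down */

    if (poll(&pfd, 1, IDLE_LIMIT_MS) <= 0)
        return 0;                   /* error, or our deadline expired */

    return read(fd, buf, sizeof buf) > 0;
}

The deadline, the probe, and the reaction all live in the one place that
knows what "too long" means: the application.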
> Keepalives, arguably, are the crudest method for detection of loss of
> connectivity, and they load the network with extra traffic and do not
> provide for fast detection of the loss. But, because of their
> crudity, they always work.
>
They NEVER work (except if you define "working" as doing something, even
if it is not what you want it to do).
> A good idea would be to fix the standard socket API and demand that all
> TCP stacks allow useful minimal keepalive times (down to seconds), rather
> than have application-level protocol designers implement workarounds at
> the application level.
>
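(For reference, what Vadim asks for does exist in some stacks already;
the sketch below shows the per-socket knobs as Linux spells them. The
TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT option names and the numbers are
Linux-specific assumptions, not portable sockets API.)

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Turn on keepalive for one socket and shrink its timers to seconds.
   Returns 0 on success, -1 on error. */
static int fast_keepalive(int fd)
{
    int on = 1, idle = 5, intvl = 2, cnt = 3;

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
        return -1;
    /* first probe after 5 idle seconds ... */
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle) < 0)
        return -1;
    /* ... then a probe every 2 seconds ... */
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl) < 0)
        return -1;
    /* ... and reset the connection after 3 unanswered probes */
    return setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof cnt);
}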
There you go - you think that you can demand that protocol engineers
know more than application designers about "usefulness".
That's why my cellphone says "call is lost" 20 times a day, requiring me
to redial over and over, instead of keeping the call alive until I press
hangup. Some network engineer decided that a momentary outage should
be treated as a permanent one. Foo.
> And, yes, provide the TCP stack with a way to probe the application to
> check if it is alive and not deadlocked (that being another reason to do
> app-level keepalives).
>
Oh yeah. Put TCP in charge of the application. Sure. And put the
fox in charge of the chicken coop.
The purpose of the network is to support applications, not the
applications to support the network's needs. Perhaps this is because
network protocol designers want to feel powerful? And I suppose the
purpose of applications and users is to pay for Bill Gates's house (or
Linus's mortgage)?