[e2e] tcp connection timeout

Vadim Antonov avg at kotovnik.com
Thu Mar 2 15:54:26 PST 2006


On Thu, 2 Mar 2006, Jim Gettys wrote:

> And you expect the operating system to have better knowledge than I do,
> who is familiar with my application?

Yes.  The only information the application has which the OS doesn't is the
duration of loss of connectivity it is willing to tolerate.  You cannot be
any more specific than that, for the reasons I already explained.

Conversely, the network stack has access to internal state not available to
applications (or which requires some non-portable and ugly contortions to
obtain, such as the current state of interfaces).  The OS internals really
do have better knowledge than your application.

> If things are stuck behind a large piece of data, in your example, the
> TCP connection will fail, and I get all the signaling I need, just when
> I need it; if not, the application will continue to make progress.  The
> keep alive isn't needed to detect a failure or continued progress in
> this case.

He-he.  It may be that the application on the other end is slow in sending
that data, not that the network is crappy.  How do you tell?
 
Repeat, slowly: there is no way to tell a slow application from a stalled
network if the only tool you have is a serialized stream terminated by
that application.

> > Finally, TCP stack has RTT estimates.  Application doesn't.  
> 
> Depends on the applications.  Some do keep RTT estimates, and I
> certainly would like the OS to let me get the information that TCP keeps
> on my behalf (I'm not sure what the current state of the sockets API is
> in this area).

The current state is "no such portable API".  In any case, the RTT estimate
is one piece of many, and it needs interpretation.
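Linux, for what it's worth, does leak the stack's estimate through the
non-portable TCP_INFO getsockopt - exactly the kind of ugly contortion I
mentioned above.  A sketch, assuming a connected TCP socket and Linux
headers:

    #include <string.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Linux-only: peek at the stack's smoothed RTT estimate for a
     * connected TCP socket.  Returns RTT in microseconds, -1 on error. */
    static long tcp_rtt_usec(int fd)
    {
        struct tcp_info info;
        socklen_t len = sizeof info;

        memset(&info, 0, sizeof info);
        if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) < 0)
            return -1;
        return (long)info.tcpi_rtt;   /* smoothed RTT, microseconds */
    }

And even then that raw number is just one input; interpreting it is on you.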
 
> If I care to detect transient failures, my application can/should do
> this on its own.

There's no boundary between "transient" and "hard" failures... one can
always hope that the end system on the other side may be fixed, even if it
takes a year.

Having established that, we're back to the point that the only relevant
parameter an application can supply is the outage-duration threshold
separating a transient fault from a hard one.

And, no, applications cannot do that on their own, because they cannot send
probe packets out of order, and without influencing the TCP stream state,
within the same connection.

So all application-level keepalives-over-TCP are broken by definition.  
Even if their implementors think that they're ok.
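If you want a probe that rides outside the byte stream, tell the stack your
outage budget and let it do the probing.  A minimal sketch, assuming the
Linux-specific per-socket knobs (TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT;
the helper name and the way the budget is split are mine):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Express "I can tolerate roughly `secs` seconds of dead network" as
     * kernel-level keepalive probing on a connected TCP socket.  Probes
     * are answered by the remote *stack*, not the remote application, so
     * a dead path is detected even when the apps have nothing to say.
     * idle + cnt*intvl adds up to roughly `secs`. */
    static int set_tolerated_outage(int fd, int secs)
    {
        int on = 1;
        int idle  = secs / 2 > 0 ? secs / 2 : 1;  /* quiet time first  */
        int intvl = secs / 8 > 0 ? secs / 8 : 1;  /* probe interval    */
        int cnt   = 4;                            /* unanswered probes */

        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
            return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle,
                       sizeof idle) < 0)
            return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl,
                       sizeof intvl) < 0)
            return -1;
        return setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof cnt);
    }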

> The only time I care to have an error reported is when I'm actually
> trying to use the connection.  Signaling errors when I don't care just
> means I get failures that I wouldn't otherwise experience.  This is a
> feature?

Oh, I see.  You didn't understand what I said about the value of knowing
that the communication line is down.
 
> If there are no costs for listening, I see no problem here.

There are always costs.  They may be small (like keeping a TCB), they may
be large (like keeping 20MB of state in RAM), they may be intolerable - but
there are always some, and you have no idea of the costs imposed on the
remote end.

If the only cost you have is memory for the TCB and keeping a port, then a
timeout of 5-6 hours is probably OK (if you're not running a high-volume
server; if you are, you're down to minutes).  If you're running real-time
telemetry, then you need to know within a second whether you can still read
that sensor, or you may blow up half of your plant.
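(In terms of the sketch above, that's set_tolerated_outage(fd, 6*60*60) for
the lazy case and set_tolerated_outage(fd, 5) or so for the sensor - those
knobs have one-second granularity, so truly sub-second detection needs more
than keepalives.  Same mechanism, wildly different budget, and only the
application knows which one it needs.)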
 
> If there are, design your application server to be sane (as HTTP servers
> do, after much pulling of teeth).

Sure, heh.  HTTP still doesn't let me know whether the server is slow (so
I'd better wait) or my network croaked (so I'd better go call my ISP).

And so the first thing ISP tech support asks is to make sure the site I
can't reach is OK... and most users have no idea what "ping" means... so
the tech-support guy wastes his life explaining how to do that to the
hapless user, and the user wastes his life reading from the screen and
trying to understand what the tech-support guy says.

The lack of proper diagnostics also has nontrivial costs.  In fact, one of
my clients (a large app vendor) commissioned a quick diagnostic tool from
my company specifically to discriminate network faults from application
faults: it costs them about two mil a year to field tech-support calls
which end up being resolved as "client's network problem".
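The core of such a discriminator is embarrassingly small.  A toy sketch
(not our actual tool, and IPv4-only for brevity): complete the TCP
handshake yourself, under a short deadline.  The SYN/ACK comes from the
remote stack, not from the application, so "handshake completes but the
service is mute" points at the app, while a handshake timeout points at the
network path.

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Can we complete a TCP handshake to `addr` within `secs` seconds?
     * Returns 1 on success, 0 on refusal/timeout/error. */
    static int handshake_ok(const struct sockaddr_in *addr, int secs)
    {
        int fd, err = 0;
        socklen_t len = sizeof err;
        fd_set wfds;
        struct timeval tv = { secs, 0 };

        if ((fd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
            return 0;
        fcntl(fd, F_SETFL, O_NONBLOCK);

        if (connect(fd, (const struct sockaddr *)addr, sizeof *addr) < 0
            && errno != EINPROGRESS) {
            close(fd);
            return 0;
        }
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        if (select(fd + 1, NULL, &wfds, NULL, &tv) != 1) {
            close(fd);          /* timed out: smells like the network */
            return 0;
        }
        getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
        close(fd);
        return err == 0;        /* ECONNREFUSED etc. end up in err */
    }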

> People seem to figure out pretty well when to hang up the phone:

Tell that to my friend who always wants to retell the entire story of her
life every time I talk to her on the phone :)

> the analogy with a telephone is broken to begin with.

Not my analogy, sorry :)

> The OS will *always* guess wrong, except by chance.

Sure.  That's why we let OSes run all our applications, on the slim chance
that they may guess right about which page to load into memory or which
block to write to the disk.
 
> Huh? People who build applications protocols are usually at least
> slightly familiar with underlying transport, though there have been some
> noticeable counter-examples ;-).

Like nearly every application-level protocol to date. Oh, HTTP 1.0 seems 
to be the leader of the "I don't have a clue how the network works" pack.

> > Transport layer has timers.  Applications, in most cases, shouldn't. 
> > Majority of application-level protocols are synchronous anyway; why force 
> > app developers to do asynchronous coding?
> 
> Huh? Every X application that has blinking cursors (e.g. any text
> widget), has to have timers.  All GUI application toolkits, on any
> platform I am aware of, have timers of some form (which may or may not
> be in active use, if the application is truly trivial). 

Most network applications out there aren't GUIs.  In fact, most real-world
network apps simply use a browser in lieu of a GUI.  And there are three
orders of magnitude more back-ends than front-end UI apps.
 
> So any application that talks to people has this facility available
> trivially, and by definition, is an event driven program for which
> dealing with timers is easy.  Only batch sorts of applications written
> without any library support might not have timer facilities, and I'd
> then ask why you are writing at that low a level?  Just about any
> serious network interface library will have such facilities; you won't
> be coding them yourself unless you are a flat-rock programmer.

Show me a web browser that does not have glaringly obvious timing bugs,
and I may convert to your point of view.  As it is, I'm sick and tired of
nifty widget-encrusted thingies which crash-dump when I try to resize a
window without pausing to make sure they're done with whatever they're
doing, etc.

Async programming *is* hard.  It produces programs which cannot be
regression-tested.  For any non-trivial state machine there's an
exponential number of combinations of timing conditions.  There was an
article (by Rob Pike et al, if I'm not mistaken) about how hard it is to
get even a 10-line piece of reentrant code right (on the design of
spinlocks in Plan 9).

The result is that most GUI software and async servers are buggy as hell,
simply because they cannot be tested with anything but token coverage of
the various timing conditions.  And because it is hard to test, most of it
is not tested at all.

There's the same problem with OS kernels - they're not tested properly,
but kernels have hundreds of millions of users, so there's a fair chance
bugs will be discovered relatively quickly.  Besides, kernels have years
of history behind them (and the older they are, the more stable their
operation is... I wouldn't use Linux instead of BSD for anything critical,
because a BSD kernel has a decade more of bug-fixing behind it, etc).

Now if I build a typical application which may have, say, a few thousand
users, and I want to make sure it has adequate quality - I cannot rely on
the million monkeys doing the testing for me.  So I have to build my own
regression tests - and those cannot test asynchronous operation.

Thus, if I care to deliver something which won't have customers calling me
to report spontaneous faults and strange lock-ups for the next twenty
years, I'd better do purely synchronous design.  Fortunately, in nearly
all cases that's all I need.

(The fact that GUI toolkits have to use timers to blink cursors and do
cutesy animations, instead of telling the display server to do it, merely
says that X Windows and MS Windows are crappy designs... even the old
alphanumeric display designers had more sense than to send cursor blinks
over the wire.  Note that with web-based GUIs the actual application code
is nearly always purely synchronous, which explains why it is much easier
to build a working website than a working desktop GUI, even with all the
idiotic browser wars and the resulting compatibility issues.
Unfortunately, intellectual "property" considerations often sink good
designs (like NeWS) and let the crap flourish.)

But then, I guess, nobody cares to write quality software anymore.  Hack
it together, it seems to work, ship it.  Blame any crashes and lock-ups on
Microsoft or el-cheapo PCs.

> As I said, they should be called "keep killing" rather than "keep
> alives".

They should be properly called "2x4 clue bars".

--vadim


