<!doctype html public "-//W3C//DTD W3 HTML//EN">
<html><head><style type="text/css"><!--
blockquote, dl, ul, ol, li { padding-top: 0 ; padding-bottom: 0 }
--></style><title>Re: [e2e] TCP improved closing
strategies?</title></head><body>
<div>At 16:30 -0400 2009/08/17, David P. Reed wrote:</div>
<blockquote type="cite" cite><font face="Helvetica">You need 2MSL to
reject delayed dups. However, one does not need "fully
live"</font></blockquote>
<div><br></div>
<div>Correct. Bill's question was on how soon port-ids could be
re-cycled. </div>
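<div><br></div>
<div>A minimal loopback sketch (Python; nothing here comes from the thread
itself, and the port number is an arbitrary choice) of why port-ids cannot be
re-cycled right away: the side that closes first holds the old connection in
TIME-WAIT for 2MSL, and a fresh bind to the same port is typically refused
until then unless SO_REUSEADDR is set.</div>
<div><br></div>
<pre>
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 54321))
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 54321))
conn, _ = srv.accept()

conn.close()   # this side closes first, so it is the one that enters TIME-WAIT
cli.close()
srv.close()

retry = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    retry.bind(("127.0.0.1", 54321))   # usually EADDRINUSE while TIME-WAIT lasts
    print("re-bound immediately")
except OSError as err:
    print("blocked by TIME-WAIT:", err)
# SO_REUSEADDR on the new socket lets a listener re-bind, but the stack still
# observes the 2MSL wait on the old 4-tuple itself.
</pre>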
<div><br></div>
<blockquote type="cite" cite><font face="Helvetica">individual
connections to deal with delayed dups. You can reject delayed
dups by saying "port unreachable" without a problem in most
cases. 2MSL provides no semantic guarantees
whatever.</font></blockquote>
<div><br>
Nor should it. Nor should anyone even try to construe that it
might.<br>
</div>
<blockquote type="cite" cite><br>
On 08/17/2009 04:14 PM, John Day wrote:<br>
<blockquote type="cite" cite>Re: [e2e] TCP improved closing
strategies?</blockquote>
<blockquote type="cite" cite>At 11:54 -0400 2009/08/17, David P. Reed
wrote:<br>
<blockquote type="cite" cite><font face="Helvetica">The function of
the TCP close protocol has two parts:<br>
<br>
1) a "semantic" property that indicates to the
*applications* on each end that there will be no more data and that
all data sent has been delivered. (this has the usual problem
that "exactly once" semantics cannot be achieved, and TCP
provides "at most once" data delivery semantics on the data
octet just prior to the close.) Of course, *most* apps follow the
end-to-end principle and use the TCP close only as an
"optimization" because they use their data to provide all
the necessary semantics for their needs.</font><br>
</blockquote>
</blockquote>
<blockquote type="cite" cite><br></blockquote>
<blockquote type="cite" cite>Correct. Blowing off even more
dust, yes, this result was well understood by at least 1982. And
translates into Ted's solution that explicit establishment and release
of an "application connection" is necessary. Again see
Watson's paper and Lamport's Byzantine General's paper. Using
the release of the lower level connection to terminate signal the end
of the higher level connection is overloading and always leads to
problems.</blockquote>
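<div><br></div>
<div>A sketch of that explicit application-level release (Python; the 4-byte
length prefix and the END/END-ACK records are made-up framing, not anything
from Watson, Lamport, or Ted): the application signals and confirms the end of
its own exchange, and only then lets go of the transport connection.</div>
<div><br></div>
<pre>
import socket, struct

def send_record(sock: socket.socket, payload: bytes) -> None:
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while n > len(buf):
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer vanished before the application release")
        buf += chunk
    return buf

def recv_record(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

def release(sock: socket.socket) -> None:
    send_record(sock, b"END")              # explicit application-level release
    if recv_record(sock) != b"END-ACK":    # the peer confirms it saw everything
        raise ConnectionError("release was not acknowledged")
    sock.close()                           # only now give up the TCP connection
</pre>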
<blockquote type="cite" cite><br></blockquote>
<blockquote type="cite" cite>You still need 2MSL.</blockquote>
<blockquote type="cite" cite><br>
<blockquote type="cite" cite><font face="Helvetica"><br>
2) a "housekeeping" property related to keeping the
TCP-layer-state minimal. This is what seems to be of concern
here.</font><br>
</blockquote>
</blockquote>
<blockquote type="cite" cite><br></blockquote>
<blockquote type="cite" cite>Agreed here as well. Taking Dave's
point that the value of MSL has gotten completely out of hand. As Dave
says the RFC suggests 30 seconds, 1 or 2 minutes! for MSL. Going
through 2**32 port-ids in 4 minutes with one host is unlikely but not
*that* unlikely. And of course because of the well-known port
kludge you are restricted to the client's port-id space and address.
If you had good ole ICP, you wouldn't have 2**64 (there is other stuff
going on), but it would be a significant part of that.</blockquote>
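<div><br></div>
<div>Back-of-the-envelope numbers for the paragraph above (a sketch; the only
assumption is MSL = 2 minutes, the largest of the figures just quoted): with
the well-known port kludge, one client talking to one server address and port
has only 2**16 port-ids to burn, and each used pair is tied up for 2MSL.</div>
<div><br></div>
<pre>
ports   = 2 ** 16          # the client's port-id space
two_msl = 2 * 120          # seconds, with MSL = 2 minutes
print(ports / two_msl)     # ~273 new connections per second
# a few hundred connections per second from one client to one server port is
# enough to wrap the port-id space inside a single 2MSL period
</pre>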
<blockquote type="cite" cite><br></blockquote>
<blockquote type="cite" cite>But the TCP MSL may be adding insult to
injury, I have heard rumors that the IP TTL is usually set to 255,
which seems absurdly high as well. Even so, surely hitting 255
hops must take well under 4 minutes! So can we guess that
TCP is sitting around waiting even though all of the packets are long
gone from the network?</blockquote>
<blockquote type="cite" cite><br></blockquote>
<blockquote type="cite" cite>2MSL should probably smaller but it still
has to be there.</blockquote>
<blockquote type="cite" cite><br></blockquote>
<blockquote type="cite" cite>Take care,</blockquote>
<blockquote type="cite" cite>John</blockquote>
<blockquote type="cite" cite><br>
<blockquote type="cite" cite><font face="Helvetica"><br>
Avoiding (2) for many parts of TCP is the reason behind
"Trickles" (q.v.), a version of TCP that moves state to the
client side.<br>
If we had a "trickles" version of TCP (which could be done
on top of UDP), we could get all the functions of TCP with regard to
(2) without server-side overloading, other than that necessary for the
app itself.<br>
<br>
Of course, "trickles" is also faithful to all of TCP's
end-to-end congestion management and flow control, etc. None of
which is needed for the DNS application - in fact, that stuff
(slowstart, QBIC, ...) is really ridiculous to think about in the DNS
requirements space (as it is also in the HTML page serving space,
given RTT and bitrates we observe today, but I can't stop the a
academic hotrodders from their addiction to tuning terabyte FTPs from
unloaded servers for 5 % improvements over 10% lossy links).<br>
<br>
You all should know that a very practical fix to both close-wait and
syn-wait problems is to recognize that 500 *milli*seconds is a much
better choice for lost-packet timeouts these days - 250 ms would be
pretty good. Instead, we have a default designed so that a human
drinking coffee with one hand can drive a manual connection setup one
packet at a time using DDT on an ASR33 TTY while having a chat with a
co-worker. And the result is that we have DDoS
attacks...</font></blockquote>
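<div><br></div>
<div>Trickles itself is considerably more involved, but the flavor of
"move the state to the client" can be sketched over UDP as a
MAC-protected cookie (Python; the key handling, field layout, and two-minute
lifetime are all invented for the illustration): the server folds whatever
per-exchange state it needs into the cookie, hands it to the client, and
verifies it when it comes back, so nothing lingers server-side between
packets.</div>
<div><br></div>
<pre>
import hmac, hashlib, os, struct, time

SECRET = os.urandom(32)        # per-server key; rotation is out of scope here

def make_cookie(state: bytes) -> bytes:
    blob = struct.pack("!d", time.time()) + state
    tag = hmac.new(SECRET, blob, hashlib.sha256).digest()
    return blob + tag

def check_cookie(cookie: bytes, max_age: float = 120.0) -> bytes | None:
    blob, tag = cookie[:-32], cookie[-32:]
    if not hmac.compare_digest(tag, hmac.new(SECRET, blob, hashlib.sha256).digest()):
        return None                        # forged or corrupted
    stamp = struct.unpack("!d", blob[:8])[0]
    if time.time() - stamp > max_age:
        return None                        # stale; refuse rather than remember
    return blob[8:]                        # the state travels with the client
</pre>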
<blockquote type="cite" cite><font face="Helvetica"><br>
I understand the legacy problems, but really. If we still
designed modern HDTV signals so that a 1950 Dumont console TV could
show a Blu-Ray movie, we would not have advanced far.<br>
<br>
</font><br>
On 08/17/2009 10:16 AM, Joe Touch wrote:<br>
<blockquote type="cite" cite><tt>-----BEGIN PGP SIGNED
MESSAGE-----<br>
Hash: SHA1<br>
<br>
<br>
<br>
William Allen Simpson wrote:<br>
...<br>
</tt><br>
<blockquote type="cite" cite>
<blockquote type="cite" cite><tt>As I recall Andy Tanenbaum used to
point out that TP4 had an abrupt close<br>
and it worked. It does require somewhat more application
coordination</tt><br>
</blockquote>
<blockquote type="cite" cite><tt>but<br>
perhaps we can fake that by, say, retransmitting the last segment
and<br>
the FIN<br>
a few times to try to ensure that all data is received by the
client???<br>
<br>
</tt><br>
</blockquote>
</blockquote>
<blockquote type="cite" cite><tt>Cannot depend on the DNS client's OS
to be that smart. Has to be a server<br>
only solution. Or based on a new TCP option, that tells us both
ends are<br>
smart. (I've an option in mind.)<br>
</tt><br>
</blockquote>
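<div><br></div>
<div>One server-only knob that exists today and gives a TP4-style abrupt close
is SO_LINGER with a zero timeout (a sketch, not a recommendation): close() then
sends an RST instead of a FIN, so the closing side skips TIME-WAIT entirely.
The cost is exactly the coordination problem mentioned above: any reply bytes
the client has not yet read can be destroyed by the reset.</div>
<div><br></div>
<pre>
import socket, struct

def abrupt_close(conn: socket.socket) -> None:
    # l_onoff=1, l_linger=0: reset on close, no TIME-WAIT held on this side
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    conn.close()
</pre>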
</blockquote>
<blockquote type="cite" cite><tt><br>
There are two different problems here:<br>
<br>
1) server-maintained state clogging the server<br>
<br>
2) server TIME-WAIT slowing connections to a single address<br>
<br>
Both go away if the client closes the connection. If you are going
to<br>
modify both ends, then that's a much simpler place to start than a
TCP<br>
option (which will need to be negotiated during the SYN, and might
be<br>
removed/dropped by firewalls or NATs, etc.).<br>
<br>
FWIW, persistent connections help only #2. If it's the number of<br>
different clients connecting to a server that is locking up too much
server<br>
memory, then persistent connections will make the problem worse, not
better.<br>
<br>
Joe<br>
<br>
</tt></blockquote>
</blockquote>
</blockquote>
</blockquote>
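<div><br></div>
<div>Joe's "client closes the connection" point, rendered as a
client-side sketch (Python; this is not anyone's actual resolver code, and the
only protocol detail assumed is the DNS-over-TCP two-byte length prefix).
Because the client reads the whole reply and then closes first, the TIME-WAIT
entry sits on the client, and the server is left holding neither connection
state nor a 2MSL residue.</div>
<div><br></div>
<pre>
import socket, struct

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while n > len(buf):
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("server closed early")
        buf += chunk
    return buf

def tcp_query(server: str, message: bytes, port: int = 53) -> bytes:
    with socket.create_connection((server, port), timeout=5) as sock:
        sock.sendall(struct.pack("!H", len(message)) + message)
        (length,) = struct.unpack("!H", recv_exact(sock, 2))
        reply = recv_exact(sock, length)
    # the with-block has closed the socket on this side, so the client, not the
    # server, absorbs the TIME-WAIT state for this connection
    return reply
</pre>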
<div><br></div>
</body>
</html>