<!doctype html public "-//W3C//DTD W3 HTML//EN">
<html><head><style type="text/css"><!--
blockquote, dl, ul, ol, li { padding-top: 0 ; padding-bottom: 0 }
--></style><title>Re: [e2e] TCP improved closing
strategies?</title></head><body>
<div>Correct me if I am wrong, but on the question of recycling
port-ids without accepting late packets, and of having to wait 2MSL
before doing so, I believe Dick Watson proved in 1981 that waiting
2MSL is both necessary and sufficient. Actually, it is 2MSL on one
side and 3MSL on the other.</div>
<div><br></div>
<div><font color="#000000">Watson, Richard. "Timer-Based Mechanisms
in Reliable Transport Protocol Connection Management,<b> Computer
Networks</b> 5 (1981) 47 - 56.</font></div>
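<div><br></div>
<div>To put the 2MSL wait in concrete terms, here is a
back-of-the-envelope sketch. The numbers are assumed for illustration
(MSL as suggested in RFC 793, the query rate from the survey figure
quoted below); they are not from Watson's paper. The point is simply
that the TIME_WAIT state a server must carry is roughly its
connection rate times the 2MSL window.</div>
<div><br></div>
<pre>
# Back-of-the-envelope sketch: TIME_WAIT load if every DNS query were
# carried on its own TCP connection.  All values assumed, for illustration.

MSL_SECONDS = 120            # assumed MSL; RFC 793 suggests 2 minutes
QUERIES_PER_DAY = 100e6      # survey figure quoted below

rate_per_second = QUERIES_PER_DAY / 86_400
time_wait_window = 2 * MSL_SECONDS
concurrent_time_wait = rate_per_second * time_wait_window

print(f"{rate_per_second:,.0f} connections/s")
print(f"{concurrent_time_wait:,.0f} sockets in TIME_WAIT at any moment")
# With these assumptions: roughly 1,157 connections/s and about
# 278,000 sockets held in TIME_WAIT, on average, not at peak.
</pre>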
<div><font color="#000000"><br></font></div>
<div>As far as TP4 is concerned, it has the same constraint.
There is no avoiding it.</div>
<div><br></div>
<div>Earlier, Belnes showed that a 5-way exchange is required to
deliver a single message reliably as long as there are no failures;
to be absolutely sure in the face of failures, timers are required,
and you are back to Watson's result.</div>
<div><br></div>
<div><font color="#000000">Belnes, Dag "Single Message
Communiction,"<b> IEEE Transactions on Communications</b>, Vol.
COM-24, No. 2 February, 1976, pp 190 - 194.</font><br>
<font color="#000000"></font></div>
<div>Watson shows that bounding three timers is necessary and
sufficient to ensure reliable transfer. Matta and his students
recently published a paper that looked at the single-message case
under harsh conditions and found that, even though TCP bounds the
same three timers, it is not as effective as Watson's approach.</div>
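<div><br></div>
<div>For concreteness, here is a rough sketch of how those three
bounds show up in an ordinary TCP stack. The values are assumed,
"typical" ones, not figures from Watson's or Matta's papers, and the
sum at the end is only an order-of-magnitude indication; the precise
bounds (the 2MSL/3MSL asymmetry above) are derived in the 1981
paper.</div>
<div><br></div>
<pre>
# The three timers Watson requires to be bounded, with assumed
# "typical" TCP values; illustrative only.

MPL = 120.0   # maximum packet lifetime (TCP's MSL), seconds
R   = 60.0    # longest a sender keeps retransmitting before giving up
A   = 0.5     # longest a receiver may delay an acknowledgement

# If all three are bounded, connection state need only be kept for a
# bounded interval and can then be discarded safely.  The sum below is
# an order-of-magnitude figure for that interval, not Watson's exact
# expression.
state_holding_time = MPL + R + A
print(f"state must persist on the order of {state_holding_time:.0f} s")
</pre>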
<div><br></div>
<div>Take care,</div>
<div>John Day</div>
<div><br></div>
<div><br></div>
<div>At 8:53 -0700 2009/08/13, rick jones wrote:</div>
<blockquote type="cite" cite>On Aug 13, 2009, at 5:58 AM, William
Allen Simpson wrote:
<blockquote type="cite" cite>AFAIK, the last survey (6-7 years ago)
was 100 million queries per day, so<br>
that's roughly 694,444 during each 2MSL period. Of course,
that's average,<br>
not peak (likely much more)....<br>
<br>
http://dns.measurement-factory.com/writings/wessels-pam2003-paper.pdf<br>
<br>
We're talking about Linux, Solaris, HP-UX, AIX, maybe some others. Do
all<br>
these servers have the capability to handle that many TCP
connections,<br>
rather than UDP connections?<br>
<br>
Do *any* of them?</blockquote>
</blockquote>
<blockquote type="cite" cite><br>
Modulo the variations in how persistent the connections were relative
to the transaction rate, and the differences in the metrics, you can
probably look at the archives of SPECweb96 (HTTP 1.0) SPECweb99 (1.1),
SPECweb99_SSL (new SSL session for each new TCP connection, IIRC, but
it has been a while) or even SPECweb2005/SPECweb2009 if you can decide
which among "Banking," "Ecommerce," and
"Support" workload is closest, to get an idea of how many
TCP connections servers can handle. During the heyday of web
server benchmarking, there was a lot of work done in minimizing the
overhead of TIME_WAIT tracking etc.<br>
<blockquote type="cite" cite>
<blockquote type="cite" cite>That said, the problem is fun.<br>
As I recall Andy Tanenbaum used to point out that TP4 had an abrupt
close<br>
and it worked. It does require somewhat more application
coordination but<br>
perhaps we can fake that by, say, retransmitting the last segment and
the FIN<br>
a few times to seek to ensure that all data is received by the
client???</blockquote>
</blockquote>
<blockquote type="cite" cite>Cannot depend on the DNS client's OS to
be that smart. Has to be a server<br>
only solution. Or based on a new TCP option, that tells us both
ends are<br>
smart. (I've an option in mind.)</blockquote>
</blockquote>
<blockquote type="cite" cite><br>
Isn't a new TCP option by definition depending on the client's OS to
be smart?<br>
<br>
rick jones<br>
Wisdom teeth are impacted, people are affected by the effects of
events</blockquote>
<div><br></div>
</body>
</html>