Dear David,

I think that using rwnd is important not only because it controls the
receiver buffer more efficiently (otherwise the receiver buffer would be
controlled through cwnd, like any other router buffer on the path). I
think rwnd also plays the important role of setting a cap on probing,
which indeed ends when cwnd = rwnd. This is very important in practice:
it avoids leaving networks on the verge of congestion, which is
detrimental for many reasons.
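To make that cap concrete, here is a minimal sketch (my own
illustration, not taken from any real stack) of the standard rule that
a sender keeps at most min(cwnd, rwnd) bytes in flight, so that window
probing simply stops once cwnd reaches rwnd:

    MSS = 1460  # maximum segment size, in bytes

    def usable_window(cwnd, rwnd):
        # The sender may keep at most min(cwnd, rwnd) bytes in flight.
        return min(cwnd, rwnd)

    def grow_cwnd_on_ack(cwnd, rwnd, ssthresh):
        # Usual probing: slow start below ssthresh, then roughly one
        # MSS per RTT. rwnd acts as a hard cap: once cwnd == rwnd,
        # growth stops and the sender no longer pushes the path toward
        # loss just to discover capacity.
        if cwnd < ssthresh:
            cwnd += MSS                  # slow start
        else:
            cwnd += MSS * MSS // cwnd    # congestion avoidance
        return min(cwnd, rwnd)           # probing ends at cwnd = rwnd
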
Best,
Saverio
On Sat, Jun 28, 2008 at 6:13 AM, Xiaoliang David Wei <weixl@caltech.edu> wrote:
I really enjoyed reading this thread -- lots of wisdom and history from
what all the gurus have said. :)

I would add two cents from my experience of playing with the rwnd:

1. rwnd is not critical for the correctness of TCP. So, yes, we can
remove rwnd without breaking TCP's correctness.
The TCP algorithm is very robust in guaranteeing reliability and
avoiding congestion. If we remove rwnd, the receiver buffer is simply
viewed as part of (the last hop of) the network in the congestion
control algorithm, and the receiver dropping a packet (for lack of
buffer) becomes a congestion signal telling the sender to slow down.
This will work, though the sender now has to "guess" the receiver's
buffer under the *same* assumptions it makes about network congestion,
and the guessing function will be the same congestion control
algorithm, AIMD or whatever other loss-based algorithm -- not
necessarily a sawtooth if your algorithm is not AIMD. So removing rwnd
control would be OK (if perhaps less efficient), and it works well when
the receiving application is not the bottleneck, or when the receiving
application's processing pattern is similar to the network's.
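As a toy illustration (an improvised model, not anyone's real code):
once rwnd is gone, a drop at the receiver's full buffer feeds the
sender exactly the same loss signal as a drop inside the network, so
AIMD "discovers" the receiver's capacity the same way it discovers
link capacity:

    def aimd_round(cwnd, path_capacity, receiver_free_buffer):
        # One RTT of a loss-based sender with rwnd removed (units:
        # packets per RTT). Packets exceeding EITHER the path capacity
        # or the receiver's free buffer are dropped; the sender cannot
        # tell the two causes apart.
        deliverable = min(path_capacity, receiver_free_buffer)
        if cwnd > deliverable:
            return max(1, cwnd // 2)   # multiplicative decrease on loss
        return cwnd + 1                # additive increase otherwise

    # Whether the bottleneck is the path (capacity 50) or the receiver
    # (free buffer 50), cwnd settles into the same sawtooth around 50.
    cwnd = 1
    for _ in range(100):
        cwnd = aimd_round(cwnd, path_capacity=100, receiver_free_buffer=50)
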
2. Why do we want to remove rwnd control (John's question)? rwnd has
its own goodness and badness:

Pro: rwnd control is very good at avoiding buffer overflow -- no loss
will happen for lack of receive buffer (unless the OS reneges on some
of the buffer it advertised).
Con: However, rwnd is not very good at using the buffer efficiently,
especially in small-buffer cases. With rwnd control, we have to
allocate a BDP (bandwidth-delay product) worth of buffer at the
receiver to fully utilize the network capacity. However, this BDP
worth of buffer is not always necessary at all. Think about an extreme
case in which the receiving application has a much larger processing
capacity, so that each packet arriving at the receiver side can be
immediately consumed by the application: we then need only one
packet's worth of buffer to hold the received packet. But with rwnd
control, the sender will only send a maximum of rwnd packets each RTT,
even though there is no queue-up at the receiver side at all! (As
David Reed pointed out, rwnd should indicate the receiving app's
processing capacity, but unfortunately, the current way of indicating
it is through the available buffer size, which is not always an
accurate indication.)
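To put numbers on the buffer requirement (with illustrative figures of
my own choosing): throughput under rwnd control is capped at rwnd/RTT,
so filling the pipe needs rwnd >= bandwidth x RTT even when the
application drains every packet instantly:

    # rwnd must cover the bandwidth-delay product to keep the pipe full.
    bandwidth = 100e6 / 8        # 100 Mbit/s path, in bytes per second
    rtt = 0.1                    # 100 ms round-trip time

    bdp = bandwidth * rtt        # 1.25 MB of receive buffer needed just
                                 # to fill the pipe, even if the app
                                 # never queues a single packet

    rwnd = 64 * 1024             # a classic small default receive buffer
    cap = rwnd / rtt             # ~655 KB/s, i.e. ~5.2 Mbit/s: the most
                                 # this connection can achieve on the
                                 # 100 Mbit/s path, regardless of cwnd
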
This trouble was particularly obvious with the majority of
last-generation OS implementations. As much research (e.g. web100)
pointed out a few years ago, most TCP connections are bounded by the
very small default buffers of Windows and also of Linux. While it is
easy to change the server's sending buffer, the clients' receive
buffers (usually lying in millions of customers' Windows boxes) are
hard to change. So, if we can remove the rwnd control (e.g. have the
sender ignore rwnd and rely only on congestion control), we might
improve the connection speed without even incurring extra loss,
provided the receivers can process all the packets quickly. I remember
some of the network enhancement units on the market actually offer
such a feature (along with other features to reduce the negative
effects of ignoring rwnd). This reason, however, will probably weaken
as Vista and Linux 2.6 both come with buffer auto-tuning.
3. rwnd is very important for the responsiveness and adaptability of
TCP. So, no, please don't remove rwnd until you have a good solution
for all TCP usages. :)
TCP is used almost universally for all reliable traffic. Bulk traffic
for which the network is the bottleneck usually satisfies the above
condition that the receiver is not a bottleneck. However, there are
also many cases where the receiver is slow, or where the receiver's
processing pattern is completely different from a network router's
(and hence the congestion control algorithm's estimate goes completely
off).
Just to give an example: a networked printer. When a networked printer
runs out of paper, its data processing capability quickly drops to
zero and stays there for minutes; then, after the paper is refilled,
its capacity quickly jumps back to normal. This on-off pattern is very
different from most network congestion, and I don't see how TCP
congestion control algorithms could handle such a case responsively.
Here, rwnd control has its own advantage: great responsiveness
(through preventive control, explicit notification when the buffer
opens up, etc.).
Note that to achieve such great responsiveness, rwnd control is
designed to be very conservative and preventive -- the sender (at this
moment) can send at most the data that the receiver (half an RTT ago)
could receive. This conservativeness guarantees that no packet will be
dropped even if the application completely shuts down its processing
right after announcing the rwnd. ECN and other explicit congestion
control schemes provide no such guarantee and cannot achieve the same
responsiveness to a sudden capacity shutdown.
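A minimal sketch of that machinery (a toy model with hypothetical
names, simplified from what real stacks do): the sender never has more
than the advertised window outstanding, freezes on a zero window, and
resumes the instant a window update arrives, with 1-byte persist
probes guarding against a lost update:

    class Sender:
        def __init__(self):
            self.snd_una = 0    # oldest unacknowledged byte
            self.snd_nxt = 0    # next byte to be sent
            self.rwnd = 0       # most recently advertised window

        def can_send(self, size):
            # Conservative rule: bytes in flight plus new data must fit
            # in the window the receiver advertised half an RTT ago, so
            # nothing it did not explicitly allow is ever in flight --
            # hence no overflow loss even if the app stops dead.
            return (self.snd_nxt - self.snd_una) + size <= self.rwnd

        def on_ack(self, ack, rwnd):
            self.snd_una = ack
            self.rwnd = rwnd    # a window update with rwnd > 0 reopens
                                # the flow at once: no timeout, no loss

        def persist_probe(self):
            # While rwnd == 0, periodically send a 1-byte probe so a
            # lost window update cannot deadlock the connection.
            return b"?" if self.rwnd == 0 else None

    s = Sender()
    s.on_ack(0, 0)                 # printer out of paper: rwnd = 0
    assert not s.can_send(1460)    # sender freezes; nothing is dropped
    s.on_ack(0, 65535)             # paper refilled: window update
    assert s.can_send(1460)        # ...and the sender resumes at once
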
I think there are a lot of other applications with very different
processing patterns, and it is very hard for one algorithm to predict
all these patterns efficiently.
So, my understanding here is that:
A. if the receiver is very fast, we don't need rwnd control at all;
B. if the receiver's processing pattern is similar to network
congestion, and TCP congestion control does a good job, we don't need
rwnd either;
C. the two "if"s in A and B may hold in some cases, but not in all
usage cases. I don't expect TCP to work as universally well as it
currently does if we don't have rwnd control.
-David
On Thu, Jun 26, 2008 at 12:38 AM, Michael Scharf
<michael.scharf@ikr.uni-stuttgart.de> wrote:
Hi,

maybe this is a stupid question: Is there really a need for TCP flow
control, i.e., for signaling the receiver window back to the sender?

It is well known that TCP realizes both congestion control and flow
control, and that a TCP sender therefore maintains two different
windows (cwnd and rwnd). Obviously, the congestion control protects
the path from overload, while the flow control protects the receiver
from overload.

However, I have some difficulty understanding why the flow control
part and the receiver-advertised window are actually needed.

Instead of reducing rwnd, an overloaded receiver running out of buffer
space could simply drop (or mark) newly arriving packets, or just
refrain from sending acknowledgements. As a reaction to this, the
sender probably times out, and TCP congestion control significantly
reduces the sending rate, which reduces the load on the receiver, too.

To my understanding, a fine-granular receiver-advertised window is
much more efficient if the buffer sizes are on the order of a few
packets only. But I guess that most of today's Internet hosts have
larger buffers, and therefore they hardly need fine-granular flow
control.

Are there reasons why TCP can't just use its congestion control to
handle slow receivers? Do I overlook some aspect? Any hint or
reference would be welcome.

Michael
--
Xiaoliang "David" Wei
http://davidwei.org