Nice to learn from your write-up as well. Just one more point where rwnd helps:

Imagine a middlebox that acts as a TCP relay/forwarder or a PEP. In this case, rwnd control provides adequate buffering on both ends (the read and write sockets), thus increasing the total throughput. Without rwnd, the optimal throughput for each individual connection would never be reached.

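To make that picture concrete, here is a minimal relay sketch in Python, assuming a plain blocking forwarder rather than a full PEP; the host name, ports, and chunk size are made up. Each socket's rwnd-driven flow control gives the relay independent buffering on the read and write sides, so a slow leg throttles only its own connection:

```python
# Minimal TCP relay sketch (not a real PEP): bytes are copied between two
# sockets, and the kernel's rwnd-driven flow control on each socket provides
# independent buffering on the read and write sides.
# UPSTREAM_HOST, UPSTREAM_PORT, and LISTEN_PORT are hypothetical values.
import socket
import threading

UPSTREAM_HOST, UPSTREAM_PORT = "server.example.com", 9000
LISTEN_PORT = 8000

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes src -> dst; the blocking send stalls when the peer's
    advertised window (and the local send buffer) fills up, so a slow
    side slows down only its own connection."""
    while True:
        chunk = src.recv(65536)
        if not chunk:
            dst.shutdown(socket.SHUT_WR)
            return
        dst.sendall(chunk)   # blocks under rwnd back-pressure

def relay(client: socket.socket) -> None:
    upstream = socket.create_connection((UPSTREAM_HOST, UPSTREAM_PORT))
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)

with socket.create_server(("", LISTEN_PORT)) as srv:
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=relay, args=(conn,), daemon=True).start()
```
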
-Paddy Ganti

On Fri, Jun 27, 2008 at 9:13 PM, Xiaoliang David Wei <weixl@caltech.edu> wrote:

I really enjoyed reading this thread -- lots of wisdom and history in what all the gurus have said. :)

I will add my two cents from my experience of playing with rwnd:

1. rwnd is not critical for the correctness of TCP. So, yes, we can remove rwnd without breaking TCP's correctness.

The TCP algorithm is robust enough to guarantee reliability and avoid congestion. If we remove rwnd, the receive buffer will just be viewed as part of (the last hop of) the network model in the congestion control algorithm, and the receiver dropping packets (due to lack of buffer) will be a congestion signal telling the sender to slow down. This will work, though the sender now has to "guess" the receiver's buffer under the *same* assumptions it makes about network congestion, and the guessing function will be the same congestion control algorithm, such as AIMD or whatever loss-based algorithm is in use -- not necessarily a sawtooth if your algorithm is not AIMD. So removing rwnd control will be OK (maybe less efficient), and it works well when the receiving application is not the bottleneck, or when the receiving application's processing pattern is similar to the network's.

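As a toy illustration of that guessing (not real TCP code): assume a hidden receive buffer of 40 packets and an application that drains 25 packets per RTT, both numbers invented. The sender can only probe the buffer with AIMD and ends up sawtoothing around it:

```python
# Toy per-RTT simulation: without rwnd, receiver-buffer drops are the only
# signal, so the sender's window sawtooths around the hidden buffer size
# instead of tracking the application's actual capacity.
RECV_BUFFER_PKTS = 40      # hidden receiver buffer (packets), invented
APP_DRAIN_PER_RTT = 25     # packets the application consumes per RTT, invented

cwnd, backlog = 1.0, 0
for rtt in range(1, 31):
    arriving = int(cwnd)
    space = RECV_BUFFER_PKTS - backlog
    lost = max(0, arriving - space)            # overflow at the receiver
    backlog = min(RECV_BUFFER_PKTS, backlog + arriving)
    backlog = max(0, backlog - APP_DRAIN_PER_RTT)
    if lost:
        cwnd = max(1.0, cwnd / 2)              # loss treated as congestion
    else:
        cwnd += 1.0                            # additive increase
    print(f"RTT {rtt:2d}: cwnd={cwnd:5.1f}  lost={lost}")
```
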
2. Why would we want to remove rwnd control (John's question)? rwnd has its own goodness and badness:

Pro: rwnd control is very good at avoiding buffer overflow -- no loss will happen for lack of receive buffer (unless the OS reneges on some of the buffer it has advertised).

Con: However, rwnd is not very good at using the buffer efficiently, especially in small-buffer cases. With rwnd control, we have to allocate a BDP's worth of buffer at the receiver to fully utilize the network capacity. However, this BDP's worth of buffer is not always necessary at all -- think about an extreme case in which the receiving application has a much larger processing capacity, and each packet arriving at the receiver can be consumed immediately: we only need one packet's worth of buffer to hold the received packet. But with rwnd control, the sender will send at most rwnd worth of data each RTT, even though there is no queue build-up at the receiver at all! (As David Reed pointed out, rwnd should indicate the receiving app's processing capacity, but unfortunately the current way of indicating it is through the available buffer size, which is not always an accurate indication.)

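A back-of-the-envelope version of that buffer requirement, using an assumed 100 Mbit/s path and an 80 ms RTT (example numbers only):

```python
# With rwnd control, the receive buffer must cover bandwidth * RTT (the BDP)
# to keep the pipe full, even if the application drains every packet instantly.
bandwidth_bps = 100e6          # assumed 100 Mbit/s path
rtt_s = 0.080                  # assumed 80 ms round-trip time
bdp_bytes = bandwidth_bps / 8 * rtt_s
print(f"BDP = {bdp_bytes / 1e6:.2f} MB of receive buffer needed")
# -> BDP = 1.00 MB, versus one packet's worth if the app is never the bottleneck.
```
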
This trouble was particularly obvious with the majority of OS implementations of the last generation. As much research (e.g., Web100) pointed out a few years ago, most TCP connections are bounded by the very small default buffers of Windows and also Linux. While it is easy to change the server's send buffer, the clients' receive buffers (which usually live on millions of customers' Windows boxes) are hard to change. So, if we could remove rwnd control (e.g., have the sender ignore rwnd and rely only on congestion control), we might improve connection speed without extra loss, provided the receivers can process all the packets quickly. I remember some of the network enhancement units on the market actually offer such a feature (along with other features to reduce the negative effects of ignoring rwnd). This reason, however, will probably be weakened as Vista and Linux 2.6 both come with buffer auto-tuning.

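For reference, this is roughly how a receiving application can inspect or enlarge its own socket buffer (and hence the ceiling on the window it advertises); a sketch of the local knob only, since the remote client boxes discussed above are exactly the part that is hard to change:

```python
# Inspect and raise the local receive buffer; the kernel may adjust or cap
# the requested value (e.g., via net.core.rmem_max on Linux).
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
default = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"default receive buffer: {default} bytes")

s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
print(f"after setsockopt: {s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)} bytes")
```
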
3. rwnd is very important for the responsiveness and adaptability of TCP. So, no, please don't remove rwnd until you have a good solution for all TCP usages. :)

TCP is used almost universally for reliable traffic. Bulk traffic, where the network is the bottleneck, usually satisfies the above condition that the receiver is not the bottleneck. However, there are also many cases in which the receiver is slow, or the receiver's processing pattern is completely different from a network router's (and hence the congestion control algorithm's estimation goes completely off).

Take a networked printer as an example. When a networked printer runs out of paper, its data processing capability quickly drops to zero and stays there for minutes; then, after the paper is refilled, its capacity quickly jumps back to normal. This on-off pattern is very different from most network congestion, and I don't see how TCP congestion control algorithms could handle such a case responsively. In this case, rwnd control has its own advantage: great responsiveness (through preventive control, explicit notification when the buffer opens up, etc.).

Note that to achieve such great responsiveness, rwnd control is designed to be very conservative and preventive -- the sender (at this moment) can at most send data up to whatever the receiver (half an RTT ago) said it could receive. This conservativeness guarantees that no packet will be dropped even if the application completely shuts down its processing right after announcing the rwnd. ECN and other explicit congestion control mechanisms provide no such guarantee and cannot achieve the same responsiveness to a sudden capacity shutdown.

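A small loopback sketch of that guarantee, with an arbitrary port and an invented 30-second "out of paper" pause: once the receiving application stops reading, the advertised window closes and the sender simply blocks, losing nothing:

```python
# Toy "printer" on loopback: the receiver stops calling recv(), the kernel's
# advertised window shrinks to zero, and the sender's sendall() blocks instead
# of dropping data. Port number and timings are arbitrary.
import socket
import threading
import time

PORT = 9100  # arbitrary port for the toy printer

def printer() -> None:
    with socket.create_server(("127.0.0.1", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            time.sleep(30)              # out of paper: stop reading entirely
            while conn.recv(65536):     # paper refilled: drain the backlog
                pass

threading.Thread(target=printer, daemon=True).start()
time.sleep(0.5)                         # let the listener come up

sender = socket.create_connection(("127.0.0.1", PORT))
chunk = b"x" * 65536
sent = 0
for _ in range(200):
    # Once the receive and send buffers fill, the peer advertises a zero
    # window and sendall() blocks -- no loss, no retransmission storm.
    sender.sendall(chunk)
    sent += len(chunk)
    print(f"queued {sent} bytes without any loss")
sender.close()
```
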
I think there are a lot of other applications that have very different processing patterns, and it is very hard for one algorithm to predict all of these patterns efficiently.

So, my understanding here is that:
A. If the receiver is very fast, we don't need rwnd control at all;
B. If the receiver's processing pattern is similar to network congestion and TCP congestion control does a good job, we don't need rwnd either;
C. The two "if"s in A and B may hold in some cases, but not in all usage cases. I don't expect TCP to work as universally well as it currently does if we don't have rwnd control.

-David

On Thu, Jun 26, 2008 at 12:38 AM, Michael Scharf <michael.scharf@ikr.uni-stuttgart.de> wrote:

Hi,

Maybe this is a stupid question: is there really a need for TCP flow control, i.e., for signaling the receiver window back to the sender?

It is well known that TCP realizes both congestion control and flow control, and that a TCP sender therefore maintains two different windows (cwnd and rwnd). Obviously, the congestion control protects the path from overload, while the flow control protects the receiver from overload.

However, I have some difficulty understanding why the flow control part and the receiver advertised window are actually needed.

Instead of reducing rwnd, an overloaded receiver running out of buffer space could simply drop (or mark) newly arriving packets, or just refrain from sending acknowledgements. As a reaction to this, the sender probably times out, and TCP congestion control significantly reduces the sending rate, which reduces the load on the receiver, too.

To my understanding, a fine-grained receiver advertised window is much more efficient if the buffer sizes are on the order of only a few packets. But I guess that most of today's Internet hosts have larger buffers, and therefore they hardly need fine-grained flow control.

Are there reasons why TCP can't just use its congestion control to handle slow receivers? Am I overlooking some aspect? Any hint or reference would be welcome.

Michael

--
Xiaoliang "David" Wei
http://davidwei.org
***********************************************