<div dir="ltr"><div><div>Hi£¬<br><br></div> I found two papers which state that with ideal fountain code,there is generally no congestion collapse.Efficiency remains higher than 90% for most network topologiesas long as maximum source rates are less than link capacity by one or two orders of magnitude. Moreover, a simple fair drop policy enforcing fair sharing at flow level is sufficient to guarantee 100% efficiency in all cases. I refer to several papers on the congestion problem when fountain code is used. [1] [2]<br>

[1] T. Bonald, M. Feuillet, et al. Is the 'Law of the Jungle' Sustainable for the Internet? In Proc. IEEE INFOCOM, 2009.
[2] B. Raghavan and A. Snoeren. Decongestion control. In Proc. HotNets-V, 2006.
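
As a toy illustration of that last fair-drop claim (an illustrative sketch only, not the model or policy from [1]; the flow demands and capacity below are made up): a per-flow fair drop policy at a single bottleneck effectively limits each flow to its max-min fair share, and with an ideal fountain code every packet that survives the dropper carries useful information, so the full link capacity becomes goodput.

# Toy illustration (not the model from [1]): the per-flow rates that a
# flow-level fair drop policy would enforce at a single link, computed by
# progressive filling (max-min fairness). With an ideal fountain code,
# every packet that survives the dropper is useful to the receiver, so the
# link's delivered goodput equals the sum of these shares.

def max_min_shares(demands, capacity):
    """Return the max-min fair allocation of `capacity` among flows
    whose offered rates are `demands`."""
    shares = [0.0] * len(demands)
    remaining = capacity
    # Satisfy the smallest demands first; split what is left equally.
    order = sorted(range(len(demands)), key=lambda i: demands[i])
    for pos, i in enumerate(order):
        fair = remaining / (len(demands) - pos)
        shares[i] = min(demands[i], fair)
        remaining -= shares[i]
    return shares

if __name__ == "__main__":
    capacity = 100.0                     # link capacity, arbitrary units
    demands = [5.0, 20.0, 400.0, 400.0]  # offered source rates, two of them greedy
    shares = max_min_shares(demands, capacity)
    print("fair shares:", shares)        # -> [5.0, 20.0, 37.5, 37.5]
    print("delivered  :", sum(shares))   # the link is fully and usefully used
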
<br><div class="gmail_quote">2013/3/7 RAMAKRISHNAN, KADANGODE (K. K.) <span dir="ltr"><<a href="mailto:kkrama@research.att.com" target="_blank">kkrama@research.att.com</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">

I want to second what Jon and Keshav say with regard to the assistance provided by coding, but also the limitations that arise in an environment without effective congestion control.

We'd explored the benefit of coding (admittedly simple R-S codes) at the end-to-end transport layer to complement TCP, so as to help tolerate losses on wireless links, in our work on LT-TCP.
We did see the benefit of coding in extending the dynamic range of transport protocols to tolerate higher loss rates, but only up to a point. Beyond that, you see the same results as in an uncontrolled environment, where losses (and the resulting wasted work) begin to dominate the utilization of the resources in the network. That is without even considering the delays that result from excessive losses, which force the receiver to wait before it can reconstruct a block. There is still the need for reasonable congestion control mechanisms to keep from causing excessive losses. And Keshav's point about the unfairness across flows in the short term, and the eventual result of everyone losing out, is certainly important to keep in mind.
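
A rough numerical illustration of that "up to a point" behaviour (a sketch assuming independent packet losses and an idealized (n, k) block code; this is not the actual LT-TCP mechanism, and the block size and redundancy below are made up):

# Rough sketch of why coding extends the tolerable loss range only up to a
# point. Assumes i.i.d. packet loss and an idealized (n, k) block code that
# recovers a block iff at least k of its n packets arrive; an illustration,
# not the LT-TCP design.
from math import comb

def block_delivery_prob(n, k, p):
    """Probability that at least k of n packets survive loss rate p."""
    return sum(comb(n, i) * (1 - p) ** i * p ** (n - i) for i in range(k, n + 1))

def goodput_fraction(n, k, p):
    """Useful data delivered per unit of capacity spent: the k/n code rate,
    discounted by blocks that still fail and must be redone."""
    return (k / n) * block_delivery_prob(n, k, p)

if __name__ == "__main__":
    n, k = 60, 50   # 10 repair packets per 50 data packets
    for p in (0.01, 0.05, 0.10, 0.15, 0.20, 0.30):
        print(f"loss {p:4.0%}: block delivery {block_delivery_prob(n, k, p):6.3f}, "
              f"goodput fraction {goodput_fraction(n, k, p):5.3f}")
    # Below the (n-k)/n design point the code hides losses almost completely;
    # beyond it, delivery probability collapses and most transmitted packets
    # become wasted work (and the receiver waits longer and longer per block).
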
Finally, I heartily agree with Jon's last point regarding ECN...

--
K. K. Ramakrishnan   Email: kkrama@research.att.com
AT&T Labs-Research, Rm. A161   Tel: (973) 360-8764
180 Park Ave, Florham Park, NJ 07932   Fax: (973) 360-8871
URL: http://www.research.att.com/people/Ramakrishnan_Kadangode_K/index.html

-----Original Message-----
From: end2end-interest-bounces@postel.org [mailto:end2end-interest-bounces@postel.org] On Behalf Of Jon Crowcroft
Sent: Wednesday, March 06, 2013 10:03 AM
To: shun cai
Cc: Jon.Crowcroft@cl.cam.ac.uk; end2end-interest@postel.org
Subject: Re: [e2e] Why do we need congestion control?

ok - i see your point - this is true if your sources have a peak rate they can send at

this could be the line rate of their uplink -
that would be embarrassingly bad
(see keshav's followup on escalating costs of coding)
or the rate they can get data off disk (which could be as bad, but might be lower),
or an application-specific rate (e.g. streamed video), for which your suggestion is
quite reasonable....

but for data sources which are greedy
(TCP with arbitrarily large files)
you need a way to tell sources a non-wasteful way of sending -

and what is more,
there isn't just one set of sources in one location
and a set of sinks in one other location,
so the system of senders sending at
unconstrained rates on a finite-speed net with high-speed edges
would create multiple bottlenecks,
which would exponentiate the problem
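
A toy fluid-model sketch of that multiple-bottleneck effect (illustrative only, with made-up numbers; it assumes each congested link simply drops every flow in proportion to its offered rate, i.e. no fair-drop policy):

# Toy fluid model of uncontrolled senders on a two-link "parking lot" path.
# Flow A crosses link 1 then link 2; flow B uses only link 1; flow C uses
# only link 2. Each congested link drops every flow in proportion to its
# offered rate (plain random drop, no per-flow fairness). Illustration only.

CAPACITY = 10.0  # capacity of each link, arbitrary units

def squeeze(offered, capacity=CAPACITY):
    """Scale a dict of per-flow rates down proportionally if they exceed capacity."""
    total = sum(offered.values())
    if total <= capacity:
        return dict(offered)
    return {flow: rate * capacity / total for flow, rate in offered.items()}

def parking_lot(source_rate):
    # Link 1 carries A and B; whatever A gets through then competes on link 2 with C.
    after_link1 = squeeze({"A": source_rate, "B": source_rate})
    after_link2 = squeeze({"A": after_link1["A"], "C": source_rate})
    goodput = {"A": after_link2["A"], "B": after_link1["B"], "C": after_link2["C"]}
    # Capacity on link 1 spent carrying A's packets that die later on link 2:
    wasted_on_link1 = after_link1["A"] - after_link2["A"]
    return goodput, wasted_on_link1

if __name__ == "__main__":
    for r in (1.0, 5.0, 20.0, 100.0, 1000.0):
        goodput, wasted = parking_lot(r)
        print(f"source rate {r:7.1f}: goodput "
              f"A={goodput['A']:.2f} B={goodput['B']:.2f} C={goodput['C']:.2f}, "
              f"link-1 capacity wasted on A = {wasted:.2f}")
    # As the source rate grows, flow A (the one crossing both bottlenecks) is
    # beaten down towards zero, and an ever larger share of link 1 is spent on
    # packets that are dropped downstream anyway.
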
coding isn't magic - it's info theory - if you lose info
you must add redundancy - coding does it pre-emptively
rather than post hoc the way ARQ/retransmission does,
which saves you time, but in the end can't defer the inevitable
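
A back-of-the-envelope way to see this (a sketch assuming independent losses at rate p and idealized behaviour on both sides): whether redundancy is added reactively by ARQ or pre-emptively by a rateless code, delivering k packets' worth of information still costs roughly k/(1-p) packets of capacity; what coding buys is fewer feedback round trips, not capacity.

# Back-of-the-envelope: capacity cost of delivering k data packets over a
# link with i.i.d. loss rate p, for (a) idealized ARQ that retransmits each
# lost packet, and (b) an idealized rateless/fountain code that keeps sending
# encoded symbols until k of them have arrived. Sketch with made-up numbers.

def arq_expected_sends(k, p):
    """Expected packets sent by ideal ARQ: each packet needs 1/(1-p) tries."""
    return k / (1.0 - p)

def fountain_expected_sends(k, p):
    """Expected symbols sent by an ideal rateless code so that k arrive."""
    return k / (1.0 - p)   # same capacity cost; the losses still have to be paid for

def arq_expected_rounds(k, p):
    """Rough estimate of feedback rounds ideal ARQ needs if it can retransmit
    all outstanding packets each RTT: one round per 'generation' of losses."""
    rounds, outstanding = 0, float(k)
    while outstanding >= 1.0:
        rounds += 1
        outstanding *= p   # the expected fraction lost again this round
    return rounds

if __name__ == "__main__":
    k = 1000
    for p in (0.01, 0.1, 0.3):
        print(f"p={p:4.2f}: ARQ sends ~{arq_expected_sends(k, p):7.1f} pkts over "
              f"~{arq_expected_rounds(k, p)} feedback rounds; "
              f"fountain sends ~{fountain_expected_sends(k, p):7.1f} symbols "
              f"with no per-packet feedback")
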
if you look at digital fountain systems for video,
they pick a likely loss rate, pick a tolerable picture degradation rate,
and use those to derive/choose a code

the assumption is that the losses are capped because most other systems
are backing off just like TCP - if you break that assumption,
you'll break the coding parameter choice
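
A sketch of that parameter choice, and of what happens when the loss assumption is violated (an illustration using an idealized (n, k) block code, independent losses and made-up numbers; not how any particular digital fountain codec actually picks its parameters):

# Sketch: choose the redundancy of an idealized (n, k) block code for a
# designed-for loss rate, then see what the same code delivers if the real
# loss rate is higher because other traffic has stopped backing off.
from math import comb

def block_failure_prob(n, k, p):
    """Probability that fewer than k of n packets survive i.i.d. loss rate p."""
    return sum(comb(n, i) * (1 - p) ** i * p ** (n - i) for i in range(0, k))

def choose_n(k, p_design, target_failure=1e-3, n_max=10_000):
    """Smallest n such that an (n, k) code meets the target failure
    probability at the designed-for loss rate."""
    n = k
    while block_failure_prob(n, k, p_design) > target_failure:
        n += 1
        if n > n_max:
            raise ValueError("no feasible n below n_max")
    return n

if __name__ == "__main__":
    k, p_design = 200, 0.05          # e.g. one video block, designed for 5% loss
    n = choose_n(k, p_design)
    print(f"designed for {p_design:.0%} loss: n={n} "
          f"({(n - k) / k:.0%} redundancy), "
          f"block failure {block_failure_prob(n, k, p_design):.1e}")
    for p_actual in (0.05, 0.10, 0.15, 0.25):
        print(f"actual loss {p_actual:4.0%}: block failure "
              f"{block_failure_prob(n, k, p_actual):.3f}")
    # The code chosen for 5% loss becomes effectively useless once the real
    # loss rate drifts far above the design point.
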
anyhow, roll out ECN - much betterer technology :)
congestion avoidance without keeping queues filled everywhere...
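
For reference, a minimal toy model of the ECN idea (a sketch, not the RFC 3168 machinery): the bottleneck marks packets instead of dropping them once its standing queue passes a threshold, and the sender halves its window when the mark is echoed back, so queues stay small without sustained loss.

# Toy discrete-time sketch of ECN-style congestion avoidance: the router
# marks (rather than drops) packets when its standing queue exceeds a
# threshold, and the sender halves its window once per marked round trip.
# Illustration only; real ECN uses IP/TCP header bits and RED-style marking.

CAPACITY_PER_RTT = 50      # packets the bottleneck can serve per round trip
MARK_THRESHOLD = 20        # standing queue (packets) above which packets get marked
BUFFER = 200               # physical buffer size (packets)

def simulate(rtts=200):
    cwnd, queue, max_queue = 1.0, 0, 0
    for t in range(rtts):
        queue = min(queue + int(cwnd), BUFFER)       # enqueue this round's window
        queue = max(queue - CAPACITY_PER_RTT, 0)     # serve up to capacity this RTT
        max_queue = max(max_queue, queue)
        if queue > MARK_THRESHOLD:                   # standing queue: set CE mark
            cwnd = max(cwnd / 2.0, 1.0)              # sender halves on the echoed mark
            print(f"rtt {t:3d}: CE mark, cwnd cut to {cwnd:5.1f}, queue {queue}")
        else:
            cwnd += 1.0                              # additive increase otherwise
    print(f"buffer size {BUFFER}, largest standing queue ever seen: {max_queue}")

if __name__ == "__main__":
    simulate()
    # The queue oscillates around the marking threshold and the buffer never
    # fills: congestion is signalled without loss or large standing queues.
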