<br><br><div class="gmail_quote">On Mon, Feb 9, 2009 at 11:24 AM, David P. Reed <span dir="ltr"><<a href="mailto:dpreed@reed.com">dpreed@reed.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Before going too far in this direction, one should note that unicast traffic on the layer 2 transports commonly used in practice to carry Internet traffic has negligible loss rates, even on wireless networks such as 802.11.<br>
</blockquote><div><br>I guess you are restricting yourself to 'well behaved' 802.11 settings. Multi-hop networks (with outdoor links) and mobility scenarios (such as Wi-Fi from moving cars) do experience losses even with link-layer reliability and no loss of connection. <br>
<br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
The problem of differentiation arises when attempting to elide layer 2 functionality and run "TCP/IP on bare PHY". Otherwise "link loss rate" is a concept without much reality at layer 3. We don't run TCP/IP on bare PHY layers; we run it on layer 2 protocols over PHY layers, and those protocols always have high reliability today. Some layer 3 multicast protocols run on unreliable layer 2 multicast (such as 802.11 multicast), but TCP itself never uses multicast.<br>
<br>
Layer 3 losses are nearly always the result of *only* two very different phenomena: 1) buffer-overflow drops due to congestion queue management in routers/switches, or 2) layer 2 breaks in connectivity.<br>
<br>
Thinking about "link loss rates" is a nice academic math-modeling exercise for a world that doesn't exist. Perhaps the practical modeling differentiation should instead focus on these two phenomena, rather than on "link loss rates". The "connectivity break" case (which shows up in 802.11 when the NIC retransmits some number of times - 255?) doesn't have very good statistical models, certainly not the kind that can be baked into TCP's congestion/rate control algorithms. And that model is not likely to be Poisson, or any distribution easily characterized by a "rate parameter".</blockquote>
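<div><br>To make the "not Poisson" point concrete, here is a toy sketch (my own illustration, not anything David proposed): it compares a memoryless Bernoulli loss process, the kind a single "rate parameter" describes, with a two-state Gilbert-Elliott on/off model standing in for layer 2 connectivity breaks. Both can be tuned to the same average loss rate, yet their loss-burst structure is completely different, which is exactly why a rate parameter alone cannot differentiate the two phenomena. All parameter values below are arbitrary choices for illustration.<br>

```python
import random

def bernoulli_losses(n, p, rng):
    # Memoryless loss: each packet is dropped independently with prob. p,
    # as a "link loss rate" model assumes.
    return [rng.random() < p for _ in range(n)]

def gilbert_elliott_losses(n, p_gb, p_bg, rng):
    # Two-state on/off model: a Good state (no loss) and a Bad state
    # (every packet lost), a crude stand-in for a connectivity break.
    # Steady-state loss fraction is p_gb / (p_gb + p_bg).
    bad, out = False, []
    for _ in range(n):
        if bad:
            if rng.random() < p_bg:   # break heals
                bad = False
        else:
            if rng.random() < p_gb:   # break begins
                bad = True
        out.append(bad)
    return out

def burst_lengths(losses):
    # Lengths of consecutive-loss runs.
    bursts, run = [], 0
    for lost in losses:
        if lost:
            run += 1
        elif run:
            bursts.append(run)
            run = 0
    if run:
        bursts.append(run)
    return bursts
```

With, say, p = 0.02 for the Bernoulli model and (p_gb, p_bg) = (0.001, 0.05) for Gilbert-Elliott, both average roughly 2% loss, but the Bernoulli bursts average barely more than one packet while the on/off bursts average around 1/p_bg = 20 packets, so any TCP-side differentiator would have to look at loss correlation structure, not just rate.<br></div>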
<div><br>Fahad <br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div><div class="Wj3C7c"><br>
<br>
Detlef Bosau wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Hi.<br>
<br>
Some years ago, packet loss differentiation in TCP was a big issue. Does somebody happen to know the state of the art in this area?<br>
<br>
I'm particularly interested in those cases where we do _not_ have reliable knowledge about the loss rate on a link. (So, in particular, the CETEN<br>
approach by Allman and Eddy cannot be easily applied.)<br>
<br>
Thanks.<br>
<br>
Detlef<br>
<br>
</blockquote>
</div></div></blockquote></div><br>