<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#ffffff">
<font face="Helvetica, Arial, sans-serif">If the bottleneck router has
too much buffering, and there are at least some users who are infinite
data sources (read big FTP), then all users will suffer congestion at
the bottleneck router proportional to the buffer size, *even though*
the link will be "fully utilized" and therefore "economically
maximized".<br>
<br>
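A back-of-the-envelope sketch of that proportionality, with made-up
numbers (a 1 Gbps bottleneck and a 128 MB buffer, chosen purely for
illustration; a minimal Python sketch, not a measurement):<br>
<pre>
# Worst-case standing-queue delay when big-FTP flows keep the buffer full.
link_bps = 1_000_000_000     # bottleneck link rate, bits per second (assumed)
buffer_bytes = 128 * 2**20   # buffer size the full flows keep occupied (assumed)

queueing_delay_s = buffer_bytes * 8 / link_bps
print(f"added latency for every user: {queueing_delay_s * 1000:.0f} ms")
# ~1074 ms here; halve the buffer and the added latency halves with it.
</pre>
<br>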
This is the "end to end" list, not the "link maximum utilization"
list. And a large percentage of end-to-end application requirements
depend on keeping latency on bottleneck links very low, in order to
make endpoint apps responsive - in their UIs, in the control loops that
respond quickly and smoothly to traffic load changes, etc.<br>
<br>
Analyses that focus 100% on maximizing static throughput and
utilization leave out some of the most important things. It's like
designing cars to work well only as fuel-injected dragsters that run
on the Bonneville salt flats. Nice hobby, but commercially irrelevant.<br>
</font><br>
On 06/26/2009 11:17 AM, S. Keshav wrote:
<blockquote cite="mid:70908D4C-4189-481F-9B6F-7C35A4364D92@uwaterloo.ca"
type="cite">The whole idea of buffering, as I understand it, is to
make sure that *transient* increases in arrival rates do not result in
packet losses. If r(t) is the instantaneous arrival rate (packet size
divided by inter-packet interval) and s(t) the instantaneous service
rate, then a buffer of size B will avert packet loss when integral from
t1 to t2 (r(t) - s(t)) < B. If B is 0, then any interval where r(t)
is greater than s(t) will result in a packet loss.
<br>
<br>
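In discrete time that condition is easy to check directly. A minimal
sketch (the rate traces and buffer size are invented, purely to
exercise the inequality):<br>
<pre>
# No-loss condition: the running backlog, i.e. the integral of
# (r(t) - s(t)) since the queue was last empty, must stay below B.
r = [12, 15, 9, 20, 4, 4]      # arrival rate per tick (invented trace)
s = [10, 10, 10, 10, 10, 10]   # service rate per tick (invented trace)
B = 12                         # buffer size, in the same units

backlog, overflow = 0, False
for rt, st in zip(r, s):
    backlog = max(0, backlog + rt - st)   # a queue cannot go negative
    overflow = overflow or backlog > B
print("buffer overflows" if overflow else "packet loss averted")
# The burst at the fourth tick drives the backlog to 16, so B = 12
# overflows; any B >= 16 averts loss for this trace.
</pre>
<br>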
If you have a fluid system, where a source sends packets of
infinitesimal size, evenly spaced apart, and if routers do not add
burstiness, then there is no need for buffering. Indeed, in the
classical telephone network, where sources are 64kbps constant bit rate
sources and switches do not add burstiness, we need only one sample's
worth of buffering, independent of the bandwidth-delay product. A
similar approach was proposed by Golestani in 1990 with 'Stop-and-go'
queueing, which also decoupled the amount of buffering (equivalent to
one time slot's worth) from the bandwidth-delay product.
<a class="moz-txt-link-freetext" href="http://portal.acm.org/citation.cfm?id=99523">http://portal.acm.org/citation.cfm?id=99523</a>
<br>
<br>
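A toy illustration of the telephone-network case: if exactly one
sample arrives and one sample is served per 125-microsecond slot, the
backlog never exceeds one sample, no matter how large the
bandwidth-delay product is (a made-up slotted sketch, not a model of
any real switch):<br>
<pre>
# Perfectly paced 64 kbps source through a slotted switch: one sample
# in, one sample out, every 125 us slot.
queue, max_queue = 0, 0
for slot in range(10_000):
    queue += 1                          # one sample arrives, evenly spaced
    max_queue = max(max_queue, queue)
    queue -= 1                          # the switch forwards one sample per slot
print("buffering needed:", max_queue, "sample's worth")   # prints 1
</pre>
<br>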
As Jon points out, if exogenous elements conspire to make your packet
rate fluid-like, you get the same effect.
<br>
<br>
On Jun 25, 2009, at 3:00 PM, Jon Crowcroft wrote:
<br>
<br>
<blockquote type="cite">so exogeneous effects may mean you dont need
BW*RTT at all of
<br>
buffering...
<br>
</blockquote>
<br>
So, why the need for a bandwidth-delay product buffer rule? The BDP is
the window size a *single source* needs to fully utilize a bottleneck
link. If a link is shared, then the sum of the windows at the sources
must add up to at least the BDP for link saturation.
<br>
<br>
Taking this into account, and the fact that link status information is
delayed by one RTT, there is a possibility that all sources burst
their full window's worth of packets synchronously, which is a rough
upper bound on the arrivals r(t) that the buffer must absorb within
one RTT. With one BDP's worth of buffering, there will be no packet
loss even in this worst case. So, it's a good engineering rule of
thumb.
<br>
<br>
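For concreteness, the rule of thumb with made-up numbers (a 1 Gbps
bottleneck and a 100 ms RTT, chosen only for illustration; a sketch,
not a recommendation):<br>
<pre>
# Bandwidth-delay product: the window a single source needs to keep
# the bottleneck busy, reused as the classical buffer-sizing rule.
link_bps = 1_000_000_000    # bottleneck bandwidth (assumed)
rtt_s = 0.100               # round-trip time (assumed)

bdp_bytes = link_bps * rtt_s / 8
print(f"BDP ~ {bdp_bytes / 2**20:.1f} MB of buffer under the BDP rule")
# ~11.9 MB here; the 'core is closer to fluid' observation below is
# the argument that far less usually suffices.
</pre>
<br>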
In reality: (a) RTTs are not the same for all sources, (b) the sum of
the source window sizes often exceeds the BDP, and (c) the worst-case
synchronized burst rarely happens. These factors (hopefully) balance
out, so that the BDP rule seems reasonable. Of course, we have seen
considerable work showing that in the network 'core' the regime is
closer to fluid than bursty, so we can probably do with far less
buffering.
<br>
<br>
Hope this helps,
<br>
<br>
keshav
<br>
<br>
<br>
</blockquote>
</body>
</html>