<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">Am 25.12.2012 04:19, schrieb
<a class="moz-txt-link-abbreviated" href="mailto:dpreed@reed.com">dpreed@reed.com</a>:<br>
</div>
<blockquote cite="mid:1356405568.01746958@apps.rackspace.com"
type="cite">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<p>Good luck getting people to use the term "bitrate" instead of
the entirely erroneous term "bandwidth" that has developed since
the 1990's or so.</p>
</blockquote>
<br>
David, I can very well see the "custom" side of the problem, in the
sense that using "bandwidth" here is a matter of custom.<br>
<br>
<blockquote cite="mid:1356405568.01746958@apps.rackspace.com"
type="cite">
<p>Indeed, bandwidth is now meaningless as a term, just as
"broadband" is.  Once upon a time, both referred to bounds on
the frequency components of the physical layer signal, both in
wires (twisted pair, coax, etc) and in RF.  The RF bandwidth of
802.11b DSSS modulation was about 10 MHz, whereas the bitrate
achieved was about 2 Mb/sec. Now we use OFDM modulation in
802.11n, with bandwidths of 40 MHz more or less, but bitrates of
>> 40 Mb/sec.  (yes, that is mostly because of 64-QAM,
which encodes 6 bits on each subcarrier within an OFDM
"symbol").</p>
</blockquote>
<br>
<br>
You're talking about two different things which should be kept
apart. One is the physical limitations of technologies. And I
agree: we now have broad frequency ranges even for wireless
channels, so that we can - in principle - achieve huge throughputs
there. "In principle." Not long ago I encountered a classroom
scenario where students were to establish MANETs - and although the
"bandwidth" was quite attractive, the achieved "throughput" wasn't.
This was no university scenario. And now the teachers try to
explain...<br>
<br>
<br>
<blockquote cite="mid:1356405568.01746958@apps.rackspace.com"
type="cite">
<p>What causes the 802.11n MAC protocol to achieve whatever
bitrate it achieves is incredibly complex. <br>
</p>
</blockquote>
<br>
I absolutely agree. <br>
<br>
However, I don't agree with quite a lot of simulations and
emulations which simply ignore that complexity and replace it with
plainly wrong models.<br>
<br>
<blockquote cite="mid:1356405568.01746958@apps.rackspace.com"
type="cite">
<p>Interestingly, in many cases the problem is really bad due to
"bufferbloat" in the 802.11n device designs and drivers, which
causes extreme buildup of latency, which then causes the TCP
control loops to be very slow in adapting to new flows sharing
the path.</p>
</blockquote>
<br>
Interestingly ;-)<br>
<br>
Could it be that we (at least to a certain degree) encounter a
"home-made problem" here?<br>
<br>
Particularly when it comes to nonsense like this one:<br>
<br>
@article{ meyer,<br>
   author = "Michael Meyer and Joachim Sachs and Markus Holzke",<br>
   title = "{Performance Evaluation of a TCP Proxy in WCDMA
Networks}",<br>
   journal = "IEEE Wireless Communications",<br>
   year = "2003",<br>
   month = "October"<br>
}<br>
<br>
There, the path capacity of a GPRS link is given by the "latency
bandwidth product", the "bandwidth" is taken to be 384 kbit/s (the
gross data rate of the link) - and users are then recommended to
use large initial window sizes for TCP to fully exploit this
"capacity". This simply ignores - and I have had lots of arguments
here, even with EE and CE guys - that the path capacity is not
consumed by initial transmissions alone: because of
retransmissions, some packets are placed on "the channel" dozens of
times.<br>
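To put rough numbers on that point (the loss rates below are made-up
illustrations, not figures from the paper): when the link layer
retries every frame until delivery, the expected number of
transmissions per frame follows a geometric distribution, and the
rate left over for new data shrinks accordingly.<br>

```python
# Sketch: how link-layer retransmissions eat into the gross rate.
# The frame loss rates below are illustrative assumptions only.

def expected_transmissions(frame_loss_rate):
    """Expected transmissions per frame when the link retries until
    delivery (geometric distribution, success prob. 1 - p)."""
    return 1.0 / (1.0 - frame_loss_rate)

gross_rate = 384.0  # kbit/s, the "bandwidth" quoted in the paper
for p in (0.0, 0.2, 0.5):
    effective = gross_rate / expected_transmissions(p)
    print(f"frame loss {p:.0%}: ~{effective:.0f} kbit/s left for new data")
```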
<br>
Could it be that using a simple stop-and-wait algorithm for mobile
links (which is generally a good idea, because in terrestrial
mobile networks the "radio link" quite frequently cannot even hold
the amount of data necessary for one IP packet, so there is no need
for a sliding window here) and sending a packet to the link only
when another packet has left the link (which has been the
recommendation by Jacobson and Karels for more than twenty years
now ;-) would alleviate the problem?<br>
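That rule can be sketched in a few lines. This is only my toy
illustration of the "conservation of packets" idea with a window of
one packet, i.e. stop-and-wait - not anyone's actual implementation:<br>

```python
# Toy sketch of the "conservation of packets" rule: a new packet
# enters the link only when an old one has left it. With a window
# of one packet this degenerates to stop-and-wait.

class StopAndWaitSender:
    def __init__(self):
        self.in_flight = 0          # at most one packet on the link

    def can_send(self):
        return self.in_flight == 0

    def send(self):
        assert self.can_send()
        self.in_flight = 1

    def on_ack(self):
        # A packet has left the link: the "credit" is returned,
        # and the next packet may be sent.
        self.in_flight = 0

s = StopAndWaitSender()
s.send()
assert not s.can_send()   # no second packet while one is in flight
s.on_ack()
assert s.can_send()       # one out, so one may go in
```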
<br>
Actually, the GPRS standard accepts delivery times of up to five
minutes. So, assuming a "bandwidth" of 384 kbit/s, I sometimes
consider making my notebook talk to itself via GPRS as a kind of
memory extension... (However, I'm still looking for a fast random
access algorithm; the serial access is sometimes a bit annoying.)<br>
<br>
(It could be a nice product brand: "WAX". "Wireless Address
eXtension." )<br>
<br>
No kidding: when we send huge amounts of data to an interface with
the strong expectation that this data is conveyed somewhere -
although the interface does not get any service for some reason -
this is as reasonable as lending money to Greece and strongly
expecting to get the money paid back - with interest.<br>
<br>
(I herewith apologize to Greek readers here. You might tell me that
it is not we Germans who suffer from Greece, but the other way
round: Greece suffers from Germany. However, hope is on the way:
the next elections in Germany are in fall 2013.) <br>
<br>
So, it's not surprising that we have buffer bloat there (like the
debt of Greece to Germany). (To the non-European readers: Germany
sometimes talked Greece into buying useless things from Germany,
e.g. submarines. In particular, Germany vouched for Greece's
creditworthiness - and now Greece has useless submarines, no money,
but an incredible debt, German submarine builders are unemployed,
and German taxpayers are made to believe that Greece were the cause
of the problem.)<br>
<br>
The analogy is not new: in some textbooks, sliding window systems
as used in TCP are called "credit based schemes". And we can find
quite a few analogies between data transportation systems and
networks on the one hand and economic systems on the other. So the
often mentioned "buffer bloat" problem in networking is similar to
the "balance bloat" problem often talked about in economics. And
the reasons are not that different: in both cases, we have a strong
mismatch between expectation and reality. <br>
<br>
<br>
<br>
<blockquote cite="mid:1356405568.01746958@apps.rackspace.com"
type="cite">
<p>This latter problem is due to the fact that neither the radio
engineers (who design the modulators) nor the CS people (who
write the code in the device drivers and application programs)
actually understand queueing theory or control theory very
well. </p>
</blockquote>
<br>
<br>
The more important problem, at least in Germany, is that they do
not understand each other. <br>
<br>
And the very reason for this is, at least in Germany, that they do
not listen to each other.<br>
<br>
David, when I talk to colleagues and claim that throughput may
differ from a gross bit rate, I'm blamed as a know-it-all - and
frankly speaking, I'm more than offended by this, after having
experienced it for quite a couple of years.<br>
<br>
And with particular respect to radio engineers: At least in Germany,
these are mostly electrical engineers by education. And a typical
electrical engineer is supposed to have more forgotten knowledge and
understanding of control theory than a CS guy is supposed to ever
have heard of.<br>
<br>
Let me take the Meyer/Holzke/Sachs paper as an example. It took me
weeks to understand how these guys forged their results.<br>
<br>
It took me YEARS to gain enough understanding of mobile networks to
see that these "results" are wrong. However, they were submitted,
accepted, published, so everyone is convinced: "This is correct,
it's published by scientists, at least one of them holds a PhD, so
the paper must be correct." It is neither correct nor science, it
is bullshit. However, the public believes in this "story" - and
people other than the authors are blamed when systems do not work
as promised by this paper.<br>
<br>
<br>
<blockquote cite="mid:1356405568.01746958@apps.rackspace.com"
type="cite">
<p>For example, both seem to think that adding buffering
"increases throughput"</p>
</blockquote>
<br>
I do not know who is "both". I'm neither of them.<br>
<br>
When people say that increasing buffers means increasing
throughput, I would first of all discriminate workload from
buffers - and when we talk about workload and buffers, it's always
a good idea to have a look at textbooks like "Queueing Systems" by
Len Kleinrock.<br>
<br>
Throughput is achieved, as the word says, by "putting through", not
by "putting at". The pyramids would not be there if the slaves had
only put the stones on stock somewhere near the building site and
no one had moved the stones to their final place. <br>
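Kleinrock's point can be condensed into Little's law, N = λT: once
the bottleneck is saturated, extra buffering only grows the backlog
N and the delay T, while the throughput stays pinned at the service
rate. A toy calculation (the numbers are arbitrary illustrations):<br>

```python
# Little's law, N = lambda * T: with the bottleneck saturated,
# extra buffering raises backlog N and delay T, while throughput
# stays pinned at the service rate. Numbers are arbitrary.

service_rate = 100.0                  # packets/s actually "put through"
for backlog in (10, 100, 1000):       # packets parked in the buffer
    delay = backlog / service_rate    # T = N / lambda
    print(f"backlog {backlog:4d} pkts -> delay {delay:5.1f} s, "
          f"throughput still {service_rate:.0f} pkt/s")
```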
<blockquote cite="mid:1356405568.01746958@apps.rackspace.com"
type="cite">
<p> - whereas typically it causes catastrophic failure of the
control loops involved in adaptation.</p>
</blockquote>
<br>
And that is much too simple. <br>
<br>
Some say buffers increase throughput. You say buffers cause
catastrophic failure.<br>
<br>
Could we agree upon: "It depends."?<br>
<br>
And that it is worthwhile to look carefully at the system of
interest, to decide whether buffers are beneficial and should be
added - or whether buffers are harmful and should be left out?<br>
<br>
Otherwise we would end up having "the answer" - no matter what the
question is.<br>
<br>
When you say we should stay away from thoughtless buffering, I
couldn't agree more.<br>
<br>
(And in many private conversations I pointed to the Tacoma Narrows
Bridge disaster, where a "buffer bloat" - sic! state variables in
dynamic systems can be seen as buffers for energy! - caused
structural damage.) <br>
<br>
And it's the very core of the congavoid paper to ensure stability
(and at its very core, stability means nothing else than avoiding
buffer bloat) by fixing a system's workload to a reasonable
size.<br>
<br>
(The not-so-easy question is: what is "reasonable"?)<br>
<br>
The congavoid paper does not spend many words on this issue.
However: beware of the footnote on page 2, the note on the Lyapunov
equation.<br>
Stated in TCP terms, that means, amongst other things: by limiting
the workload on the path, we limit the workload which can gather in
a local buffer.<br>
There are other, more complex, implications. <br>
<br>
And that's why I posted my question to this list. One implication
is: it might be beneficial to have some megabytes of workload in a
TCP flow when the path includes a geostationary satellite link from
Hamburg to New York. The same workload is completely harmful when
the path consists only of a GPRS link. <br>
So, may I speak frankly? God in heaven, which devil made us use one
and only one congestion window for the whole path, end to end?<br>
<br>
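Back-of-the-envelope numbers for the two kinds of path (all figures
are ballpark assumptions of mine, not measurements):<br>

```python
# Rough bandwidth-delay products for a GEO satellite path and a
# GPRS path. All figures are ballpark assumptions, not measurements.

def bdp_bytes(rate_kbits, rtt_ms):
    """Bytes in flight needed to fill a path: rate * round-trip time."""
    return rate_kbits * 1000 / 8 * rtt_ms / 1000

# Geostationary satellite hop: high rate, roughly 500 ms round trip
print(f"GEO satellite: ~{bdp_bytes(50_000, 500) / 1e6:.1f} MB in flight")
# GPRS link: tens of kbit/s, round trip around a second
print(f"GPRS:          ~{bdp_bytes(40, 1000) / 1e3:.1f} kB in flight")
```

Megabytes on the one path, a few kilobytes on the other - yet one
and the same congestion window mechanism has to cover both.<br>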
And when I recently proposed to change this in a research proposal,
I got the question: "Which problem do you want to solve?" <br>
<br>
Saying this, I strongly emphasise: For the model used by JK, the
congavoid paper is a stroke of genius.<br>
<br>
And to those who see shortcomings in the congavoid paper, I say:
the shortcomings don't lie in the congavoid paper but in your
reading.<br>
If we carefully read the work by Jacobson and Karels, and the work
by Raj Jain and others from the same period, many of our problems
would not appear new. The authors anticipated many of our
problems - if we only spent the time on careful reading. "The
badness lies in the brain of the reader."<br>
<blockquote cite="mid:1356405568.01746958@apps.rackspace.com"
type="cite">
<p>Or worse, many times the hardware and firmware on an 802.11n
link will be set so that it will retransmit the current frame
over and over (either until delivery or perhaps 255 times)
before dropping the packet.  </p>
</blockquote>
<br>
That's exactly what happens, to my knowledge, in all wireless
networks (not only in 802.11 but in others as well - and again:
retransmission itself isn't bad, but thoughtless retransmission is
evil). And that's what's simply ignored in the paper I referred to
yesterday (Herrscher, Leonhardi, Rothermel), in the paper I
referred to above (Meyer, Sachs, Holzke), and in countless private
conversations.<br>
<br>
<br>
However: You run into the same pitfall yourself!<br>
<br>
Please look at the alternative you mention: Either you retransmit
the packet over and over - or you drop it. <br>
<br>
We know the saying: If you have only two alternatives, both of which
are impossible, choose the third.<br>
<br>
In some cases it may be possible to adapt your system, so that
further transmissions may become successful.<br>
In some cases it may be possible to change your route. <br>
<br>
I don't know - and again: It depends.<br>
<br>
However, I doubt that "retransmit the packet over and over" and
"(silently) drop" the packet are the only possibilities in each and
every case. <br>
So the problem is sometimes not the lack of opportunities. It's the
lack of willingness to choose them.<br>
<br>
Sometimes, there is no third way. As often stated: TANSTAAFL.<br>
<br>
And again to my criticism of using only one CWND: using exactly one
CWND for a whole path (of, say, 70 hops or more) means using ONE
answer for MANY questions. And ONE solution to ANY problem. (My
apologies go to the IBM guys ;-))<br>
<br>
<br>
<br>
<br>
<blockquote cite="mid:1356405568.01746958@apps.rackspace.com"
type="cite">
<p>Such very long delays mean that the channel is heavily blocked
by traffic that slows down all the other stations whose traffic
would actually get through without retransmission.</p>
</blockquote>
<br>
Absolutely. <br>
<br>
We should not make the whole world suffer from our local problem.<br>
<br>
(It was part of my research proposal.)<br>
<br>
<blockquote cite="mid:1356405568.01746958@apps.rackspace.com"
type="cite">
<p>Yet CS people and EE's are prone to say that the problem is due
to "interference", and try to arrange for "more power". </p>
</blockquote>
<br>
More power? Sometimes interference can be alleviated by less power!<br>
<blockquote cite="mid:1356405568.01746958@apps.rackspace.com"
type="cite">
<p>More power then causes other nearby networks to be slowed down
(as they have to listen for "silence" before transmitting).</p>
</blockquote>
<br>
I stated so when I discussed the Herrscher, Leonhardi, Rothermel
paper yesterday.<br>
<br>
However, you don't distinguish between external noise (which may
interfere with your wireless cell) and multipath interference,
where your signal is split up into rays which interfere with each
other. These are different scenarios which should be treated
differently - e.g. the first one by power regulation, the second
one by MIMO systems or rake receivers. Sometimes, different
problems require different solutions. <br>
<br>
In some cases, power regulation will not work. Why not act the
other way round then? Make the cells use the same frequency range:
couple the adjacent bands from, say, two "five band" ranges into
one "ten band" range and increase the sending power. Then change
your line coding and channel coding to a higher net data rate - and
hence shorter (in the temporal sense) packets. So you couple your
cells to increase the joint capacity - and in doing so you lower
your network load. Could this be a way to go? Alleviating
interference by increasing sending power sounds strange, but it may
work sometimes.<br>
(And it's an example of the aforementioned "third alternative".)<br>
<br>
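A quick airtime calculation illustrates why the coupling could pay
off (the rates are made-up illustrations, not figures for any real
system): doubling the net data rate halves the time each packet
occupies the channel, so the same offered load leaves more of the
channel idle.<br>

```python
# Airtime of one packet at different net data rates. Rates are
# illustrative assumptions, not figures for any real system.

def airtime_ms(packet_bits, rate_mbits):
    """Time a packet occupies the channel, in milliseconds."""
    return packet_bits / (rate_mbits * 1e6) * 1e3

packet = 12_000  # bits in a 1500-byte packet
for rate in (5, 10):  # "five band" range vs. coupled "ten band" range
    print(f"{rate:2d} Mbit/s net -> {airtime_ms(packet, rate):.1f} ms on air")
```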
<blockquote cite="mid:1356405568.01746958@apps.rackspace.com"
type="cite">
<p>Thus, Detlef, we do need an improvement in terminology, but
even more in understanding. <br>
</p>
</blockquote>
<br>
David, is that really a contradiction? Or isn't careful terminology
helpful in improving understanding?<br>
<br>
<br>
<blockquote cite="mid:1356405568.01746958@apps.rackspace.com"
type="cite">
<p> The nonsense that passes for knowledge around wireless
networking, even taught by "professors of networking" is
appalling. It's the blind leading the blind.</p>
</blockquote>
<br>
May I put this in my signature?<br>
<br>
This is by far the wisest sentence I've ever read on this
subject.<br>
<blockquote cite="mid:1356405568.01746958@apps.rackspace.com"
type="cite">
<p>I don't think graduate students in computer networking will
ever be required to learn about Poynting vectors, control
theory, and wavelet decompositions, and the ones in EE will not
learn dynamic queueing theory, distributed congestion control,
and so forth.  And the information theory students will
probably never use a vector signal analyzer.</p>
</blockquote>
<br>
And this is perhaps not even necessary. But it is highly useful to
listen to each other and to be willing to have things explained by
each other. We can walk around like the blind leading the blind -
or we can try to walk around combining our views.<br>
<blockquote cite="mid:1356405568.01746958@apps.rackspace.com"
type="cite">
<p>So the terminology and understanding issues will persist.</p>
</blockquote>
<br>
But it should lead to a better understanding instead of more
confusion.<br>
<br>
<br>
<br>
<pre>--
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
<a class="moz-txt-link-abbreviated" href="mailto:detlef.bosau@web.de">detlef.bosau@web.de</a> <a class="moz-txt-link-freetext" href="http://www.detlef-bosau.de">http://www.detlef-bosau.de</a>
------------------------------------------------------------------
</pre>
</body>
</html>