[e2e] What's wrong with this picture?

Stiliadis, Dimitrios (Dimitri) stiliadi at alcatel-lucent.com
Fri Sep 11 06:24:15 PDT 2009


Btw, the bimodal behaviour can also be explained if your card is attaching
to two different cells (or bands), depending on signal conditions, fading, etc.
(i.e. it oscillates between cell A and cell B, or band A and band B).
It is also not rare for the device to switch between HSDPA and EDGE
if the signal conditions on HSDPA are marginal. Several tests have shown
that EDGE delays of 500 ms are normal.
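
A quick way to check that against Dominik's trace is to pull the mode
transitions out of the ping log itself and then see whether they line up
with cell or RAT changes on the modem side (a rough Python sketch, assuming
the log is raw ping output with "time=... ms" fields; the 200 ms threshold
is just an arbitrary split between the two modes he reports):

    # Rough sketch: extract RTT samples from a raw ping log and print the
    # points where the trace switches between the "fast" and "slow" mode.
    import re
    import sys

    THRESHOLD_MS = 200.0  # arbitrary split between the two observed modes

    rtts = []
    for line in open(sys.argv[1]):
        m = re.search(r"time=([\d.]+) ms", line)
        if m:
            rtts.append(float(m.group(1)))

    mode = None
    for i, rtt in enumerate(rtts):
        new_mode = "slow" if rtt > THRESHOLD_MS else "fast"
        if new_mode != mode:
            print("sample %6d: entering %s mode (rtt=%.1f ms)" % (i, new_mode, rtt))
            mode = new_mode

The positions of the transitions are what you would then try to match
against a modem-side log like the one sketched below.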

In order to fully understand the issue, you also need to log all the
modem information (such as the cell ID, the band it is attached to, etc.),
or else perform the test in an anechoic chamber with test equipment.
Usually you can get that information by issuing the right AT commands
to the modem.
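
For example, something along these lines (a minimal sketch, assuming a
Linux host, pyserial, and a card that exposes a secondary AT port such as
/dev/ttyUSB2; the port name and the exact commands supported vary from
modem to modem):

    # Rough sketch: poll the modem's AT port once a second and append
    # timestamped signal/registration info to a log, so it can later be
    # correlated with the ping trace.
    import time
    import serial  # pyserial

    port = serial.Serial("/dev/ttyUSB2", 115200, timeout=1)  # assumed AT port

    def at(cmd):
        port.write((cmd + "\r").encode())
        time.sleep(0.2)
        return port.read(512).decode(errors="replace").strip().replace("\r\n", " ")

    at("AT+CREG=2")  # request registration info including LAC and cell ID
    with open("modem.log", "a") as log:
        while True:
            log.write("%s | %s | %s | %s\n" % (
                time.strftime("%Y-%m-%d %H:%M:%S"),
                at("AT+CSQ"),    # received signal strength
                at("AT+CREG?"),  # registration state, LAC, cell ID
                at("AT+COPS?"),  # operator and access technology
            ))
            log.flush()
            time.sleep(1)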

David:

As for the uplink bandwidth bottleneck, the problem there, beyond the
larger router buffers, is that ARQ interprets losses on the wire as
losses on the air and tries retransmissions. Your picture is not
accurate without an RNC in the middle that implements some form of
ARQ/RLP.
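
On the buffering side, a back-of-the-envelope check shows how little
memory it takes to produce David's >5000 ms mode: a full drop-tail queue
of M bytes draining at an uplink rate of B bits per second adds roughly
M*8/B seconds of delay. The numbers below are only an illustration, not
measurements from his setup:

    # Illustration only: queueing delay added by a full buffer of M bytes
    # draining at uplink rate B (bits per second).
    def queue_delay_ms(buffer_bytes, uplink_bps):
        return buffer_bytes * 8.0 / uplink_bps * 1000.0

    # A 256 KB buffer ahead of a 384 kb/s WCDMA uplink:
    print(queue_delay_ms(256 * 1024, 384000))    # ~5461 ms -- the "slow" mode
    # The same buffer ahead of a 2 Mb/s HSUPA uplink:
    print(queue_delay_ms(256 * 1024, 2000000))   # ~1049 ms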

There are just too many things going on in the background that pings
alone cannot show.

Cheers,

Dimitri 

> -----Original Message-----
> From: end2end-interest-bounces at postel.org 
> [mailto:end2end-interest-bounces at postel.org] On Behalf Of 
> Dominik Kaspar
> Sent: Thursday, September 10, 2009 8:38 PM
> To: David P. Reed
> Cc: end2end-interest at postel.org
> Subject: Re: [e2e] What's wrong with this picture?
> 
> Hi David,
> 
> Thanks for the explanations about the bottleneck link to the
> backbone ISP. The illustrated system architecture and the
> overuse of buffers certainly sound like a reasonable cause for
> those huge delays you posted at the beginning of this thread.
> 
> The "bimodal" behaviour of delays > 5000 ms and delays < 200 
> ms that you have measured is really extreme and it seems to 
> differ somewhat from what I have observed. In my experiments, 
> the delay abruptly switches between two rather stable 
> "modes"... sometimes every few minutes, sometimes just once a 
> day. It is completely unpredictable and I have not yet found 
> _the_ explanation for its cause. I doubt it has anything to 
> do with TCP... it seems much more likely to be one of the 
> HSDPA-specific properties that Detlef has pointed out (line 
> coding, MAC-layer ACKs, ...).
> 
> Here is the entire 24h ping log that clearly illustrates the 
> two "modes":
> http://home.simula.no/~kaspar/static/ping-hsdpa-24h-bimodal-00.txt
> 
> Greetings,
> Dominik
> 
> 
> On Wed, Sep 9, 2009 at 1:07 AM, David P. Reed <dpreed at reed.com> wrote:
> > I'm willing to bet that you are seeing the same problem I 
> am, and that 
> > it has nothing to do with the modem or wireless protocol.
> >
> > Instead you are seeing what would happen if you simulate in ns2 the 
> > following system structure:
> >
> > -------------------------\
> > --------------------------\
> > ---------------------------\
> >       wireless medium   [WIRELESS HUB]------[ROUTER]-----------backbone ISP
> > ---------------------------/
> > --------------------------/
> >
> > When the link between the ROUTER and backbone ISP is of lower
> > bitrate B than the sum of all the realizable simultaneous uplink
> > demand from devices on the left, the outbound queue of the router is
> > of size M > B*T, where T is the observed stable long delay, and the
> > ROUTER does nothing to signal congestion until the entire M bytes
> > (now very large) of memory are exhausted.
> >
> > Memory is now very cheap, and not-very-clueful network layer 2
> > designers (who don't study TCP or the Internet) are likely to throw
> > too much at the problem without doing the right thing in their
> > firmware.
> >
> > On 09/08/2009 06:47 PM, Dominik Kaspar wrote:
> >
> > Hello David,
> >
> > You mentioned the bimodal behaviour of your 3G connection. I
> > recently noticed the same thing but have not yet been able to
> > explain why this happens.
> >
> > I also ran ping tests over multiple days using an HSDPA modem (with
> > both the client and server located in Oslo, Norway). The experienced
> > RTTs were very stable over short periods of time, but sometimes they
> > averaged around 80 ms, while at other times the average was at about
> > 300 ms.
> >
> > A CDF illustration of the results is available here:
> > http://home.simula.no/~kaspar/static/cdf-hsdpa-rtt-00.png
> >
> > What is the reason for these two modes? Is it caused by adaptive
> > modulation and coding on the physical layer? If so, why does it
> > affect the delay so much? I would only expect a reduced bandwidth,
> > but not much change in delay...
> >
> > Greetings,
> > Dominik
> >
> >
> > On Tue, Sep 8, 2009 at 7:56 PM, David P. Reed <dpreed at reed.com> wrote:
> >
> >
> > I should not have been so cute - I didn't really want to pick on the
> > operator involved, because I suspect that other 3G operators around
> > the world probably use the same equipment and same rough
> > configuration.
> >
> > The ping and traceroute were from Chicago, using an ATT Mercury data
> > modem, the same channel as the Apple iPhones use, but it's much
> > easier to run test suites from my netbook.
> >
> > Here's the same test from another time of day, early Sunday morning,
> > when things were working well.
> >
> > Note that I ran the test over the entire Labor Day weekend at
> > intervals. The end-to-end ping time was bimodal.  Either it pegged at
> > over 5000 milliseconds, or happily sat at under 200 milliseconds.
> > Exactly what one would expect if TCP congestion control were disabled
> > by overbuffering in a router preceding the bottleneck link shared by
> > many users.
> >
> > ------------------------------
> >
> > $ ping lcs.mit.edu
> > PING lcs.mit.edu (128.30.2.121) 56(84) bytes of data.
> > 64 bytes from zermatt.csail.mit.edu (128.30.2.121): icmp_seq=1 ttl=44 time=209 ms
> > 64 bytes from zermatt.csail.mit.edu (128.30.2.121): icmp_seq=2 ttl=44 time=118 ms
> > 64 bytes from zermatt.csail.mit.edu (128.30.2.121): icmp_seq=3 ttl=44 time=166 ms
> > 64 bytes from zermatt.csail.mit.edu (128.30.2.121): icmp_seq=4 ttl=44 time=165 ms
> > 64 bytes from zermatt.csail.mit.edu (128.30.2.121): icmp_seq=5 ttl=44 time=224 ms
> > 64 bytes from zermatt.csail.mit.edu (128.30.2.121): icmp_seq=6 ttl=44 time=183 ms
> > 64 bytes from zermatt.csail.mit.edu (128.30.2.121): icmp_seq=7 ttl=44 time=224 ms
> > 64 bytes from zermatt.csail.mit.edu (128.30.2.121): icmp_seq=8 ttl=44 time=181 ms
> > 64 bytes from zermatt.csail.mit.edu (128.30.2.121): icmp_seq=9 ttl=44 time=220 ms
> > 64 bytes from zermatt.csail.mit.edu (128.30.2.121): icmp_seq=10 ttl=44 time=179 ms
> > 64 bytes from zermatt.csail.mit.edu (128.30.2.121): icmp_seq=11 ttl=44 time=219 ms
> > ^C
> > --- lcs.mit.edu ping statistics ---
> > 11 packets transmitted, 11 received, 0% packet loss, time 10780ms
> > rtt min/avg/max/mdev = 118.008/190.547/224.960/31.772 ms
> > $ traceroute lcs.mit.edu
> > traceroute to lcs.mit.edu (128.30.2.121), 30 hops max, 60 byte packets
> >  1  * * *
> >  2  172.26.248.2 (172.26.248.2)  178.725 ms  178.568 ms  179.500 ms
> >  3  * * *
> >  4  172.16.192.34 (172.16.192.34)  187.794 ms  187.677 ms  207.527 ms
> >  5  12.88.7.205 (12.88.7.205)  207.416 ms  208.325 ms  69.630 ms
> >  6  cr84.cgcil.ip.att.net (12.122.152.134)  79.425 ms  89.227 ms  90.083 ms
> >  7  cr2.cgcil.ip.att.net (12.123.7.250)  98.679 ms  90.727 ms  91.576 ms
> >  8  ggr2.cgcil.ip.att.net (12.122.132.137)  72.728 ms  89.628 ms  88.825 ms
> >  9  192.205.33.186 (192.205.33.186)  89.787 ms  89.794 ms  80.918 ms
> > 10  ae-31-55.ebr1.Chicago1.Level3.net (4.68.101.158)  79.895 ms  70.927 ms  78.817 ms
> > 11  ae-1-5.bar1.Boston1.Level3.net (4.69.140.93)  107.820 ms  156.892 ms  140.711 ms
> > 12  ae-7-7.car1.Boston1.Level3.net (4.69.132.241)  139.638 ms  139.764 ms  129.853 ms
> > 13  MASSACHUSET.car1.Boston1.Level3.net (4.53.48.98)  149.595 ms  154.366 ms  152.225 ms
> > 14  B24-RTR-2-BACKBONE.MIT.EDU (18.168.0.23)  146.808 ms  129.801 ms  89.659 ms
> > 15  MITNET.TRANTOR.CSAIL.MIT.EDU (18.4.7.65)  109.463 ms  118.818 ms  91.727 ms
> > 16  trantor.kalgan.csail.mit.edu (128.30.0.246)  91.541 ms  88.768 ms  85.837 ms
> > 17  zermatt.csail.mit.edu (128.30.2.121)  117.581 ms  116.564 ms  103.569 ms
> > $
> >
> >
> >
> >
> >
> >
> 
> 

