<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<title>Re: [e2e] How shall we deal with servers with different bandwidths and a common bottleneck to the client?</title>
</head>
<body bgcolor="#ffffff" text="#000000">
<DIV id=idOWAReplyText53037 dir=ltr>
<DIV dir=ltr><FONT face=Arial color=#000000 size=2>Detlef,</FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2></FONT> </DIV>
<DIV dir=ltr><FONT face=Arial size=2>In my earlier description, I had
incorrectly assumed that link 2-3 was at 10 Mbps. The nature of the problem is
similar whether link 2-3 is at 10 Mbps or 100 Mbps.</FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2></FONT> </DIV>
<DIV dir=ltr><FONT face=Arial size=2>Here is a corrected description for your
network scenario -</FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2></FONT> </DIV>
<DIV dir=ltr><FONT face=Arial size=2>
<DIV dir=ltr><FONT face=Arial size=2>Take the case when both connections are
active and the queue at router 3 remains non-empty.</FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2></FONT> </DIV>
<DIV dir=ltr><FONT face=Arial size=2>Every T seconds, there will be a packet
departure at router 3, resulting in the queue size decreasing by 1
packet.</FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2></FONT> </DIV>
<DIV dir=ltr><FONT face=Arial size=2>At router 3, if a packet from node 1
departs at time n*T, then at time (n+1)*T + ta1 + t0, another packet will
arrive from node 1.</FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2> ta1 is the
time taken by the Ack to reach node 1 from node 4.</FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2>
<DIV dir=ltr><FONT face=Arial size=2>
<DIV dir=ltr><FONT face=Arial size=2> t0 is the
transmission time of a packet at 100 Mbps.</FONT></DIV>
<DIV dir=ltr> </DIV></FONT></DIV></FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2>
<DIV dir=ltr><FONT face=Arial size=2>At router 3, if a packet from node 0
departs at time n*T, then at time n*T + ta0 + 2 * t0, another packet will
arrive from node 0.</FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2> ta0 is the
time taken by the Ack to reach node 0 from node 4.</FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2> t0 is the
transmission time of a packet at 100 Mbps. </FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2> Another
packet (of a packet pair) from node 0 may arrive at time n*T + ta0 + 3 *
t0.</FONT></DIV>
<DIV dir=ltr> </DIV>
<DIV dir=ltr>In the scenario, ta0 &lt;&lt; T, ta1 &lt;&lt; T, t0 = T / 10, and
ta0 + 2 * t0 &gt; ta1 + t0. I am assuming that propagation delays were
set to 0 in the simulations.</DIV>
<DIV dir=ltr> </DIV>
<DIV dir=ltr><FONT face=Arial size=2>It can be seen that when a node
1 packet arrives at router 3, the queue is never full - a packet departure takes
place ta1 + t0 seconds before its arrival, and no node 0 packets arrive during
this interval.</FONT></DIV>
<DIV dir=ltr> </DIV>
<DIV dir=ltr>No such property holds for node 0 packets - hence node 0 packets
are selectively dropped.</DIV>
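<DIV dir=ltr>For concreteness, the timing relations above can be checked with a small Python sketch. The numeric values of T, t0, ta0 and ta1 below are assumptions chosen only to satisfy the stated conditions (ta0 &lt;&lt; T, ta1 &lt;&lt; T, t0 = T / 10); they are not taken from the actual ns-2 run.</DIV>

```python
# Illustrative check of the packet timing at router 3.
# All numeric values are assumptions, not taken from the ns-2 simulation.
T   = 1.2e-3        # service time of one packet on the 10 Mbps link 3-4
t0  = T / 10        # transmission time of one packet at 100 Mbps
ta0 = 0.15e-3       # assumed Ack latency from node 4 back to node 0 (<< T)
ta1 = 0.05e-3       # assumed Ack latency from node 4 back to node 1 (<< T)

# A node 1 packet that departed router 3 at n*T is followed by the next
# node 1 arrival at (n+1)*T + ta1 + t0, i.e. ta1 + t0 after a departure.
node1_lag = ta1 + t0

# A node 0 packet that departed at n*T is followed by the next node 0
# arrival at n*T + ta0 + 2*t0 (a second packet of a pair may follow at
# n*T + ta0 + 3*t0).
node0_lag = ta0 + 2 * t0

# The selective-drop argument rests on this ordering: every node 1
# arrival is preceded by a departure, with no node 0 arrival in between.
assert node0_lag > node1_lag
print(f"node 1 arrives {node1_lag * 1e3:.3f} ms after a departure")
print(f"earliest node 0 arrival: {node0_lag * 1e3:.3f} ms after a departure")
```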
<DIV dir=ltr> </DIV>
<DIV dir=ltr>Changing bandwidths a bit or introducing real-life factors such as
propagation delays, variable processing delays and/or variable Ethernet switch
delays will probably break this synchronized relationship. RED will also
help.</DIV>
<DIV dir=ltr> </DIV>
<DIV dir=ltr>One can construct many other similar scenarios in which one
connection is selectively favored over another. Perhaps this is one more reason
to use RED.</DIV>
<DIV dir=ltr> </DIV></FONT></DIV></FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2>Anil</FONT></DIV></DIV>
<DIV dir=ltr><BR>
<HR tabIndex=-1>
<FONT face=Tahoma size=2><B>From:</B> end2end-interest-bounces@postel.org on
behalf of Agarwal, Anil<BR><B>Sent:</B> Mon 12/25/2006 11:35 AM<BR><B>To:</B>
Detlef Bosau; end2end-interest@postel.org<BR><B>Cc:</B> Michael Kochte; Martin
Reisslein; Frank Duerr; Daniel Minder<BR><B>Subject:</B> Re: [e2e] How shall we
deal with servers with different bandwidthsand a common bottleneck to the
client?<BR></FONT><BR></DIV>
<DIV>
<DIV id=idOWAReplyText34147 dir=ltr>
<DIV dir=ltr><FONT face=Arial color=#000000 size=2>Detlef,</FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2></FONT> </DIV>
<DIV dir=ltr><FONT face=Arial size=2>Here is a possible explanation for the
results in your scenario -</FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2></FONT> </DIV>
<DIV dir=ltr><FONT face=Arial size=2>Take the case when both connections are
active and the queue at router 2 remains non-empty.</FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2></FONT> </DIV>
<DIV dir=ltr><FONT face=Arial size=2>Every T seconds, there will be a packet
departure at router 2, resulting in the queue size decreasing by 1 packet at
time T.</FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2></FONT> </DIV>
<DIV dir=ltr><FONT face=Arial size=2>If a packet from node 1 departs at time
n*T, then at time (n+1)*T + ta1, another packet will arrive at router 2
from node 1.</FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2> ta1 is the
time taken by the Ack to reach node 1.</FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2></FONT> </DIV>
<DIV dir=ltr><FONT face=Arial size=2>
<DIV dir=ltr><FONT face=Arial size=2>If a packet from node 0 departs at
time n*T, then at time n*T + ta0 + t0, another packet will arrive at router
2 from node 0.</FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2> ta0 is the
time taken by the Ack to reach node 0. </FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2> t0 is the
transmission time of a packet at 100 Mbps. </FONT></DIV>
<DIV dir=ltr><FONT face=Arial size=2> Another
packet from node 0 may arrive at time n*T + ta0 + 2 * t0.</FONT></DIV>
<DIV dir=ltr> </DIV>
<DIV dir=ltr>In the scenario, ta0 &lt;&lt; T, ta1 &lt;&lt; T, t0 = T / 10, and
ta0 + t0 &gt; ta1. I am assuming that propagation delays were set to 0 in
the simulations.</DIV>
<DIV dir=ltr> </DIV>
<DIV dir=ltr><FONT face=Arial size=2>It can be seen that when a node
1 packet arrives at router 2, the queue is never full - a packet departure takes
place ta1 seconds before its arrival, and no node 0 packets arrive during those
ta1 seconds.</FONT></DIV>
<DIV dir=ltr> </DIV>
<DIV dir=ltr>No such property holds for node 0 packets - hence node 0 packets
are selectively dropped.</DIV>
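<DIV dir=ltr>The same kind of sanity check applies to this description; a minimal Python sketch, again with assumed values for T, t0, ta0 and ta1 (chosen only to satisfy the stated conditions, not taken from the simulation):</DIV>

```python
# Illustrative check of the packet timing at router 2; all values assumed.
T   = 1.2e-3        # service time of one packet on the 10 Mbps link
t0  = T / 10        # transmission time of one packet at 100 Mbps
ta0 = 0.15e-3       # assumed Ack latency back to node 0 (<< T)
ta1 = 0.05e-3       # assumed Ack latency back to node 1 (<< T)

node1_lag = ta1          # next node 1 arrival comes at (n+1)*T + ta1
node0_lag = ta0 + t0     # next node 0 arrival comes at n*T + ta0 + t0

# The stated condition ta0 + t0 > ta1: a node 1 arrival always finds that
# a departure just freed a slot, before any node 0 packet can claim it.
assert node0_lag > node1_lag
print(f"node 1 lag: {node1_lag * 1e3:.3f} ms, "
      f"node 0 lag: {node0_lag * 1e3:.3f} ms")
```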
<DIV dir=ltr> </DIV>
<DIV dir=ltr>Changing bandwidths a bit or introducing real-life factors such as
propagation delays, variable processing delays and/or variable Ethernet
switching delays will probably break this synchronized relationship.</DIV>
<DIV dir=ltr> </DIV>
<DIV dir=ltr>Regards,</DIV>
<DIV dir=ltr>Anil</DIV>
<DIV dir=ltr> </DIV>
<DIV dir=ltr>Anil Agarwal</DIV>
<DIV dir=ltr>ViaSat Inc.</DIV>
<DIV dir=ltr>Germantown, MD</DIV>
<DIV dir=ltr> </DIV></FONT></DIV></DIV>
<DIV dir=ltr><BR>
<HR tabIndex=-1>
<FONT face=Tahoma size=2><B>From:</B> end2end-interest-bounces@postel.org on
behalf of Detlef Bosau<BR><B>Sent:</B> Sun 12/24/2006 5:52 PM<BR><B>To:</B>
end2end-interest@postel.org<BR><B>Cc:</B> Michael Kochte; Daniel Minder; Martin
Reisslein; Frank Duerr<BR><B>Subject:</B> Re: [e2e] How shall we deal with
servers with different bandwidths and a common bottleneck to the
client?<BR></FONT><BR></DIV>
<DIV>Detlef Bosau wrote:
<BLOCKQUOTE cite="" type="cite">I apologize if this is a stupid
question.<BR></BLOCKQUOTE><BR>I admit, it was a <B>very</B> stupid question
:-)<BR><BR>Because my ASCII art was terrible, I add a nam screenshot here
(hopefully, I'm allowed to send this mail in HTML):<BR><BR><IMG height=589
alt="NAM screenshot" src="bild.png" width=627><BR><BR>Links: <BR>0-2: 100
Mbit/s, 1 ms<BR>1-2: 10 Mbit/s, 1 ms<BR>2-3: 100 Mbit/s, 10 ms<BR>3-4: 10
Mbit/s, 1 ms<BR><BR>Senders: 0, 1<BR>Receiver: 4<BR>
<BLOCKQUOTE cite="" type="cite"><BR><BR>My feeling is that the flow server 1 -
client should achieve more throughput than the other. From what I see in a
simulation, the ratio in the scenario above is roughly 2:1. (I did this
simulation this evening, so admittedly there might be errors.) <BR><BR>Is
there a general opinion how the throughput ratio should be in a scenario like
this?</BLOCKQUOTE><BR><BR>Obviously, my feeling is wrong. Perhaps, I should
consider reality more than my feelings <SPAN class=moz-smiley-s6><SPAN>:-[
</SPAN></SPAN><BR><BR>AIMD distributes the <B>path capacity (i.e.
"memory") </B>in equal shares. So, in case of two flows sharing a path,
each flow is assigned an equal window. Hence, the rates should be equal, as they
depend on the window (= estimate of path capacity) and RTT. (Well-known rule of
thumb: rate = cwnd/RTT.)<BR><BR>However, the scenario depicted above is an
interesting one: Apparently, the sender at node 1 is paced "ideally" by the link
1-2. So, packets sent by node 0 are dropped at node 3 unduly often. In
consequence, the flow from 0 to 4 hardly achieves any throughput, whereas the
flow from 1 to 4 runs as if there were no competitor.<BR><BR>If the bandwidth of
link 1-2 is changed a little bit, the behaviour returns to the expected one.<BR><BR>I'm
still not quite sure whether this behaviour matches reality or whether it is an
NS2 artifact.<BR><BR>Detlef<BR></DIV></DIV>
</body>
</html>