raj at cup.hp.com
Fri Mar 23 11:55:27 PST 2001
I suspect this could be an issue:
The method proposed herein for automatic BDP discovery and caching is to
use a simple mechanism modeled after the ICMP Echo Request and Echo
Reply protocol to discover the bandwidth of the least-capable hop
between a given source and destination host pair. This new mechanism
could be a new type of ICMP Request/Reply pair, or it could be a simple
enhancement to the existing Echo Request/Reply, but using a new IP
class/number combination. The main difference between the new mechanism
and the existing ICMP Request/Reply pair is that the router would have
to process two new fields in the message.
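To make the "two new fields" concrete, here is a sketch of what such a message and a router's handling of it might look like. The wire layout, the type value, and the choice of fields (minimum link bandwidth seen so far, plus an accumulated delay estimate) are my own assumptions, not anything the paper specifies:

```python
import struct

# Hypothetical wire layout, modeled on ICMP Echo Request
# (type/code/checksum/ident/seq) plus the two new fields a router
# would process: minimum link bandwidth seen so far (bits/s) and an
# accumulated one-way delay estimate (microseconds).
BDP_FMT = "!BBHHHII"  # type, code, checksum, ident, seq, min_bw, delay

def make_bdp_request(ident, seq):
    # min_bw starts at "infinity" (all ones) so any real link lowers it
    return struct.pack(BDP_FMT, 42, 0, 0, ident, seq, 0xFFFFFFFF, 0)

def router_update(msg, link_bw, link_delay_us):
    """What each hop would do: clamp min_bw to its outgoing link's
    speed and add its own contribution to the delay estimate."""
    t, c, ck, ident, seq, min_bw, delay = struct.unpack(BDP_FMT, msg)
    return struct.pack(BDP_FMT, t, c, ck, ident, seq,
                       min(min_bw, link_bw), delay + link_delay_us)

# Simulate three hops: GigE, T3, OC-3 -- the T3 is the bottleneck
msg = make_bdp_request(1, 1)
for bw, delay in [(1_000_000_000, 100), (45_000_000, 2000),
                  (155_000_000, 500)]:
    msg = router_update(msg, bw, delay)
*_, min_bw, total_delay = struct.unpack(BDP_FMT, msg)
# min_bw is now 45_000_000; total_delay is 2600
```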
Development of this BDP protocol initially requires the cooperation of
at least one router vendor, though a crude prototype could be
demonstrated with traceroute and SNMP-derived information.
Seems that the ifSpeed fields of the standard SNMP MIBs would be the
best way to go here anyhow. It does mean knowing the community string or
authentication stuff for the SNMP access. True, that will have "issues"
crossing AS boundaries (is that the right term?), but then I suspect the
ASes would not want that bandwidth info escaping their sphere anyhow.
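The crude traceroute + SNMP prototype boils down to a minimum over the ifSpeed values along the path. A toy sketch, with the actual SNMP GET of IF-MIB::ifSpeed (which is where the community-string problem comes in) left out and the hop speeds supplied directly:

```python
# The least-capable hop bounds the path bandwidth. In a real
# prototype each value would come from an SNMP GET of ifSpeed
# (IF-MIB, bits/s) against the router identified by traceroute.
def path_bottleneck(hop_speeds):
    """hop_speeds: list of ifSpeed values (bits/s), one per hop."""
    return min(hop_speeds)

# e.g. a FastEther -> T3 -> OC-3 path: the T3 is the bottleneck
speeds = [100_000_000, 44_736_000, 155_520_000]
bottleneck = path_bottleneck(speeds)  # 44_736_000
```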
As for driving a "supercharged Web"
(http://www.web100.org/papers/web100.html), I would have thought that if
the commercial types were that keen on it, they would be taking part in
the SPECweb9X benchmarks and perhaps the IRC bakeoffs. If the 100 in
web100 is supposed to represent 100 Mbit/s, those benchmarks are already
demonstrating solutions going far faster.
The stuff about driving demand for fibre to the home was fun to read in
the context of long-haul bandwidth prices bottoming-out due to
oversupply, and vendors not being able to recoup their investments.
Other interesting things from the concept paper:
A great deal of fine research has been underway by the Pittsburgh
Supercomputer Center's Networking Research Group, the University of
Washington's Department of Computer Science & Engineering, and several
other groups regarding networking performance tuning and TCP protocol
stack improvements. This research needs to be intensified and
capitalized upon in terms of application to the TCP protocol stack in
the chosen development system. The individual research groups might also
be more effective if their various efforts could be utilized in a
cohesive fashion. For instance, no standing TCP-stack improvement forum
exists to provide a focal point for the exchange of ideas. Finally, it
should be noted that the TCP protocol stack improvement task would be
the most complex and most difficult task of all of those listed.
I guess e2e and tcp-impl don't count... :)
Needed TCP-stack improvements are listed below.
Include Well-Known Mechanisms
Standard mechanisms like per-destination MTU-discovery (RFC 1191) and
extensions to TCP (RFC1323) would certainly be included in the
is there a commercial stack out there that doesn't already have these
things?!? Their target OS, Linux, already has them.
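For context on why RFC 1323 matters here, the bandwidth-delay product arithmetic is simple enough to show directly; the 100 Mbit/s / 70 ms figures are just an illustrative coast-to-coast example of mine:

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes that must be in flight to
    keep the pipe full."""
    return int(bandwidth_bps * rtt_s / 8)

# A 100 Mbit/s path with a 70 ms round-trip time:
window = bdp_bytes(100_000_000, 0.070)  # 875_000 bytes
# The classic 16-bit TCP window tops out at 65_535 bytes, so the
# RFC 1323 window-scale option is required to fill such a path.
needs_scaling = window > 65_535  # True
```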
Include Advanced Mechanisms
In addition to such standard mechanisms as listed above, more advanced
improvements are needed. For instance, TCP Selective Acknowledgment
(SACK), defined by RFC 2018, should also be included in the development
hmm, also in the latest (?) Linux bits, and in HP-UX 11, and in Solaris,
and in WinSomething. Seems that is already done...
Furthermore, work needs to be done not just to improve high-performance
networking, but to improve short-duration network-flows as well,
particularly when congestion is relatively high, as such short-duration
high-loss transfers are typical of most current Web transfers. Current
end-to-end congestion avoidance and congestion control mechanisms can
greatly impede performance in such circumstances.
I must be missing something - that sounds like the increase in the
allowable initial cwnd?
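The initial-cwnd point can be seen with an idealized slow-start count (loss-free, cwnd doubling each RTT, delayed ACKs ignored); the 10 KB object size is just a stand-in for a typical small Web transfer:

```python
def rtts_to_send(total_segments, initial_cwnd):
    """Round trips for an idealized loss-free slow start (cwnd
    doubles each RTT) to deliver total_segments segments."""
    cwnd, sent, rtts = initial_cwnd, 0, 0
    while sent < total_segments:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

# A ~10 KB Web object is about 8 segments at a 1460-byte MSS:
rtts_to_send(8, 1)  # 4 RTTs with the classic initial cwnd of 1
rtts_to_send(8, 4)  # 2 RTTs with an RFC 2414-style initial window
```

Which is why the larger allowable initial window helps short, latency-bound transfers far more than it helps bulk ones.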
The following is a list of needed improvements.
Currently, operating system kernels generally provide network
statistics only in the aggregate. Kernel hooks to monitor
individual TCP sessions in real-time need to be added as a foundation
for developing a large class of highly needed network diagnostic and
performance monitoring tools. Such hooks should maintain dynamic counts
of important TCP-session parameters, as well as be able to supply
TCP-session packet streams upon demand.
OK, per-session stats might be interesting. It will be more overhead in
the stack of course :)
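As a user-space sketch (not any real kernel API), the per-session counters such a hook might maintain could look something like this; the field names and methods are my invention:

```python
from dataclasses import dataclass, field

# Hypothetical per-session counters, updated on the TCP fast path --
# which is exactly where the extra overhead noted above would land.
@dataclass
class TcpSessionStats:
    bytes_sent: int = 0
    bytes_retransmitted: int = 0
    out_of_order_segments: int = 0
    rtt_samples_ms: list = field(default_factory=list)

    def on_transmit(self, nbytes, is_retransmit=False):
        self.bytes_sent += nbytes
        if is_retransmit:
            self.bytes_retransmitted += nbytes

    def on_rtt_sample(self, rtt_ms):
        self.rtt_samples_ms.append(rtt_ms)

    def mean_rtt_ms(self):
        return sum(self.rtt_samples_ms) / len(self.rtt_samples_ms)

s = TcpSessionStats()
s.on_transmit(1460)
s.on_transmit(1460, is_retransmit=True)
s.on_rtt_sample(70.0)
# s.bytes_sent == 2920, s.bytes_retransmitted == 1460
```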
GUI-based TCP-Session Monitoring Tools
Based upon the aforementioned kernel hooks, one or more TCP-monitoring
tools need to be developed that are capable of concurrent, dynamic,
real-time graphing of sets of user-selected real-time TCP-session
statistics. Among these statistics are: data rate, window size,
round-trip-time, number of packets unacknowledged, number of
retransmitted packets, number of out-of-order packets, number of
duplicate packets, etc. A variety of display options should be available
such as totals, deltas, running-averages, etc.
All nice and whizzy, but to what end?
How a GUI for traceroute makes it any better is an open question. (I've
not bothered to quote from the article)
Anyhow, it sounds like nice cushy funding if you can get it :)
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to email, OR post, but please do NOT do BOTH...
my email address is raj in the cup.hp.com domain...