[rbridge] draft protocol-10 WGLC Maximum Bridge Transit Delay
james.d.carlson at sun.com
Mon Dec 1 10:27:07 PST 2008
Donald Eastlake writes:
> On Mon, Dec 1, 2008 at 9:15 AM, James Carlson <james.d.carlson at sun.com> wrote:
> > I suspect that one good reason to avoid bothering with such a feature
> > on high-end switches is that the amount of buffering required to reach
> > a 1 second delay at high line rates becomes prohibitive -- in other
> > words, you'll ordinarily drop on queue entry well before that happens
> > in all but pathological cases, so why bother caring?
> Hummm, it's not clear to me what the resolution of the maximum transit
> delay is... although 1 second is the default, the resolution could be
> small allowing it to be set to some number of milliseconds or
Actually, I wasn't talking about the resolution, but rather about the
amount of buffering you'd need at a single priority level to sustain
that much queueing delay without beginning to drop.
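The buffering argument is just arithmetic: one second of queueing delay at line rate means one second's worth of bytes held in the queue. A quick illustrative sketch (the specific rates are my own examples, not from the thread):

```python
# Back-of-the-envelope: bytes of buffering needed to sustain a given
# queueing delay at a given line rate, at a single priority level.

def buffer_bytes(line_rate_bps: float, delay_s: float) -> float:
    """Bytes that must be queued to impose `delay_s` of delay at `line_rate_bps`."""
    return line_rate_bps * delay_s / 8  # bits -> bytes

# At 10 Gb/s, a full second of delay requires 1.25 GB of buffer per port:
print(buffer_bytes(10e9, 1.0))  # 1250000000.0
```

At those sizes, frames are dropped on queue entry long before a 1-second transit delay is ever reached, which is the point being made above.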
> As above, you could have a stream of, say, 1 Gbps higher priority
> frames going from port 1 to port 3 and one lower priority frame for
> port 3 that drifts in on port 2, and this poor low priority frame could
> just sit in a queue for days and then pop out due to a hiccup in the
> high priority stream. I won't argue if you want to call that
> pathological but it doesn't seem good that it could happen.
Yes, I'd call that pathological, but, ok, point taken.
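The scenario is strict-priority scheduling taken to its limit: as long as the high-priority queue is non-empty, the low-priority frame never dequeues. A toy sketch of that behavior (my own illustration, not anything from the drafts):

```python
from collections import deque

# Toy strict-priority scheduler: the high-priority queue is always served
# first, so the lone low-priority frame waits until a "hiccup" empties it.

high = deque(f"hi-{i}" for i in range(5))  # steady high-priority stream
low = deque(["lo-0"])                      # the one low-priority frame

order = []
while high or low:
    order.append(high.popleft() if high else low.popleft())

print(order)  # the low-priority frame emerges only after the high stream stops
```

With a truly continuous high-priority stream the `while` loop never reaches the low queue at all, which is why the frame could "sit in a queue for days" absent a lifetime limit.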
> The specific provisions in 802.1D-2004 include:
> "6.3.6 Frame lifetime
> The MAC Service mandates an upper bound to the transit delay
> experienced for a particular instance of communication. This maximum
> frame lifetime is necessary to ensure the correct operation of
> higher layer protocols. ..."
If there's no limit on the number of bridges that may be encountered
in transit, and no limit on the speed-of-light delays between nodes,
then I think there's little that this transit delay limit does to
protect those upper layer protocols.
It limits variance in some cases, at the cost of higher loss, but
that's about it; it can't practically set any upper limit. (For a
worst-case scenario, consider two adjacent nodes that are unable to
use a shared link due to redundancy elsewhere in the network. Failure
of that redundancy can bring up that low-latency link, and restoring
the far-away link tears it down again. The variance in that case may
be predictable if you know the entire topology, but it's arbitrarily
large otherwise.)
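Put another way, a per-bridge transit-delay limit only translates into an end-to-end guarantee if hop count and propagation delay are themselves bounded. Illustrative numbers only:

```python
# A per-bridge transit limit gives an end-to-end bound of roughly
#   hops * per_bridge_limit + total_propagation_delay
# which is unbounded if the hop count is unbounded (hypothetical figures).

def worst_case_delay(hops: int, per_bridge_s: float, prop_s: float) -> float:
    """Crude end-to-end delay bound from per-hop and propagation limits."""
    return hops * per_bridge_s + prop_s

print(worst_case_delay(10, 1.0, 0.05))     # modest path: ~10 s
print(worst_case_delay(10_000, 1.0, 0.05)) # grows linearly with hop count
```

So without a cap on bridges in transit, the 1-second per-bridge default doesn't by itself protect upper-layer protocols with any fixed lifetime.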
> Similar provisions appear in 802.1Q-2005.
Yep; I saw them. I wasn't questioning whether they're there, but
whether they do any real good.
> Well, I'm OK with saying MAY. Since this is described in the 802.1
> standards as being in the port output queue behavior, and since we now
> incorporate that by reference unless we say otherwise, one could argue
> that it is mandatory under the current draft. Unless people weigh in
> with other opinions, I'll change it to MAY.
That seems fine. (If we're incorporating all of 802.1 by reference,
do we need to restate much of it ... ?)
James Carlson, Solaris Networking <james.d.carlson at sun.com>
Sun Microsystems / 35 Network Drive 71.232W Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757 42.496N Fax +1 781 442 1677