On Jul 16, 2012, at 6:23 PM, Joe Touch wrote:

>> One interpretation of end-to-end tells us that in order to improve
>> the scalability of our solution, we should do less in the channel,
>> let corruption go uncorrected, and move the work of overcoming faults
>> closer to the endpoint.
>
> Scalability depends on your metric - are you concerned with archive
> size, ongoing restoration maintenance (repeated checking and correcting
> detected errors), or something else?

I always get tripped up by this point. Perhaps I shouldn't use the term "scalability," which has so many different connotations.

What *I* mean by scalability is a solution that can be widely deployed and widely adopted without undue non-linearity in cost and difficulty. One that meets the needs of varied communities, can be implemented on varied technologies, and will therefore attract investment and instill confidence in its long-term stability, neutrality, and correctness. Call me nostalgic, but the best examples I can point to are the Unix kernel interface and IP. I realize that such a general answer leaves me open to the charge that I can't state my goal clearly, so any attempt to engage in reasoned discourse with me is futile. But I have decided to try anyway, and to ask the E2E community for help.

In this context, the obvious aspects of "scalability" I am attempting to address are the scale of the data to be preserved (measured in zettabytes) and the length of time to preserve it (measured in centuries). Also important are the varied environments, both technological and societal, through which preservation must continue. Natural disasters and war are obvious cases, but lack of funding and loss of political support for the preservation effort are others. Correlated failures due to use of the same software or closely related hardware throughout highly distributed systems must also be anticipated. All kinds of low-probability or easily-avoided failures will eventually occur if you wait long enough and don't pay close enough attention to the archive. Eventually the power will go out in a data center containing the only copy of the data you later decide you absolutely need, and no one will have brought it back online for a year, or two years, or ten years.

Today, we can deploy IP on a cell phone in the middle of the Sahara desert and interoperate with servers attached to the North American backbone.
Today my telephone (Android) and my laptop (OS X) run operating systems whose kernel interfaces are descended from the one that Ken Thompson designed, and which still share a certain interoperable core. Those are designs that *have* scaled. Call it what you will, that kind of design success is my goal when designing hardware or software infrastructure.
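To make both points concrete, here is a minimal sketch, not anyone's actual archive tool: the same POSIX calls (open/read/close), descended from that original kernel interface, compile and run unchanged against Android's bionic and OS X's libc, and an endpoint can use them to re-verify its own stored copy rather than trusting the channel, per the end-to-end argument above. The fixity_check name is mine, and the Fletcher-16 checksum is purely illustrative; a real archive would use a cryptographic digest.

/*
 * Endpoint fixity check: re-read a stored file through the POSIX
 * interface and compare its checksum against the recorded value.
 * Fletcher-16 stands in here for a real cryptographic digest.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Returns 0 if the file matches `expected`, 1 if corrupted, -1 on I/O error. */
static int fixity_check(const char *path, uint16_t expected)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        perror(path);
        return -1;
    }
    uint32_t sum1 = 0, sum2 = 0;
    unsigned char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        for (ssize_t i = 0; i < n; i++) {
            sum1 = (sum1 + buf[i]) % 255;  /* Fletcher-16 running sums */
            sum2 = (sum2 + sum1) % 255;
        }
    }
    close(fd);
    if (n < 0)
        return -1;
    uint16_t got = (uint16_t)((sum2 << 8) | sum1);
    return got == expected ? 0 : 1;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s file expected-checksum-hex\n", argv[0]);
        return 2;
    }
    unsigned expected = 0;
    sscanf(argv[2], "%x", &expected);
    int r = fixity_check(argv[1], (uint16_t)expected);
    printf("%s\n", r == 0 ? "intact" : r == 1 ? "CORRUPTED" : "error");
    return r;
}

Compile with "cc -o fixity fixity.c" and run it periodically against each replica (the expected value would come from a manifest recorded at ingest). The point is where the work happens: the endpoint that holds the copy does the checking, on its own schedule, independent of whatever channel or storage stack sits underneath.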