Lessons from the 60s

Philip Greenspun cites the software engineering classic The Mythical Man-Month today in a response to a question about open source. A key message of that book seems to have been lost on people who are using control arguments to talk down RSS.
In the book, Fred Brooks develops the notion of conceptual integrity:

  I will contend that conceptual integrity is the most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have one that contains many good but independent and uncoordinated ideas. (p. 42, 20th anniversary ed.)

You should be able to appreciate his point if you’ve spent much time looking through code that hasn’t been tightly managed, or that has been managed by a succession of different people with different ideas at different times. Such code is generally harder to understand and harder to program against than code that has conceptual integrity.
Brooks submits that

  Conceptual integrity in turn dictates that the design must proceed from one mind, or from a very small number of agreeing resonant minds. (p. 44, 20th anniversary ed.)

This conclusion is entirely rational but also entirely at odds with those who advocate for a democratic software design process. In a recent Guardian story about Dave Winer’s RFC to merge RSS and Atom, Ben Hammersley writes that

  The style of the Atom project differs greatly from the RSS 2.0. Whereas RSS 2.0 is controlled by a steering committee of Dave Winer and two others, Atom has been developed by an adhocracy of interested developers, with decisions reached by consensus.

This statement is interesting. You could read a bias into the word choices, assigning negative connotation to the terms appearing around RSS (“controlled”, “steering committee”) and positive connotation to the terms appearing around Atom (“adhocracy”, “interested”, “consensus”). I think there’s something to this notion, at least in the author’s mind. But from a software engineering standpoint, you would assign the values the other way, because control by a few gives you a chance at achieving conceptual integrity.
We resist the idea, because we’re all brilliant developers and surely our feature ideas are worthy and should be included in the spec. But our gut feeling is wrong on this one. And I think the people in charge of Atom probably know this, even if they don’t talk loudly about it. You can’t have a vote on every feature. At some point somebody has to make decisions, and it’s better to have the same one or two people making all of the decisions guided by a consistent notion of how the system should work. The alternative is chaos.
This fact does not devalue the individual. Everyone should be able to air their ideas for consideration by the system designers. And the system designers should be able to incorporate or shoot these ideas down as they see fit. More importantly, there’s no reason that a person can’t be a leader on one project and a follower on another. Given that we can’t all be leaders, most of us are going to have to learn to be followers most of the time.
None of the above is at odds with the idea of open standards. The Open Standards principles and practice document does not prescribe the number of people that get to design a standard, nor how those people should be chosen. And it shouldn’t.

10 thoughts on “Lessons from the 60s”

  1. I really liked what you had to say here. Very well said. A lot of the internet seems to have been swept up by the open-source movement into thinking that the degree of openness is directly proportional to the quality of the output. Clearly this is not true in all cases, standards design being one of them.

  2. Interesting angle, and I’m sure there’s something in it. Conceptual integrity is extremely important, though I’m not convinced you need a (hopefully benign) dictator or Central Committee to get this.

    I’d also note that “adhocracy” and so on doesn’t necessarily mean there isn’t a small central steering committee, and this in turn doesn’t mean that decisions can’t be made by consensus (whether or not this applies to Atom is another matter).

    There are definite downsides to small development teams; errors are more easily missed: "Given enough eyeballs, all bugs are shallow." I don't think the ambiguity about HTML escaping would have crept into RSS 2.0 had there been true community involvement in the spec.
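
    To make that ambiguity concrete, here is a minimal, hypothetical sketch (the item content and the handling below are illustrative, not taken from either spec):

    ```python
    from xml.etree import ElementTree as ET

    # A hypothetical RSS 2.0 item whose description contains escaped markup.
    item = ET.fromstring(
        "<item><description>&lt;b&gt;2 &amp;lt; 3&lt;/b&gt;</description></item>"
    )
    text = item.findtext("description")  # after XML parsing: <b>2 &lt; 3</b>

    # RSS 2.0 does not say whether this is escaped HTML to be rendered
    # (a bold "2 < 3") or literal text to be displayed verbatim, so an
    # aggregator is left to guess -- that is the escaping ambiguity.
    print(text)
    ```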

    Additionally, if a spec is to serve a wide range of needs, there needs to be some knowledge of those needs, which is hard to get with only a small number of people. RSS 2.0 doesn’t help the RDF developer at all; not coincidentally, Dave is antagonistic towards RDF. But if there had been someone on the team who understood that community’s requirements, the format might have been more web-friendly (i.e. used a URI).

    Anyhow, the bottom line is whether or not Atom ends up with conceptual integrity. Based on progress so far I think it very likely; there has been a tendency towards integrity and consistency – the format being shared between publication and authoring is a prime example. Contrast this with RSS (x versions) + BloggerAPI + MetaWeblogAPI.

  3. That is a very interesting argument! I think everyone involved in standardization needs to keep it in mind.

    I’m not so sure it has the implications for the RSS/Atom debate that you imply, however. First, successful standards aren’t so much the result of a “democratic design process” as of systematically applying the results of natural selection to the ideas submitted for consideration. “Democratic” implies one participant, one vote; “rough consensus and running code” (the IETF mantra, as I understand it) implies that successful ideas get more votes. Of course, lots of things we call “standards” come from a consensus process where powerful organizations with veto power (ahem, think SQL3 or XQuery) create unholy complexity and non-interoperability, but the Atom people can choose to avoid that road to hell.

    Second, even if having a single individual in charge is a necessary condition for conceptual integrity, it’s not a sufficient condition. RSS 2.0 is *not* a marvel of conceptual integrity in reality, for reasons that a number of people will happily go into at length. In fact, the main argument in its favor seems to be the classic “worse is better” thesis that simple and underspecified technologies usually prevail because they work fine for the typical cases, causing rapid, widespread deployment. Pragmatic people learn to avoid or hack around the corner cases that are hard or controversial, while geeks argue about how to clarify or improve them.

    Finally, any spec that is intended to be ‘unchanging’ falls afoul of Orgel’s Second Rule, i.e. “evolution is smarter than you are.” Even if, for the sake of argument, RSS 2.0 is a work of inspired genius, the collective wisdom of thousands of people trying new tweaks and fixes is greater. Atom seems to be oriented toward collecting and codifying the results of all that evolution rather than trying to stifle it. Again, the people working in the IETF committee (assuming it is approved) can and should *choose* to use conceptual integrity as a selection criterion.

    So, you’re right that conceptual integrity is critical. Brooks noted that one way to achieve it is via “resonant minds.” The way forward seems to be to get people resonating on concepts and ideas rather than labels, credit, past insults, and all the other baggage of the RSS/Atom disputes.

  4. Right on, Andrew. Good engineers respect the need for conceptual integrity. They submit to its rituals and short-term sacrifices because they know in the end the product will be better for it. As a result, good engineers should silently recognize and support the crucial mechanisms in any collaborative effort, such as a poetry workshop (see Richard Gabriel’s _Writers’ Workshops and the Work of Making Things_, http://dreamsongs.com/Books.html). Distracting an engineering group with another vision — no matter how compelling it is — is just another way to replace real progress with vaporware.

  5. “Atom seem to be oriented toward collecting and codifying the results of all that evolution rather than trying to stifle it.”

    What do you base this on? I see Atom more as another bite at the same apple, benefiting from the experience that its lead developers acquired implementing RSS software and other projects. But calling it evolutionary, as if design decisions they’ve made were naturally selected on objective merit, seems a bit grandiose.

  6. “I see Atom more as another bite at the same apple, benefiting from the experience that its lead developers acquired implementing RSS software and other projects. But calling it evolutionary, as if design decisions they’ve made were naturally selected on objective merit, seems a bit grandiose.”

    Maybe I said it in a grandiose way 🙂 but I think we’re saying more or less the same thing – Atom benefits from the experience the developers got trying to untangle the ambiguities within and conflicts between the various versions of RSS. That motivates them to clarify. Support can be added for things that experience shows should be in the core. Likewise stuff that doesn’t justify its complexity can be dropped.

    Admittedly, what I’m advocating is more like selective breeding than *natural* selection, which does produce a lot of solutions utterly lacking in conceptual integrity. Still, the process is “evolutionary” in the sense that clever ideas that don’t actually work die a quiet death, and apparently stupid ideas that solve real problems tend to survive the flames. No single human mind is making these decisions; they are the result of the ‘ecosystem’ (sorry, more grandiosity!) of people proposing ideas, flaming them, writing code, testing it, and repeating the process until what’s left is, if not elegant, at least flameproof.

    This is the key difference between RSS 2 and Atom in my mind — the Atom people have embraced open discussion and a bit of flamage in the name of determining what needs to change; the RSS spec ends with a “roadmap” saying that change is undesirable. The way to get conceptual integrity IMHO is not to enshrine the idea that it comes from a lone genius, but to add it to the criteria used to evaluate whether a particular proposal deserves to live to spawn new ideas, or to be put out of its misery.

  7. Nice post, Andrew! I place high value on conceptual integrity and believe it was one of the core reasons Java was successful. It has somewhat broken down as of late as the complexity of Java outstrips that of all previous programming languages and APIs, but it has held intact remarkably long for a committee-driven standards process.

    From a standpoint of the evolution of standards, I believe the evidence is strongly in favor of tightly-held standards eventually being opened up in order to flourish. Examples:

    * C was the product of essentially two people (K&R) initially, but then became an American and then an International standard, finally to give way to C++, the very model of a committee-driven process.
    * LISP was originally the product of a few minds, John McCarthy and friends, but it gave way to Common Lisp, an International Standard and committee-driven process.
    * OpenGL began as a tightly knit group, but then expanded into a committee once it reached critical mass. DirectX went through the same process.
    * Pixar’s RenderMan API and Shading Language were major API advances in 1987 mostly written by 3 people at Pixar. Pixar tightly controlled the API for a long time, despite making it a standard, and this was detrimental to the development of a standard shading language. Competitive issues and IP disputes developed and this deterred the evolution of the technology.
    Some Pixar refugees with some engineers at NVIDIA are finally moving this forward with their Cg language. This is driving innovation in shading, a cornerstone of next-generation computer graphics. (I see RSS/ATOM in much the same light as this)
    * And of course, before there was SOAP, there was XML-RPC.

    There is often a higher degree of conceptual integrity and “elegance” to the original standards. This deservedly drives a reverence for the predecessor technology, much in the same way we admire early inventions in museums or as textbook examples. But the original technologies inevitably run into problems from a professional engineering perspective due to limitations that evolved over time as the landscape changed. What I see from these examples is a pattern: a language or API as the product of a small, tightly-knit group is necessary to get an idea “off the ground.” Premature committee interest can kill the idea before it ever reaches fruition. Many argue that this has happened to XML Schema. However, once the idea reaches critical mass, its growth is best enhanced by relatively liberal access in a controlled process, a.k.a. the standards committee. Call this the API lifecycle if you will: independently born, eventually committee controlled. The reason for this is well stated by Mike Champion. From a political perspective, the ideas and API benefit from a larger competitive landscape where some alignment of interests and organization can direct creative energy to increase the value of the idea. Many times it does seem to go awry, but it is unavoidable from political necessity. Without some “skin in the game” people will go off and do their own thing and everyone loses.

    The real tragedy of the fight over RSS/ATOM is that the two sides currently have no means of jointly working together without it seeming like one side lost. As long as that is the case, the two will fight.

    It is inevitable that for RSS/ATOM to grow in the long term, it will have to become part of an existing mature standards group. The IETF, W3C, or OASIS would seem to be good venues for a meeting of the minds.

  8. Thank you, Andrew! Points evidently still not apparent to some…

    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    “This is the key difference between RSS 2 and Atom in my mind — the Atom people have embraced open discussion…”

    Unfortunately, the facts are otherwise.

    And this “It is inevitable that for RSS/ATOM to grow in the long term,” would be the FUD that the group has been spreading, contrary to almost ALL the facts.

    Danny, one thing I don’t know is how many RDFers have converted to Atom so far. This would be an indicator of how well Atom has done so far at accommodating the needs of the RDF folks. I’ve not seen much on this (and may have easily missed it). My contention is that Atom is a compromise solution that helps neither RSS (factual) nor RDF (dunno).

    Can’t agree more with “Additionally, if a spec is to serve a wide range of needs, there needs to be some knowledge of those needs, which is hard to get with only a small number of people.”

    That would be a large part of the conundrum. However, I accept as factual what Tim Bray posted about how the entire PURPOSE of standards is contra to innovation, which I believe is the larger problem. (Couldn’t find link.)

    JayT, actually quite a few RDFers lost interest in Atom when it became clear that Atom wouldn’t be RDF/XML. But there are still quite a few folks around that want to use Atom in RDF systems (myself included), and really the only essential requirements are that the specification is well-defined and consistent with web architecture. XSLT etc. can do the rest.

    I’m optimistic that Atom can be more than a compromise, getting the best bits out of RSS 2.0 (simple syntax) and RSS 1.0 (powerful model), and also adding innovations like a single syntax for all operations rather than having to use XML-RPC etc for posting.
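
    To sketch what “a single syntax for all operations” means in practice, here is a rough, illustrative comparison (the endpoints, credentials, and entry below are made up, and Atom’s posting protocol was still a draft at the time):

    ```python
    import urllib.request
    import xmlrpc.client

    # Today: the syndication format (RSS) and the posting protocol are separate
    # things -- e.g. creating a post goes through the MetaWeblog XML-RPC API.
    blog = xmlrpc.client.ServerProxy("https://example.com/xmlrpc")
    blog.metaWeblog.newPost("blog-1", "user", "secret",
                            {"title": "Hello", "description": "First post"}, True)

    # The Atom idea: the same entry syntax is simply POSTed over HTTP, so one
    # format serves both syndication and authoring (simplified Atom 0.3 entry).
    entry = b"""<entry xmlns="http://purl.org/atom/ns#">
      <title>Hello</title>
      <content>First post</content>
    </entry>"""
    req = urllib.request.Request("https://example.com/atom/collection",
                                 data=entry, method="POST",
                                 headers={"Content-Type": "application/atom+xml"})
    urllib.request.urlopen(req)
    ```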

    A key benefit long-term should be that there is only one syndication format to aim for (Atom 1.0). There was a good opportunity when Dave was talking about RSS 0.94/2.0 for that to be a unified format, but he only managed to alienate the RSS 1.0 people even more by ignoring their suggestions. Another downside to single-person organisations…

  10. Pingback: Critical Section
