Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software – Scott Rosenberg

Chapter 2
The Soul of Agenda
.
.
.
.
In fact, though, there is an entire software disaster genre; there are shelves of books with titles like Software Runaways and Death March that chronicle the failures of one star-crossed project after another. These stories of technical ambition dashed on the shoals of human folly form a literature that is ultimately less interesting as fact than as mythic narrative. Over and over the saga is the same: Crowds of ardent, ambitious technologists march off to tackle some novel, thorny problem. They are betrayed by bad managers, ever-changing demands, and self-defeating behavior. They abandon the field to the inevitable clean-up squad of accountants and lawyers.

The genre’s definitive work to date is The Limits of Software, a disjointed but impassioned book by an engineer named Robert Britcher. Britcher is a veteran of big software’s ur-disaster, the train wreck against which all other crack-ups can be measured: the Federal Aviation Administration’s Advanced Automation System (AAS). AAS was a plan to modernize the air traffic control system; it began in 1981 and “terminated” in 1994 after billions of dollars had been spent, with virtually nothing to show.

Britcher, who had worked on the original air traffic control system built by IBM in the 1960s and then labored in the trenches on its planned replacement, writes that AAS “may have been the greatest debacle in the history of organized work.” The FAA laid down a set of rules for its new system: Every bit of software created for it must be reusable. The entire system had to be “distributed,” meaning that each air traffic controller’s workstation would synchronize its information in real time with all the others. The cut-over to the new system had to be seamless, and all changes had to be made while the existing system was running. (This was “like replacing the engine on a car while it is cruising down the turnpike.”) Paper printouts might help that process-air traffic controllers relied on them when the old computer system got flaky-but the FAA forbade printers: They were too old-school. And on and on.

At its peak, the AAS program was costing the government $1 million a day; two thousand IBM employees labored on the project, producing one hundred pages of documentation for each line of program code. But in the end, Britcher concludes, “The software could not be written.” The FAA’s demands were simply beyond the capacity of human and machine. “If you want to get a feel for how it was,” he writes, “you can read the Iliad.” With its sense of tragic inevitability, of powerful men trapped by even more powerful forces, The Limits of Software is itself a kind of digital Iliad populated by doomed desktop warriors. In the programming campaigns Britcher chronicles, the participants-not so much overworked as overtaxed by frustration-smash their cars, go mad, kill themselves. A project manager becomes addicted to eating paper, stuffing his maw with bigger and bigger portions at meetings as the delays mount. No one-including, plainly, the author-escapes scar-free.

One engineer I know described the AAS this way. You’re living in a modest house and you see the refrigerator going. The ice sometimes melts, and the door isn’t flush, and the repairman comes out, it seems, once a month. And now you notice it’s bulky and doesn’t save energy, and you’ve seen those new ones at Sears. So it’s time. The first thing you do is look into some land a couple of states over and think about a new house. Then you get I. M. Pei and some of the great architects and hold a design run-off. This takes a while, so you have to put up with the fridge, which is now making a buzzing noise that keeps you awake at night. You look at several plans and even build a prototype or two. Time goes on and you finally choose a design. There is a big bash before building starts. Then you build. And build. The celebrating continues; each brick thrills. Then you change your mind. You really wanted a Japanese house with redwood floors and a formal garden. So you start to re-engineer what you have. Move a few bricks and some sod. Finally, you have something that looks pretty good. Then, one night, you go to bed and notice the buzzing in the refrigerator is gone. Something’s wrong. The silence keeps you awake. You’ve spent too much money! You don’t really want to move! And now you find out the kids don’t like the new house. In fact, your daughter says “I hate it.” So you cut your losses. Fifteen years and a few billion dollars later, the old refrigerator is still running. Somehow.
.
.
.
.
Chapter 3
Prototypes and Python
[2001-NOVEMBER 2002]

How do you organize a music collection? If you have a mountain of CDs, unless you’re content to leave them in a state of total disorder, you have to pick one approach to begin with: alphabetical by artist, maybe. Or you’ll start organizing by genre-rock in one pile, jazz in another, classical in another-and then sort alphabetically by artist within those piles. If you’re more unorthodox, maybe you’ll order them alphabetically by title or by label or by year of release. Or maybe you just let them accumulate according to when you bought them, like geological strata layered in chronological sequence.

Once you pick an approach to filing your albums, though, you’re committed; switching to a different scheme requires more work than most of us are willing to invest. (In 2004, a San Francisco conceptual artist decided to re-sort all the books in a used bookstore according to the color of their bindings. He wrote that the effort required “a crew of twenty people pulling an all-nighter fueled by caffeine and pizza.”) That’s a limitation of the world of physical objects. But there are advantages, too. Once you’ve lived with your arrangement a while, you discover you can locate things based on sense memory: Your fingers simply recall that the Elvis Costello section is over at the right of the top shelf because they’ve reached there so many times before.

As music collections have begun to migrate onto our computers, we find that we’ve gained a whole new set of possibilities: We can instantly reorder thousands of songs at a momentary whim, applying any number of criteria in any combination. Our music software even lets us view our collections through previously unavailable lenses, like “How many times have I listened to this before?” We’ve gained the same power over our troves of recordings that we get any time we take something from the physical world and model it as data. But we’ve also lost some of the simple cues we take for granted in the physical world-the color of the CD box’s spine that subliminally guides our fingers, or the option to just toss a box on top of the CD player to remind yourself that that’s what you want to hear next.
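
To make the claim concrete, here is a minimal sketch of what “applying any number of criteria in any combination” looks like once a collection is data. The example is mine, not the book’s, and the songs, fields, and play counts are invented for illustration.

```python
# Once albums are data, any combination of criteria is one sort away.
songs = [
    {"artist": "Elvis Costello", "title": "Alison", "genre": "Rock", "year": 1977, "plays": 42},
    {"artist": "Miles Davis", "title": "So What", "genre": "Jazz", "year": 1959, "plays": 17},
    {"artist": "Glenn Gould", "title": "Goldberg Variations", "genre": "Classical", "year": 1955, "plays": 8},
]

# The physical-shelf scheme: genre piles, alphabetical by artist within each pile.
by_shelf = sorted(songs, key=lambda s: (s["genre"], s["artist"]))

# A momentary whim no shelf can satisfy: most played first, newest breaking ties.
by_whim = sorted(songs, key=lambda s: (-s["plays"], -s["year"]))

# A previously unavailable lens: everything listened to more than ten times.
favorites = [s for s in songs if s["plays"] > 10]
```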

When we move some aspect of our lives into software code, it’s easy to be seduced by novel possibilities while we overlook how much we may be giving up. A well-designed program will make the most of those new capabilities without attempting to go against the grain of the physical world orientation that evolution has bequeathed us. (I remember the location of this button from yesterday because my brain is wired to remember locations in space, so it had better be in the same place tomorrow!)

Too often, though, we end up with software that’s not only confusingly detached from the world we can touch and feel but also unable to deliver on the promise of flexibility and versatility that was the whole point of “going digital” in the first place. You want to organize your music by genre? Go ahead, but please don’t expect to add to the list of genres we’ve preloaded into the program. You’re an opera fan and need a different scheme for sorting files than the one that works for the Eagles and Eminem? Sorry, we baked the Artist and Song Title categories so deeply into the software’s structure that you’ll need a whole new program if you expect to track composers and singers and arias.

This happens not because programmers want to fence you in, but because they are so often themselves fenced in by the tools and materials they work with. Software is abstract and therefore seems as if it should be infinitely malleable. And yet, for all its ethereal flexibility, it can be stubbornly, maddeningly intractable, and it is constantly surprising us with its rigidity.
That paradox kicks in at the earliest stages of a programming project, when a team is picking the angle of attack and choosing what languages and technologies to use. These decisions about the foundations of a piece of software, which might appear at first to be lightweight and reversible, turn out to have all the gravity and consequence of poured concrete.
.
.
.
Chapter 5
Managing Dogs and Geeks
.
.
.
“Management,” wrote Peter Drucker, the late business philosopher, “is about human beings. Its task is to make people capable of joint performance, to make their strengths effective and their weaknesses irrelevant.” We’re accustomed to thinking of management as the application of business school techniques that carry a scientific sheen: uniform measurements of productivity and metrics of return-on-investment. Drucker’s definition sounds awfully squishy; he could be talking about an orchestra conductor or a stage director. But in emphasizing the art of management over the science, the human realm over the quantitative dimension, Drucker-who first invented the term knowledge worker and then offered invaluable insights into its implications-was trying to remind us that numbers are only a starting point for management, not its ultimate goal.
.
.
.
.
Chapter 6
Getting Design Done
.
.
.
.
A favorite story at management meetings is that of the three stonecutters who were asked what they were doing. The first replied, “I am making a living.” The second kept on hammering while he said, “I am doing the best job of stonecutting in the entire country.” The third one looked up with a visionary gleam in his eyes and said, “I am building a cathedral.”

The third man is, of course, the true “manager.” The first man knows what he wants to get out of the work and manages to do so. He is likely to give a “fair day’s work for a fair day’s pay.”

It is the second man who is a problem. Workmanship is essential, . . . but there is always a danger that the true workman, the true professional, will believe that he is accomplishing something when in effect he is just polishing stones or collecting footnotes.
.
.
.
.
By now, I know, any software developer reading this volume has likely thrown it across the room in despair, thinking, “Stop the madness! They’re making every mistake in the book!”
A good number of those programmers, I imagine, are also thinking, “I’d never do that. I’d do better.”

A handful of them might even be right.

After a year of sitting in on OSAF’s work, I, too, found myself wondering, “When are they going to get going? How long are they going to take? What’s the holdup?” Software time’s entropic drag had kicked in, and nothing seemed able to accelerate it.

Couldn’t the Chandler team see how far off the road they had driven? At every twist of the project’s course, Kapor would freely admit that he would have proceeded differently if he had known before what he later learned. Then he would sigh, shrug, say, “Hindsight is always twenty-twenty,” and get back to work.

In the annals of software history, Chandler’s disappointing pace is not the exception but the norm. In this field, the record suggests that each driver finds a different way to run off the road, but sooner or later nearly all of them end up in a ditch.

If you are one of those programmers who are certain they could do better, ask yourself how many times on your last project you found yourself thinking, “Yes, I know that we probably ought to do this”-where “this” is any of the canon of best practices in the software field-“but it’s a special situation. We’re sort of a unique case.” As Andy Hertzfeld put it, “There’s no such thing as a typical software project. Every project is different.”

In June 2004, Linux Times published an interview with Linus Torvalds, Linux’s Benevolent Dictator.

“Do you have any advice for people starting to undertake large open source projects?” the interviewer began.

“Nobody should start to undertake a large project,” Torvalds snapped. “You start with a small trivial project, and you should never expect it to get large. If you do, you’ll just overdesign and generally think it is more important than it likely is at that stage. Or, worse, you might be scared away by the sheer size of the work you envision. So start small and think about the details. Don’t think about some big picture and fancy design. If it doesn’t solve some fairly immediate need, it’s almost certainly overdesigned.”

Torvalds didn’t mention Chandler by name, but anyone familiar with the project who read his words couldn’t help seeing the parallels, and anyone working on the project who heard them would doubtless have winced.

Yet even if you took Torvalds’s advice-even if you started small, kept your ambitions in check, thought about details, and never, ever dreamed of the big picture-even then, Torvalds said, you shouldn’t plan on making fast progress.

“Don’t expect to get anywhere big in any kind of short time frame,” he declared. “I’ve been doing Linux for thirteen years, and I expect to do it for quite some time still. If I had expected to do something that big, I’d never have started.”

Chapter 7
Detail View
.
.
.
.
But it was in the spec! Or, It wasn’t in the spec! These cries are the inevitable refrain of every software project that has hit a snag. Writing the spec, a document that lays out copiously detailed instructions for the programmer, is a necessary step in any software building enterprise where the ultimate user of the product is not the same person as the programmer. The spec translates requirements-the set of goals or desires the software developer’s customers lay out-into detailed marching orders for the programmer to follow. For a personal finance software product, for instance, requirements would sound like this: Must support ledgers for multiple credit card accounts. Needs to be able to download account information from banks. The specs would actually spell out how the program fulfills these requirements. Visual specs exhaustively detail how each screen looks, where the buttons and menus are, typefaces and colors, and what happens when there is too much text to fit in a line. Interaction specs record how a program behaves in response to the user’s every click, drag, and typed character.

The spec is the programmer’s bible, and, typically, the programmer is a fundamentalist: The spec’s word is law. Programmers are also, by nature and occupational demand, literal-minded. So the creation of specs calls for care and caution: You need to be careful what you wish for, as in a fairy tale. (Everlasting life? Don’t forget to specify eternal youth as well!)

One of the oldest jokes in computing tells of the flummoxed user who calls a support line complaining that although the manual says, “Press any key to begin,” he can’t find the “any” key anywhere. The joke looks down its nose at the hapless ignoramus user, but the novice’s misunderstanding actually mirrors the sort of picayune context-free readings that are the specialty of master programmers. Specs, as their name indicates, are supposed to bridge the realm of human confusion and machine precision through sheer exhaustive specificity. The programmer is to be left with no ambiguities, no guesswork. But the effort is inevitably imperfect. In part that’s because the specs’ authors are human, and the language they are written in is human. But it’s also because the effort to produce a “perfect spec”-one that determined and specified every possible scenario of a program’s usage and behavior-would prove an infinite labor; you’d never finish spec writing and start coding.

In practice, human impatience and the market’s demands conspire to make underspecification the norm. And at OSAF the problem wasn’t knowing when specs were finished but, rather, figuring out how to get them started. Despite the new feeling of momentum on the team, a few complex areas remained, in the terminology of OSAF old-timers, snakes.
.
.
.
.
One day in April 2004, Chao Lam sent Mimi Yin a link to an article that he had found in a blog posting by a writer named Clay Shirky, a veteran commentator on the dynamics of online communication. Shirky had written about his rediscovery of an old article by Christopher Alexander, the philosopher-architect whose concept of “patterns” had inspired ferment in the programming world. The 1965 article titled “A City Is Not a Tree” analyzed the failings of planned communities by observing that typically they have been designed as “tree structures.” “Whenever we have a tree structure, it means that within this structure no piece of any unit is ever connected to other units, except through the medium of that unit as a whole. The enormity of this restriction is difficult to grasp. It is a little as though the members of a family were not free to make friends outside the family, except when the family as a whole made a friendship.”

Real cities that have grown organically-and real structures of human relationships-are instead laid out as “semi-lattices,” in Alexander’s terminology. A semi-lattice is a looser structure than a tree; it is still hierarchical to a degree but allows subsets to overlap. Why do architectural designs and planned communities always end up as “tree structures”? Alexander suggests that the semi-lattice is more complex and harder to visualize and that we inevitably gravitate toward the more easily graspable tree. But this “mania every simpleminded person has for putting things with the same name into the same basket” results in urban designs that are artificially constrained and deadening. “When we think in terms of trees, we are trading the humanity and richness of the living city for a conceptual simplicity which benefits only designers, planners, administrators, and developers. Every time a piece of a city is torn out, and a tree made to replace the semi-lattice that was there before, the city takes a further step toward dissociation.”

Yin read the Alexander piece-the path it took to reach her, via Clay Shirky’s blog and Chao Lam’s email, seemed to illustrate the article’s point about complex interconnection-and immediately applied its ideas to Chandler’s design. She emerged with an elaborate design for a Chandler browser that was loosely inspired by iTunes; it would allow users to navigate the items in their repository by mixing and matching (or “slicing and dicing”) from three columns of choices (for instance, you might have a first column with only “emails” checked, a second with a set of selected names, and a third with a date range).
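
Here, purely as illustration, is a rough sketch of that three-column idea in Python; the item fields, names, and filtering logic are my assumptions, not Chandler’s actual design or code.

```python
from datetime import date

# Invented sample items; the fields are illustrative, not Chandler's schema.
items = [
    {"kind": "email", "who": "Mimi", "when": date(2004, 4, 12), "subject": "browser design"},
    {"kind": "event", "who": "Chao", "when": date(2004, 4, 15), "subject": "design review"},
    {"kind": "email", "who": "Chao", "when": date(2004, 4, 20), "subject": "Alexander article"},
]

def browse(items, kinds=None, people=None, start=None, end=None):
    """Apply whichever of the three 'columns' the user has checked."""
    result = items
    if kinds:
        result = [i for i in result if i["kind"] in kinds]
    if people:
        result = [i for i in result if i["who"] in people]
    if start and end:
        result = [i for i in result if start <= i["when"] <= end]
    return result

# First column: only "emails" checked; second: a set of names; third: a date range.
print(browse(items, kinds={"email"}, people={"Chao"},
             start=date(2004, 4, 1), end=date(2004, 4, 30)))
```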

The browser design occasioned lengthy discussion but did not win any kind of fast consensus. In trying to imagine a different, less hierarchical structure for Chandler’s information, Yin was smacking headfirst into the reality that hierarchies are embedded deep in the nature of software and in the thinking of its creators. A city may not be a tree, as Alexander said, but nearly every computer program today really is a tree-a hierarchical structure of lines of code. You find trees everywhere in the software world-from the ubiquitous folder trees found in the left-hand pane of so many programs’ user interfaces to the deep organization of file systems and databases to the very system by which developers manage the code they write.
.
.
.
.
Stamping aimed to introduce a kind of productive ambiguity to the computer desktop that more closely mirrored the way people think. It was not a simple concept even for the designers who had invented it; for the developers who had to make it work, it was even trickier. Computer programs used silos and trees and similar unambiguous structures because they helped keep data organized and limited confusion. If an item belonged to one group, it did not belong to another; if it lived on one branch of a tree, it did not live on another.
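
The contrast can be put in a few lines of code. This is only a sketch of the general idea, with made-up item and collection names; it is not how Chandler or its repository actually represented anything.

```python
# The exclusive "silo" model: every item lives in exactly one bucket.
folders = {
    "Mail":     ["note from Mimi"],
    "Calendar": ["design review, Tuesday 2pm"],
}

# The overlapping model that stamping reached for: one item can belong
# to several collections at once.
memberships = {
    "note from Mimi":             {"Mail", "Tasks"},
    "design review, Tuesday 2pm": {"Calendar", "Tasks", "Work"},
}

def collections_containing(item):
    return memberships.get(item, set())

# The same item now answers to three names: the flexibility, and the ambiguity.
print(collections_containing("design review, Tuesday 2pm"))
```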

Human language is more forgiving: One word can mean more than one thing. This flexibility provides a deep well of nuance and beauty; it is a foundation of poetry. But it leads only to trouble when you are trying to build software. As OSAF’s developers struggled to transform the innovations in Chandler, such as stamping, from sketch to functioning code, they repeatedly found themselves tripped up by ambiguity. Over and over they would end up using the same words to describe different things.

Take item. To the designers an item in Chandler was anything a user thought of as a basic piece of data. A single email. An event on a calendar. A task or a note. But the back-end world of the Chandler repository also had items, and its items were subtly but substantially different from the front end’s items. A repository item was a single piece of information stored in Chandler’s database, and in fact you needed many of these repository items to present a user of Chandler with a single user item like an email: Each attribute of the user item-the subject line, the date sent, the sender’s address, and so on-was a separate repository item. At different times in Chandler’s evolution, proposals arose to resolve this problem-to “disambiguate” the word item. Maybe the term user item could always be capitalized. (This helped in written material, when people remembered to do it, but not in conversation.) Maybe another term for one or the other type of item could be adopted. (But some of those proposed, like thing, were even more ambiguous, and none of the proposals stuck.)
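
A toy illustration of the two senses of the word may help; the field names and identifier below are invented, not OSAF’s actual repository schema. One user-level item fans out into several repository-level records, one per attribute.

```python
# What a designer meant by "item": a single email, as the user sees it.
user_item = {
    "subject": "Chandler glossary",
    "date_sent": "2004-04-12",
    "sender": "brian@example.org",
}

# What a back-end developer meant: one stored record per attribute,
# each pointing back at a shared parent identifier.
repository_items = [
    {"parent": "item-0042", "attribute": name, "value": value}
    for name, value in user_item.items()
]

for record in repository_items:
    print(record)   # three repository "items" behind one user "item"
```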

The Chandler universe was rife with this sort of word overlap. The design team kept using the term data model to refer to the list of definitions of data types that the user would encounter along with all the attributes associated with that data type. For example, the data model would specify that every note had a “date created,” an “author,” and a “body text.” But to the developers, data model referred to a different, more technical set of definitions that they used to store and retrieve data at the level of their code. What the design team called the data model was really, in the developers’ vocabulary, the content model.

Then there was the problem with the term scheduled task. In the design world a scheduled task meant an item on a user’s to-do list that had a date and time assigned to it. But for the developers a scheduled task was something that Chandler itself had been told to perform at a particular time, such as download email or check for changes in shared information. Or consider the term event notification. For the designers this meant things like telling the user that new mail had arrived; for the developers an event was some change in the status of a code object, like a window closing or a click on a menu item, and notification meant sending news of that change to other code objects.

Kapor would observe these little linguistic train wrecks and shudder. “We need to speak one language,” he would say. “We should all speak Chandlerese. We have to fix the vocabulary.”

Finally, Brian Skinner stepped forward to try to do just that. Skinner had joined OSAF as a volunteer and helped Andi Vajda and Katie Parlante sort out the subtleties of the data model back when Vajda was just trying to get the repository started. Now a full-time OSAF programmer, Skinner had a knack for explaining developer-speak to the designers and design-talk to the developers. When the groups talked past each other, he was often the one to sort out the language. Why not, he proposed, set up a Chandler glossary on the wiki? It would provide a single, authoritative, but easily amended reference point for all the terminology floating around the project. It could literally get everyone on the same page.

Skinner took up arms against the sea of ambiguity. He produced dozens of glossary pages. He built a system for linking to them from the rest of the wiki.

It was a heroic effort, but it didn’t seem to make much difference. For one thing, usage of the terms continued to change faster than his wiki editing could keep up. More important, the developers, who were already drowning in emails and bug reports and wiki pages, didn’t seem to pay much attention to the glossary, and the pages languished, mostly unread.

The glossary’s futility might have been foreshadowed by the outcome of another naming effort in which Skinner participated. OSAF’s Howard Street headquarters had a half-dozen conference rooms, and Kapor decided that it would be useful for them to have names, so that instead of saying, “Let’s meet in the little conference room around the corner from Donn’s desk,” you could just say the room’s name. OSAF held a contest and solicited proposals from the staff. Skinner suggested using names from imaginary places; he won the contest and ponied up his $150 prize to fund a happy hour for his colleagues at Kate O’Brien’s, the Irish bar down the street.

The fanciful names-the two main conference rooms were Avalon and Arcadia-captured the spirit at OSAF, where imagining new worlds was on the collective to-do list. It was only later that everyone realized what a bad idea it was to have the names of the most frequently used rooms start with the same letter: No one could ever remember which was which.

In a cartoon that you can find on more than one programmer’s Web site-and, I would bet, on the walls of cubicles in software companies everywhere-Dilbert, the world’s favorite downtrodden geek, says to his supervisor (the celebrated Pointy-Haired Boss, or PHB), “We still have too many software faults. We’ll miss our ship date.” The PHB replies, “Move the list of faults to the ‘future development’ column and ship it.” In the final window, the PHB, pleased with himself, thinks: “Ninety percent of this job is figuring out what to call stuff.”
.
.
.
.
Chapter 9
Methods
.
.
.
.
It turned out that improving how you organize code was a cakewalk compared with improving how you organize people and their work. As the industry’s poor record held its depressing course through the sixties and seventies, the proponents of new software methodologies turned their attention to the human side of the enterprise. In particular, as they grappled with the tendency for large-scale projects to run aground, they began to focus on improving the process of planning. And, over time, they split into two camps with opposite recommendations. Like open source projects whose contributors have a falling out, they forked.

One school looked at planning and said, “Good plans are so valuable and so hard to make, you need to plan harder, plan longer, plan in more detail, and plan again! Plan until you can’t plan any further-and then plan some more.” The other school said, essentially, “Planning is so impossible and the results are so useless that you should just give up. Go with the flow! Write your code, listen to feedback, work with your customers, and change it as you go along. That’s the only plan you can count on.” I’m exaggerating, plainly, but the divide is real, and most software developers and managers fall by natural inclination or bitter experience on one side of it or the other.

Watts Humphrey, the guru of disciplined project management, lays out the “Must. Plan. More!” argument: “Unless developers plan and track their personal work, that work will be unpredictable. Furthermore, if the cost and schedule of the developers’ personal work is unpredictable, the cost and schedule of their teams’ work will also be unpredictable. And, of course, when a project team’s work is unpredictable, the entire project is unpredictable. In short, as long as individual developers do not plan and track their personal work, their projects will be uncontrollable and unmanageable.”

On the other hand, here is Peter Drucker, the father of contemporary management studies: “Most discussions of the knowledge worker’s task start with the advice to plan one’s work. This sounds eminently plausible. The only thing wrong with it is that it rarely works. The plans always remain on paper, always remain good intentions. They seldom turn into achievement.”

Drucker published those words in 1966. As it happened, that was just about the time that a young Watts Humphrey was taking over the reins of software management at IBM.
.
.
.
.
Chapter 10
Engineers and Artists
.
.
.
.
When the Garmisch organizers named their event a conference on software engineering, they “fully accepted that the term . . . expressed a need rather than a reality,” according to Brian Randell, a British software expert who organized the conference’s report. “The phrase ‘software engineering’ was deliberately chosen as being provocative, in implying the need for software manufacture to be based on the types of theoretical foundations and practical disciplines, that are traditional in the established branches of engineering.” The conference attendees intended to map out how they might bring the chaotic field of software under the scientific sway of engineering; it was an aspiration, not an accomplishment.

Yet something apparently happened in the interval between the 1968 conference, which left participants enthusiastic and excited, and its successor in Rome the following year. The second gathering aimed to focus on specific techniques of software engineering. “Unlike the first conference. . . in Rome there was already a slight tendency to talk as if the subject [software engineering] already existed,” Randell wrote in a later memoir of the events. The term software engineering had almost overnight evolved from a “provocation” to a fait accompli.

In the original conference report, Randell wrote that the Rome event “bore little resemblance to its predecessor. The sense of urgency in the face of common problems was not so apparent as at Garmisch. Instead, a lack of communication between different sections of the participants became, in the editors’ opinions at least, a dominant feature. Eventually the seriousness of this communications gap, and the realization that it was but a reflection of the situation in the real world, caused the gap itself to become a major topic of discussion.”

The divisions that arose in Rome, along with a debate about the need for an international software engineering institute, led one participant, an IBM programmer named Tom Simpson, to write a satire titled “Masterpiece Engineering.” Simpson imagined a conference in the year 1500 trying to “scientificize” the production of art masterpieces, attempting to specify the criteria for creation of the Mona Lisa, and establishing an institute to promote the more efficient production of great paintings.

They set about equipping the masterpiece workers with some more efficient tools to help them create masterpieces. They invented power-driven chisels, automatic paint tube squeezers and so on. . . . Production was still not reaching satisfactory levels. . . . Two weeks at the Institute were spent in counting the number of brush strokes per day produced by one group of painters, and this criterion was then promptly applied in assessing the value to the enterprise of the rest. If a painter failed to turn in his twenty brush strokes per day he was clearly underproductive. Regrettably none of these advances in knowledge seemed to have any real impact on masterpiece production and so, at length, the group decided that the basic difficulty was clearly a management problem. One of the brighter students (by the name of L. da Vinci) was instantly promoted to manager of the project, putting him in charge of procuring paints, canvases and brushes for the rest of the organisation.

Simpson’s spoof of the Rome conference’s work had teeth: “In a few hundred years,” he wrote, “somebody may unearth our tape recordings on this spot and find us equally ridiculous.” It was sufficiently painful to the conference organizers that they pressured Randell and the other authors of the report to remove it from the proceedings. (Years later Randell posted it on the Web.) “Masterpiece Engineering” could not have been more prescient in laying out the central fault line that has marked all subsequent discussions of the Software Crisis and its solutions-between those who see making software as a scientific process, susceptible to constant improvement, perhaps even perfectible, and those who see it as a primarily creative endeavor, one that might be tweaked toward efficiency but never made to run like clockwork.

The two 130-page reports of the NATO software engineering conferences foreshadow virtually all the subjects, ideas, and controversies that have occupied the software field through four subsequent decades. The participants recorded their frustration with the lumbering pace and uncertain results of large-scale software development and advocated many of the remedies that have flowed in and out of fashion in the years since: Small teams. “Feedback from users early in the design process.” “Do something small, useful, now.” “Use the criterion: It should be easy to explain.”

George Santayana’s dictum that “those who cannot remember the past are condemned to repeat it” applies here. It’s tempting to recommend that these NATO reports be required reading for all programmers and their managers. But, as Joel Spolsky says, most programmers don’t read much about their own discipline. That leaves them trapped in infinite loops of self-ignorance.
.
.
.
.
If you report a bug to a programmer, the first thing she will do is ask, “Have you duplicated the problem?”-meaning, can you reliably make it happen again? If the answer is yes, that’s more than half the battle. If it is no, most of the time the programmer will simply shrug her shoulders and write it off to faulty hardware or cosmic rays.

“‘Software engineering’ is something of an oxymoron,” L. Peter Deutsch, a software veteran who worked at the fabled Xerox Palo Alto Research Center in the seventies and eighties, has said. “It’s very difficult to have real engineering before you have physics, and there isn’t anything even close to a physics for software.”

Students of other kinds of science are sometimes said to have “physics envy,” since, as computing pioneer Alan Kay has put it, physicists “deal with the absolute foundations of the universe, and they do it with serious math.” The search for a “physics for software” has become a decades-long quest for software researchers. If we could only discover dependable principles by which software operates, we could transcend the overpowering complexity of today’s Rube Goldberg-style programs and engineer our way out of the mire of software time. One day we could find ourselves in a world where software does not need to be programmed at all.

For many on this quest, the common grail has been the idea of “automatic software”-software that nonprogrammers can create, commands in simple English that the computer can be made to understand. This dream was born at the dawn of the computer era when Rear Admiral Grace Hopper and her colleagues invented the compiler. “Hopper believed that programming did not have to be a difficult task,” her official Navy biographical sketch reports. “She believed that programs could be written in English and then translated into binary code.” Flow-Matic, the language Hopper designed, became one of the roots of Cobol, the business-oriented programming language that was still in wide use at the turn of the millennium.

Hopper’s compiler was a vast advance over programming in machine language’s binary code of zeros and ones, and it provided a foundation for every software advance to come. But it is no slur on her work to point out that writing in Cobol, or any other programming language since invented, is still a far cry from writing in English. Similarly, the inventors of Fortran, who hoped that their breakthrough language might “virtually eliminate coding,” lived to see their handiwork cursed for its crudity and clumsiness.

Yet the dream of virtually eliminating coding has survived in several forms, breeding new acronyms like an alphabetical algae bloom. There is the Unified Modeling Language, or UML, which provides an abstract vocabulary-boxes, blobs, and arrows-for diagramming and designing software independent of specific programming languages. There is Model Driven Development (MDD) and Model Driven Architecture (MDA), terms for approaches to development built around the UML. There is “generative programming,” in which the final software product is not written by hand but produced by a generator (which is actually another program). And there is a new buzzphrase, “software factories,” which mixes and matches some of these concepts under the metaphoric banner of automating the software building process Henry Ford-style.

The highest-profile effort today to realize a version of the dream of eliminating coding is led by Charles Simonyi, the inventor of modern word processing software at Xerox and Microsoft, whom we have already met as the father of Hungarian notation. For years Simonyi led research at Microsoft into a field he called “intentional programming.” In 2002, he left the company that had made him a billionaire and funded a new venture, Intentional Software, with the goal of transforming that research into a real-world product.

Simonyi-a dapper, engaging man whose voice still carries a trace of Hungary, which he fled in his teens-is thinking big. He is as aware of the long, sorry history of software’s difficulties as anyone on the planet. “Software as we know it is the bottleneck on the digital horn of plenty,” he says. Moore’s Law drives computer chips and hardware up an exponential curve of increasing speed and efficiency, but software plods along, unable to keep up.

In Simonyi’s view, one source of the trouble is that we expect programmers to do too much. We ask them to be experts at extracting knowledge from nonprogrammers in order to figure out what to build; then we demand that they be experts at communicating those wishes to the computer. “There are two meanings to software design,” he says. “One is designing the artifact we’re trying to implement. The other is the sheer software engineering to make that artifact come into being. I believe these are two separate roles-the subject matter expert and the software engineer.”

Intentional Software intends to “empower” the “subject matter experts”-the people who understand the domain in which the software will be used, whether it is air traffic or health care or adventure gaming. Simonyi wants to give these subject matter experts a set of tools they can use to explain their intentions and needs in a structured way that the computer can understand. Intentional Software’s system will let the nonprogrammer experts define a set of problems-for a hospital administration program, say, they might catalog all the “actors,” their “roles,” tasks that need to be performed, and all other details-in a machine-readable format. That set of definitions, that model, is then fed into a generator program that spits out the end-product software. There is still work for programmers in building the tools for the subject matter experts and in writing the generator. But once that work is done, the nonprogrammers can tinker with their model and make changes in the software without ever needing to “make a humble request to the programmer.” Look, Ma, no coding!
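
The division of labor Simonyi describes can be caricatured in a few lines: a declarative model written by the subject matter experts, and a generator, written once by programmers, that turns the model into code. This is my toy sketch of the idea, not Intentional Software’s actual tools or format.

```python
# The experts' model: a machine-readable catalog of actors and their tasks.
# (The format is invented for illustration.)
hospital_model = {
    "nurse": ["record vitals", "administer medication"],
    "pharmacist": ["verify prescription"],
}

def generate(model):
    """The programmers' one-time contribution: turn the model into source code."""
    lines = []
    for actor, tasks in model.items():
        lines.append(f"def {actor}_workflow():")
        for task in tasks:
            lines.append(f"    print({task!r})")
        lines.append("")
    return "\n".join(lines)

# The experts tweak the model; the generator simply re-emits the program.
print(generate(hospital_model))
```

In a sketch like this, changing the software means editing the model and rerunning the generator, with no “humble request to the programmer” in between.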

In an article describing Intentional’s work, technology journalist Claire Tristram offers this analogy: “It’s something like an architect being able to draw a blueprint that has the magical property of building the structure it depicts-and even, should the blueprint be amended, rebuilding the structure anew.”

Simonyi’s dream of automating the complexity out of the software development process is a common one among the field’s long-range thinkers. He is unusual in having the singlemindedness and the financial resources to push it toward reality. “Somebody once asked me what language should be used to write a million-line program,” Simonyi told an interviewer in 2005. “I asked him how come he has to write a million-line program. Does he have a million-line problem? A million lines is the Encyclopedia Britannica-twenty volumes of a thousand pages each. It’s almost inconceivable that a business practice or administrative problem would have that much detail in it.” Really, Simonyi argues, you should be dealing with maybe only ten thousand lines of “problem description” and another ten thousand lines of code for the generator-a much more manageable twenty thousand total lines containing the “equivalent information content” of that million-line monstrosity.

As a young man, Simonyi led the development of Bravo, the first word processing program that functioned in what programmers now call WYSIWYG fashion (for “what you see is what you get,” pronounced “wizzy wig”). Today we take for granted that a document we create on the computer screen, with its type fonts and sizes and images, will look the same when we print it out. But in the mid-1970s, this was a revolution.

Simonyi’s Intentional Software is, in a way, an attempt to apply the WYSIWYG principle to the act of programming itself. But Simonyi’s enthusiastic descriptions of the brave new software world his invention will shape leave a central question unanswered: Will Intentional Software give the subject matter experts a flexible way to express their needs directly to the machine-or will it demand that nonprogrammer experts submit themselves to the yoke of creating an ultra-detailed, machine-readable model? Is it a leap up the ladder of software evolution or just a fancy way to automate the specification of requirements? Can it help computers understand people better, or will it just force people to communicate more like computers? Simonyi is bullish. But as of this writing, Intentional Software has yet to unveil its products, so it’s hard to say.
.
.
.
.
In an essay titled “The Law of Leaky Abstractions,” Joel Spolsky wrote, “All non-trivial abstractions, to some degree, are leaky. Abstractions fail. Sometimes a little, sometimes a lot. There’s leakage. Things go wrong.” For users this means that sometimes your computer behaves in bizarre, perplexing ways, and sometimes you will want to, as Mitch Kapor said in his Software Design Manifesto, throw it out the window. For programmers it means that new tools and ideas that bundle up some bit of low-level computing complexity and package it in a new, easier-to-manipulate abstraction are great, but only until they break. Then all that hidden complexity leaks back into their work. In theory, the handy new top layer allows programmers to forget about the mess below it; in practice, the programmer still needs to understand that mess, because eventually he is going to land in it. Spolsky wrote:

Abstractions do not really simplify our lives as much as they were meant to. . . . The law of leaky abstractions means that whenever somebody comes up with a wizzy new code-generation tool that is supposed to make us all ever-so-efficient, you hear a lot of people saying, “Learn how to do it manually first, then use the wizzy tool to save time.” Code-generation tools that pretend to abstract out something, like all abstractions, leak, and the only way to deal with the leaks competently is to learn about how the abstractions work and what they are abstracting. So the abstractions save us time working, but they don’t save us time learning. . . . And all this means that paradoxically, even as we have higher and higher level programming tools with better and better abstractions, becoming a proficient programmer is getting harder and harder.

So even though “the abstractions we’ve created over the years do allow us to deal with new orders of complexity in software development that we didn’t have to deal with ten or fifteen years ago,” and even though these tools “let us get a lot of work done incredibly quickly,” Spolsky wrote, “suddenly one day we need to figure out a problem where the abstraction leaked, and it takes two weeks.”
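
An everyday instance of the law, using nothing more exotic than Python’s standard functools.lru_cache: the decorator hides memoization completely, until an unhashable argument drags the underlying dictionary-key machinery back into view.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def total(prices):
    # The cache makes repeat calls cheap, as long as you never have to think
    # about how it stores results.
    return sum(prices)

print(total((1, 2, 3)))     # a tuple is hashable; the abstraction holds

try:
    total([1, 2, 3])        # a list is not; the hidden machinery leaks out
except TypeError as error:
    print("the abstraction leaked:", error)   # unhashable type: 'list'
```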

The Law of Leaky Abstractions explains why so many programmers I’ve talked to roll their eyes skeptically when they hear descriptions of Intentional Programming or other similar ideas for transcending software’s complexity. It’s not that they wouldn’t welcome taking another step up the abstraction ladder; but they fear that no matter how high they climb on that ladder, they will always have to run up and down it more than they’d like-and the taller it becomes, the longer the trip.

If you talk to programmers long enough about layers of abstraction, you’re almost certain to hear the phrase “turtles all the way down.” It is a reference to a popular-and apparently apocryphal or at least uncertainly sourced-anecdote that Stephen Hawking used to open his popular book A Brief History of Time:

A well-known scientist (some say it was Bertrand Russell) once gave a public lecture on astronomy. He described how the earth orbits around the sun and how the sun, in turn, orbits around the center of a vast collection of stars called our galaxy. At the end of the lecture, a little old lady at the back of the room got up and said: “What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.” The scientist gave a superior smile before replying, “What is the tortoise standing on?” “You’re very clever, young man, very clever,” said the old lady. “But it’s turtles all the way down!”

For cosmologists the tale is an amusing way to talk about the problem of ultimate origins. For programmers it has a different spin: It provides a way of talking about systems that are internally coherent. Software products that are, say, “objects all the way down” or “XML all the way down” have a unified nature that doesn’t require complex and troublesome internal translation.

But for anyone weaned, as I was, on the collected works of Dr. Seuss, “turtles all the way down” has yet another association: It recalls the classic tale of “Yertle the Turtle,” in which a turtle king climbs atop a turtle tower so he can survey the vast extent of his kingdom. Finally, the lowliest turtle in the stack burps, and down comes the king, along with the rest of the turtles in the pile. The moral of Yertle is-well, there are plenty of morals. For a programmer the lesson might be that stacks of turtles, or layers of abstractions, don’t respond well to the failure of even one small part. They are, to use a word that is very popular among the software world’s malcontents, brittle. When stressed, they don’t bend, they break.

“If builders built houses the way programmers built programs, the first woodpecker to come along would destroy civilization,” Gerald Weinberg, the pioneer of computer programmer psychology, once wrote. Leave out a hyphen, the builders of Mariner 1 learned, and your rocket goes haywire.

For Charles Simonyi, the proponents of the UML, and many other software optimists, it is enough to dream of adding another new layer to today’s stack of turtles. That’s how the field has always advanced in the past, they point out, so why not keep going? But there are other critics of today’s software universe whose diagnosis is more sweeping. In their view, the problem is the stack itself. The brittleness comes from our dependence on a pile of unreliable concepts. The entire field of programming took one or more wrong turns in the past. Now, they say, it’s time to start over.
.
.
.
.
Lanier laid out this critique of the present state of programming in a 2003 essay on the problem of “Gordian software”-software that is like the impossible-to-untie Gordian knot of classical Greece:

If you look at trends in software, you see a macabre parody of Moore’s Law. The expense of giant software projects, the rate at which they fall behind schedule as they expand, the rate at which large projects fail and must be abandoned, and the monetary losses due to unpredicted software problems are all increasing precipitously. Of all the things you can spend a lot of money on, the only things you expect to fail frequently are software and medicine. That’s not a coincidence, since they are the two most complex technologies we try to make as a society. Still, the case of software seems somehow less forgivable, because intuitively it seems that as complicated as it’s gotten lately, it still exists at a much lower order of tangledness than biology. Since we make it ourselves, we ought to be able to know how to engineer it so it doesn’t get quite so confusing.

“Some things in the foundations of computer science are fundamentally askew,” Lanier concluded, and proceeded to trace those problems back to “the metaphor of the electrical communications devices that were in use” at the dawn of computing. Those devices all “centered on the sending of signals down wires” or, later, through the ether: telegraph, telephone, radio, and TV. All software systems since, from the first modest machine-language routines to the teeming vastness of today’s Internet, have been “simulations of vast tangles of telegraph wires.” Signals travel down these wires according to protocols that sender and receiver have agreed upon in advance.

Legend has it that Alexander the Great finally solved the problem of the Gordian knot by whipping out his sword and slicing it in two. Lanier proposed a similarly bold alternative to tangled-wire thinking and spaghetti code. Like Alan Kay, he drew inspiration from the natural world. “If you make a small change to a program, it can result in an enormous change in what the program does. If nature worked that way, the universe would crash all the time.” Biological systems are, instead, “error-tolerant” and homeostatic; when you disturb them, they tend to revert to their original state.

Instead of rigid protocols inherited from the telegraph era, Lanier proposed trying to create programs that relate to other programs, and to us, the way our bodies connect with the world. “The world as our nervous systems know it is not based on single point measurements but on surfaces. Put another way, our environment has not necessarily agreed with our bodies in advance on temporal syntax. Our body is a surface that contacts the world on a surface. For instance, our retina sees multiple points of light at once.” Why not build software around the same principle of pattern recognition that human beings use to interface with reality? Base it on probability rather than certainty? Have it “try to be an ever better guesser rather than a perfect decoder”?

These ideas have helped the field of robotics make progress in recent times after long years of frustrating failure with the more traditional approach of trying to download perfect models of the world, bit by painful bit, into our machines. “When you de-emphasize protocols and pay attention to patterns on surfaces, you enter into a world of approximation rather than perfection,” Lanier wrote. “With protocols you tend to be drawn into all-or-nothing high-wire acts of perfect adherence in at least some aspects of your design. Pattern recognition, in contrast, assumes the constant minor presence of errors and doesn’t mind them.”

Lanier calls this idea “phenotropic software” (defining it as “the interaction of surfaces”). He readily grants that his vision of programs that essentially “look at each other” is “very different and radical and strange and high-risk.” In phenotropic software, the human interface, the way a program communicates with us, would be the same as the interface the program uses to communicate with other programs-“machine and person access components on the same terms”-and that, he admits, looks inefficient at first. But he maintains that it’s a better way to “spend the bounty of Moore’s Law,” to use the extra speed we get each year from the chips that power computers, than the way we spend it now, on bloated, failure-prone programs.

The big problem with software, according to Lanier, is that programmers start by making small programs and learn principles and practices at that level. Then they discover that as those programs scale up to the gargantuan size of today’s biggest projects, everything they have learned stops working. “The moment programs grow beyond smallness, their brittleness becomes the most prominent feature, and software engineering becomes Sisyphean.”

In Lanier’s view, the programming profession is afflicted by a sort of psychological trauma, a collective fall from innocence and grace that each software developer recapitulates as he or she learns the ropes. In the early days of computing, lone innovators could invent whole genres of software with heroic lightning bolts. For his 1963 Ph.D. thesis at MIT, for example, Ivan Sutherland wrote a small (by today’s standards) program called Sketchpad-and single-handedly invented the entire field of computer graphics. “Little programs are so easy to write and so joyous,” Lanier says. “Wouldn’t it be nice to still be in an era when you could write revolutionary little programs? It’s painful to give that up. That little drama is repeated again and again for each of us. We relive the whole history of computer science in our own lives and educations. We start off writing little programs-‘Hello World,’ or whatever it might be. Everyone has a wonderful experience. They start to love computers. They’ve done some little thing that’s just marvelous. How do you get over that first love? It sets the course for the rest of your career. And yet, as you scale up, everything just degenerates.”

Lanier’s “Gordian Software” article provoked a firestorm of criticism on the Edge.org Web site, John Brockman’s salon for future-minded scientists. Daniel Dennett accused Lanier of “getting starry-eyed about a toy model that might-might-scale up and might not.” But Lanier is unfazed. He says his critique of software is part of a lifelong research project aimed at harnessing computers to enable people to shape visions for one another, a kind of exchange he calls “post-symbolic communication” and likens to dreaming-a “conscious, waking-state, intentional, shared dream.”

His motivation is at once theoretical and personal. In the abstract, he argues, solving software’s “scaling problem” could help us solve many other problems. “The fundamental challenge for humanity is understanding complexity. This is the challenge of biology and medicine. It’s the challenge in society, economics. It’s all the arts, all of our sciences. Whatever intellectual path you go down, you come again and again into a complexity barrier of one sort or another. So this process of understanding software-what’s often called the gnarliness of software-is in a sense the principal underlying theoretical approach to this other challenge, which is the most important universal one. So I see this as being very central, cutting very deep-if we can make any progress.”

More personally, Lanier’s quest is driven by his “disgust” with what he sees as the stagnation of the entire computing field. At a speech at an ACM conference in 2004, he pointed to his laptop as if it had insulted him and-using exactly the same image that Mitch Kapor had two decades before in the “Software Design Manifesto”-exclaimed: “I’m just sick of the stupidity of these things! I want to throw this thing out the window every day! It sucks! I’m driven by annoyance. It’s thirty years now. This is ridiculous.”
.
.
.
.
Epilogue
A Long Bet
.
.
.
.
In the spring of 2002, around the time Mitch Kapor and the early members of the Chandler team were beginning to zero in on their new software’s architecture, Kapor made the tech news headlines for something entirely different: He entered into a Long Bet about the prospects for artificial intelligence. Long Bets were a project of the Long Now Foundation, a nonprofit organization started by Whole Earth Catalog creator Stewart Brand and a group of digital-age notables as a way to spur discussion and creative ideas about long-term issues and problems. As the project’s first big-splash Long Bet, Kapor wagered $20,000 (all winnings earmarked for worthy nonprofit institutions) that by 2029 no computer or “machine intelligence” will have passed the Turing Test. (To pass a Turing Test, typically conducted via the equivalent of instant messaging, a computer program must essentially fool human beings into believing that they are conversing with a person rather than a machine.)

Taking the other side of the bet was Ray Kurzweil, a prolific inventor responsible for breakthroughs in electronic musical instruments and speech recognition who had more recently become a vigorous promoter of an aggressive species of futurism. Kurzweil’s belief in a machine that could ace the Turing Test was one part of his larger creed-that human history was about to be kicked into overdrive by the exponential acceleration of Moore’s Law and a host of other similar skyward-climbing curves. As the repeated doublings of computational power, storage capacity, and network speed start to work their magic, and the price of all that power continues to drop, according to Kurzweil, we will reach a critical moment when we can technologically emulate the human brain, reverse-engineering our own organic processors in computer hardware and software. At the same time, biotechnology and its handmaiden, nanotechnology, will be increasing their powers at an equally explosive rate.

When these changes join forces, we will have arrived at a moment that Kurzweil, along with others who share his perspective, calls “the Singularity.” The term, first popularized by the scientist and science fiction author Vernor Vinge, is borrowed from physics. Like a black hole or any similar rent in the warp and woof of space-time, a singularity is a disruption of continuity, a break with the past. It is a point at which everything changes, and a point beyond which we can’t see.

Kurzweil predicts that artificial intelligence will induce a singularity in human history. When it rolls out, sometime in the late 2020s, an artificial intelligence’s passing of the Turing Test will be a mere footnote to this singularity’s impact-which will be, he says, to generate a “radical transformation of the reality of human experience” by the 2040s.

Utopian? Not really. Kurzweil is careful to lay out the downsides of his vision. Apocalyptic? Who knows-the Singularity’s consequences are, by definition, inconceivable to us pre-Singularitarians. Big? You bet.

It’s easy to make fun of the wackier dimension of Kurzweil’s digital eschatology. His personal program of life extension via a diet of 220 pills per day-to pickle his fifty-something wetware until post-Singularity medical breakthroughs open the door to full immortality-sounds more like something out of a late-night commercial pitch than a serious scientist’s choice. Yet Kurzweil’s record of technological future-gazing has so far proven reliable; his voice is a serious one. And when he argues that “in the short term we always underestimate how hard things are, but in the long term we underestimate how big changes are,” he has history on his side.

But Kapor thinks Kurzweil is dead wrong. He thinks the whole project of “strong artificial intelligence,” the effort from the 1950s to the present to replicate human intelligence in silicon and code, remains a folly. In an essay that explained his side of the Long Bet wager, Kapor wrote that the entire enterprise has misapprehended and underestimated human intelligence.

As humans:
– We are embodied creatures; our physicality grounds us and defines our existence in a myriad of ways.

– We are all intimately connected to and with the environment around us; perception of and interaction with the environment is the equal partner of cognition in shaping experience.

– Emotion is as or more basic than cognition; feelings, gross and subtle, bound and shape the envelope of what is thinkable.

– We are conscious beings, capable of reflection and self-awareness; the realm of the spiritual or transpersonal (to pick a less loaded word) is something we can be part of and which is part of us.

How, Kapor wrote, could any computer possibly impersonate a human being whose existence is spread across so many dimensions? Or fool a human judge able to “probe its ability to communicate about the quintessentially human”?

“In the end,” he declared, “I think Ray is smarter and more capable than any machine is going to be.”

Kapor’s arguments jibe with common sense on a gut level (that very phrase embodies his point). It will be another quarter century before we know if they are right.

By then I should finally be traveling over a new Bay Bridge. Chandler may have become a thriving open source ecosystem or may have been interred in the dead-code graveyard. And maybe, just maybe, the brains and talents and creativity of the world’s programmers will have found a way out of the quicksand hold of software time.

But I’m not making any bets.

In 2005, three years after placing his Long Bet with Kurzweil, years spent clambering through the trenches of real-world software development, Mitch Kapor stands by his wager.

“I would double down with Ray in an instant,” he says. “I don’t run into anybody who takes his side. Now, I don’t talk to people inside the Singularity bubble. But just your average software practitioner, whether they’re twenty-five or forty-five or sixty-five-nobody takes his side of the bet.”

A half-smiling glint-of innocent mischief, and also perhaps of hard-earned experience-widens his eyes. “The more you know about software, the less you would do that.”
