"The best way to predict the future is to invent it."
It was the moment I read that quote a few years ago that I decided Alan Kay was one of my absolute top software heroes. Of course it also had to do with his pioneering work at Xerox PARC (read all about it in Dealers of Lightning): object-oriented programming, conceiving the first laptop, and a general adherence to both elegance and execution in software design that makes him truly stand out among an already pretty impressive class of peers at PARC.
Yesterday I attended a TTI Vanguard conference on behalf of HP and had the chance not only to meet Alan, but to spend 90 minutes talking with him about software design and the state of engineering, as well as a whole bunch of related topics. I've been unwinding the conversation for most of the past twelve hours, and if I have one regret, it's that I don't have a transcript of it so that I could spend some time really digesting what he was getting across to me. He has a way of using very concise terms that carry a tremendous amount of meaning, then backing them up with references to work done by colleagues across a broad range of disciplines; the effect is incredibly dense. If Alan himself were a Smalltalk object, I think he might need a little refactoring of the messages he sends: they are compact for sure, but they depend on such a rich shared context of meaning that it can be hard for the rest of us to follow.
And speaking of compactness, I really dig his latest project. He's gotten funding from the NSF (and some other folks) to rebuild an entire personal computing system in 20,000 lines of code. And by personal computing system he doesn't mean a VM like the JVM or .NET, but in fact the "whole stack," from the interface the user sees down to the instruction set of the processor. Yeah, crazy, right? When he first said it to me, I wasted the first 10 minutes trying to figure out what kind of "whole system" definition he was going to use to cheat his way to the 20K LOC constraint, but it soon became clear that he was deadly serious about doing this soup to nuts.
Why? Because according to Alan, the edifice that is any major computing "stack" (Windows, Linux, OSX + drivers + frameworks + applications) can easily run to 100-300M lines of code, far too much for any one person to even hope to begin to understand (20K lines, by comparison, is about the length of a 400-page book). And if we can't understand it, there is no way we can hope to fix the entropy that is slowly eating these systems from the inside out, or to innovate enough in software development practice for software to experience its own Moore's-law-like exponential increase in power per line of code written.
I'm torn over whether the sort of alchemy that Alan and his team will have to work to pull off this Herculean task, so that one person can truly understand the entire computing environment, will unleash the type of revolution he hopes it will. On the one hand, I love the notion that building this type of system will usher in new tools and new ways of thinking about software development that will allow us to keep teams small and productive. I've always been very proud of the small size of our team at Tabblo (especially relative to what we are able to do), and have been a little shocked since joining HP at how many other "lab managers" scoff when I tell them that our team is fewer than 10 people, following it with some boast about their own multi-hundred-person team. It shouldn't be this way; on this front both Alan and Google are absolutely correct. Small teams make the magic happen; in fact, I cannot remember the last piece of software that blew me away that had more than 25 people working on its core. (One of my favorite analogies he used while we chatted yesterday was the pyramids, "hunks of rubble covered with limestone," which took thousands of people years to build and could not stand up to the simple Roman arch built by two or three masons.)
On the other hand, one of my favorite things about working in software is how well abstractions isolate me from the stuff I don't care to know about. As I type this, I have a vague idea of what the CPU and GPU are doing together to make the characters appear on the screen, but most of the time I don't want to have to think about it. And if I wanted to build a new kind of word processor, I'm not sure I'd really want to think about it either. Furthermore, there is a whole generation of people just like me who probably don't have the training and experience to think that deeply about what the OS and the hardware are doing at the lowest levels to provide our computing environments, and each generation of kids coming out of school knows less and less about this arcane stuff. Today's PHP hacker wants to build the next Facebook, but he likely knows very little about how PHP executes, how a web server is built, or even how TCP works to send bytes all over the Internet. Should he have to worry about any of this if his goal is to build social applications?
Obviously I am simplifying his argument; what I think he would actually argue is that in a properly self-describing, self-bootstrapping system, it's "turtles all the way down," which would make it a lot easier for our PHP-hacker friend to understand the system to its core.
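The "turtles all the way down" idea is easiest to feel in code. As a toy illustration (my own sketch, not anything from Alan's actual project, which builds on Smalltalk-style systems), here is an expression evaluator small enough to read in a few minutes. A system built in this spirit defines each layer in a notation the layer below can interpret, so no layer is opaque:

```python
import operator

def evaluate(expr, env):
    """Evaluate a tiny Lisp-flavored expression against an environment dict."""
    if isinstance(expr, (int, float)):   # numbers are self-evaluating
        return expr
    if isinstance(expr, str):            # bare strings act as variable names
        return env[expr]
    op, *args = expr                     # otherwise: a list form
    if op == "if":                       # ["if", test, then, else]
        test, then, alt = args
        return evaluate(then if evaluate(test, env) else alt, env)
    if op == "lambda":                   # ["lambda", [params...], body]
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = evaluate(op, env)               # anything else is function application
    return fn(*(evaluate(a, env) for a in args))

GLOBAL = {"+": operator.add, "*": operator.mul, "<": operator.lt}

# ((lambda (x) (* x x)) 7), computed entirely inside the tiny language
square_of_seven = evaluate([["lambda", ["x"], ["*", "x", "x"]], 7], GLOBAL)
print(square_of_seven)  # 49
```

The point is not this particular language but its size: every rule of the system fits on one screen, which is the property Alan is chasing at the scale of a whole personal computer.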
In fact, it is the pursuit of this elegance that is the most inspiring part of Alan's new project, and of his whole life's work. That he is always looking to make things more logical and concise, to find a new kind of science (and art) in the way most of us will build software in the future, is a very good thing indeed.
And in the meanwhile, the rest of us still working on the pyramids should take a pause to think a bit about how we could move towards that arch.
[Postnote: After writing this, I went and read his NSF proposal. I'm not an expert in grant writing, but this proposal is so good that anyone looking to write any sort of pitch should read it (especially people writing business plans for risky new ventures). It's grand while remaining incredibly humble about what is known and what is really hard to do. It covers the team's depth of experience concisely, and gives a great history of the "water under the bridge." But most of all, it inspires with its broad vision of what computing could be for everyone, and why it's so important that we commission this type of work. I don't know who you are, NSF reviewer who approved this, but you have definitely spent my tax dollars well here!]