One day long ago, when the IBM PC was still new, my friend Mike asked me to imagine my ideal computer. I described something very like the IBM PC, but with more memory and a bigger hard drive — 50 megabytes, say, instead of 10 or 20. I couldn’t imagine any use for much more than that. (Today of course you can’t even buy a
thumb drive that tiny.) I grudgingly allowed that a bitmap display might be more useful than the 80-column-by-25-line character display that PCs had, but that was all I would consider adopting from the then-brand-new Apple Macintosh, which I dismissed as a silly toy unworthy of Real Programmers.
“Why?” I asked Mike. “What’s your ideal computer?”
Mike described something no bigger than an 8.5×11 sheet of paper and no more than an inch or so thick, whose entire surface was a full-color display. It could be carried in the hand or slipped into a backpack. “What about the CPU, where would that go?” I asked. I wasn’t getting it. Mike patiently explained that the whole system — CPU, RAM, video driver, power supply — was inside that little slab. I scoffed. Cramming everything into such a small space was obviously impossible, and no battery that could fit in such a thing would ever have enough power to spin a floppy disk drive for long. “Anyway, even if you could build it,” I told him, “it wouldn’t be as convenient as you’d like. You’d have to carry around a keyboard too and plug it in every time you wanted to use it.” No you wouldn’t, said Mike. The display could be touch-sensitive. The keyboard could be rendered on the screen as needed and input accepted that way.
This was 1984. What Mike described was pure science fiction. (In 1987 that became literally true, when the touch-controlled “padd” became a staple prop on Star Trek: The Next Generation.) Yet here I am, the proud new owner of a Nexus 7, the latest in high-powered touch-sensitive computing slabs that put even Mike’s audacious vision to shame.
It wasn’t the first time I’d had a failure of technological vision, nor was it the last.
Several years earlier, before even the IBM PC, I was spending a lot of after-school hours at my friend Chuck’s house, and a lot of those hours on his dad’s home computer, one of the few then available: the beloved but now mostly forgotten Sol-20. (The TRS-80 and the Apple ][ were brand new and just about to steal the thunder from hobbyist models like the Sol-20.) It had a small black-and-white monitor that could display letters, numbers, typographical marks, and a few other special characters at a single intensity (i.e., it really was “black and white,” not greyscale).
That display was entirely adequate for my meager computing needs there in the late 1970s, so much so that when the computer magazines I read started advertising things like Radio Shack’s new Color Computer (that’s what it was called: the “Color Computer”), I dismissed them as children’s toys.
Once, Chuck and I entertained the idea of making a little science fiction movie. A scene in Chuck’s script had a person’s face appearing on a computer monitor and speaking to the user. He planned to film this scene using his father’s computer. I said, “How are we going to make a face appear on a computer monitor?” I had only ever seen letters and numbers blockily rendered on it. Chuck pointed out that the monitor was really just a small TV. “Oh yeah,” I said, feeling stupid. It ought to be able to display anything a TV could. Of course we’d have to hook it up to a different source; obviously no computer could handle rendering full-motion video. Yet here I am, a software engineer at YouTube.
There’s more. In the mid-1980s, my sometime boss Gerald Zanetti, the commercial food photographer and computing technophile, once described his vision for composing and editing photographs on a high-resolution computer display. If a photograph included a bowl of fruit, he explained, he wanted to be able to adjust the position of an orange separately from the grapes and the bananas surrounding it. I said that such technology was far in the future. I’d seen graphics-editing programs by then, but they treated the image as a grid of undifferentiated pixels; separating a foreground piece of fruit from the items behind it simply was not feasible. Yet just a couple of years later, Photoshop realized Zanetti’s vision exactly.
In the mid-1990s, when the web was new, my friend and mentor Nathaniel founded a new company, First Virtual, to handle credit card payments for Internet commerce. At the time there was no Internet commerce. Nathaniel and company invented some very clever mechanisms for keeping sensitive credit-card information entirely off the Internet while still enabling online payments. But I felt their system was too complicated to explain and to use, that people would prefer the familiarity and convenience of credit cards (turns out I was right about that), and that, since no one would (or should!) ever trust the Internet with their credit-card information, Internet commerce could never amount to much. Yet here I am, receiving a new shipment of something or other from Amazon.com every week or two.
Oh well. At least I’m in good company. I’m sensible enough finally to have learned that however gifted I may be as a technologist, I’m no visionary. Now when someone describes some fantastical new leap they imagine, I shut up and listen.