The Machine That Changed the World: Interview with Alan Kay, 1990
Transcript
THE MACHINE THAT CHANGED THE WORLD - TAPES F182-186, F188-191 ALAN KAY
Interviewer:
ALAN, THE FIRST COMPUTERS WERE LARGE, EXPENSIVE MACHINES THAT HAD BEEN BUILT
TO DO CALCULATIONS. HOW DID PEOPLE THINK ABOUT THIS NEW TECHNOLOGY?
Kay:
Well, I think that some people thought they were never going to get any
smaller. There were calculations about how many Empire State buildings
you needed full of vacuum tubes to do this and that. And other people
at Bell Labs who had been working on the transistor had a sense that
this could be used as a switch, cause AT&T wanted something to replace
vacuum tubes. And so I think there are several perceptions of how big
the thing was going to be. I think it wasn't until the sixties that
people started thinking that computers could get quite small.
Interviewer:
PART OF THE PROBLEM WAS THE SIZE, THE OTHER PROBLEM WAS JUST WHAT IT
WAS SUPPOSED TO BE FOR....
Kay:
Yeah I think that when they were doing ballistic calculations there was
this notion that you would run a program for a very long time. It was
really after UNIVAC in the 1950s that people started thinking, "Oh, we might
have to do some programming." And people, most notably,
Grace Hopper, very early on started thinking about higher level
programming languages. So that was sort of one, one flow. And I think
the other thing was the urgency of the Cold War got people thinking of
air defense. And so scientists at MIT started working on systems that
could assess radar and show it to you on a display. And those are the
first computers that had displays and pointing devices.
Interviewer:
FOR THE MOST PART THOUGH THOSE WERE KIND OF EXCEPTIONS, CAN YOU
DESCRIBE WHAT THE BATCH PROCESSING EXPERIENCE WAS LIKE, WHAT IT WOULD
HAVE BEEN LIKE?
Kay:
Well, I always thought of it as something like a factory. You have this
very large thing and there are raw ingredients going in the front and
there are some very complex processes that have to be coordinated, then
sometime later you get what you were hoping for out of the rear end. And you
can think of them also as like a railroad: somebody else
is deciding the schedules. I can certainly remember having maybe one
run a day, or two runs a day, and you could only get your results then,
and you had to work around it and so forth. And so it was very much of
an institutional way of thinking about computing.
Interviewer:
DID THIS PUT MANY PEOPLE OFF COMPUTING?
Kay:
I don't think so. I think, you know, happiness is how much reality exceeds
expectations. I think that most people were happy. You know, in the '50s
business computing was still done predominantly with punch card machines.
And IBM was the company that made those punch card machines.
So there was an enormous disparity between what you could do with
computers and what most people did.
Interviewer:
BUT STILL FOR MOST PEOPLE THE PHYSICAL SIZE AND APPEARANCE OF THE
MACHINE WAS A COMPELLING THING AND THAT GAVE IT THIS MYTHOLOGY?
Kay:
Well, I think, yeah, my feeling is that anything that is larger than human
scale invokes mechanisms concerned with religion, so you have a
priesthood with white coats, you know all the paraphernalia were there
and some people thought it would take over the world and some people
wanted it to take over the world, and some people were afraid it would
take over the world, and none of those things happened. But that was
what it was when it was this large thing and, and towards the late '50s
many people in different places started saying, "Well, this thing has
the potential of being more like a partner maybe a complementary
partner. It can do some things that we can't do well and vice versa, so
we should find some way of bringing it down to human scale."
Interviewer:
THE COMPLEMENT TO THE INDIVIDUAL?
Kay:
The partner to people, not so much directly connected to the
institutions. Although of course, with the first way it was done --
time-sharing -- the mainframe was still owned by the institution.
Interviewer:
OKAY NOW, OF THESE SORT OF VERY EARLY VISIONARIES, I MEAN, YOU'VE DIVIDED
THEM INTO SORT OF TWO GROUPS, HAVEN'T YOU?
Kay:
Yeah.
Interviewer:
COULD YOU TALK A BIT ABOUT THAT?
Kay:
Which two groups? Give me a cue.
Interviewer:
WELL, ONE GROUP WAS THE ONES WHO WERE INTERESTED IN AMPLIFYING
INTELLIGENCE, SORT OF...
Kay:
You know, there are lots of different groups... Okay, must be okay because
we're still rolling, right? So, well, I think in the late '50s there were a
variety of different conceptions of what you should do with it. Some of
those conceptions were had by the very same person. So for instance, John
McCarthy, who was then a professor at MIT, both wrote memos suggesting we
should time-share the computer, and also thought, more into the future, that
we'd all be networked together, and there would be these huge information
utilities -- it would be like our power and lighting utilities -- that would
give us the entire wealth of the knowledge of man. And he suggested that for
that what we'd have to have is something like an intelligent agent: an
entity not maybe as smart as us, but an expert in finding things, and we
would have to give it advice. He called it the "advice-taker."
Interviewer:
TELL ME ABOUT TIME-SHARING BECAUSE OBVIOUSLY ONE THING PEOPLE WOULD SAY
IS, "HOW COULD EVERYBODY HAVE THEIR OWN COMPUTER?"
Kay:
Well, I think that was it. They cost several million dollars back then
and people had been fooling around with things a little bit like
time-sharing before sort of the official invention of it at MIT. And
part of it was just to debug programs. The idea is that debugging is
one of the most painful processes. Incidentally, you probably know that
there was actually a bug -- the first bug was a moth, I think found by Grace
Hopper. So it actually has an official... And a friend of mine just recently
was having trouble with his laser printer. Nothing that they thought of
could possibly work, and finally somebody decided to open it up and look in
it, and what it was, was a mouse. The mouse had moved in -- it was nice and
warm -- and it had set up in various places on top of the circuit board, you
know, so it was like a real... And of course nobody understood what it meant
when it was said that this guy's computer has a mouse problem. Cause mice
don't have problems. So, you know, it's an ill wind that blows nobody good.
Interviewer:
TELL ME ABOUT THE BASIC CONCEPT OF TIME-SHARING. WHAT WAS THE IDEA?
Kay:
Well, the idea was that when humans use computers, the best way of having
them use it is, instead of using all the power for five minutes a day, to
take those five minutes of computer power and spread them out over several
hours. Cause people are slow at typing, and when
you're debugging you don't need to do long runs of things. And what you
want is lots of interaction, lots of finding out that you're wrong and
this is wrong, you can make a little change and this is wrong. So you
want to go from something that requires enormous planning to something
that is much more incremental. And this is the urge that the people who
developed time-sharing had.
Interviewer:
BUT HOW COULD YOU GIVE AN INDIVIDUAL ACCESS EXCLUSIVE ACCESS TO A
COMPUTER?
Kay:
Well, you couldn't. So the idea is that if the thing cost several
million dollars as McCarthy pointed out one of the things that you
could do is roll in one person's job and do it for a couple of seconds,
then roll in another person's job and do it for a couple of seconds.
And if you had a fast enough disc for holding all of these jobs then
you would be able to handle twenty, thirty or more at once and they
wouldn't feel that the lag of a few seconds was going to hurt them.
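The scheme Kay describes -- roll one job in, give it a short slice of the
processor, roll in the next -- can be sketched as a toy round-robin
simulation in Python. The quantum, swap cost, and workload below are
illustrative assumptions, not figures from the interview.

    from collections import deque

    QUANTUM = 2.0     # seconds of CPU a job gets before being rolled out
    SWAP_COST = 0.1   # assumed cost of rolling the next job in from the disc

    def simulate(jobs):
        """jobs: CPU-seconds each user's pending request still needs.
        Returns the wall-clock time at which each request is answered."""
        queue = deque(enumerate(jobs))
        clock, finished = 0.0, {}
        while queue:
            i, need = queue.popleft()
            clock += SWAP_COST                 # roll the job in
            run = min(QUANTUM, need)
            clock += run                       # run it for up to one quantum
            if need - run > 1e-9:
                queue.append((i, need - run))  # not done: back of the line
            else:
                finished[i] = clock            # done: the user sees a response
        return finished

    # Thirty users who each just hit return on a small request (0.1 s of CPU):
    # the whole round takes about six seconds, so each user waits only a few
    # seconds -- the illusion of having the machine to yourself.
    print(simulate([0.1] * 30))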
Interviewer:
SO THE COMPUTER IS SWITCHING BETWEEN THE JOBS SO FAST, IS THAT THE WAY
IT WORKS?
Kay:
Right, right. Well, that's the way it's supposed to work. And of
course, the thing that drove personal computing in the '60s into
existence was that it proved to be extremely difficult to get reliable
response time.
Interviewer:
BUT THIS IS LIKE ONE OF THE EARLY IDEAS OF THE USER HAVING AN ILLUSION.
Kay:
Yes, indeed. Right, in fact the best one of these was a system called
JOSS at RAND Corporation in which the system was devoted entirely to
running this single language. That made things much better. The
language was designed for end users. It was the first thing that
affected people the way spreadsheets do today. It was designed -- it
had eight users on this 1950s machine, but the guy who did it, Cliff
Shaw, was a real artist, and the feeling of using this system was
unlike that of using anything else on a computer. And people who used
that system thought of wanting all of their computing to have the same
kind of aesthetic, warm feeling that you had when using JOSS.
Interviewer:
IF YOU WERE A VERY CLEAR-THINKING PERSON BACK THEN IN THE LATE '50S, AND YOU
KNEW EVERYTHING THEY WANTED COMPUTING TO BE, WHAT WOULD THAT HAVE BEEN --
SMALL, INTERACTIVE, REAL-TIME...?
Kay:
Well, I think in the '50s the emphasis, you know these things go in
waves, the emphasis in the '50s was first on being able to do things
incrementally, second, people wanted to share, so the idea of
electronic mail was very early at MIT, and at some point people started
thinking that the form of interaction started to have a lot to do with how
good you felt, and how puzzled you were and so forth. The invention of
computer graphics, modern computer graphics, by Ivan Sutherland in 1962
had a lot to do with people's perceptions. Cause once you saw that you
couldn't go back. It established a whole new way of thinking about
computer interaction, and to this day it has remained an inspiration.
Interviewer:
TELL ME ABOUT THAT PIECE OF WORK. HE WAS A GRADUATE STUDENT, WASN'T HE?
Kay:
Yes, he had gone to Carnegie Tech, now Carnegie Mellon, and came to MIT
to do his Ph.D. work. One of his advisors was Claude Shannon, and
another one was Marvin Minsky. And as he has told the story several times,
they were then in just the early stages of Jack Licklider's dream to have
the computer be a sort of a symbiotic partner. And when he went out to
Lincoln Labs people were starting to think about that, and there was a
marvelous machine there called the TX-2.
This was one of the last computers in the US large enough to have its
own roof, you know, it was one of these enormous machines originally
built for the air defense system. And Ivan made friends with people
there and started thinking about something having to do with computer
graphics. This you know, the air defense system used displays for
putting up radar plots and so forth. And light guns were already around
and so forth. And so he started thinking about maybe doing a drafting
system, and as I recall it one of his original reactions when seeing
the kind of graphics you could put on a screen -- because the screens
couldn't even draw lines. When they put up a line it was put up with
lots of individual dots. And done fairly slowly, so it would flicker
and it was pretty awful looking. And Ivan at that point said the best
words a computer scientist can ever say, which is, "What else can it
do?" And so he got, in fact having the display not be great helped what
he tried to do on it, because he started thinking of what was the
actual power and value of the computer. Now today we have a problem
because the displays are good so our tendency is to simulate paper. But
what Ivan started thinking about, is what could the computer, what kind
of value could the computer bring to the transaction so it would be
even worth while to sit down and use such an awful looking display. And
the thing he came out with was that the computer could help complete
the drawings, so you could sketch -- this is where the idea of
Sketchpad -- you could sketch something in. You could sketch in, if you
were trying to make a flange, well you just put in the lines for the
flange, and then you would tell Sketchpad to make all of these angles
right angles. And you could make these two things proportional to each
other and so forth. And Sketchpad would solve the problem and
straighten out the drawing in front of your eyes into something that
was more like what you wanted. So that was a terrific idea. And then he
took it another step further, because he realized he could solve real
world problems. So you could put a bridge into Sketchpad -- it had never
been told about bridges before -- but you could put bridges in and tell
Sketchpad about the pulling and pushing of things and hang a weight on the
bridge, and Sketchpad would generate the stresses and strains on the bridge.
So it was now acting as a simulation. You could put in an electric circuit.
Sketchpad had never heard about electric circuits before, but you could put
in Ohm's law, and what batteries do, and it would, in order to settle the
constraints -- or, one of the nicest things was, you could put in mechanical
linkages. So you could do something like a reciprocating arm, like on a
locomotive wheel, for going from reciprocating to circular motion. And
Sketchpad's problem solver, if it had a problem that it couldn't get an
exact solution for -- where there wasn't just one solution, and of course
there isn't for this -- what it would do is iterate through the solutions,
so it would actually animate this thing, and right on the screen you would
see this thing animating. It was the very thing that you were thinking of.
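Sketchpad's actual constraint solver was far more general, but the flavor of
"state the constraints and watch the drawing straighten out" can be
suggested with a small relaxation sketch. Everything below is an
illustrative assumption, not Sketchpad's code: four roughly drawn corners
are nudged a little at a time until the sides are horizontal and vertical,
which is one simple way of making all the angles right angles.

    def relax(points, iterations=50):
        # points: corners [A, B, C, D] of a roughly drawn box, as [x, y] lists.
        # Constraints: A-B and C-D horizontal, B-C and D-A vertical.
        horizontal = [(0, 1), (2, 3)]   # pairs whose y coordinates should match
        vertical   = [(1, 2), (3, 0)]   # pairs whose x coordinates should match
        for _ in range(iterations):
            for i, j in horizontal:     # split each y-error between the endpoints
                d = (points[i][1] - points[j][1]) / 2
                points[i][1] -= d
                points[j][1] += d
            for i, j in vertical:       # split each x-error between the endpoints
                d = (points[i][0] - points[j][0]) / 2
                points[i][0] -= d
                points[j][0] += d
        return points

    rough = [[0.1, 0.2], [4.3, -0.1], [3.9, 3.2], [-0.2, 2.8]]
    print(relax(rough))   # the corners settle onto an axis-aligned rectangle

Redrawing the corners after every pass, instead of only at the end, is what
turns the solving process itself into the kind of on-screen animation Kay
describes for the under-constrained linkage case.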
Interviewer:
AND THIS IS VERY EARLY, IT SOUNDS VERY MODERN.
Kay:
'62, yeah. Well, you can't buy a system today that does all the things
that Sketchpad could back then. That's what's really amazing. It was the
first system that had a window, the first system that had icons. Certainly
the first system to do all of its interaction through the display itself.
And for a small number of people in this community, the Advanced Research
Projects Agency research community,
this system was like seeing a glimpse of heaven. Because it had all of
the kinds of things that the computer seemed to promise in the '50s.
And practically everything that was done in the '60s in that community,
and into the '70s, had this enormous shadow cast by Sketchpad, or you
could maybe think of it better as a light that was sort of showing us
the way.
Interviewer:
AND THAT'S ONE GLIMPSE OF HEAVEN, ANOTHER GLIMPSE OF HEAVEN I SUPPOSE
IS THE WORK OF DOUG ENGELBART.
Kay:
Yes, and he was also funded by the Advanced Research Projects Agency and
his original proposal I think was also 1962. 1962 is one of those
amazing years. And Engelbart had read about the Memex device of Vannevar
Bush, who was President Roosevelt's science advisor and a former professor
at MIT; he had written an article in 1945 called "As We May Think." And most
of the article was devoted to predictions of
the future and one of them was, he said that sometime in the future
we'll have in our home a thing like a desk and inside the desk on
optical storage will be the contents of a small town library, like
5,000 books. There'll be multiple screens, there'll be pointing
devices, and ways of getting information in, and pointing to things
that are there and he said that you can form trails that will connect
one piece of information to another. He invented a profession called
"Pathfinding" that there'd be people called Pathfinders who sold paths.
You could buy a path that would connect some set of interesting things
to some other set of interesting things. And so this is a complete
vision in 1945. And a lot of people read that. I read it in the '50s
when I was a teenager because I had seen it referred to in a science
fiction story. Engelbart had read it fairly early, when he was in
military service. And once you read that thing you couldn't get it out of
your mind, because it's the thing that anybody who deals with knowledge
would desperately like to have. And so Engelbart in the early '60s started
writing proposals, and he finally got ARPA interested in funding it, and
they started building the system he had proposed. And a couple of
years later, 1964, he invented the mouse -- to have both a better
pointing device than the light pen, and a cheaper one. And they built a
system that by 1968 was able to give a very large scale demonstration,
to 3,000 people in San Francisco.
Interviewer:
AND YOU WERE THERE.
Kay:
I was there.
Interviewer:
TELL ME ABOUT IT.
Kay:
Well I had, I had seen the system beforehand, because of course I was a
graduate student in this community. But still even having seen the
system, the scale of the demo and the impact it had, it was
unbelievable. I remember it started off, there was about 3,000 people
in this auditorium. It was at the Fall Joint Computer Conference, I think,
and all you could see on the stage was this figure
with something in his lap and a box in front of him and a couple of
things that looked like TV cameras around him. And he had on a headset
and he started talking he said, "I'd like to welcome you to our
demonstration." All of a sudden his face appeared 20 x 30 ft wide on
this enormous screen, because they borrowed one of the NASA... one of
the first one of the first video projectors. On this, and they used
this huge situation display, and then they, they used video editing so
you could see while he was giving this demonstration what he was doing
with his hands, with the mouse on the key set, what was going on, on
the screen and so forth. And that, that is video taped, I mean that is
something that you can use for your--
[END OF TAPE F182]
Interviewer:
YOU WERE TALKING ABOUT DOUG, WHAT DID HE DEMONSTRATE, WHAT SORT OF
THINGS DID HE SHOW YOU?
Kay:
Douglas Engelbart started off just showing us how you could point to things
on the screen and indicate them -- started off actually fairly simply, just
showing how you could look at information in various ways. He showed
something very much like
HyperCard -- so he had a little map of how he was going to go home that
night, how he was going to go to the library and the grocery store and
the drug store and so forth. You could click on each one of those
things and it would pop up and show him what he had to get there. And
what he demonstrated were the early stages of what we call hypertext
today. Lots of response in the system. One of the big shockers was
midway through the thing he started doing some collaborative work, and
all of a sudden, in an insert on the screen, you saw the picture of Bill
Paxton, who was 45 miles away down in Menlo Park, live.
Both of them had their mouse pointers on the screen and they were
actually doing the kinds of things that people still dream about today.
So this was a complete vision. And I think of it as the vision of what
we today call personal computing or desktop computing. Except for the
fact that the processor was a big time-shared computer, all of the
paraphernalia -- Engelbart used a black and white display, 19 inch
display, using video to blow up a calligraphic screen and have a mouse.
If you looked at the thing today, you'd see something that looked like
somebody's office that you could walk into. And what remained was to do
something about the problems in response time and all that
stuff. And that was something that I had gotten interested in a few
years before. And the first machine that I did was a thing called the
FLEX Machine and...
Interviewer:
JUST BEFORE WE GO INTO FLEX, TELL ME WHAT WAS THE REACTION LIKE AMONG
THE COMPUTER COMMUNITY?
Kay:
Well, I didn't take a -- in fact, as I recall I actually had the flu or
something, but I was determined to go see this thing. ARPA had spent
something like $175,000 on this demo and
everybody in the ARPA community wanted to show up. I recall that the,
the crowd, you know he got a standing ovation and he won the best paper
in the Fall Joint Computing Conference and so forth. And what was even
better is that he had bought up four or five terminals to the system
and had them in a room and people could go in and actually learn to
interact with the system a bit. So it was a large scale demonstration.
I don't think that anybody has ever traced what people did in, over the
next 15 or 20 years as a result of having been at that demonstration,
that would be interesting.
Interviewer:
DOUG THOUGHT IT, HOPED IT WOULD CHANGE THE FACE OF COMPUTING AND IS
VERY DISAPPOINTED THAT IT DIDN'T. HE THINKS IT DIDN'T REALLY HAVE MUCH
IMPACT.
Kay:
Well, I mean there are a couple of things that Doug, I mean we thought
of Doug as Moses opening the Red Sea. You know he was like a biblical
prophet. And like a biblical prophet he believed very much in his own
particular vision. And that vision was not a 100 percent good
idea. One of the things that they neglected completely was the
learnability of the system. People who used the system were all
computer experts who loved it, were willing to memorize hundreds of
commands in order to be... If you memorized hundreds of commands and
you learned how to use the key set you could fly through this
n-dimensional space. It was quite exhilarating to watch people doing
this and exhilarating to learn how to do it. The problem with it though
was that there were so many barriers towards learning. And there were
many other, many other things that were problems. It wasn't a
particularly good simulation of paper, partly because he didn't want it
to be. And so the idea that there would be a transition period where
you would be producing documents, of course, they printed documents
out, but there was no notion of desktop publishing there. The whole
system was rather like a violin. And if you were willing to learn how
to become a violinist you could play incredible music through the
thing. And of course all of us were so completely sold on this system
in the in the late '60s. And the first machine that I, that I did, the
FLEX Machine, was an echo of this.
Interviewer:
NOW THE FLEX MACHINE. THE OTHER ELEMENT WHICH IS IN THIS, APART FROM
THESE FANTASTIC SOFTWARE ACHIEVEMENTS OF SUTHERLAND AND ENGELBART IS OF
COURSE THE SIZE OF THE MACHINE. THERE WERE SOME PRECEDENTS, WEREN'T
THERE?
Kay:
Oh yes, the first personal computer in my opinion, was the machine
called the LINC. If you include size as one of the important things. Of
course, you could say that the Whirlwind was a personal computer at
MIT, or the TX-2. Some people tried to get Ivan Sutherland to write a
paper called "When There Was Only One Personal Computer." And that was
him using Sketchpad on the TX-2, which is this thing bigger than a
house. But in 1962 Wes Clark did a machine in which part of the design
parameter was it was small enough to look over when you're sitting down
at a desk, you know. So, it was not supposed to loom over you, it was
something you could actually see over. And many important things
were done on that machine. In fact quite, quite a few hundred, if not a
few thousand of them were built and used in the biomedical community.
It was designed for biomedical research, designed to be programmed by its
users, who were not computer scientists, even designed to be built by
non-computer scientists. They used to have gatherings in the summertime
where 30 people or so would come and build their own LINCs and then
take them back and stuff. It was a great little machine. It had a little
display and display editors and, and so forth. And so it was something
that you could point to when you were trying to think of what to do.
And there were other small things. There was a machine called the IBM
1130, which was really an abortion of a machine. It was sort of a
keypunch keyboard hooked to a, one of the early removable disc packs. I
mean this was a mess of a machine, but it was the size of a desk and
you could sit down. It wasn't really designed to be programmed except by
storing programs on cartridge -- very funny, you could only store data on
the disc. IBM forced you to put the programs on punch cards and that was the
only way you could feed them in; it was really hilarious. So there were lots
of different kinds of things like
that.
Interviewer:
WHAT WERE YOU TRYING TO DO WITH FLEX?
Kay:
Well, I worked on this machine with a guy by the name of Ed Cheadle who
was, he was trying to do, really trying to invent what today we would
call personal computing. And he had a little machine and he had a
little Sony television set, and what he wanted was something for
engineers that would allow them to -- he was an engineer -- and he
wanted something that would allow them to flexibly do calculations beyond the
kinds of things that you do with a calculator. So you should be able to
program the machine in something. You should be able to store the
programs away. You should be able to get it to do things. And then I
sort of came and corrupted the design by wanting it to be for people
other than engineers. I'd seen JOSS, and I'd also recently seen one of
the first object-oriented programming languages, and I realized how
important that could be. And then Cheadle and I escalated -- the
problem is that he and I got along really well, and so we escalated
this design beyond the means of the time to build it practically. But
we did build one, and it had many things that people associate with personal
computing today. It fit on top of a desk -- special desks, because it
weighed hundreds of pounds. But it had a fan that sounded like a 747 taking
off, because the integrated circuits back then had maybe 8 or 16 gates on a
chip. So
this thing had about 700 chips in it. It had a high-resolution,
calligraphic display. It had a tablet on it and it had a user interface
that included multiple windows, things like icons and stuff. But it
was, it was rather like trying to assemble a meal, maybe make an apple
pie from random things that you find in the kitchen -- like no flour, so
you grind up Cheerios, you know? You wind up with this thing that looks
sort of like an apple pie, but actually it isn't very palatable. So the
result was that this machine was a technological success and a sociological
disaster. And it was the magnitude of the rejection
by non-computer people we tried it on, that got me thinking about user
interface for the first time. And I realized that what Cliff Shaw had
done in JOSS was not a luxury, but a necessity. And so it, it led to
other, other ways of looking at things.
Interviewer:
SO IF WE GO BACK TO ORIGINALLY YOU WERE SAYING PEOPLE THOUGHT OF
MAINFRAMES AS LIKE FACTORIES, RIGHT? ...THESE EARLY ATTEMPTS AT
PERSONAL COMPUTERS, ARE WHAT, LIKE MODEL T....?
Kay:
Yeah, I think one of the ways that we commonly thought about Engelbart's
stuff was that he was trying to be Henry Ford. You could think of the
computer as a railroad, and the liberating thing for a railroad is the
personal automobile. And
Engelbart thought of what you were doing on his system as traveling, so
you're moving around from link to link in the, in the hyperspace and he
used terms like "thought vectors" and "concept space" and stuff. Nobody
knew what it meant -- not sure he did either -- but it was that kind of a
metaphor.
Interviewer:
WHAT'S WRONG WITH THAT METAPHOR, ULTIMATELY?
Kay:
Well, I don't think there is anything particularly wrong with it, but when
you're doing things by analogy, you always want to pick the right analogy
because there are so many ways of making them. And the thing that was
limiting about it, when you apply it to humanity, as an example, is that a
car is something you expect to take a few months to learn how to use, and
that was certainly true. It's something that doesn't extend into
the world of the child. There's a whole bunch of things, but of course
we didn't think of it that way. We thought of the car as one of the
great things of the 20th century and it changed our society and
everything. So, we were, we were definitely, definitely using that as
a, as a metaphor. And, in fact the thing that changed my mind had
nothing to do with rejecting the car as a metaphor, it was finding a
better metaphor, one that completely possessed me. And that came about from
seeing quite a different system. I had called the
FLEX machine a personal computer. I think that was the first use of
that term. While I was trying to figure out what was wrong with it I
happened to visit RAND Corporation over here in Santa Monica and saw
sort of a follow-on system to JOSS that they had done for their end
users who were people like RAND economists. These people loved JOSS but
they hated to type. And so, in the same year the mouse was invented,
the RAND people had invented the first really good tablet. It was a
high-resolution thing and they decided that the thing to do was to get
rid of keyboards entirely, and so the first really good hand character
recognizer was developed there. And they built an entire system out of
it called GRAIL, for GRAphical Input Language. So there's no keyboard
at all. You interacted directly with the things on the screen. You
could move them around. If you drew a square on the machine, it recognized
you were trying to draw a square and it would make one. If you put in your
hand-printed characters it would recognize
them and straighten them up and the system was designed for building
simulations of the kinds that, that economists and other scientists
would like to build. And using this system was completely different
from using the Engelbart system. And this system, it felt like you were
sinking your fingers right through the glass of the display and
touching the information structures directly inside. And if what
Engelbart was doing was the dawn of personal computing, what the RAND
people were doing was the dawn of intimate computing.
Interviewer:
AND IN INTIMATE COMPUTING YOU FORGET THAT IT'S A MACHINE AND YOU THINK
OF IT MORE AS A MEDIUM.
Kay:
Yeah, you start...and, one of the things that completely took hold of
me in using the GRAIL system was it felt more like a musical instrument
than anything because a musical instrument is something that -- most
musicians don't think of their instruments as machines. And it's that
closeness of contact, the fitness that you're directly dealing with
content more than the form of the content that, that possessed me very
strongly. And I saw -- that was in 1968 as well -- I saw several other
things. I saw Seymour Papert's early work with LOGO: here were children
writing programs, and that happened because they had taken great care to try
and combine the power of the computer with an easy to use language. In fact
they used the RAND JOSS as a model and used the power of LISP, which had
been developed a few years before as an artificial intelligence language,
put them together, and that was the
early LOGO. And to see children, confidently programming, just blew out
the whole notion of the automobile metaphor. And the thing that
replaced it was that this is a medium. This is like pencil and paper.
We can't wait until the kids are seniors in high school to give them
driver's ed on the thing, they have to start using it practically from
birth the way they use pencil and paper. And it was destined not to be
packaged on the desktop, because we don't carry our desks with us. It had to
be something much smaller. And that was when I first started seriously
thinking about a notebook computer. And of course the first thing I wanted
to know, after deciding that it had to be no larger than this, was: when
would that be possible, if ever? And so I
started looking at what the integrated circuit people were doing,
Gordon Moore and Bob Noyce and stuff and there were these nice
projections that they, as confident physicists had made, about where
silicon can go. And what's wonderful is, these projections have only been
off by a few percent now, more than 20 years later. And of course I was very
enthusiastic; I would believe anything that was in the right direction. So I
took this hook, line, and sinker and said, okay, 20 years from now we'll be
able to have a notebook-size computer on which we can not only do all the
wonderful things we do on computers, we can do mundane things too, because
that's what paper is all about. You can write a Shakespearean sonnet on it,
but you can also put your grocery list on it. So one of the questions I
asked back in 1968 is: what kind of a computer would it have to be for you
to do something so mundane as to put your grocery list on it, be willing to
carry it into a supermarket and be willing to carry it out with a
couple of bags of groceries? There is nothing special about that. You
can do it with paper. So the, the question is -- see the question is,
not whether you replace paper or not. The question is whether you can
cover the old medium with the new. And then you have all these
marvelous things that you can do with the new medium that you can't do
with the old.
Interviewer:
MANY PEOPLE FIND THE IDEA OF A MEDIUM A TRICKY CONCEPT. IN THE SENSE
YOU WERE TALKING ABOUT WRITING AND SO FORTH OR MUSIC, WHAT DO YOU MEAN?
Kay:
Well, I think most of my thoughts about media were shaped by reading
McLuhan, which not everybody agrees with, but one of his points is that the
notion of an intermediary is something that is not added on to humans. It's
sort of what we are. We deal with the world through intermediaries. We can't
fit the world into our brain. We don't have a one-to-one representation of
the world in our brain. We have something that is an abstraction from it,
and so the very representations that our mentality uses to deal with the
world are an intermediary. So we kind of live in a waking hallucination. And
we
have language as an intermediary. We have clothes as an intermediary.
So this, this whole notion of what we think of as technology, could
also be replaced by the word "medium." And even though media has this
connotation of, you know, the news business and stuff like that, I think
it's an awfully good word because it gives us this notion of something being
between us and direct
experience.
Interviewer:
NOW, OF THOSE MEDIA, WRITING AND PRINTING OBVIOUSLY AS A MEDIA WE CAN
ALL THINK ABOUT HAVE GREAT POWER, AND MUSIC AS WELL. WHAT DOES THAT
MAINLY CONSIST OF?
Kay:
Well, I think that, you know, the trade off with, with using any kind
of intermediary is that any time you put something between you and
direct experience you're alienating a bit. And, what you hope to get
back from it is some kind of amplification. Various people who have
studied evolution talk about the difference between the direct
experience you get from kinesthetically touching something to the
indirect experience you get of seeing it. And one is less involving
than the other. The seeing is a good one because it means you don't
have to test out every cliff by walking over it. And so there's an
enormous survival value about stepping back. To have a brain that can
plan and envision things that might happen, whether as images or even
more strongly in terms of symbols, is of tremendous survival value
because it allows you to try out many more alternatives. And so, as the
existentialists of this century have pointed out, we have gained our power
over the world at the cost of alienating ourselves. So the thing that we--
[END OF TAPE F183]
Interviewer:
...A COMPUTER IS ANYTHING BUT A MACHINE IN A CERTAIN WAY. WHY SHOULD
THAT BE A MEDIUM?
Kay:
Well, I think, well, I think machine is a word that has a connotation
that's unfortunate. Yeah, maybe what we should do is either elevate
what we think of when you say machine, you know mechanism, kind of
thing or maybe use a different word. To most scientists, machine is not a
dirty word. Because one of the ways we think about things, even in biology
these days, is that what we are is an immensely complicated mechanism.
That's not to say that it's predictable in the
sense of free will because there's this notion now people are familiar
with what's called chaotic systems, systems that are somewhat unstable.
There's a lot of instability and stuff built in. It's a little more
stochastic in a way, but the idea that there are things operating
against each other and they make larger things and those things are
part of larger things, and so forth. To most scientists it seems like a
beautiful thing, it's not a derogatory term.
Interviewer:
I THINK IT'S ALSO THAT MOST MACHINES WERE BUILT TO DO USEFUL WORK, BUT
THIS MACHINE IN PROCESSING INFORMATION HAS PROPERTIES WHICH ARE SORT OF
LINGUISTIC AS WELL AS...
Kay:
Yeah, well, I think the connotation of machine is something that has to do
with the physical world. Most people haven't learned mathematics, have never
encountered the notion of an algorithm, which is a process for producing --
you could think of it as moving around physical things called symbols. I
mean something physical is always being moved around; that's the way
scientists look at the world. And the real question is, is this a trivial
example of it or is this an unbelievably interesting example of it? So a
flower is a very interesting example of a machine, and a lever is a somewhat
trivial and more understandable notion of a machine.
Interviewer:
NOW THESE MACHINES CAN MOVE AROUND SYMBOLS PRESUMABLY, LIKE THE
ELECTRONIC VOLTAGES OR WHATEVER, SO FAST THAT THEY CAN DYNAMICALLY
SIMULATE VIRTUALLY ANYTHING YOU WANT...
Kay:
Lots of things, yeah. One way to think of it is that a lot of what a
computer is, is markings. And it doesn't matter whether you make the
markings for a book on metal or paper or you make them in clay -- there are
all kinds of ways of making the equivalent of a book. The main thing about a
marking is that you have to have some way of distinguishing it from other
markings, and once you've got that, then they can act as carriers of
different kinds of descriptions. And the range of the markings that you can
have in a book and the range of markings you can have in a computer are the
same.
Interviewer:
SO LIKE IN A COMPUTER IT STARTED WITH PUNCH CARDS.
Kay:
Yeah, it doesn't matter what it is. There's a wonderful computer built out
of Tinker Toys in the Computer Museum in Boston. It was done by some MIT
grad students. You can make it out of anything. And
so that's one set of ideas, that is, the representational properties of
things are very very much like a book, but then the computer has this
even more wonderful thing, is that it's a book that can read and write
itself. And moreover it can do it very quickly. So it has this
self-reflexivity that you usually only find in biological systems. And
it can carry through these descriptions very, very rapidly. Okay, and then
you get the big nirvana experience, where you suddenly realize, "Holy
smokes, this thing is a pocket universe." And it has a nice complementary
role to the way we deal with our physical universe as scientists. The
physical universe, especially in the nineteenth century, was thought to have
been put there by God, and it was the job of the scientist to uncover this
glorious mechanism. So in the
computer what you do is you start off with a theory, and the computer
will bring that theory to life so you can have a theory of the universe
that is different than our own, like it has an inverse cube law of
gravity. And you can sit down and in not too much time you can program
up a simulation of that and discover right away that you don't get
orbits anymore with planets. And that's curious and a little more
thinking and deeper delving will soon make you discover that only the
inverse square law will actually give you orbits.
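That experiment is easy to redo today. The sketch below is an illustrative
Python version -- the integrator, step size, and starting velocity are my
assumptions, not anything from the interview -- that integrates the same
planet once under an inverse-square force and once under an inverse-cube
force; only the first stays in orbit.

    import math

    def simulate(p, steps=20000, dt=0.001):
        # Planet of unit mass around a unit-strength sun; force magnitude 1/r**p.
        x, y = 1.0, 0.0        # start one unit away from the sun
        vx, vy = 0.0, 1.1      # speed chosen so p = 2 gives a closed ellipse
        rmin = rmax = 1.0
        for _ in range(steps):
            r = math.hypot(x, y)
            a = -1.0 / r**p                # acceleration toward the sun
            vx += a * (x / r) * dt         # semi-implicit Euler: update velocity,
            vy += a * (y / r) * dt
            x += vx * dt                   # ...then position with the new velocity
            y += vy * dt
            r = math.hypot(x, y)
            rmin, rmax = min(rmin, r), max(rmax, r)
        return rmin, rmax

    print("inverse square:", simulate(2))  # radius stays between fixed bounds: an orbit
    print("inverse cube:  ", simulate(3))  # radius runs away: no orbit at all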
Interviewer:
THAT'S CURIOUS, BUT THE OTHER WAY YOU PUT IT ONCE IS YOU COULD INVENT A
UNIVERSE WITH NO GRAVITY, RIGHT?
Kay:
Yes, and in fact, I think it's again the difference between a physical book,
which you can drop and it will fall down to the floor, and the story that's
in the book, which could be about a place that has
no gravity, like out in space. And the computer is at once a physical
thing and the stories that it holds don't have to have anything to do
with the physical universe its components reside in. So, in other words
you can lie, both with books and with computers -- and it's important
because if you couldn't lie you couldn't get into the future. Because
we are living in a world of things that were complete lies 500 years ago.
Interviewer:
NOW THE THING ABOUT THE COMPUTER AS WELL, APART FROM BEING DYNAMIC,
IT'S ABILITY TO SIMULATE MEANS THAT IT CAN EMBODY ALL OTHER MEDIA. IT
IS THE FIRST META-MEDIUM, ISN'T IT?
Kay:
Yes, yes, I called it the first meta-medium in the, in the early '70s
when I was trying to explain what it was as being distinct from big
mainframes with whirling tape drives, that... The physical media that
we have like paper and the way it takes on markings are fairly easy to
simulate. The higher the resolution display you get, the more you can
make things look like a high-resolution television camera looking at
paper. That was one of the things that we were interested in at Xerox PARC
-- to do that kind of simulation. And there are deeper
simulations than the simulation of physical media. There's the
simulation of physical processes, as well.
Interviewer:
NOW, THIS IS PRETTY MIND-BLOWING, BECAUSE THE COMPUTER THEN IS NOT JUST
IMPORTANT AS A VERSATILE TOOL, A NEW MACHINE; WHAT WE'RE TALKING ABOUT IS
SOMETHING OF THE GRAVITY OF THE INVENTION OF WRITING OR PRINTING. IN POPULAR
TERMS, WE'RE TALKING ABOUT SOMETHING
REALLY PRETTY FANTASTIC.
Kay:
Right. It's sort of sneaking its way in the way the book did. I think the
Catholic Church thought that the book might be a way of getting more bibles
written in Latin, and so by the time they started thinking about suppressing
it, it was too late, because of course you could print all these other
things. And all of a sudden you go from having 300 or 400 books in the
Vatican Library in the year 1400 to something like 40,000 different books in
circulation in Europe 100 or 150 years later, and 80 percent
of the population could read them. And so all of a sudden you have
something totally at odds with most ways of dealing with religion,
which is multiple points of view.
Interviewer:
WHAT CAN WE LEARN, LOOKING AT THE HISTORY OF THE BOOK? CLEARLY IT TOOK A LOT
LONGER TO HAPPEN, RIGHT, FOR ONE THING. BUT WHAT WERE THE STAGES IT HAD TO
GO THROUGH TO GET TO THAT POINT?
Kay:
Well, I think in doing these analogies -- analogies can be suggestive, you
know. And of course, the amount that an analogy carries through from one
place to another depends on lots of things. I
happen to like the analogy to the book, quite a bit because you have
several stages, you have the invention of writing, which is a huge
idea, incredibly important. And the difference between having it and
not having it is enormous. The difference between having computers even
if they're big, just big mainframes and not having them, is enormous.
There're just things you cannot do without them. Then the next stage
was the Gutenberg stage and Gutenberg just as McLuhan liked to say,
"when a new medium comes along it imitates the old." So Gutenberg's
books were the same size as these beautiful illuminated manuscripts
that were done by hand -- big like this. And in the libraries of the day,
back then, the books generally weren't shelved. There were so few books in a
given library that they actually had their own reading tables. And if you
look at woodcuts of the day, it looks for all the world like a time-sharing
bullpen. You know, you go into the library -- there's one library in
Florence that's set up that way -- you go over to the table that the
particular book is at; they are chained in because they were too valuable,
they were priceless. And Gutenberg imitated
that, but of course he could produce many more books. But he didn't
know what size they should be, and it was some decades later that Aldus
Manutius, a Venetian publisher, decided that books should be this size. He
decided they should be this size because that was the
size that saddle bags were in Venice in the late 1400s. And the key
idea that he had was that books could now be lost. And because books
could now be lost, they could now be taken with you. And they couldn't
be like this, they had to be something that was a portable size. And I
think in very important respects where we are today is before Aldus.
Because this notion that a computer can be lost is not one that we like
yet. You know, we protect our computers. We bolt them to the desk and
so forth. Still quite expensive. They'll be really valuable when we can
lose them.
Interviewer:
THE OTHER POINT, ABOUT LITERACY, IS ALSO WELL TAKEN THERE. WHILE THERE ARE
NO BOOKS TO READ, I ASSUME THERE'S NO POINT IN IT...
Kay:
Right, well if reading and writing is a profession you don't have the
notion of literacy and illiteracy. Like, there's no word in our
language that stands for "il-medicine." There is medicine, which is
practiced by professionals, and there is no notion of il-medicine. Now
if we all learned medicine -- and it may be some day that staying healthy
becomes an important part of everyone's life -- then there'll be a notion of
medicine and
il-medicine. And so it wasn't until there was a potential for the
masses to read that we had the notion of literacy and illiteracy. And
to me literacy has three major parts that you have to contend with. One
is you have to have a skill to access material prepared for you by
somebody else, regardless of what it is that you're trying to be literate
in. In the print medium it's called reading: reading is accessing. I
don't think many people in our society would think that a person who
just could read was literate. Because you have to be able to create
ideas and put them into the information stream, so you have creative
literacy, which is the equivalent of writing. And then finally after a
medium has been around for awhile you have the problem that there are
different genres of these things, so that you have to go into a different
gear to read a Shakespearean play. Now there's a different way of doing
it. The way they wrote essays 200 years ago is different than the way
they write essays today, and so you have to have some sort of literacy
in the different styles for representing things. And so I think those
three things apply when something new comes along. Since we're talking about
media, we could talk about "mediacy" and "il-mediacy," or
"computeracy" and "il-computeracy," or something like that. And I think
each one of those things is a problem that has to be solved. There's
the problem of how can you, when you get something new made for a
computer, be able to do the equivalent of read it - of access what it
is. You go to a computer store and get some new wonderful thing, slap
it into your computer and if you have to learn a new language in order
to use the thing, then things are not set up for you being computer
literate.
Interviewer:
SO WRITING IN LATIN WOULD BE ANALOGOUS TO RATHER DIFFICULT...
Kay:
Yeah, I think it has a lot to do with that. And again, to sort of strain the
analogy we're making with printing, my other hero back then was Martin
Luther, who a few years later had this intense desire to let people have a
personal relationship with God without going through the intermediaries of
the church. And he wanted people to be able to read the bible in their own
language, and knew that German -- which was a bunch of different regional
dialects back then, mostly for talking about farming and stuff -- was not
going to be able to carry the weight of the bible as it was portrayed in
Latin, which is a much more robust and mature language. So Martin Luther had
to invent the language we today call High German before he could translate
the bible into it, and so the language that Germans speak today is in very
large part the invention of a single person who did it as a user interface,
so that he could bring the media much closer to the people, rather than
trying to get everybody in Germany to learn Latin.
Interviewer:
NOW, WITH THAT SORT OF INSIGHT -- THIS WHOLE PROCESS LITERALLY TOOK A LONG
TIME, MANY CENTURIES -- WOULD YOU SAY THE EARLY PHASES OF THE COMPUTER HAVE
GONE MUCH MORE RAPIDLY?
Kay:
Right, I think things are going more rapidly. But also, to the extent that
analogies help -- I happen to be a big fan of books, and I happen to know
that progression: the illuminated manuscripts, which I absolutely adored
when I was a child, the Gutenberg story, the Aldus story -- he was one of my
great heroes, because he was the guy who decided he had to read more than
one book -- and then Martin Luther. Those immediately sprang to mind as soon
as I had decided that the computer wasn't a car anymore. And then the two
most powerful things are the things that you have to do right away. You
don't have to wait to gradually figure them out. You have to get the thing
small, and you have to find a language that will bring what it is closer to
the people rather than the other way around.
Interviewer:
NOW AN OPPORTUNITY CAME FOR YOU, I KNOW YOU WERE GOING TO MAKE YOUR
DYNABOOK ONE WAY OR ANOTHER, BUT AN OPPORTUNITY CAME FOR YOU TO GET TO
A BRAND NEW CENTER WHEN YOU WENT TO XEROX PARC.
Kay:
Yeah, I thought up the Dynabook in 1968 and made a cardboard model of
it, and a couple of years later I was about to go to Carnegie Mellon
University to do the thing, and I had been consulting a little bit for
the newly formed Xerox PARC, and it just got to be too much fun, so I
never wound up going to Pittsburgh.
Interviewer:
WHAT WAS XEROX PARC?
Kay:
Xerox PARC was an attempt by Xerox to spend part of its research and
development money on really far out stuff that was not going to be
particularly controlled by them. In the late '60s they bought
themselves a computer company, they bought a whole bunch of publishers.
They were trying to get into a larger sphere than just office copiers.
One of the ringing phrases of the late '60s was "Xerox should be the
architects of information." And so they had this expansive feeling about
getting into other areas, and they got Jack Goldman in, who had been a
scientist at Ford, as the new chief scientist at Xerox, and Jack realized
that what we actually had to have at Xerox was something like a long-range
research center. Xerox was a big company even then, and what Jack decided to
do was to hire George Pake, who was the chancellor of Washington University
in St. Louis, and George was quite familiar with ARPA, and so he hired Bob
Taylor, who had been one of the funders of ARPA in the '60s. Taylor was one
of the champions of
Engelbart, one of the champions of the work at RAND and so forth. What
Taylor did was to, instead of hiring random good scientists, what he
did was to decide to set up a miniature concentrated ARPA community. So
he went out after all of the people he thought were the best in the
ARPA research community and those people were somewhat interested
because the Mansfield Amendment was shutting down, constricting the
kinds of far out things that ARPA could do, so it was one of those
things where the..
Interviewer:
WHAT WAS THE MANSFIELD AMENDMENT?
Kay:
The Mansfield Amendment was an overreaction to the Vietnam War. A lot
of people were worried about various kinds of government and especially
military funding on campus and secret funding and all of this stuff.
And the ARPA information processing techniques projects got swept up in
this, even though every single one of them was public domain and had already
been spun off into dozens of companies, including DEC. And so this is just
one of these broad-brush things done by Congress that happened to kill off
one of the best things that's ever
been set up in this country. And so a lot of smart people in
universities who believed more or less the same dream about the destiny
of computers were gathered up by Taylor, and I was one of those.
[END OF TAPE F184]
Interviewer:
SO HOW GOOD WERE THEY -- YOU WERE TALKING ABOUT THE PEOPLE THAT WERE
GATHERED, HOW GOOD WERE THEY?
Kay:
Well, I thought they were the best.
Interviewer:
OKAY. HOW GOOD WERE THEY, ALAN?
Kay:
I thought the people that were there were among the very best. By 1976,
which was about six years after the place was started, there were 58 out of
my personal pick of the top 100 in the world, divided into a couple of labs.
People who had very strong
opinions who normally would not work with each other at all, in any
normal situation, were welded together by having known each other in
the past, having come up through the ARPA community believing more or
less the same things. Of course, when you have that kind of level of people,
the disagreements in the tenth decimal place become
significant. But it worked out very well because we were able to get
together completely on joint hardware projects, so we built all of our
own hardware as well as all of our own software there. And the hardware
was shared amongst the many different kinds of groups that were there,
so there was one set of hardware, as we moved on from year to year and
then many different kinds of software design.
Interviewer:
AS XEROX HAD THIS IDEA OF ARCHITECTURE OF INFORMATION, A RATHER GENERAL
CONCEPT, WHAT DID YOU GUYS THINK YOU WERE DOING?
Kay:
Well, the psychology of that kind of person would be thought of as arrogant
in most situations. We knew we didn't know exactly what we were going to do,
but we were quite sure that nobody else could do it better, whatever it was.
And the architecture of information -- that was a phrase that sounded good;
it didn't actually mean anything. And there was some mumbling also about, if
paper goes away in the office in the next 25 years, Xerox should be the
company to do it, because that was their heartland business. But Taylor was
setting the place up to realize the ARPA dream, which was to do
man-computer symbiosis in all of the different directions that ARPA had
been interested in the past which included interactive computing, which
included artificial intelligence and the notion of agents. So there were
projects that ranged from intensely practical ones -- like Gary Starkweather
doing the first laser printer, Metcalfe and other people doing the Ethernet,
the first packet-switching local area net, and Thacker being the main
designer of the Alto, which was the first work station and the first
computer to look like the Macintosh. I was head of the
group that was interested in the Dynabook design and a lot of the
things that we came up with for the Dynabook were simulated on the Alto
and became part of the first notion of what work stations and personal
computers should be like. And those included things like the icons and
the multiple overlapping windows and the whole, the whole paraphernalia
that you see on machines like the Macintosh today.
Interviewer:
THE BASIC IDEA WAS, THE SORT OF MACHINE YOU WERE INTERESTED IN WAS ONE
WHICH HAD THE CAPACITY TO HANDLE RICH GRAPHICAL INTERFACES, BUT COULD
BE NETWORKED TOGETHER. SO IT HAD TO HAVE MORE STAND ALONE POWER THAN
NORMAL TIME-SHARING, IS THAT TRUE?
Kay:
Yeah, in fact one of the things we were incredibly interested in was how
powerful it had to be. We used phrases like -- we didn't know how powerful
it had to be, but it had to be able to do something like 90 or 95 percent of
the things that you would generally like to do before you wanted to go to
something more powerful -- of course, we didn't know what that meant. We
thought it would be somewhere around ten MIPS (million instructions per
second); that was sort of the rule of thumb. The machine we built was
effectively somewhere between one and three MIPS, depending on how you
measured it. And so it was less
powerful than we thought. On the other hand it was about 50 times as
powerful as a time-sharing terminal. So the magnification of what we
could aspire to was enormous. And the machine was not done blindly. It
was done after about a year of studying things like fonts, and what
happens when the eye looks at little dots on screens, and what
animation means, and we did a bunch of real-time music synthesis very early, and so we knew a lot about what the machine
should be and how fast it should run.
Interviewer:
SO TO DESIGN THIS MACHINE AND THE SOFTWARE THAT WENT WITH IT YOU
STUDIED HUMAN BEINGS?
Kay:
Yeah, a lot of us did. I mean, what was nice here is that I was one of the drivers on one of these small machines. There were other groups
at this time in 1971 when PARC was forming who were building a large
time-sharing system to act as a kind of institutional resource
for PARC. There were lots of different things going on and I wanted to
do stuff with children because my feeling after doing the FLEX machine
was that we, and I especially, were not very good at designing systems
for adults. And so then I thought, well children have to use this
thing, why don't we forget about adults for a while, we'll just design
a tool for children, a medium for children, and we'll see what happens.
So I desperately wanted to get a bunch of these systems. I needed like
fifteen of them. I needed to have them in a school somehow, with
children, because I wanted to see how children reacted to these things,
and improve the design that way.
Interviewer:
WE'LL KEEP GOING DON'T WORRY ABOUT IT. SO THE IDEA WAS THAT WHEN YOU
USED CHILDREN IT THREW INTO FOCUS MANY OF THE PROBLEMS YOU HAD BEFORE?
Kay:
Yeah. I don't know that it threw into -- the shock when you go out to
real users of any kind is enormous. Because technical people live in
this tiny little world actually. We like to think it's a big world, but
it's actually a tiny little world. And it's full of phrases that we
learned when we were taking math classes, and it's, it's hermetic. And
it's full of people who like to learn complicated things, they delight
in it. And so one of the hardest things is for these kinds of people to do any kind of user interface for people who are not like
that at all. That is the number one dictum of user interface design, is
the users are not like us. And so what you need to have is some way of
constantly shocking yourself into realizing that the users are not like
us. And children do it really well, because they don't care about the
same kinds of things that adults do -- you know, they're not like this, they're not like that, they can always go out and play ball; they haven't learned to feel guilty about not working yet. And the last thing you want to do is subject the children to more school. You don't want to put them through a training course to learn a computing system. So we used the term forcing function. It was a setup where in many ways it's harder to design a system for children than it is for adults, along a couple of avenues. And it forced us to start thinking
about how human mentalities might work.
Interviewer:
THESE WERE YOUR SMALLTALK KIDS, RIGHT?
Kay:
These eventually became the Smalltalk kids. And the lead into that was
that Seymour Papert had gotten a lot of good ideas from having worked
with Piaget. And Piaget had a notion of the evolution of mentality in
early childhood as being sort of like a caterpillar into a butterfly. And the most important idea of it was that at each stage of development the kid was not a deficient adult; it was a fully functioning organism that just didn't think about the world quite the same way as adults did. And other psychologists like Jerome Bruner had similar ideas, also inspired by Piaget.
Interviewer:
WHAT WERE THESE BASIC STAGES?
Kay:
Well, if you break them down into just three stages, one in which a lot
of thinking is done just by grabbing. A hole is to dig it, an object is
to grab it. And then the next stage is very visual, in which many of
the judgments that you make about the world are dominated by the way
things look, so in this stage you have the Piaget water pouring
experiment, you know you pour from a squat glass into a tall thin one
and the kid in this stage says there's more water in the tall thin one.
He doesn't have an underlying notion, a symbolic notion, that
quantities are conserved. He doesn't have what Piaget calls
"conservation." And conservation is not a visual idea. Visual is what
does it look like when you compare it. And then later on you get a
stage that Piaget called the "symbolic stage" where facts and logic
start to dominate the way of thinking, and a child in that stage knows as a fact that water doesn't appear and disappear. Nor does anything else. And so he's able to give you judgments or inferences based on these facts rather than by the way things appear. Now Bruner did a lot of these experiments over again and added some sort of side trips into
them. So one of the things he did in the water pouring experiment was
to interpose a card after the child had said there's more water
in the tall thin glass, and the kid immediately changed what he had
just said, he said, "Oh no, there can't be more because where would it
come from." And Bruner took the card away, and the kid changed back, he
said, "No, look, there is more." Bruner put the card back and the kid
said, "No, no there can't be more, where would it come from?" And so if
you have any six year olds you'd like to torment, you know this is a
pretty good way of doing it. And the way Bruner interpreted this is
that we actually have separate ways of knowing about the world. One of
them is kinesthetic, one of them is visual, one of them is symbolic,
and of course we have other ways as well. And that these ways are in
parallel. They really constitute separate mentalities. And because of
their evolutionary history they are rather distinct in the way
that they deal with the world, and they are not terribly well
integrated with each other. So you can get some remarkable results by
telling people an English sentence which they'll accept and then
showing them a picture that represents the same thing, and they have an enormous emotional reaction.
Interviewer:
HOW WAS THIS GOING TO HELP YOUR INTERFACE DESIGN?
Kay:
Well, you know we were clutching at straws! You'll do anything, when
you don't know what to do. But since we were going to be dealing with humans, it seemed like a reasonable thing to try and get something out of the newly formed cognitive psychology. And the thing that got me -- the Piaget stuff didn't help me much, because one of the ways of interpreting it
is that for example, you shouldn't probably teach first graders
arithmetic because they aren't quite set up to receive it yet. You'd be
better off teaching them something that's more kinesthetic like
geometry or topology. They're just learning to tie their shoes and stuff, and there are a whole bunch of things you can do with knots and things that are higher mathematics. But the particular symbolic manipulations of arithmetic are not easy for them at that stage. Well, that's important when you're designing
curriculum in schools but it didn't tell us much about a user
interface. The Bruner thing though was really exciting. Because one of
the ways of interpreting it is that at every age in a person's life
there are these multiple mentalities there. Now when your kid is six
years old, the symbolic one may not be functioning very much, but there are tricks by which you can get it to function. And Bruner had this notion that you could take an idea that's normally only taught symbolically, like some physical principle, and teach it to a six-year-old kinesthetically, and to a ten-year-old visually, and later on to a fifteen-year-old symbolically. This notion of a spiral. And Papert had the same idea. If you want to learn how a
gyroscope works, the last thing you want to do is look at the equation.
What you want to do is do a little bit of visual reasoning about the
thing and then have one in your hand in the form of a big bicycle
wheel, and when you turn that bicycle wheel and feel what it does, it
clicks together with the small amount of reasoning you are doing, and all of a sudden you understand why it has to flop the other way, and there's nothing that mysterious about it. But from the outside, looking at the equation, you know, and thinking about cross products and stuff like that, it just doesn't make any sense. So Bruner's idea was
when you don't know what else to do in developing a curriculum, you try
recapitulating the Piagetian stages: cast this adult-level knowledge down into a kinesthetic form, then teach it over and over again, recasting it later on in a visual form. And he thought of these lower mentalities, the kinesthetic and iconic mentalities, as being what we call intuition. They can't talk much but they have their own
figurative ways of knowing about the world that supply a nice warm way
of thinking about physical problems.
Interviewer:
ENGELBART HAD SOMETHING...ALREADY THE MOUSE IS A KINESTHETIC...
Kay:
Yes, but not a good one. It's okay. But if you only use it to point on the screen it's not very kinesthetic, because the major operation in kinesthesia is grabbing something and moving it. And Engelbart did not have that. And that was one of the first things -- of course, people had been moving things around in computer graphics, and most people that had ever done this liked it better. The first little paper I wrote about the Dynabook talked about moving things around, and GRAIL moved things around; Sketchpad moved things around. It's a natural thing when you're thinking about things from a graphic center -- which Engelbart wasn't; even though he had a little bit of graphics, he was thinking about it from text. When you think about things from a graphic center, it's natural to want to just move something around. And
so the connection with what had already been done, with what Bruner was
saying got us thinking a little bit more strongly about what the mouse
was supposed to be about. The mouse was not just for indicating things
on the screen, it was to give you a kinesthetic entry into this world.
Now, I don't know how all scientific discoveries are made. They usually are connected together with little flashes of insight over long periods of time, but in fact when I started off I was quite sure that Bruner had the key, and we got the ingredients rather early for doing the user interface. But it was remarkable to me how many people we had to try ideas out on for years. For three or four years we had to not only build the system -- in fact the early versions of the system were not as good as later versions -- and it actually took about five years, dealing with hundreds of people, to come out with the first thing that looked sort of like the Mac.
Interviewer:
AND THIS IS THE DESKTOP...
Kay:
Yeah, the desktop was actually an idea we didn't like particularly.
That was something that was developed as a -- when Xerox was thinking
specifically of going at offices.
Interviewer:
THIS WAS THE ICONS, WINDOWS...
Kay:
Yeah, in the Dynabook idea, see, when you have a Dynabook and you have simulations and multidimensional things as your major way of storing knowledge, the last thing you want to do is print any of it out, because you are destroying the multidimensionality and you can't run them. So one of the rules is: if you have anything interesting in your computer, you don't want to print it. And that's why, if your computer is always with you, you won't print it. When you give it to somebody else, what you're sending is something that goes from your Dynabook to their Dynabook. Okay, and the user interface that we worked on was very much aimed at that, and so it had a different notion than the desktop; the desktop was sort of a smaller version of it. What we had was this -- you can think of it as multiple
desks if you want, but we had this notion that people wanted to work on
projects, and I especially liked to work on multiple projects and
people used to accuse me of abandoning my desk. You know it got piled
up and I'd go to another one and pile it up. And when I'm working on
multiple things I like to have a table full of all the crap that goes
there and another table and so forth, so everything is in the state it was when I last left it. And that was the user interface that we wound up doing, where you could move from one project to another project. If you did it on the Macintosh you could think of it as having multiple Macintosh desks, each one of which had all the tools for that particular project.
[END OF TAPE F185]
Interviewer:
THE REASON WHY THESE, YOU USED THE TERM INTUITIVE BEFORE, OR WE FEEL
THINGS THAT USE OUR KINESTHETIC AND ICONIC MENTALITIES ARE INTUITIVE.
ALSO THE TERM LEVERAGE IS USED. CAN YOU TELL ME WHAT YOU MEAN WHEN YOU
SAY, WHEN YOU SUCCEED, WHEN YOU CONFER LEVERAGE ON A USER?
Kay:
Yeah, well, I think -- I mean, another part of my background before I
was heavily into computer science, was theater. And if you think about
it, theater is kind of an absurd setup, because you have perhaps a
balding person in his forties impersonating a teenager, holding a skull in front of cardboard scenery, right? It can't possibly work. But in fact it does, all the time. And the reason it does is that the theatrical experience is a mirror that blasts the audience's intelligence back out at them. As somebody once said, "People don't come to the theater to forget, they come tingling to remember." And so what it is is an evoker. And again,
having the thing go from an automobile to a medium started me thinking
about user interface as a theatrical notion. And the -- what you do in
the theater is to take something that is impossibly complex, like human
existence or something, and present it in something that is more like
stereotypes, out of which the user can then resynthesize human existence. I think it was Picasso who said that art is not the truth, art is a lie that tells the truth. And so I started thinking about the user interface as something that was a lie -- namely, it isn't really what's going on inside the computer -- but it nonetheless tells the truth because it evokes a different kind of computer; it evokes what the computer can do in the user's mind, and that's what you're trying to do. So that's the computer they wind up with. And that's what I think of as leverage.
Interviewer:
SO JUST AS THE TIME-SHARING, YOU SORT OF HAD THE ILLUSION THAT YOU HAD
EXCLUSIVE USE OF THE COMPUTER. HERE, YOU HAVE A DIFFERENT ILLUSION THAT
YOU'RE MOVING THINGS ON, THAT THERE ARE OBJECTS ON YOUR SCREEN.
Kay:
Yes. In, in fact, we use the term user illusion. I don't think we ever
used the term user interface, or at least certainly not in the early
days. And we did not use the term metaphor, like as in desktop
metaphor. That was something that I think came along later. The idea was: what can you do theatrically that will cover the amount of magic that you want the person to be able to experience? Because again, the understanding was that in the theater you don't have to present the world as physicists see it, but if you're going to have magic in Act III, you'd better foreshadow it in Act I. So there's this whole notion of how you dress the set in order to set up expectations that will later turn out to be true in the audience's mind.
Interviewer:
NOW WHILE YOU WERE WORKING ON THESE THINGS, OTHER PEOPLE AT PARC WERE
BUILDING A MACHINE?
Kay:
They were building various machines. And it's kind of funny, because the way it worked out is I had taken Wes Clark's LINC design and come up with a sort of modern version of it in a little handheld, you know, sort of like a suitcase, that I called MiniCOM, and had done some design of it, of how things would run on it, and the problem was it wasn't powerful enough.
But it was something I figured I could get 15 or 20 of for the
experiments that we wanted, and I had designed the first version of
Smalltalk for this machine. And much to my amazement I got turned down
flat by a nameless Xerox executive. And I was crushed that, you know,
no, we're doing time-sharing ... sorry.
Interviewer:
WE'LL START AGAIN...JUST GO FROM WHERE YOU SAID YOU WERE TURNED DOWN BY
A XEROX EXECUTIVE.
Kay:
Okay, so ... I couldn't believe that a nameless Xerox executive turned me down flat on the idea of building a bunch of these things. And I was crushed. You know, I was only 31 years old then, and, just, ach, it was terrible. And so
then I evolved another scheme of getting a bunch of little
minicomputers, like Data General Novas, and I could only afford five of
those -- I had something like $230,000 or so, and I figured five of
those, and take the system they were experimenting with fonts and
bitmap displays on and I'll replicate that, I'll get five of them --
it's an incredibly expensive way to do it, but that would give me five
machines to do stuff. And then this this Xerox executive went away on
a, on a task force, and Chuck Thacker and Butler Lampson came over to
me in September of 1972 and said to me, "Have you got any money?" And I
said, "Well, I've got about $230,000." And they said, "Well, how would
you like us to build you your machine?" And I said, "I'd like it fine."
And so what I gave them was the results of the experiments I had done
in fonts and bitmap graphics, plus we had done some prototyping of
animations and music. So to do all these things, it had to be so-and-so fast. And what happened there was a combination of my interest in a
very small machine and Butler Lampson's interest in a $500 PDP-10
someday. And Chuck Thacker wanted to build a machine 10 times as fast
as a Data General Nova. And Chuck was the hardware genius, so he got to
decide what it was in the end, and he went off and in 3 1/2 months with
two technicians completely built the first machine that had the bitmap
display. And he built it not as small as I wanted it, but ten times
faster than I had planned, and it was a larger capacity machine. So it
was simultaneously both the first thing that was like a Mac, and the
first work station.
Interviewer:
THIS WAS THE ALTO.
Kay:
An absolute triumph. Yeah, this was the Alto.
Interviewer:
WHAT WAS THE BITMAP DISPLAY? WHY WAS THAT SO . . . ?
Kay:
Well, it was -- I mean, that was fairly controversial, because the problem is that bits were expensive back then. And before about 1971, bits were intolerably expensive, because memory was in the form of these little magnetic cores, and even a modest memory was many cubic feet and required expensive sense amplifiers and power supplies -- it was a mess. And in the early '70s, Intel came out
with a chip called the 1103, that had a thousand bits on it. Right? The
chips you buy today have like 4 million bits, and the Japanese are
making ones with 16 million -- 64 million bits. So this was a thousand
bits on a chip. This was the best thing that ever happened, because all of a sudden everything changed. This was one of these predictions that Bob Noyce and Gordon Moore had made, and it completely changed our relationship to computing hardware. And the people who wanted to build a big
time-sharing machine at PARC, including Chuck Thacker and Butler
Lampson, immediately started building one out of this memory. And it
was also the right kind of thing to build the Alto out of. But we still
couldn't have a lot of it. This was 1972 that we're talking about. And
so this notion -- bitmap display is something that we didn't invent.
People had talked about it; there were even small versions of it.
Television --
Interviewer:
EACH PART --
Kay:
Television is something that is discrete in one dimension and analog in
another dimension but you can think of it as being 300,000 separate
pixels on a screen, and it's something everybody would like, because
it's perfectly general. The nicest thing about television is that you
don't have to determine beforehand what kind of picture you're going to
show on it. You can show anything. You don't have to worry. Whereas
every other kind of display, you had to have some notion about what you were going to display in order to know how much power you should build into the display. So we spent a year and a half or so
trying different kinds of displays and making calculations and stuff,
and at some point we realized, oh, the heck with it, you know, we just
have to go for a bitmap display, and of course, we didn't know that
people would settle for tiny little displays like this, you know, so
the first display on the Alto actually was about two and a half times
the size of the first Mac. It was about 500,000 pixels and it was an 8
1/2 x 11 page display, the exact size that the Dynabook was supposed to
be.
Interviewer:
IS IT TRUE YOU PROGRAMMED THE COOKIE MONSTER? IS THAT TRUE?
Kay:
The Cookie Monster -- I invented the first painting program in
the summer of 1972 and it was programmed up by Steve Purcell, who was
one of our students, and again the idea was to get away from having
just crisp computer graphics, as they had been in the past, because the
idea was that people don't design on computers, and one of the reasons
they don't design on computers is they can't make things messy enough.
And my friend Nick, Nicholas Negroponte at MIT was also thinking
similar thoughts. He wanted people to be able to sketch as well as to
render. And so I invented the, the painting program. The painting
program was a direct result of thinking seriously about bitmap
displays. It's maybe the first thing you think of, because you are always painting on it; when you're putting up characters, you're painting characters. And it turned out to be very much like MacPaint many years later. I have a nice video showing it being used. And
the second thing I thought of was that having a bitmap display combined with multiple windows like the FLEX machine had would automatically allow us to do this overlapping idea, which we all thought was terribly important, because we thought this 8-1/2 x 11 display was impossibly small for all the stuff we wanted to put on, and
the overlapping windows would give you a way of having much more
effective screen space, as you, as you move things around.
Interviewer:
NOW THIS MACHINE, THE ALTO, THE PROTOTYPE WAS PRODUCED IN 1973. HOW
DOES IT COMPARE IN TERMS OF WHAT IT'S GOT, COMPARED WITH THE
MACINTOSHES AND THE MACHINES WE HAVE NOW.
Kay:
Well, I think the first thing is that the Macintosh represents an amount of work, even to get the first Mac out, equivalent to what we did at Xerox added on to it. So the screen design in the Macintosh is unique to the Macintosh and much nicer than what we used. I mean, there are all of these extra things on it. And of course the number of tools you can buy -- there's something like 9,000 pieces of software rendered in the Macintosh user interface that you can buy for it. So the scope is enormous. But in terms of the categories it's about the same, you know. We had painting with bitmaps, we had crisp computer graphics, we did the first
desktop publishing, the first WYSIWYG editors, and most of the
paraphernalia that people work with, with one exception, were part of
the original concept of the software that went on the Alto. And that
exception was spreadsheets.
Interviewer:
SO ONE OF THE THINGS THAT IS QUITE INTERESTING FOR AN OUTSIDER, THEY
SEE THIS, EARLY, MID-'70S, MOST OF THE KEY IDEAS OF THAT WE NOW REGARD
AS SORT OF INTIMATE COMPUTING ARE THERE. YET XEROX IS NOT A COMPUTER
COMPANY. DID YOU HAVE HOPE? WHAT WERE YOUR HOPES WHEN THE ALTO WAS
BUILT? DID YOU HOPE THAT THE WORLD WOULD SEE THIS PRODUCT?
Kay:
We -- I for one oscillated between -- because I thought of the Alto as
just an intermediate stage. I wanted the Dynabook. And I thought of
that user interface as a stopgap measure for the real thing, which is
-- see, I was basically interested not in the access literacy part,
which the user interface represents, but in -- [INTERRUPTION]
Interviewer:
DID YOU HOPE THE WORLD WOULD GET TO SEE THIS? BECAUSE THIS WAS NO
LONGER A GENERAL RESEARCH PROGRAM...
Kay:
Well, one of the things that we realized early in the game with Xerox
and its executives was that we were going to have to do a lot of
demonstrations. Because we originally started out trying to talk to
them and we wrote a bunch of papers called the Pendery Papers that expressed sort of what we were trying to do, and those did not get
received into any worthwhile corners of their minds. So we thought that
we would have to do demonstrations. So we were set up to do
demonstrations, we did lots of demonstrations. Also we didn't want to
be cut off from the research community that we had come from. So we
wanted to be able to give demos. And the experience with Xerox went
from, with the lab being fairly open to periods of paranoia, where the
lab was completely closed to visitors. And we used to say to Xerox,
look, you know, we're years ahead of everybody else, you know -- lead
time is everything in this business. And you just carry through on this
stuff -- don't worry, you know, you can show everybody in the world
this stuff, it doesn't matter, they won't be able to build it in time.
Interviewer:
WERE THEY --
Kay:
It's still true. I mean, if you look at the Macintosh, which came out
years later, look how long it's taken IBM to not quite get there yet.
They're still pushing that idea; Windows 3.0 has just come out. This stuff takes years to do. If you have a few years' lead time
on anybody and your people are capable, you can sustain it.
Interviewer:
SO XEROX COULD'VE HAD IT ALL? THERE MIGHT NOT BE AN APPLE.
Kay:
Well, I don't know about that, because it's really hard to predict that
kind of future. I was amazed at the number of people, for instance, who
bought IBM PCs, considering how useless I thought they were. And the interest of everybody in switching over to the overlapping window
interface with icons and mouse, I think shows that what people bought
millions of is not what they want. They want what we wanted early in
the '70s. But I think predicting what people will do under conditions
of expediency is impossible.
Interviewer:
IN THOSE DAYS, RIGHT, THERE WERE JUST A FEW ALTOS, YOU COULDN'T BUY
THEM RIGHT? THERE WEREN'T THAT MANY, AND XEROX DIDN'T SEEM TO HAVE ANY
IMMEDIATE PLANS TO SELL THEM.
Kay:
Yeah. Well, we actually set up a production line for the things, because, as I say, I needed quite a few, and they started getting popular, and I believe that upwards of 2,000 of them were built in the decade of the '70s. Certainly by 1975 or '76, almost everybody at PARC including the secretaries had Altos, and they were connected together by Ethernet, and there were page-a-second laser printers nearby that you could go to, so it was a very similar model to the offices that you see today. When Xerox made its fateful turndown in 1976 there were at least 500 machines operating the way machines do in the 1980s. So they were not turning down a paper idea or a single prototype, they were turning down something that had been functioning for three years.
Interviewer:
AND THIS WAS TURNING DOWN AN IDEA TO SELL A COMPUTER WITH A WORD
PROCESSOR AS OPPOSED TO A --
Kay:
Well, the story is complicated. It has to do with a bunch of things. I think the simplest way of characterizing what happened is that, just as IBM in the '50s turned down their chance at the Xerox machine because they ultimately thought they were in the computer business rather than the communications business, Xerox when push came to shove thought they were a copier company. You know, the problem with the railroads, right, is they tried to make faster locomotives rather than investing in the airlines. So there was that problem. There was the problem of the slightly higher cost of doing the Alto versus the word processor that they wanted to do. All these things contributed. I think one of the main problems is that the executive who made the final decision had no way of personally evaluating what he was being told from different sides. So it was complicated. I mean, the best they could've done -- in 1976, when they had to make the decision, you figure under the best conditions it would take a
couple of years to get a machine out, they would've had to charge
around $20,000 or so for it, which is not bad that early, because
that's what word processors were selling for, but it was right on the edge, and by the time the Star came out, which was too
little too late, it was I guess a $20,000 machine that should've been
sold for around $10,000. The Lisa had some of the same problem, it was
a $10,000 machine that should've sold for a couple of thousand.
[END OF TAPE F186]
Interviewer:
ALL RIGHT NOW IF YOU CAN RECOLLECT WHERE WE WERE YESTERDAY. WE WERE
TALKING ABOUT XEROX PARC. DO YOU LOOK BACK ON IT AS SORT OF A GOLDEN
AGE IN COMPUTING? THAT SO MANY PEOPLE WERE GATHERED FOR A PERIOD
TOGETHER.
Kay:
Well, I sort of feel it would be a little bit
presumptuous to declare it a golden age in computing. It was certainly
a golden age for me. I think the five years between '71 and '76 were
the most fun I've ever had doing research. And I think that there've
been five or six critical places in computing over the last forty years
that have affected the way people think about things, and PARC was one
of them.
Interviewer:
NOW YOU WEREN'T THE RUN OF THE MILL XEROX EXECUTIVES, AND STEWART BRAND
WROTE AN INTERESTING ARTICLE ABOUT YOU, DIDN'T HE?
Kay:
Oh yes, well, that was a hilarious little side excursion.
Stewart Brand was the person who put the Whole Earth Catalog together
and he got interested in computer people because he lived right across
the street from Engelbart's work at SRI. So he decided he'd do a piece
about the culture. And one of the places that was just starting up in
1971 and '72 was Xerox PARC, and so, you know, we all knew him and he
was a friend and we invited him in. He wrote this great article which
was published in the Rolling Stone which was really considered to be a
rag back then, especially by the East Coast establishment.
Interviewer:
IT DESCRIBES YOU PEOPLE AS SITTING ON BEAN BAGS AND...
Kay:
Yes, the whole culture. And we were photographed -- the photographs were all taken by Annie Leibovitz -- and you know it was a very Hollywood, California-type scene, and the Xerox people did not like it at all. They had an enormous reaction, so large a reaction that when Stewart republished the article in his book, they forced him not to use the name Xerox, and so he referred constantly in this article to "Shy Corporation," which is what he called it.
Interviewer:
NOW YOU WERE TALKING YESTERDAY ABOUT, ABOUT HAVING DONE ALL THIS WORK
ON THE ALTO, THE QUESTIONS OF WHETHER YOU REALLY WANTED TO GET IT OUT
THERE AND SO FORTH, AND YOU MADE LOTS OF DEMOS YOU SAID, NOW ONE IN
PARTICULAR, BIG EFFORT WAS MADE OF A SALES MEETING IN BOCA RATON WASN'T
IT?
Kay:
Yes, that was rather late in the game, though. That was quite a few
years after the Alto had been around, but there were constantly recirclings of trying ... this wasn't so much trying to get the Alto out as a product, although there were people who were very interested in it. A
lot of these meetings particularly the Boca Raton one had to do with
just getting Xerox at the top and its major executives to have some
sense of how the world was changing and what their future was likely to
be.
Interviewer:
DID YOU FEEL KIND OF DISAPPOINTED NOT SO MUCH AS YOU SAY, THAT THEY
MIGHT ___ HAVE COMMERCIAL REASONS FOR NOT BACKING IT, BUT DID YOU FEEL
THAT THEY REALLY UNDERSTOOD WHAT YOU'VE DONE, THAT THEY GOT IT?
Kay:
No, I don't think they got it. But I think the important thing about
the Xerox deal is that what they did promise to do was to give us relatively unsupervised support for a period of ten years. And that is what they did. We had our skirmishes with them, and there were some people who had to sacrifice body and soul to keep the umbrella over the group, but in fact Xerox turned out to be a very good funder. And there
was no promise or even any intent that I ever heard in the beginning
that Xerox was going to turn these into products. Most large
corporations use their long range research more as a hedge against
future disaster than as pure product development.
Interviewer:
SO IN A SENSE YOU WEREN'T PARTICULARLY SURPRISED THEN.
Kay:
Yeah, I was surprised. Sure, I mean, it's one thing for them not to
make any representations about doing it as a product, but the whole
setup was so obviously the way it was going to be that I was surprised,
sure, I was amazed.
Interviewer:
NOW WHILE THIS HAD BEEN GOING ON, TED HOFF HAD DONE THE WORK ON THE MICROCHIP, AND SO FORTH, AND HOBBYIST MACHINES STARTED TO APPEAR. WHAT
DID YOU GUYS MAKE OF THE HOBBYIST MACHINES?
Kay:
Well, you know, there was first, for instance, a 4-bit-wide chip and then an 8-bit chip, and so forth, and it was sort of inevitable that they would appear. For me, I was surprised that so many people were interested in them. I realized that an enormous number of people felt disenfranchised from computers completely, and these were a way of getting in, of being able to touch the new technology and so forth. But the kinds of software you could run on them were really limited. And so there were various opinions at PARC. Larry Tesler was much more interested in the 8-bit machines than I was, as an example. My feeling was that you had to have at least a 16-bit machine that went so-and-so fast in order to do all of this stuff, and in fact that's exactly the way it worked out. The Macintosh was a 16-bit machine that went thus-and-so fast, and you had to go that fast in order to do all the things that we were doing at PARC. And so from my standpoint I would have been just as happy if no machines had been built up until 1984 or so. Just from the standpoint -- you think of all of the unfortunate standards that were set, like MS-DOS and so forth, that are holding back the world even today. I think it would have been... but
you never know.
Interviewer:
YES, BUT ON THE OTHER HAND, AS YOU SAY, THEY RELEASED THIS PENT-UP SORT OF MASS OF DISENFRANCHISED PEOPLE. I MEAN THAT WAS WHAT WAS SURPRISING, EVEN THOUGH AS YOU SAID THEY WERE TOYS IN ONE SENSE.
Kay:
Yes, yes, and I think that depending on how you look at it certainly
one great thing was done on an 8-bit micro and that was VisiCalc. That
was one of the best things that's ever been done on a personal computer
as an idea. The reaction to that at PARC was both admiration and shock.
You know, we couldn't believe that we hadn't seen it. That's how
arrogant we were. But I think that, aside from that, almost everything else done on the 8-bit micros was a sort of reversion back to the early '60s when machines were very weak. Most of the 8-bit
micros had either no operating system or terrible operating systems.
It's not clear which is worse. And many of those operating systems are
still hanging around today.
Interviewer:
NOW WHAT YOU SAW HAPPEN AFTER THIS WORK WAS DONE AT PARC, IN THE NEXT
YEARS AHEAD, THIS WAS GOING TO BE EXPORTED FROM A COMPANY LIKE XEROX,
THAT WASN'T REALLY A COMPUTER COMPANY INTO ONE OF THESE NEW FLEDGLING
POPULIST HOBBYIST COMPANIES. AND THAT WAS A REMARKABLE TRANSITION
WASN'T IT?
Kay:
Yeah, I actually thought that Xerox was the right place to launch from
since it wasn't in the computer business. It really didn't have any history to give up there. Whereas, I think it was more remarkable for a company that was deeply wedded to a certain kind of 8-bit computing, like Apple was, to be willing to throw it all out and start in a completely new way. But this is very much one of Steve Jobs'
biggest strengths.
Interviewer:
DO YOU HAVE ANY RECOLLECTION OF THE EVENTS LEADING UP TO JOBS' VISIT?
WERE YOU INVOLVED AT ALL IN THAT?
Kay:
No, no, I was at the famous demo that Larry Tesler gave Steve Jobs
and some of the other people from, from Apple, but...
Interviewer:
WHAT HAD YOU HEARD? HAD YOU HEARD THAT XEROX WAS INTERESTED IN BUYING A
STAKE IN APPLE, WAS THAT KNOWN?
Kay:
No, Xerox had a stake in Apple by then. There's a company called
Xerox Development Corporation and they had stakes in various companies.
I forget how much it was, 10 or 20 percent or something like that. But
I don't think, you know -- of course, at PARC we thought the Apple was the better of the 8-bit micros that were around and so forth. But we gave many, many demos.
So it was not unusual to have somebody like Steve Jobs and other people
come over.
Interviewer:
DO YOU REMEMBER TALKING TO JOBS THAT DAY?
Kay:
Sure, sure-
Interviewer:
DID HE GET IT THEN?
Kay:
He got it right away. I mean there are two kinds of demos. There are
ones where you are struggling to get the person to see what's going on.
We found for many people we had to put our -- they weren't used to
looking at a computer screen so we had to put our finger on the screen
and get them to watch our finger and then see if they could focus in
another quarter of an inch and stuff. And then there are other people
who tell you what is going to happen next. I mean they are so on top of
the demo that they know: "And now you're going to show me -- yes, here it is," you know. Those are great demos. I had the great pleasure of
showing Smalltalk to Kristen Nygaard who is the inventor of Simula,
which is one of the major influences on Smalltalk. And it was that kind
of demo, he just knew everything I was going to show him. It was stuff
that he had been dreaming about for years and here it was. And we've
been friends ever since.
Interviewer:
NOW, WHAT DO YOU THINK STEVE JOBS' GREATEST ACHIEVEMENT IS THEN? HE TOOK
THIS, HE GOT IT AND HE TOOK IT BACK.
Kay:
Well, I mean, what he took back was an idea that things could be done a different way. And particularly Steve, who is such a visual person, very sensitive to how things look and appear and
stuff. And the whole notion of the graphics interface was something
that really captured his imagination. But ... and eventually some
people from PARC went over to Apple, Larry Tesler was one of them. But
what happened there was that they pretty much took what we had done as
a departure point and did a completely new design. And I remember, I
didn't see any of it until the Lisa came out in 1983. And when I saw it
I thought it was just the best thing I had ever seen. It was just
incredible what... the Lisa was beautiful.
Interviewer:
OKAY, BUT THE LISA WAS STILL TOO EXPENSIVE. SO REALLY THE MACINTOSH WAS
THE THRESHOLD MACHINE WHICH REALLY CHANGED...
Kay:
Yeah, I mean, the Macintosh in many ways is not as good a design as the Lisa -- it was a severe compromise. And the thing that was great about it is that it used the weak power of the then 16-bit 68000 to great advantage, where the Lisa couldn't quite make it with all the things it was trying to do. And so in many ways you can think of the Macintosh II as Apple's return to the Lisa.
Interviewer:
NOW WHEN THE MACINTOSH CAME OUT, AND CERTAINLY THE HISTORY OF THE
COMPUTER SINCE THEN HAS BEEN VERY MUCH A SORT OF A VINDICATION OF
EVERYTHING YOU DID AT PARC, ISN'T IT? IN A SENSE, HADN'T THE MAIN
VICTORY BEEN WON BY THAT POINT? PEOPLE HAD REALIZED AT LEAST COMPUTERS
COULD BE DIFFERENT FROM WHAT THEY WERE...
Kay:
Well, I don't think any of us at PARC were fighting a war, so it wasn't clear who we were victorious against. But if it meant getting lots of people convinced, I don't think it was won, because the majority of people who compute today still don't interact with a computer in that way. We're still, you know, there's this big urgency for -- (hang on one second, I can hear my Macintosh dinging. It needs paper).
Interviewer:
SO YOU WERE SAYING THAT WHILE IT INTRODUCED A LOT OF PEOPLE TO THIS
TECHNIQUE, IT HADN'T...
Kay:
Yeah, I think the important thing is that what we did at PARC was not an ultimate answer to the problem of interacting with computers. So I think that a lot more people are going to be interested in it than the millions that are now, and what will inevitably happen is that people will continue to be interested in it long after it is worthwhile interacting with computers that way. So you have this thing where people have this tendency to, once they like something, get religious about it, and it hangs on and hangs on long beyond its actual use.
Interviewer:
NOW, AT THE TIME MACINTOSH CAME OUT, APPLE ESSENTIALLY WAS BETTING THE
COMPANY ON IT, RIGHT? AND THERE WAS THIS VERY POPULAR STANDARD MS-DOS
IN EXISTENCE ON MILLIONS OF COMPUTERS. WHAT ARE THE REASONS WHY MOST
PEOPLE FIND ONE MORE INTUITIVE AND EASIER THAN THE OTHER? AND I WANT
YOU TO TALK ABOUT SOME OF THE THINGS YOU TOLD ME YESTERDAY.
Kay:
Yeah, well, I don't think we ultimately know what the reasons are, but we
certainly were guided by some theories that we have different ways of
knowing about the world and only one of them is through language. We
have a kinesthetic way of knowing about the world through touch. We
have a visual way of knowing about the world. The kinesthetic and
visual ways seem to be more intuitive for people. They're less crisp.
They deal more with analogy, and the touch thing makes you feel at
home. You're not isolated from things when you're in contact with them,
you're grounded. And so, I think that for me the, the major reason the
Macintosh works is because the mouse gives you just the tiniest way of
putting your hand in through the screen to touch the objects that
you're working on. The screen gives you a theatrical representation of
something much more complex, something that computer designers don't
even want to think about as a computer executing two million
instructions per second with thousands and thousands of instructions in
there. And then finally the least successful thing that we did, that
we're still working on, is a symbolic underpinning for all of this that
allows you to express back into this little pocket universe.
Interviewer:
CERTAINLY THE IMAGE YOU HAVE -- YOU HAVE A FILM WHERE YOU SHOW A TWO
YEAR-OLD CHILD -- THE IMAGE OF THE COMPUTER UNDERWENT QUITE A
TRANSFORMATION.
Kay:
Yes, I think so. That film of the little girl, 22 months old, using a
Mac very confidently -- she'd already been using it for about six
months or so -- strikes a lot of people in a way that words don't.
Because they see here's this little child, about 70 percent literate in
the access part of the Macintosh user interface. She can start up
applications, and do things in them, save them away, and all of those
things. That's what we were trying to do, to extend this thing from
being something like a specialized tool or a car to something that is
more like media that extends down into childhood and up through the
elderly.
Interviewer:
I WANT TO TURN A BIT NOW TO THE PRACTICE OF WRITING BOTH INTERFACES AND
APPLICATION SOFTWARE ITSELF, AND SOME OF THE PROBLEMS WITH THE
DIFFERENCES BETWEEN SOFTWARE AND HARDWARE. BASICALLY, HOW WOULD YOU
CHARACTERIZE THE DIFFERENCE BETWEEN SOFTWARE AND HARDWARE? BECAUSE THE
WORD SOFTWARE, AS FAR AS I CAN SEE, ONLY GREW UP ABOUT 1960, IT'S NOT,
IT WAS SOMETHING NEW IN A WAY.
Kay:
That's true, although the notion of the stored program goes back a long way. But I think, for me, there isn't any real difference: hardware is just software that's crystallized early. Because basically what you always have is something that you can remember things in, some medium that can take on markings. And there
are different kinds of markings. Then you have to have something that
allows you to change those markings and read them and change them. And
the simplest kind of computer you can build is one that only has a
couple of states in it and it's all memory. It's practically a clock
and everything is out on the memory. So there's almost no hardware
there at all. And the main reason there's a fair amount of bits in the hardware of today's computers is there are a lot of functions you
would like to compute rapidly and so there are special purpose little
pieces of logic in there for doing fast arithmetic and so forth.
Interviewer:
AND THAT GREW OUT OF THIS HISTORICAL ACCIDENT THAT THE FIRST FUNCTIONS
PEOPLE WANTED TO COMPUTE WERE COMPUTATIONAL. SO IT MADE SENSE TO PUT IN
SPECIAL CIRCUITS TO DO...
Kay:
To do arithmetic and stuff. Yeah, I think it definitely is true, and there have been computer designs and computers built
that look at their memory in a completely different way, of doing
content-addressed storage and having many thousands of processors
looking at storage at the same time where hardly anything resembling
arithmetic is done most of the time. But in fact, it doesn't matter,
because arithmetic itself is just a manifestation of a particular way
of putting the logic elements together.
Interviewer:
SO AT ITS VERY BASIC LEVEL -- THE WAY TURING MIGHT HAVE THOUGHT ABOUT
THIS -- THIS IS A GENERAL PURPOSE MANIPULATOR OF SYMBOLS. IT TAKES SOME
MARKINGS, TRANSFORMS THEM, AND PUTS OUT OTHER MARKINGS.
Kay:
Right, right and there's a trade-off between the number of different
kinds of markings that you want to have the memory store, and the
amount of logic that you need to be able to carry out computing
functions.
[END OF TAPE F188]
Interviewer:
IF WE TAKE AN AVERAGE COMPUTER, ITS HARDWARE CAN CARRY OUT, WHAT, JUST
ABOUT A HUNDRED BASIC FUNCTIONS?
Kay:
Sometimes, yeah, these days they're usually up in the range of 300 or
so. I think for, for most people trying to understand how a computer
works down inside, it's actually mostly memory, a vast amount of very
undifferentiated stuff that is just for holding markings. Then there's
a very small amount of logic that carries out a few basic instructions.
And you can think of all of the rest of the hundreds of instructions as
things that the basic instructions could carry out but have been
encoded especially for speed.
Interviewer:
AND THEN WHAT THE PROGRAMMER HAS TO DO, IS USING THESE AVAILABLE
FACILITIES IT HAS TO INSTRUCT IT TO DO OTHER THINGS...
Kay:
Yeah, well most good programmers are relatively lazy, so the last thing
they want to do is spend a lot of time grubbing around down in the
machine code of the computer. So, usually what they do is, they write a
couple of pieces of software, one is called an operating system,
another one is called a programming language. And often they are both the same, as Smalltalk was. And what that piece of software does is to create a virtual machine which is much more hospitable. And so it's the machine simulating a much nicer machine, and all of a sudden
life is much more fun. And often you will use that machine to simulate
an even nicer machine. Eventually things slow down so that the, the
most wonderful machine you could have might run too slowly to be
interesting, but some people still like to program in terms of these
highly idealized machines because maybe they'll be built one of these
days.
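(A minimal sketch of the layering Kay describes, in Python rather than Smalltalk; the three-instruction machine and its names are invented for illustration: a few lines of "host" code interpret a much nicer guest machine.)

```python
# A tiny stack-based virtual machine: the host grinds through this loop
# while the guest presents only the friendly instructions we wished for.
def run(program):
    stack, pc = [], 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "print":
            print(stack.pop())
        pc += 1

run([("push", 2), ("push", 40), ("add",), ("print",)])   # prints 42
```

Each further layer (a nicer language, then a nicer interface) is another such simulation, which is why things slow down as the tower of virtual machines gets taller.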
Interviewer:
SO IN A SENSE ARE YOU TELLING US THAT WHAT WE SEE ON OUR MACINTOSHES IS
A MUCH MORE PLEASANT MACHINE...?
Kay:
Yes, in fact the user interface you can think of as the last layer that
is designed to further sweeten the realities of all these different
layerings that are going on. The most important thing is that when you're baking a cake in the kitchen, you don't have to worry about the details of organic chemistry, because they're already subsumed by the ingredients that you have, and the recipes are there to make sure that what you have converges on a cake rather than a mess. And for exactly the same reason people like to confine where they're working to an area where they know roughly what is going to happen, and if that area doesn't work out well, then they'll go to a deeper level and make a few changes.
Interviewer:
NOW ALL OF THIS OF COURSE DEPENDS ON THE COMPUTER BEING ABLE TO CARRY
OUT INSTRUCTIONS FAST? OTHERWISE WE WOULDN'T BE HAVING THIS
CONVERSATION...
Kay:
Simulation would not be that much fun if it were really slow. Right, as it was when people had to calculate simple trajectories of shells in World War II: they'd have 300 or 400 people on desk calculators just calculating a simple ballistic trajectory. That was a simulation, and in wartime it was deemed important enough to put these 400 people to work for days doing these things.
Interviewer:
NOW, BEFORE THE COMPUTER WAS BUILT, ALAN TURING, APPROACHING THE
SUBJECT FROM A DIFFERENT POINT OF VIEW SPOKE OF A VERY SIMPLE MACHINE
WHICH COULD IMITATE ALL OTHER MACHINES. THE DIGITAL COMPUTER IS AN
EXAMPLE OF ONE SUCH SORT OF MACHINE, ISN'T IT? WHAT DOES IT MEAN THEN
IN THAT SENSE TO CALL IT A UNIVERSAL MACHINE?
Kay:
Well I think it's one of the niftiest things to wrap your head around.
And that is that regardless of what kind of thing you have, if it has a few basic properties -- like a memory, and the ability to put symbols into memory and take them out and make changes and make a few tests on them -- that is enough machinery to enable you to simulate any computer that has ever existed or any computer that will ever exist. How interesting the result will be depends on how fast the simulation runs. But in fact, people sometimes think that a little machine on a desk, or even a Sharp Wizard calculator, is a different kind of thing than a big Cray computer. But in fact, they are the same kind of thing. You could calculate one of the greatest pieces of 3D graphics on a Sharp Wizard, given enough centuries to do it and enough external memory to put parts of the calculation in.
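(A minimal sketch, not from the interview: the few properties Kay lists -- memory, reading and writing markings, a few tests -- are enough to simulate another machine. The toy rule table and names below are invented for illustration.)

```python
# A toy Turing-style machine: a dictionary is the memory that takes on markings;
# each step reads a marking, tests it against a rule, rewrites it, and moves.
def run(tape, rules, state="start", pos=0, max_steps=1000):
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        symbol = cells.get(pos, "_")          # read a marking
        if (state, symbol) not in rules:      # test: no rule means halt
            break
        write, move, state = rules[(state, symbol)]
        cells[pos] = write                    # change the marking
        pos += move                           # move along the memory
    return "".join(cells[i] for i in sorted(cells))

# Rules for a machine that inverts a string of 0s and 1s, then halts.
flip = {("start", "0"): ("1", 1, "start"),
        ("start", "1"): ("0", 1, "start")}
print(run("1011", flip))   # -> 0100
```

Nothing about the simulator cares how big or fast the underlying hardware is; speed only decides how interesting the result is in practice.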
Interviewer:
SO THIS MACHINE, EVEN THOUGH IT WAS BUILT MAINLY TO DO SCIENTIFIC
CALCULATIONS, SOME PEOPLE REALIZED RIGHT FROM THE BEGINNING THAT IT HAD
JUST ENORMOUS POTENTIAL.
Kay:
Yes, in fact the first person who seemed to realize it was Ada Augusta Lovelace, who was the sidekick of Babbage and maybe history's first programmer. And she wrote in one of her papers that "the analytic engine weaves algebraic patterns just as the Jacquard loom weaves patterns in silk." And she understood that the generality of representation was the same kind as you can represent in books, which is namely the kinds of things that we can talk about. It wasn't restricted to just numeric calculations but extended into the realm of general symbolization of models.
Interviewer:
THIS AMAZING CAPACITY HAS A VERY SEDUCTIVE QUALITY ABOUT IT.
Kay:
It is to us. Right.
Interviewer:
I'M THINKING OF SAY IN THE LATE '50S AT MIT, SOME OF THE EARLY HACKERS
-- ONCE PEOPLE WHO REALIZED THE POTENTIAL THAT THEY COULD MAKE THIS
MACHINE DO THIS, THAT, AND THE OTHER -- OR IVAN SUTHERLAND OR WHOMEVER
YOU MENTIONED...
Kay:
Yeah, I think two things happened. The Turing thing was there, and most of the early computer people either were mathematicians or had a lot of math in their background. And there were other formulations like that of Turing: Gödel's theorem also was a way of building a universal machine, and there is a thing called Post's production system which is similar -- they were all studying mathematical problems, but they translated well to computer machinery. And most computer people were aware of them. Then the second thing that had to happen was that there had to be some way of believing that the computer could get big enough and fast enough so that anything Turing said made any practical sense. You know, so you have to have both of those things.
And what's happened is that very early on in the -- even in the '60s
but especially in the '70s -- people regularly would build very simple
computers and then use those simple computers that ran very quickly to
simulate the kind of hardware they really wanted. That's what we did at
Xerox PARC. Chuck Thacker, who did the Alto, built a machine with almost no logic for that period of time. I mean, in 1972 it had about 160 chips, two boards' worth of chips. So that was very, very few gates. But it ran extremely fast, about five times faster than its main memory. And because of that the Alto could be many different machines. It was a Smalltalk machine when we ran Smalltalk on it. It was a Mesa machine when they ran Mesa on it. And it could take on different personalities. And I've always thought that was a really good way to go: until you actually know what the biblical truth is on what your computer architecture should be, why not have a
computer that you can mold into the kind of virtual machine that you
want, right at the lowest level.
Interviewer:
THIS TRACTABILITY, THIS MOULDABILITY GIVES THE COMPUTER A ROMANTICISM
THAT OTHER MACHINES LACK, DOESN'T IT?
Kay:
Yeah, well, I agree on the romantic part, but of course I'm fairly romantic about musical instruments, as, you know, most musicians are. Most musicians adore their instruments. And so I think that, from my standpoint, my romance is very much connected to the same way I think about other marvelous contraptions that we've made, including musical instruments but also sailplanes, the kind of marvelous inventions that Paul MacCready makes, and so forth.
Interviewer:
BUT THEY DON'T HAVE THIS CAPACITY OF BUILDING A PRIVATE UNIVERSE DO
THEY?
Kay:
No.
Interviewer:
THIS IS SPECIAL TO THE COMPUTER...
Kay:
No -- with a musical instrument you can build a private universe for somebody else. That is, you can make one, but it doesn't have the tangibility that the computer has. On the other hand, it's
worthwhile remembering though that no matter what we build on the
computer and no matter what appears on the screen, it doesn't have any
particular sense to it unless some human is there comprehending it. So,
it does have something in common with building a beautiful piece of music: ultimately there is a listener, it might be just the composer, but what comes out has to, in some way, come back into the human sensorium and be understood.
Interviewer:
NOW, YOU USE A METAPHOR WHICH IS QUITE HELPFUL IN UNDERSTANDING
PROGRAMMING OF A PUPPET THEATRE. I WONDER IF YOU COULD GIVE US THAT.
Kay:
Yeah, I think I got talked into that by some magazine. One of the
traditional ways of programming on a computer is to think of the memory
part of it as being inert like ingredients in a kitchen. And to think
of the programs that you write as being like the recipes and then the
central processor is kind of like the cook who's looking at the recipe book and then goes over and stirs the ingredients, and, if you're lucky, you wind up with something good to eat. Another way of thinking about that is that it's like a puppet theatre, because the puppets are all inert and there are puppet masters going around. In the computer it's a very energetic puppet master, because there's generally one that goes around twitching all of the strings of all of the puppets fast enough so that it seems like something real is going on. But another way of looking at programming is to say: well, why not let the puppets pull their own strings. So we'll let each puppet have, in effect, its own little computer within this larger computer, and we'll let them be much more self-contained; there won't be puppet masters from the outside. And that's called object-oriented programming. And the benefits of it are simplicity and ease of writing the programs.
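(A minimal sketch of the two styles Kay contrasts, in Python rather than Smalltalk; the puppet example and names are invented for illustration, not PARC code.)

```python
# "Puppet master" style: one central routine twitches every puppet's strings
# and has to know each puppet's internals.
def master_step(puppets):
    for p in puppets:
        p["x"] += p["dx"]

# Object-oriented style: each puppet pulls its own strings when sent a message.
class Puppet:
    def __init__(self, x, dx):
        self.x, self.dx = x, dx   # state is self-contained
    def step(self):               # behavior lives with the data
        self.x += self.dx

stage = [Puppet(0, 1), Puppet(10, -2)]
for puppet in stage:
    puppet.step()                 # send a message; the puppet decides how to move
print([p.x for p in stage])       # -> [1, 8]
```

The central loop no longer needs to know how any particular puppet moves; that knowledge is sealed up inside each object, which is the simplicity Kay is pointing at.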
Interviewer:
IN A PROCEDURAL PROGRAMMING, IF I WANT TO TELL MY PUPPET MASTER WHAT TO
DO, I HAVE TO LIST EVERYTHING ABSOLUTELY IN THE RIGHT ORDER. IN
OBJECT-ORIENTED, I BUILD MY OBJECTS, GIVE THEM BEHAVIOR AND...
Kay:
Yeah, well, I think there's a continuum from totally lock-step proceduralism, to trying to deal with multiple processes, to having objects which have many of their processes inside of them and act much less sequentially and so forth, to what's coming in the future, which is something called agent-oriented programming or modular control programming -- it hasn't got a good name yet. But it's something where the elements are much more like biological cells, in that they're quite self-contained. They may spend 90 percent of their energies just maintaining their own equilibrium, and maybe only 10 percent contributes to the larger system as a whole.
Interviewer:
AND THESE ARE PRESUMABLY EVEN HIGHER LEVELS OF VIRTUAL MACHINE, AS IT
WERE?
Kay:
Yes, that's a very good way of thinking about them.
Interviewer:
EVERY TIME WE GO FURTHER UP TO A HIGHER LEVEL OF VIRTUALITY THAT SUITS
US, IT GENERALLY MEANS THAT THE COMPUTER HAS TO WORK EVEN HARDER.
Kay:
Yeah, oh, I don't think so -- the computer is only working at one
level, any given computer. It's always executing two million
instructions per second, so you can't make it work harder. But it's
like having a 5 horsepower go-cart and putting it up various grades of
hills: there will eventually be a hill that it won't be able to climb.
It's always putting out 5 horsepower and it needs 10 to get up that
particular hill. And that's what happens -- things just start slowing
down as you put more and more burden on an old-style processor.
Interviewer:
CAN YOU GIVE ME AN IDEA OF WHAT'S GOING ON IF I'M LOOKING AT A SCREEN
AND I WANT TO DO SOMETHING LIKE OPEN A FILE ON THE MAC DESKTOP...
SOME NOTION OF HOW MANY OPERATIONS MIGHT BE INVOLVED?
Kay:
Well, you can figure it out, because the typical Mac these days
executes about two million instructions per second. Say you're opening
a file: it has to go out to the disc, so it's complicated, because now
it depends on the speed of the disc moving stuff back and forth. But
suppose you're just in the MultiFinder, you're in this window doing
something and you put the mouse in another window, and that window
comes up to the top of the screen. You can easily just get out a
stopwatch: if it's a big, lumbering thing and it takes about a second
to rebuild the screen, then two million instructions have been
executed. If it takes two seconds, then four million instructions have
been executed. And it's always executing that many instructions per
second, even when it's just idling.
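[The arithmetic Kay is doing is just rate times time; a tiny sketch
using the two-million-instructions-per-second figure he quotes, which
is his illustrative number rather than a measurement:]

```python
INSTRUCTIONS_PER_SECOND = 2_000_000  # Kay's figure for a typical Mac of the day

def instructions_executed(stopwatch_seconds):
    # Total instructions the processor ran while you timed the screen rebuild.
    return INSTRUCTIONS_PER_SECOND * stopwatch_seconds

print(instructions_executed(1))  # 2000000 -- a one-second rebuild
print(instructions_executed(2))  # 4000000 -- a two-second rebuild
```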
Interviewer:
THERE'S IMMENSE, MONUMENTAL COMPLEXITY GOING ON UNDERNEATH THESE SIMPLE
THINGS...
Kay:
Yeah, and often the complexity is something that is a by-product of the
way the system was programmed rather than being intrinsic.
Interviewer:
I WANT TO MOVE ON NOW -- I'M GOING TO MOVE ON TO YOUR CURRENT WORK WITH
THE SCHOOLS, BUT I WANT FIRST FOR YOU TO TALK ABOUT THE IDEA OF TOOLS
AND AGENTS, HISTORICALLY, AND SO FORTH.
Kay:
Okay, well the way, the way I think about tools and agents is, you need
some sort of perspective to think about them. And I think about them in
terms of the way we've extended ourselves over the last several hundred
thousand years. And when you, when you say extension to somebody they
almost always come back and say "tools." And indeed there have been
levers and wheels and physical tools, but there have also been mental
tools like language and mathematics. And I think of tools as being
extensions of the gesture, as a way of manipulating things. You're
manipulating symbols when you're dealing with language. You're bringing
things that are hard to deal with into your power via your hand or
something like your hand. So the 'M-word,' to me, for tools is
manipulation. And tools are things that you watch while you manipulate
them. And then the other main way people have extended themselves is by
getting other people to take on their goals. Mumford called this making
"mega-machines." He said that for most of human history most machinery
made by humans has had other humans as its moving parts. So we make
cities and cultures, and there are groups trying to do this and groups
trying to do that. And, there are fewer goals in those kinds of groups
than there are people. They've taken on each other's goals, traded off,
in one way or the other and they are communicating. And the kind of
entity that can take on your goals and act in your behalf, we call an
agent. So an agent is something that watches you and you manage it. So
the 'M-word' is management for agent, and manipulation for tools. And
in the ...
Interviewer:
YOU'RE SAYING THAT AGENTS HAVE BEEN PEOPLE UP TO NOW, WHEREAS TOOLS
HAVE BEEN ....
Kay:
Yeah, you could say that a shepherd dog, or maybe a horse, has been an
agent -- maybe even a thermostat, though you have to work fairly hard
at building a thermostat to get it to take on the goal of what
temperature you want. But, by and large, they've been people up to now.
And the interesting thing about computers, when you're building agents
on them, is that the agents don't have to be as smart as people, just
as a thermostat does not have to be as smart as a person to be useful.
The main thing you want it to do is to be able to take on some
collection of goals and to be able to deal with those goals while
you're not around.
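[A toy sketch of the thermostat-as-agent idea in Python; the class name
and the fake sensor are invented for illustration. The point is only
that the object takes on one delegated goal and keeps pursuing it
without supervision.]

```python
import random

class Thermostat:
    """A minimal 'agent': it takes on one goal (a target temperature)
    and keeps working toward it while you're not around."""

    def __init__(self, target_c):
        self.target_c = target_c     # the goal delegated to it
        self.heater_on = False
        self._temp_c = 18.0          # stand-in for the real room temperature

    def read_temperature(self):
        # Fake sensor: the room drifts down when the heater is off, up when on.
        self._temp_c += (0.5 if self.heater_on else -0.5) + random.uniform(-0.1, 0.1)
        return self._temp_c

    def step(self):
        temp = self.read_temperature()
        self.heater_on = temp < self.target_c   # simple goal-seeking rule
        return temp, self.heater_on

agent = Thermostat(target_c=21.0)
for _ in range(5):                   # in reality this loop runs while you're away
    temp, heating = agent.step()
    print(f"{temp:.1f} C, heater {'on' if heating else 'off'}")
```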
Interviewer:
SO AN AGENT HAS TO HAVE SOME ARTIFICIAL INTELLIGENCE IN IT?
Kay:
Yeah, if you like to use that term. You could call it "flexible
competence" and make it sound a little less loaded.
Interviewer:
SO HOW WOULD THIS WORK? YOU FEEL THAT THE COMPUTERS OF THE NEXT 10, 20
YEARS IN ADDITION TO HAVING THE RICH INTERFACES WE HAVE, WILL HAVE
PERSONAL AGENTS AS WELL?
Kay:
Yeah, one of the ways I think about looking ahead into the future is
to try and find analogies that might actually make some sense, and also
to look for driving forces. So one of the driving forces for the
PARC-type user interface came just from there being inexpensive
ICs...integrated circuits around. You start getting a proliferation of
computers that are inexpensive enough for people to buy, and all of a
sudden the kinds of people who might want to use computers change
completely, and so all of a sudden you need a much easier-to-use user
interface. There is a driving force to do something, because it isn't
just graduate students any more. To me the driving force for
agents is pervasive networking, because the techniques used on the
Macintosh don't work well when you're connected up to a trillion
objects scattered all over the world. You need something looking for
potential objects that will further your own goals. And you need that
something to be looking 24 hours a day. So we think that what we'll
have is, you know, 10, 15, 20 or more little agents -- many of them not
particularly intelligent, but able to flexibly take on a goal that we
have. An example of one is an agent that goes out and finds you the
newspaper you'd most like to read at breakfast every morning. So, all
night long it works. It can touch dozens of different
news sources, the Associated Press, New York Times and so forth,
looking for things that are relevant to you. It can go to other sources
for getting photographs and so forth. It can do the news gathering with
a particular interest in the kinds of things that you're...that you
have been involved in. A headline could say, "New fighting in
Afghanistan," or it might say, "Your 3 o'clock meeting was cancelled
today," because news now could involve your own electronic mail. The
sidebar might say, "Your children slept well last night." And this is
an interesting example of an agent because it's one that was built
about ten years ago. It did not require a large amount of intelligence
in order to work. Its major strength was its ability to work 24 hours a
day while you weren't there, and with a limited ability to do matching
against what you said you wanted and what it thought you wanted, it
could do a great deal of useful work for you.
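[A minimal sketch of the kind of overnight news-filtering agent Kay
describes, in Python; the sources, headlines, and keyword-weight
matching here are invented stand-ins for whatever the real system did.]

```python
# Hypothetical overnight run: score incoming items against the reader's interests.
interests = {"afghanistan": 3, "meeting": 2, "children": 1}

overnight_items = [
    {"source": "AP",    "headline": "New fighting in Afghanistan"},
    {"source": "email", "headline": "Your 3 o'clock meeting was cancelled today"},
    {"source": "home",  "headline": "Your children slept well last night"},
    {"source": "AP",    "headline": "Local sports roundup"},
]

def relevance(item, interests):
    # Crude match: sum the weights of interest keywords found in the headline.
    words = item["headline"].lower().split()
    return sum(weight for keyword, weight in interests.items() if keyword in words)

# Build this morning's "front page": most relevant items first, the rest dropped.
front_page = sorted(
    (item for item in overnight_items if relevance(item, interests) > 0),
    key=lambda item: relevance(item, interests),
    reverse=True,
)
for item in front_page:
    print(f"[{item['source']}] {item['headline']}")
```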
[END OF TAPE F189]
Interviewer:
SO WE THINK OF -- THERE HAVE BEEN A NUMBER OF REVOLUTIONS IN THE
HISTORY OF COMPUTERS SO FAR, BUT WE'RE THINKING NOW -- MOST PEOPLE
THINK OF THE COMPUTER AS A STAND ALONE, DESKTOP OBJECT, RIGHT? WHERE DO
YOU SEE -- TYING THIS IN WITH YOUR DYNABOOK CONCEPT, WHAT YOU'VE BEEN
SAYING ABOUT NETWORKING AND AGENTS -- WHERE DO YOU SEE THE NEXT THING
TAKING US?
Kay:
Yeah, the way I think about that is that there are three very different
ways of relating the human to the computer. One is the institutional
way of the time-sharing mainframe, one is the desktop way where you
control all the stuff, and then the third way is the intimate way,
which is the Dynabook way, which is continuously connected into the
worldwide informational network. So, it's part--
Interviewer:
WHAT IS A DYNABOOK EXACTLY?
Kay:
Well, a Dynabook is sort of a figment of imagination. It was a Holy
Grail that got us going. It was a cardboard model that allowed us to
avoid having meetings about what we were trying to do. It was a lot of
different things, but it was basically a service concept, not a box
concept. So, there were actually three physical renderings of the
Dynabook we thought about: one was the notebook; one was something that
went in your pocket with a head-mounted display and glasses, since I
had worked with Ivan Sutherland's head-mounted display in the '60s; and
then one was Nicholas Negroponte's idea of the sensitive wristwatch --
that 20 years or so in the future, when there is a network wherever
there is an electric plug in the wall, your user interface will follow
you from room to room as you go. Everything has become pervasive; you
don't need to carry a big computer or even a tiny computer around with
you. So, the whole idea behind the Dynabook was the kind of service and
your relationship to it, which should be intimate, casual, mundane. You
should be able to aspire to the heights on it, just as when you learn
English you can aspire to the heights of Shakespeare, but you're not
forced to do what Shakespeare did every time you use the language. So,
this is the idea of having a nice, connected ramp without lots of bumps
in it -- as Seymour Papert likes to say, "low threshold, no ceiling."
Interviewer:
NOW GIVEN YOU HAVE THAT CONCEPT OF THE FUTURE, YOUR CURRENT WORK AT THE
MOMENT AT THE OPEN SCHOOL HERE IN LOS ANGELES WITH CHILDREN WHO MAY
INHERIT SUCH A MARVELOUS COMPUTER, WHAT ARE THE OBJECTIVES OF THIS
WORK?
Kay:
Well, there are several. One is that Apple traditionally has been a
company very interested in the educational process and in helping
children in schools, and so we do a lot of things in this school that
have to do with thinking about how schooling might be in the future.
Then, specifically, one of the things that we do is a project that's
been going on for about four years now, to try and find ways that will
allow children to write in the computer as fluently as they can now
read using the Macintosh user interface. And, since artificial
intelligence is coming along, we're trying to find ways to both
understand artificial intelligence and understand how to program it, by
putting together a set of tools that allow children to do the kind of
artificial intelligence programming normally only done by adults.
Interviewer:
NOW THIS IS DONE THROUGH A RATHER AMBITIOUS SIMULATION.
Kay:
Yeah, well -- you think of adults trying to simulate humans and so
forth. Humans are pretty tough; nobody has done that simulation yet.
I've always felt it would be a good idea to work our way up through the
food chain and start off with fairly simple animals and see how they
interact with the world. That's something that children are interested
in. And so quite a few years ago we got the idea that it would be
really great if we could give children an environment where they could
create ecologies to study animals and plants, and then build those
models into the computer and see if those models reflected what they
thought they understood from the real world. So there's a constant
comparison between the real world and the computer model. The school
has torn up part of its playground to make a life lab, which has both
animals and plants in it. The classrooms have animal cages and
aquariums and so forth. And so
there's a lot of real animals to study. And then we also have
Macintoshes with a new system we've designed called Playground, that
tries to bring some of the techniques of artificial intelligence
programming to 8, 9, and 10 year-olds right now.
Interviewer:
ARE YOU TRYING TO CREATE LITERACY -- COMPUTER LITERACY -- IN CHILDREN?
ARE YOU THINKING ABOUT THAT?
Kay:
Well...
Interviewer:
IS THAT WHAT YOU MEAN BY IT, OR...?
Kay:
Yeah, I've never...you know, in one sense I think so, in the sense that
I've always wanted to close the loop at least with something that was
like reading and something that was like writing. And right now, the
something that's like reading is using the Macintosh user interface
language to deal with 9,000 or 10,000 applications that are out there.
That seems to work reasonably successfully right now. And the
equivalent of writing should be something that allows children to
aspire to the same kinds of things that are built on the Mac. Now they
may not sit down and do Aldus PageMaker or something like that, because
that's something like a large play or something. But they should be
able to see a continuity between what they're doing and these tools
that adults make for them. We want to do something like what Dewey was
talking about in the last century, which is, he pointed out that in
most of the ages of mankind the games that children played were serious
simulations of what the adults did. So the African child practicing
with a spear, or the Eskimo child learning how to kill birds because
he's eventually going to have to go and kill seals for food, is doing
something that is content-rich relative to the adult world. But, the
20th century child dressed up in a nurse's suit and playing nursie with
her doll has been cut off from the content part of that adult activity,
only the form is left. So the kids are very disenfranchised from most
things that are happening to adults. And one of the things that was
noticed right away with computers is that when you put a child on a
computer, they know they're doing the real thing -- they can see
instinctively the continuity between it and the other things that are
going on, actually much better than adults do.
Interviewer:
THE COMPUTER, MANY PEOPLE CLAIM, IS GOING TO HAVE A BIG ROLE IN SAVING
AMERICAN EDUCATION, AND SO FORTH. AND THIS IS A WORRYING THING, BECAUSE
YOU SAID YESTERDAY THAT THE COMPUTER WAS A META-MEDIUM, SO IT CAN BE
WHAT WE CHOOSE IT TO BE...
Kay:
Yeah, well, I mean, the story I always tell is: imagine that parents
were afraid that their children wouldn't make it in life unless they
were musicians, and the state legislature said, "Well, okay, we'll put
a piano in every classroom, but we don't have enough money to hire
musicians, so we'll give the existing teachers two-week refresher
courses" -- and music doesn't get into the classroom. And I think we have
a very similar problem when we want to think of the technology as being
the magic ointment. Musicians will tell you, the music isn't in the
piano. If it were, we would have to let it vote. So, at best what we
have is an amplifier and often these things that could be amplifiers
will turn people away. Pianos often turn people away from music rather
than turn them towards it. So, I think the most important thing for
people who want healthy schools is to have parental involvement,
because down deep it's the value system that the children pick up about
what's important in life -- which they mainly get from their parents --
that is going to condition what they do with their time. It's hard to
learn things in school. There are lots of things going on. School, to
me, is basically a resource for finding out that various things exist
that you didn't know existed. But as far as learning them, most of the
learning, I think, is done outside of school. And what the child
decides to do outside of school with his time is going to depend on the
value system. Once you have that going really well, then it's fairly
easy to use all kinds of technology, because then you will simply
amplify out from this interest in getting stuff happening in here-
Interviewer:
INTERESTING THAT THIS MEDIUM SHOULD COME ALONG AT A TIME WHEN, WITH THE
PREVIOUS IMPORTANT MEDIUM LIKE WRITING, EVERYONE IS SO CONCERNED ABOUT
LITERACY IN THE SCHOOLS. DO YOU SEE IT AS A SOLUTION, OR POSSIBLY A
WORSE PROBLEM?
Kay:
Well, I think that was one of McLuhan's tongue-in-cheek jokes -- that
we've had all these great inventions like the book and they've hardly
affected education at all because if you go into most schools in the
northern hemisphere you find 30 humans writing down what another human
is saying and that's what was going on in Oxford in the 12th century.
So, you know, where is the book in all of this? So I think that the
kinds of social whirlpools that exist when you get different kinds of
humans together, like teachers and children, are going to have a lot to
do with whether technology gets used at all. I think the most important
aspects of it have to do with areas of control and other kinds of
things which are theoretically outside the domain of education, but a
lot of school is about controlling the kids.
Interviewer:
NOW, IF WE'RE LOOKING INTO THIS FUTURE, SOME OF THE THINGS THAT MIGHT
BE IMPORTANT, CAN WE COUNT ON THE HARDWARE CONTINUING TO IMPROVE FOR
ANOTHER DECADE?
Kay:
Yeah, we can definitely count on the hardware continuing to improve for
another decade and probably more. But, I mean, this is just
extrapolation: the current kinds of hardware that we know how to build
have a very stable outlook for the next ten years.
Interviewer:
WHAT ABOUT KEY INGREDIENTS FROM SOFTWARE, FROM ARTIFICIAL INTELLIGENCE?
DO YOU THINK PROJECTS LIKE THE CYC PROJECT ARE GOING TO BE VITALLY
IMPORTANT?
Kay:
Well, the Cyc Project is one of my favorite projects, partly because
it's done by one of the smartest guys in computer science in the U.S.,
and partly because it's one of the hardest projects that anybody is
trying to do right now. There are not a lot of researchers working on
what I would call really frontier, difficult projects, but this is one
of them. And his success will be, and is, about turning up new ideas
for representing things. Whether the system is actually able to turn
into what its design goals say it is, which is a model of human common
sense, I don't think is nearly as important as the wake that it's
throwing up. When you've got a smart guy working on something really
hard and a bunch of people being ingenious about it, you're always
going to get good things.
Interviewer:
WHY WOULD IT BE IMPORTANT TO INCORPORATE COMMON SENSE -- A
REPRESENTATION OF COMMON SENSE IN FUTURE COMPUTERS?
Kay:
Yeah, that's a good question, because common sense is frequently wrong.
Scientific discoveries of the last 300 years have been against common
sense. But, whatever common sense is, it's a kind of underlying fabric
for getting between things that we know in much higher detail. So, one
way of thinking about it is that the little things we're expert in are
like islands, and then there's this ocean that we can paddle around in
from island to island. There's a way to get from one island to another.
And the problem with most expert systems up to now is that they're an
island completely surrounded by a cliff. If you go anywhere off what
it's good at, it just drops you completely. There's no way of paddling
to the next thing. And, as I said yesterday, somebody -- Picasso, I
think -- said, "Art is not the truth. Art is a lie that tells the
truth." And common sense is not the truth. But it's a lie that provides
a continuous fabric to work around in, that we can then delve deeper
into. Another thing about common sense that's kind of interesting is
that it might be possible to use the computer to enlarge what we think
of as common sense, by giving us sensory contact with things we've
never been able to have sensory contact with before, like things that
are a million times smaller than us. Because common sense, I think, has
a lot to do with the sensory domain and reasoning from things that are
on our scale. Science, in a very literal sense, is non-sense because
it's outside of the sensory domain. Almost everything that happens in
science is very far from it. Common sense says this [taps the table] is
solid, but science says it isn't. Common sense says the sun is going to
come up tomorrow morning, and science says no, the earth is turning.
And yet we still say, what time is sunrise tomorrow? So I think the
importance of Cyc using common sense has a lot to do with the fact
that, regardless of whether we are scientists or not, we have this one
way of knowing the world, rightly or wrongly, that is very
comprehensive and gives a sort of universal way of weakly getting from
one topic to another.
Interviewer:
WHAT DO YOU THINK THE MAIN LEGACY OF ARTIFICIAL INTELLIGENCE IS?
Kay:
I think, you know, some of my favorite crazy people are AI people. AI
in the '60s was once defined as all that stuff we don't know how to do
yet. To some extent it's been a moving target. Things that were AI
problems in the '50s and '60s are now taught as part of computer
engineering courses. But as far as AI as something mimicking, in a
strong way, human intelligence, we're very far away from it. And so
it's a good goal. It gives you something to reach for. And I think for
people in the field who have some biological motivation to their
interest, it's a good goal, because it has partly to do with
understanding more about how we do it: wondering if there are alternate
ways to do it, can you only do it the way we do it, at what level do we
do it? Do we have to do it at the absolutely neuronal level? Do we have
to simulate every neuron in a human brain to get artificial
intelligence, or is there a higher level at which we can do it? And
those are good questions to ask, because if you look at the way, for
instance, biochemistry is done by nature, it is appallingly inefficient
-- very low energy transfer, in the absence of some watcher from the
outside saying, "Oh, it would be much simpler to do it this way." So,
for instance, the way we do chemistry in a lab and the way nature does
biochemistry are completely different. We do it much more efficiently.
Nature does it much more ingeniously because of the way it's contrived.
And you can learn a lot from looking at the comparison between the two.
So there is a lot of reason to expect that you don't have to go to the
neuronal level to be able to do the kinds of things that we do. But
nobody knows what level you actually have to go to.
Interviewer:
YOU MENTIONED YOU WORKED WITH IVAN SUTHERLAND ON THE HEADS-UP DISPLAY.
ONE GROUP OF PEOPLE ARGUE THAT THE STAGE WE'VE GOT TO WITH THE
HUMAN-COMPUTER INTERFACE, 2D, 2.5D, IS JUST A STEPPING-STONE, THAT
BEYOND THAT THERE IS THIS VIRTUAL REALITY. WHAT'S YOUR VIEW ON THAT?
Kay:
Well, I think many people will enjoy virtual reality, since many people
don't enjoy the current reality. Television is a kind of virtual
reality. And I think things that go further in that direction will be
very popular with a lot of people. I think the best thing about virtual
reality is that you can deal with things outside of the normal senses.
You can take a trip down into a cell and see how incredibly agitated
the thermal properties are down there. I think all the things that you
can only read about now in terms of symbols, you can actually go there
and get a much more kinesthetic and visual hit on. I think the use of
it in fantasy will certainly happen. But if you look at what you
actually have to do to get good dramatic situations in the theatre,
then it's going to be a while before something good can happen in a
movie that's partly being generated by your presence. On the other
hand, if you look at a typical Arnold Schwarzenegger shoot-em-up, then
those will be easy to do. Right, because -- I forget, what was the name
of that movie? I can't remember the name of that movie that he just ...
Interviewer:
TOTAL RECALL.
Kay:
Total Recall, you know Total Recall, somebody said, if you like road
accidents, you'll love this picture. And the, that kind of stuff, you
know, where you have, you know, five people you have to kill every 30
seconds or so is very easy to set up in virtual reality. I'm sure that
a large percentage of the population will enjoy it.
[END OF TAPE F190]
Kay:
... computer, do a computer simulation that kids could understand from
that one.
Interviewer:
THAT'S THE INTERESTING THING, CLEARLY -- THE FACT THAT YOU CAN MAKE
REAL THOUGHT EXPERIMENTS. SO MANY OF THE CLASSIC THOUGHT EXPERIMENTS OF
EINSTEIN AND BOHR, YOU COULD DO.
Kay:
Yeah, and of course you have to be careful, because since simulations
are lies, in a sense, there is nothing that says the simulation has to
be like real life. I mean, there have been plenty of thought
experiments that are wrong. And most of the great scientists have been
good guessers. And so you can also set up simulations of situations
that don't have anything to do with the physical world... So you can
delude yourself as well as help yourself along.
Interviewer:
AND YOU THINK THAT MIGHT BE ONE OF THE DANGERS?
Kay:
No, I don't think that's a danger. I think that anytime people try and
make models, try and look at their beliefs from different points of
view rather than just one point of view, I think is good.
Interviewer:
IF WE LOOK BACK -- I KNOW YOU'VE THOUGHT A LOT ABOUT THE PROBLEMS OF
PREDICTING WHAT IS GOING TO HAPPEN -- IF WE LOOK BACK AT THE HISTORY OF
THE COMPUTER, IT SEEMS THAT ALMOST EVERYBODY HAS BEEN QUITE SERIOUSLY
WRONG AT EVERY STAGE, WITH VERY FEW EXCEPTIONS. WHAT LESSONS CAN WE
LEARN, IF ANY?
Kay:
Well, there are lots of different ways of doing prediction, but the
worst one in the 20th century has been extrapolation. Just because
something is X, and 10 percent in some other direction of X gets you
here, and so forth, doesn't mean a thing. It's like saying, if a
computer could do so and so, it would have to be the size of the Empire
State Building. Well, that was when people's imaginations were limited
by vacuum tubes. So the extrapolative way, I think, is out. But the
reason the predictions that we made in the late '60s were so good, and
the reason Bush's predictions in the '40s were so good, had to do with
a completely different way of predicting, which had to do with thinking
about things that amplify human endeavor -- and the amount of
horsepower available that is interested in amplifying human endeavor is
very, very large. So, if you can hook into something like that -- if
you can say, "Oh, the computer is a medium" -- then all of a sudden you
start seeing what the powers of amplification are, and you also start
getting ideas about what to do next. You look at, say, Freud's model of
human mentality, which is a good one, but it's all about drives. That
doesn't help you much in doing user interface. You look at Bruner's
mental model, which is about different ways we have of knowing the
world, and all of a sudden you get ideas.
Interviewer:
IT'S INTERESTING ISN'T IT THAT ONE OF THE ARGUMENTS THAT SAID THERE
WOULD NEVER BE MANY COMPUTERS WAS THE ONE THAT WE WOULD NEVER BE ABLE
TO FIND ENOUGH THINGS FOR THEM TO DO.
Kay:
Yes, and that's another one of those things: again, it was looking at
the kinds of things that computers were doing at the time. And as
people used to say, "That's right, you numbskull" -- but it was all the
new things that they could do. It's not "we can do payroll on the
mainframe"; when the personal computer came along, it was all those
things we couldn't do on the mainframe, like spreadsheets and desktop
publishing, and so forth.
Interviewer:
YOU SEE THE FUTURE OF THE COMPUTER AS BECOMING TOTALLY UNREMARKABLE.
Kay:
Well, I would hope so. Ten years ago, if you went into somebody's
office and you didn't see a phone, that would have been remarkable. It
was the absence of the thing that would have been remarkable. Nowadays,
if you go into somebody's office and you don't see a phone, you assume
they're wearing one, but you do not assume that there is no phone in a
person's office, because it's something that is noticeable when it's
absent. And the computer right now is still more noticeable by its
presence than its absence. When you go somewhere and somebody doesn't
have a computer on them, and that becomes the remarkable thing -- then
I think the computer will have made it. Its destiny is to disappear
into our lives like all of our really important technology, the things
that we don't think of as technology, like wristwatches, and paper and
pencil, and clothing, and all of those things. I think the computer's
destiny is to be one of those.
Interviewer:
AND IT WILL DISAPPEAR INTO OUR LIVES, EMBODYING ALL PREVIOUS MEDIA, OR
MANY PREVIOUS MEDIA AS IT GOES?
Kay:
I think so. I think we have to be careful, because when you simulate
one thing by another you usually give up something. Anybody who has
ever seen the Book of Kells for real realizes what you don't get from
photographs, realizes what you don't get from printed books, and also
realizes what you do get from printed books. That compelling charisma,
the transcription of the oral event that was a book like the Book of
Kells, is completely different from the alienating regularity of
machine type. And both those things have their place in the world.
You'd hate to get rid of one completely and say, "Well, we're replacing
it with the new," because I don't think it works that way. I mean, I'm
building a rather large baroque pipe organ even though you can, quote,
unquote, simulate them on synthesizers and stuff today, and the answer
is: you can't simulate them. You can't get all of the stuff yet, and
even if you could -- even if you could prove beyond the shadow of a
doubt that the waveforms from it [were identical] -- you still don't
get something that looks as neat. And so, if you include all of our
senses in an experience, then when you simulate something, as you
always do in science, what you're saying is, "I am going to give up
this in order to get that, and that's my trade-off right now." But a
person who says, "I'm going to use this, and I'm not going to give up
any of that stuff," is just fooling themselves, because there isn't a
complete interconvertibility between one medium and another.
Interviewer:
HOW WOULD YOU RATE THE COMPUTER? YOU'VE STUDIED A LOT OF HUMAN HISTORY,
IS THIS SOMETHING WE'VE BEEN PRIVILEGED TO LIVE THROUGH? IS THIS REALLY
A VERY REMARKABLE DEVELOPMENT?
Kay:
Yeah, well, I think one way of rating the computer is to say it's
definitely a thing like the printed book. It is definitely in an
unremarkable stage -- like 30 years after Gutenberg -- and almost
certainly, if its promise is realized and it's not just turned into
television -- because that's one of the things it can simulate as well
-- if it can deal with all of the range of things it can deal with, and
people use it to express all of those things, then it very likely will
have the same kind of effect on our civilization as the printed book
did. Whether it has that effect on any given person, though, is a
completely different question. Because, as people have noted, the
Renaissance has come and gone and we have what we are pleased to call
civilization now, and a large percentage of the population, not just in
Third World countries but in our own country, have never touched the
20th century as far as its ideas are concerned. And in spite of all the
libraries with all the books, and all the things, and what books have
done to us, a very large percentage of people have never been carried
along with them. And that is very likely to happen with the computer.
Interviewer:
SO WE HAVEN'T REALLY SUCCEEDED WITH THE LAST MAJOR MEDIUM?
Kay:
Yeah, I doubt it; it's probably the case that we never succeed with
those things -- that civilization gets transformed, and a certain
critical mass of people get transformed by it, and they are the ones
who transform the civilization. And then, for one reason or another, a
large number of people don't get transformed by it. But another way of
thinking about it is: if you take a look at what the average person who
has not been transformed by it, but has gone to college, today thinks
about the world, it is a little bit better, a little bit richer, I
think, than what people thought about the world in 1000 A.D.
Interviewer:
BUT THERE IS A PROBLEM WITH THIS LITERACY THING, YOU SAID, BECAUSE I
KNOW YOU THINK OF TELEVISION AS LIKE A MEDIEVAL STAINED GLASS WINDOW IN
SOME WAYS... THAT ACTUALLY YOU CAN -- YOU DON'T NEED TO LEARN ANYTHING
TO DO IT, IS THAT THE POINT...?
Kay:
Well, you have to learn something, because we know that when D.W.
Griffith first invented close-ups, and moving cameras, and montages and
stuff, the audience was a little bit startled, but it didn't take long
-- one pass through it and they got the idea of what was going on. And
so the amount of learning in the visual domain is pretty low compared
to what you have to do in reading and writing. And that is a big
barrier. The biggest problem, though, I think, is that many people
believe that there is an equivalence between the two media, that what
you can say in a book, you can say on television. And all of the
evidence is against that. What you can do with television are some very
important things: you can get people interested, you can give them an
emotional hit, you can get them to know somebody in a way they didn't
think they could before. You can maybe get them to be interested enough
to look deeper, but its very strength is its weakness. Its strength is
its involvement; its weakness is its involvement. Because in order to
think scientifically about the world, you have to be romantic, it's
true, but you also have to be able to step back and say, "Gee, I wonder
what else it is. It looks this way, but I wonder what else it is. I
wonder what else is going on." And I don't think television gets people
to do that kind of connected thought away from the dominance of the
senses.
Interviewer:
THERE'S ALSO A THING YOU WRITE ABOUT CALLED "THE AMPLITUDE PROBLEM,"
THAT SOME MEDIA REQUIRE MORE CONCENTRATION THAN OTHERS, DON'T THEY?
WRITTEN MEDIA DO, AND SOME FORMS OF TELEVISION, SOME FORMS OF GAMES AND
SUCH DON'T. THE QUESTION IS, WILL THE COMPUTER DISAPPEAR INTO THE
TELEVISION, OR THE OTHER WAY AROUND?
Kay:
I'm not sure whether it requires more concentration, but it may very
well be that -- you know, some people read well and it's nothing for
them to read a book a day, and for other people it's a big deal. I
mean, it's a struggle, and what's likely going on is that most people
never learn to read very well. And so the amount of concentration they
have to put into the mechanics is what defeats them, given the amount
of material that has to be read. And I think it's much more like that,
because it's remarkable how much concentration you have to put into
something like tennis until you learn to play it, or how much
concentration you have to put into the mechanics of music until you
actually get fluent at playing. But then, once you've gotten to that
place, the hours go by without your even realizing it, because you're
deep into what the content of the medium is.
Interviewer:
LAST QUESTION, WHAT HAS SURPRISED YOU MOST ABOUT THE HISTORY OF
COMPUTING?
Kay:
What surprised me most? Well, I think the thing that surprised me the
most is how long it has taken to get ideas out of laboratories into the
commercial world -- how many different kinds of inertia there are, both
for good reasons and for bad reasons, but just the sheer amount of
time, where a decade after an idea is a very short period to see it
emerging in the commercial world. That is certainly surprising to me,
because most of the scientists who work on these things work on them
because they were obvious. They were so obvious that they just wanted
to make them. And to have them not be obvious -- to have something that
was revolutionarily obvious have to go through an evolutionary stage
that may take 10 or 20 years -- is quite surprising.
Interviewer:
DO YOU THINK THERE IS ANY WAY OF CUTTING DOWN THAT PERIOD?
Kay:
I don't see it right now, because I think it's a question of having
people grow up being interested in multiple points of view rather than
being disturbed when they're shown something outside of their single
way of looking at the world. And our civilization is very prone towards
single-minded ways of looking at the world. We come from a monotheistic
religious background, which says there's a right and a wrong, and
you're either for God or against God, and so forth. And these attitudes
trickle over into our reactions to everything. If what we're doing now
is right, then something that's different from it can't possibly be
right. The Japanese seem to be a little more flexible on some of those
scores. They have several religions existing side by side in Japan, and
many of the people adhere to several of them at once. They don't see
any big conflict. And I think any civilization that can treat ideas as
interesting -- as more interesting in an array than treated singly --
is going to make it into the future.
[END OF TAPE F191]
Series
The Machine That Changed the World
Program
Interview with Alan Kay, 1990
Title
The Machine That Changed the World: Interview with Alan Kay, 1990
Contributing Organization
WGBH (Boston, Massachusetts)
AAPB ID
cpb-aacip/15-pv6b27q16d
Description
Episode Description
Full-length interview with Alan Curtis Kay. Portions of this interview were featured in episodes from the WGBH/BBC series, The Machine That Changed The World. The Machine That Changed The World was a five-part series chronicling the personalities and events of the computer revolution. The program traced the history of the computer from the 19th century to the modern personal computer, with attention to present-day applications and future developments on the horizon. Alan Curtis Kay is an American computer scientist. He was an Apple Fellow at Apple Inc. in the 1980s and went on to be a Disney Fellow with Walt Disney Imagineering. Later positions included Senior Fellow at Hewlett-Packard and founder of the Viewpoints Research Institute. Select metadata for this record was submitted by John Campopiano.
Date
1990-07-24
Asset type
Raw Footage
Topics
Technology
Subjects
Apple computer--History; Apple Computer; Kay, Alan Curtis, 1940-; information technology; Personal Computers; Microprocessors--United States--History; Interactive Computing; Computer software--Development--United States; Computer software--Development--History; Computer software developers--United States
Rights
Rights Credit: WGBH Educational Foundation; Rights Type: All
Media type
Moving Image
Duration
02:45:30
Credits
Interviewee: Kay, Alan Curtis, 1940-
Publisher: A WGBH Boston/BBC TV coproduction in association with NDR Hamburg
AAPB Contributor Holdings
WGBH
Identifier: 66911435e22e6b41505f73f5cdc7b5c45b5ec39a (ArtesiaDAM UOI_ID)
Format: video/quicktime
Color: Color
Duration: 00:00:00