1. Moore's Law
In 1999-2000, the Royal Swedish Academy of Engineering Sciences carried
out an ambitious "Technical Foresight" study, aimed at exploring
the implications of anticipated developments in technology on Swedish
society over the next few decades. As secretary, I proposed that we
should include a section on Moore's Law in the synthesis report, but
my suggestion was met with groans and moans. "Moore's Law has
been beaten to death. Let us not keep nagging about that. We want a
fresh and forward-looking approach!"
In 1965, Gordon Moore predicted that the density of components in integrated circuits would continue to grow exponentially for at least the next 10 years. Few people anticipated that his "Law" would still remain valid after 40 years.
The members of the committee had a point. Too often, Moore's Law has
been discussed in purely technical terms, or in "gee whiz"
terms: "If the automotive industry had been subject to Moore's
Law, by now a Ferrari would cost 50 cents", etc. And even if
the semiconductor industry should somehow freeze at today's level of
technology, there would still be a lot of potential for further development
on the applications side for the next 10 or 20 years, and that
was what our study should address: probable developments in medicine,
education, etc.
Although everybody seems to be familiar with Moore's Law, strangely
enough there is no general agreement on what the Law actually says.
Does it refer to component density or to manufacturing cost or to "processing
power" (whatever that means)? And is the doubling time 18 months
or 24 months or 36 months? That makes a lot of difference in the long
run! Moore himself never handed down a Law, but he is said to be pleased
to be revered as the Father of all kinds of exponential growth :-)
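Just to illustrate how much the choice of doubling time matters, here is a little Python calculation (mine, not Moore's) comparing the three commonly quoted figures over a 30-year horizon:

    # How much does the assumed doubling time matter over 30 years?
    for months in (18, 24, 36):
        doublings = 30 * 12 / months
        factor = 2 ** doublings
        print(f"{months}-month doubling: growth factor {factor:,.0f} over 30 years")

An 18-month doubling time gives a factor of about a million in 30 years; a 36-month doubling time gives only about a thousand.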
Gordon Moore (b. 1929), co-founder of Intel Corp., in a 1970s photo.
Moore's original
paper from 1965 is brief, accessible and interesting. Among his
specific predictions you will find home computers, digital watches,
and "personal mobile communications equipment".
The heart of the matter is the amazing fact that semiconductor technology
has been advancing at such a steady and relentless pace for well over
40 years, and that at least another decade of progress is expected before
it reaches the ultimate limits imposed by fundamental physics. Even
then, there are new technologies on the horizon that offer some hope
that the obstacles may be circumvented. To an outsider, it seems surprising
that the rate of progress has been so steady. No setbacks, no sudden
spurts! (This is disputed by Finnish scientist Ilkka
Tuomi.) I suspect that the explanation has more to do with the
economics of building and equipping industrial plants - especially while
maintaining high quality standards - than with engineering physics.
Of course, much of the revolution that we have seen in Information
Technology can be attributed to developments in fields not directly
dependent on Moore's Law: magnetic storage, optical fiber, software
etc, and they will continue to be important. Still, the future of Moore's
Law is going to have far-reaching consequences for our society in the
next few decades, although perhaps not always in the directions we have
come to expect. It is all too easy to extrapolate current trends, or
to think of new technologies as just replacing old technologies when
evaluating their potential. - The difficulty is illustrated by the anecdote
about the mayor who was impressed with an early demonstration of telephony
and exclaimed: "The telephone is a great invention! I can foresee
a time when every town will have one." The story may well be
true. After all, every town had a telegraph!
What I personally find especially intriguing about Moore's Law, however,
is the long-term potential for developing artificial intelligence. It
may even become possible to evolve artificial consciousness, although,
to be sure, that is not going to happen in my lifetime, and most
likely not even in this century. Yet, it is interesting to speculate
about whether the hardware needed - even in theory - to make such leaps possible might become available over the next few decades.
2. Speculations about machine intelligence
Artificial intelligence! Many people, and in particular many scientists,
take offense at the very expression. "Even if we are able to
perform certain clever tricks with computers, how can anybody even think
of attributing intelligence to them? A computer has no fantasy, no reasoning
power, no free will, no imagination. It just performs a sequence of
pre-determined steps very rapidly. In principle, there is no difference
between a computer and, say, a typewriter."
Part
of the controversy is undoubtedly just a question of semantics. What
do we mean by intelligence? Many animals exhibit intelligent
behavior, even though we would not call them intelligent: the bird building
its nest, the spider weaving its web, etc. Their intelligence is "hard-wired"
in their nervous systems. It turns out to be quite difficult to define
intelligence in a way that satisfies everyone. Yet, ever since the first
programmable computers were developed in the 1940s, scientists have
speculated about the possibility of building a machine that would emulate
at least some aspects of human intelligence. In 1950, Alan Turing (a
brilliant mathematician famous inter alia for his work in breaking
German cryptography in WW II) proposed a functional test for deciding
whether a computer had intelligence: if in a conversation (of arbitrary
length, through printed output) it could not be decided if the other
party was a machine or a human, then it would have to be admitted that
the machine was intelligent. "If it walks like a duck and talks
like a duck, then it probably is a duck", as the proverb says.
The Turing test has had a central role in discussions about artificial
intelligence, but there is still no general agreement on the definition
of intelligence. - The subject of artificial consciousness is even more
controversial and will not be discussed here.
The concept of artificial intelligence (AI) generated great enthusiasm
in the 1960s, but progress in the field has been much slower than many
people expected. In particular, it turned out to be quite difficult
to emulate even the intelligence of a baby: "Grab the blue box
and put it on the red box. Now, put the yellow ball in the blue box."
On the positive side, this led to a new appreciation of the tremendous
progress that a baby makes in understanding and interacting with the
world in its first year while seeming so passive. What has been
successful is the application of machine intelligence (functionally
speaking - let us not descend into philosophizing at this point!) in
narrow, well-defined domains. Some simple examples are Optical Character
Recognition programs, useful for turning scanned documents into ASCII
text files, and Speech Recognition software, although both obviously
fail the Chinese
Room test.
It's mate in 24!
An application close to my heart is computer chess. Here, progress
has been phenomenal during the last decade, and we have just reached
the crossover point, where the best chess programs are now stronger
than the human world champion. This also disproves the ancient myth
that machines can never become "smarter" than their creators.
Contrary to legend, the strength of computers is not just due to their
ability to calculate millions of positions in a fraction of a second,
but also to the quality of their evaluation functions, which have been
developed with the assistance of human grandmasters. Thus, there is
a fair amount of "chess understanding" built into the programs.
There is no doubt that chess computers pass the Turing test with flying
colors. In principle, they can explain the reasons for selecting a certain
move. This can also be seen in how we tend to talk about the machines:
"Fritz likes h5 in this position." - The very best
computers are now actually specially designed clusters of personal computer
processors, with extra memory etc, and loaded with databases for openings
and endgames, but even a standard commercial
program running on your home PC will be strong enough to beat all
but the world elite.
So-called "expert
systems" were in vogue in the 1980s. They represented an attempt
to encapsulate the knowledge of an expert in a set of rules and guidelines
that would be converted into a computer program. I remember hearing
about cases such as that of a production specialist at the Campbell
Soup company, who was approaching retirement, and whose invaluable knowledge
was to be saved in this way. Another case was a regional manager at
an airline who was an expert at setting the best day-by-day ticket prices
based on his "gut feeling". -
I have not heard much about such software since then, although I am
sure it is used extensively in such applications as medical diagnosis.
Of course, chess programs can be classified as expert systems.
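As a caricature of the approach - the airline example above, with rules, thresholds and the function name all entirely my own invention - an expert system boils down to something like this:

    # Toy rule-based "expert system" for ticket pricing.
    # All rules and numbers are invented for illustration only.
    def suggest_price(base_price, days_to_departure, load_factor):
        price = base_price
        if days_to_departure < 7 and load_factor > 0.8:
            price *= 1.5      # nearly full flight, late booking: raise price
        elif load_factor < 0.4:
            price *= 0.8      # flight selling poorly: discount
        if days_to_departure > 60:
            price *= 0.9      # early bookers get a small reduction
        return round(price)

    print(suggest_price(200, days_to_departure=3, load_factor=0.9))   # 300

The real difficulty, of course, was getting the expert's "gut feeling" into such rules in the first place.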
Another AI development in the 1980s was neural
networks. As the name implies, it was an attempt to capture some
of the characteristics of the nervous system in the parallel processing
of input data. Different pathways are given different weight depending
on how they influence the output. A feedback mechanism ensures that
the system gradually improves. An early application was the recognition
of handwriting. - At Swedish Space Corporation we carried out some experiments
in the classification of multispectral satellite imagery, as I recall,
but we decided (I decided?) that even when the results were promising
in an individual 'scene', it would be difficult to generalise the process
and have confidence in the results, especially as the inner working
of the system was not easy to inspect. Still, the technology is said
to be useful in spotting patterns in data (data mining; financial fraud
etc.). In fact, it may very well be that it is being used with great
success in the financial markets today without, understandably, being
widely advertised :-)
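For readers who have never seen one, here is a minimal sketch of the principle - a single artificial "neuron" learning the logical AND function with the classic perceptron update rule (the data, weights and learning rate are arbitrary):

    # A single artificial neuron learning the logical AND function.
    # Weights on the input "pathways" are adjusted whenever the output is wrong.
    inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
    targets = [0, 0, 0, 1]
    w = [0.0, 0.0]
    bias = 0.0
    rate = 0.1

    for epoch in range(20):
        for (x1, x2), t in zip(inputs, targets):
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            error = t - out                  # feedback: difference from target
            w[0] += rate * error * x1
            w[1] += rate * error * x2
            bias += rate * error

    print(w, bias)   # weights that separate the AND-true case from the rest

Modern networks stack many thousands of such units and use subtler update rules, but the principle - weighted pathways adjusted by feedback - is the same.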
The
game of Life was an early fad among computer hobbyists. It simulates
evolution on a grid, with the following three simple rules:
1. A dead cell with exactly three live neighbors becomes a live
cell (birth).
2. A live cell with two or three live neighbors stays alive (survival).
3. In all other cases, a cell dies or remains dead (overcrowding
or loneliness).
For
a surprising demonstration of how complexity can arise in such
a simple system, go here,
click on the "Play Life" button (twice?), expand to
full screen, then enter the pattern below somewhere in the middle
of the grid:
[pattern image not shown]
and
hit "Go". The pattern becomes stationary (with some 'gliders'
heading for infinity) only after 1103 generations.
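For readers who prefer code to clicking, the three rules above fit in a few lines of Python. This little sketch represents the grid as a set of live-cell coordinates and uses a simple "blinker" as the test pattern:

    # Minimal Game of Life: 'live' is a set of (x, y) coordinates of live cells.
    from collections import Counter

    def step(live):
        # Count live neighbours of every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth with exactly 3 neighbours; survival with 2 or 3.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    live = {(0, 0), (1, 0), (2, 0)}   # a 'blinker', which oscillates with period 2
    for _ in range(3):
        live = step(live)
        print(sorted(live))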
This brings us to the fascinating subject of self-improving programs.
Already in the case of chess programs, it would be tempting to have
the programs adjust their evaluation functions automatically, on the
basis of results. Unfortunately, they still play rather slowly at their
highest level. I suppose a program could play thousands of games "against
itself" in a minute, but the quality of the games would then be
so low that the evaluation function would just become optimised for
play against "patzers". Perhaps yet another decade of Moore's
Law in action will change the situation?
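Just to make the idea concrete, automatic tuning of an evaluation function might look roughly like this - a deliberately crude sketch with invented features and numbers; real programs use far more features and far subtler statistics:

    # Crude sketch of tuning an evaluation function from game results.
    # The features (material, mobility, king safety) and all numbers are invented.
    weights = [1.0, 0.1, 0.5]

    def evaluate(features, weights):
        return sum(w * f for w, f in zip(weights, features))

    def tune(weights, games, rate=0.01):
        # games: list of (features_seen, result) pairs, result = +1 win / -1 loss
        for features, result in games:
            error = result - evaluate(features, weights)
            weights = [w + rate * error * f for w, f in zip(weights, features)]
        return weights

    print(tune(weights, [((3, 20, 1), +1), ((-1, 5, 0), -1)]))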
Artificial evolution is already a popular area of research. It is relatively
easy to simulate "organisms" with given (macroscopic) properties
on a computer and let them compete in a simulated environment. Based
on their competitive success, they then pass on their "genes",
i. e. properties, to the next generation of organisms. Random "mutations"
may be introduced. Of course, such simulations are extremely limited
in scope, but they may give useful insights in the evolutionary process.
And I am sure that they are great
fun!
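A minimal sketch of such a simulation - the "organisms" here are just short lists of numbers, and the fitness function is an arbitrary toy of my own choosing - might look like this:

    # Toy artificial evolution: organisms are number lists ("genes"),
    # fitness rewards genes close to a target pattern.
    import random

    TARGET = [3, 1, 4, 1, 5]

    def fitness(genes):
        return -sum((g - t) ** 2 for g, t in zip(genes, TARGET))

    population = [[random.randint(0, 9) for _ in TARGET] for _ in range(20)]

    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]                      # selection
        children = []
        for _ in range(10):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(len(TARGET))
            child = a[:cut] + b[cut:]                    # crossover
            if random.random() < 0.2:                    # mutation
                child[random.randrange(len(TARGET))] = random.randint(0, 9)
            children.append(child)
        population = survivors + children

    print(max(population, key=fitness))   # typically converges on TARGET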
Artificial evolution of intelligence, or more modestly, certain
aspects of intelligent behavior, is a much more difficult subject. The
"brute-force" Darwinian approach of simulating a habitat for
competing organisms, whether bacteria, fish or bands of monkeys, does
not seem a particularly promising route to evolving artificial intelligence,
especially when we consider that it took complex life more than 500
million years to evolve human intelligence, and even then it may have
been a fluke, triggered by asteroid impacts, ice ages, etc. It seems
plausible that we need to develop a better understanding of biological
neural systems in order to guide efforts to create heuristic self-improving
programs. In addition, a strong selection mechanism is essential to
rapid evolution. For example, a strong
case has been made that Darwinian evolution progressed from a simple
patch of light-sensitive cells to an eye equivalent to a human eye in
less than a million years! It seems to me that it would be difficult
to find a suitable measure of general machine intelligence, as opposed
to specialized intelligence. We all act according to how we are evaluated,
and this is no doubt true for the evolution of general intelligence
as well.
Considerations such as these make me skeptical of claims that Moore's
Law per se will lead to rapid progress toward the creation of
general machine intelligence at the human level. According to this theory,
computers will design more powerful computers faster and faster until
we reach a point - 'a technological singularity' - just a few decades
away, when computers will have become vastly more intelligent than humans
in every sense of the word. - This recalls a verse from Aniara,
written in 1956 by Nobel Prize laureate Harry Martinson:
The inventor was himself completely dumbstruck
the day he found that one half of the Mima
he'd invented lay beyond analysis.
That the Mima had invented half herself.
Recently, I discovered an interesting
dialog from 2002 in an unlikely place - a betting web site!
- between well-known inventor and futurist Ray Kurzweil and one
of the pioneers in AI research, Stanford professor John McCarthy.
McCarthy has become disillusioned with progress in the development
of general machine intelligence. Since the 1970s, what has been
lacking, in his opinion, is not computing power but bright ideas.
Kurzweil, on the other hand, is a great optimist and expects machines
to surpass humans in general intelligence in just a few decades.
He believes that we are just on the verge of a much better understanding
of how the human brain works, and that such knowledge will speed
AI research.
Kurzweil also clarifies a question I have been asking myself: How much
computer capacity is needed to emulate the human brain (disregarding
the "software" or "wetware" issue)? According to
him, the human brain contains about 10^11 neurons (I have seen the number 10^12 elsewhere), and there are 1,000 connections per neuron (and I have seen the number 10,000 elsewhere). He then goes on to postulate 200 digitally controlled analog "transactions" per connection per second, and comes up with the number 20*10^15 operations
per second, which he claims should be achieved by conventional silicon
circuits prior to 2020, thanks to Moore's Law.
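For what it is worth, the arithmetic behind that figure is simply:

    # Kurzweil's back-of-the-envelope estimate, using the figures quoted above.
    neurons = 1e11                          # ~10^11 neurons
    connections_per_neuron = 1_000
    transactions_per_connection = 200       # per second
    ops_per_second = neurons * connections_per_neuron * transactions_per_connection
    print(f"{ops_per_second:.0e} operations per second")   # 2e+16 = 20*10^15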
It appears that a project to simulate a fundamental neurological unit - a neocortical column - is already underway. The processing speed is given as 23*10^12 operations per second. A neocortical column is said to comprise just some 60,000 neurons, so it would appear that there is a discrepancy in comparison
with Kurzweil's numbers. Still, Professor Alan Dix at Lancaster University,
who has calculated the number of aggregated PCs needed to emulate the
human brain, writes: Philosophers of mind and identity have long
debated whether our sense of mind, personhood or consciousness are intrinsic
to our biological nature or whether a computer system emulating the
brain would have the same sense of consciousness as an emergent property
of its complexity ... we are nearing the point when this may become
an empirically testable issue!
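To put a rough number on that discrepancy, using only the figures quoted above:

    # Implied per-neuron rates from the two sets of figures quoted above.
    kurzweil_per_neuron = 1_000 * 200                 # 2*10^5 ops/s per neuron
    column_per_neuron = 23e12 / 60_000                # ~3.8*10^8 ops/s per neuron
    print(column_per_neuron / kurzweil_per_neuron)    # roughly a 2000-fold gap

In other words, the detailed column simulation spends roughly two thousand times more computation per neuron than Kurzweil's functional estimate assumes - not so surprising, perhaps, since simulating a neuron in detail is a different task from emulating its function.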
The measure of all things.
So, it seems that the hardware needed to emulate a human brain might
become available sooner than most people would think. To actually represent
the complexity of the brain on a computer is, of course, a different
story. The two main approaches - to "reverse engineer" the
brain or to spawn intelligence from a primitive level through "artificial
evolution" - are doomed to fail if applied in isolation, in my
humble opinion. The key has got to be an integrated approach, where
biology is allowed to guide program architecture. After all, the "technical
specification" of the generic brain does not require anything remotely
approaching the numbers you might need to specify an individual brain.
It is all there in the genetic code. The genetic blueprint of the human
species requires less storage space than the latest version of Microsoft's
operating system, and the instructions pertaining to the brain make
up just a fraction of that. Even more remarkable: the genetic code includes
all the code needed to allow (and encourage!) two individuals to create
more brains! So not even Nature produces intelligent human beings directly;
it is a two-step process where architecture is delivered first, and
then content is added over days, months and years in a self-improving
process. "Neurons that fire together, wire together."
One aspect of Moore's Law that is often overlooked is that today's
"super computer" should in time become affordable to each
individual scientist. Moreover, tomorrow's most advanced computational
facilities will become ever more accessible from a distance through
the Internet in its successive incarnations (GRID
technology etc.) The net result has got to be vastly improved opportunities
for experimentation with different models and theories of human and
machine intelligence. - In fact, if I were 20, that is probably the
field I would enter. :-)
One of the pleasures of speculating about technological progress is
to make unforeseen discoveries while surfing the Web. Last night, I
hit upon a 2006 conference on "The future of cognitive computing"
sponsored by IBM, with several hours of video recordings of the
presentations and panel discussions by many of the luminaries in the
field. The program, with links to video and presentation slides, is
here.
- There are many more free science and video lectures
online at this Latvian blogger's
site. A treasure trove!