Dylan Azulay at emerj
has just published
another in a series of surveys that have been conducted over the last several
years by different groups about when the technological singularity is likely to
happen. The singularity is the idea that
computers will get so smart that their intelligence will grow explosively.
The notion of a technological singularity was initially proposed
by Vernor Vinge in
1993, expanding on some ideas from I. J. Good and John von Neumann.
Good wrote:
“Let an ultraintelligent machine be defined as a machine
that can far surpass all the intellectual activities of any man however clever.
Since the design of machines is one of these intellectual activities, an
ultraintelligent machine could design even better machines; there would then
unquestionably be an ‘intelligence explosion,’ and the intelligence
of man would be left far behind. Thus the first ultraintelligent machine is the
last invention that man need ever make.”
Good, I. J. (1965). “Speculations Concerning the First Ultraintelligent
Machine,” in Advances in Computers, vol. 6, Franz L. Alt and Morris Rubinoff,
eds., 31–88. Academic Press.
According to Vinge: “It's fair to call this event [the explosion
in machine intelligence] a singularity (‘the Singularity’ for the purposes of
this piece). It is a point where our old models must be discarded and a new
reality rules, a point that will loom vaster and vaster over human affairs
until the notion becomes a commonplace.”
The notion of the singularity combines the idea of artificial
general intelligence with the idea that such a general intelligence will be
able to grow at an exponential rate. General
intelligence is a difficult problem, but I think it is solvable. Contrary to the speculations of Good,
Vinge, Bostrom, and
others, it will not result in an intelligence explosion.
To understand why there will be no explosion, we can start
with the 18th-century philosophical conflict between Rationalism
and Empiricism. Simplifying
somewhat, the rationalist approach assumes that the way to understanding, that
is, intelligence, lies principally in thinking about the world. The empiricist approach says that understanding
comes from apprehension of facts gained through experience with the world. For there to be a singularity explosion,
the rationalist position has to be completely correct, and the empiricist position
has to be completely wrong, at least so far as computational intelligence is
concerned. If all it took to achieve
explosive growth in intelligence were to think about it, then the singularity
would be possible, but it would leave a system lost in thought.
If understanding depends on gleaning facts from experience,
then a singularity is not possible, because the rate at which facts become
available is not changed by increases in computational capacity. In reality, neither pure Rationalism nor pure
Empiricism is sufficient, but if we view intelligence as including the ability
to solve physical-world problems, not just virtual ones, then a singularity of the
sort Vinge discussed is simply not possible.
Computers may indeed increase their intelligence over time, but well-designed machines and skill
at designing them are not sufficient to cause an explosive expansion of
intelligence.
Imagine, for example, that we could double computing capacity
every few (pick one) months, days, or years.
As time goes by, the growth curve becomes indistinguishable from
vertical, and an explosion in computing capacity can be said to have
occurred. If all the computer had to do
was to process symbols or mathematical values, then we might achieve a
technological singularity. The computer
would think faster and faster and faster and be able to process more
propositions more quickly. Intelligence,
in other words, would consist entirely of the formal problem of manipulating
symbols or mathematical objects. A
computer under these conditions could become super-intelligent even if the
entire universe around it somehow disappeared, because it is the symbols that
are important, not the world. But the
world is important.
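The doubling argument can be sketched in a few lines. This is a hypothetical illustration: the initial capacity of 1.0 and the unit of "capacity" are arbitrary assumptions, and the doubling period deliberately does not matter.

```python
# Hypothetical sketch: capacity that doubles every period grows explosively
# regardless of whether the period is months, days, or years.
def capacity_after(periods: int, initial: float = 1.0) -> float:
    """Capacity after a given number of doubling periods (arbitrary units)."""
    return initial * 2 ** periods

print(capacity_after(10))   # 1024.0 -- about a thousandfold growth
print(capacity_after(20))   # 1048576.0 -- about a millionfold growth
```

Whatever the doubling period, the curve eventually looks vertical on any fixed scale, which is the "explosion" the singularity argument depends on.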
The board game go is conceptually very simple, but because
of the number of possible moves, winning the game is challenging. Go is a formal problem, meaning that one
could play go without actually using stones or a game board, just by
representing those parts symbolically or mathematically. It is the form of the problem, not its
instantiation in stones and boards, that is important.
In fact, when AlphaGo played Lee
Sedol, its developers did not even bother to have the computer actually
place any stones on the board. Instead, the computer communicated its moves to
a person who placed the stones and recorded the opponent's responses. It could have played just as well without a
person placing the stones because all it really did was manipulate symbols for
those stones and the board. The physical
properties of the stones and board played no role and contributed nothing to
its ability to play. The go game board and
stones were merely a convenience for the humans; they played no role in the operation
of the computer.
AlphaGo was trained
in part by having two versions of the program play symbolically against one
another. With more computer power, it could play faster and thus, theoretically,
learn faster. Learning to play go is the
perfect rationalist situation. Improvement
can be had just by thinking about it. No experience with a physical world is
needed. With enough computer power, its
ability to play go might be seen to “explode.”
But playing go is not a good model for general
intelligence. After playing these
virtual games, the program knew more because of
its ability to think about the game, but intelligence in the world requires
different capabilities beyond those required to play go. Go is a formal, perfect-information problem. The two players may find it challenging to
guess what the future state of the game will be following a succession of moves,
but there is no uncertainty about the current state of the game. The positions of the stones on the playing grid
are perfectly known by each player. The
available moves at any point in time are perfectly known, and the consequences
of each move, at least the immediate consequences, are also perfectly
known. Learning to play consisted entirely of learning to predict the future
consequences of each potential move.
Self-driving
vehicles, in contrast, do not address a purely formal problem. Instead, their sensors provide incomplete,
faulty information about the state of the vehicle and its surroundings. Although some progress can be made by learning
to drive a simulated vehicle, there is no substitute for the feedback of
driving a physical vehicle in a physical world.
Learning to drive is not a purely rationalist problem. Rather, it depends
strongly on the system’s empirical experience with its environment.
At least some of the problems faced by an artificial general
intelligence system will be of this empiricist type. But a self-driving vehicle that computed
twice as fast would not learn at twice the rate, because its learning depends
on feedback from the world, and the world does not speed up its
feedback, no matter how fast the computer is. This is one of the main reasons
why there will be no intelligence explosion.
The world, not the computer, ultimately controls how fast the system can
learn.
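The difference between a compute-bound learner and a world-bound learner can be made concrete in a toy sketch. Every rate below is invented for illustration; the point is only the shape of the comparison.

```python
# Hypothetical toy model: a "formal" learner (like go self-play) gets one
# learning update per unit of compute, while an "empirical" learner (like a
# road-going vehicle) gets one update per event the world supplies.
def updates(hours: float, compute_per_hour: float,
            world_events_per_hour: float, formal: bool) -> float:
    """Learning updates available in `hours` of operation."""
    if formal:
        return hours * compute_per_hour      # scales with compute
    return hours * world_events_per_hour     # capped by the world

slow = updates(100, compute_per_hour=1_000, world_events_per_hour=10, formal=False)
fast = updates(100, compute_per_hour=2_000, world_events_per_hour=10, formal=False)
print(slow == fast)  # True: doubling compute did not speed up learning
```

For the formal learner, doubling `compute_per_hour` doubles the updates; for the empirical learner, it changes nothing, because the bottleneck is `world_events_per_hour`.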
Most driving is mundane.
Nothing novel happens during most of the miles driven, so there is nothing
new for the computer to learn. Unexpected
events (the reason simulation is not enough) occur with a frequency that is entirely
unrelated to the speed or capacity of the computer. There will be no explosion in the
capabilities of self-driving vehicles.
They may displace truck and taxi drivers, but they will not take over
the world, and certainly not explosively.
There are other reasons why the singularity will be a
no-show. Here is just one of them. Expanding machine intelligence will surely
require some form of machine learning. At
its most basic, machine learning is simply a method of adjusting the values of certain
parameters to find an optimal set of values that solves a problem. AlphaGo was capable of learning to play go
because the DeepMind team structured the computational problem in an important
new way. Self-driving cars became
possible because the teams competing in the second
DARPA Grand Challenge figured out a new way to represent the problem of
driving. Computers are great at finding
optimal parameter values, but so far, they have no capability at all for figuring
out how to structure problem representations so that they can be solved by
finding those parameter values.
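As a minimal illustration of learning as parameter adjustment, here is a toy gradient-descent loop. The model y = w·x, the data, and the learning rate are all hypothetical choices made for this sketch; the key observation is that this structure, the model and the loss, is supplied by a human, and the computer only searches for the parameter value.

```python
# Hypothetical sketch: gradient descent tuning a single parameter w so that
# the human-supplied model y = w * x fits the data under squared-error loss.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # samples of y = 3x

w = 0.0     # initial parameter value
lr = 0.02   # learning rate

for _ in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges near 3.0
```

The loop reliably recovers w ≈ 3, but note what it did not do: it did not decide that a linear model was the right representation of the data. That structuring step came from outside the optimization.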
Good assumed that “the design of machines is one of these
intellectual activities” just like those used to play go or drive, but he was
wrong. Structuring a problem so that a
computer can find its solution is a different kind of problem that cannot be
reduced to parameter value adjustment, at least
not in a timely way. Until we can
come up with appropriate methods to design solutions, artificial general
intelligence will not be possible. Albert
Einstein was not renowned for his ability to solve well-posed problems;
rather, he was renowned for his ability to design new approaches to certain
physics problems: new theories. Today's
computers are great at solving problems that someone has structured into
equations, but none can yet create new structures. General intelligence requires this ability, and
it may be achievable, but as long as general intelligence depends on empirical feedback,
the chances of a technological singularity are nil.