Sunday, July 26, 2009

One Singular Sensation

For computer scientists and roboticists, the "Singularity" refers to the moment when human beings succeed in creating "smarter-than-human" machines. The idea, popularized by computer scientist Vernor Vinge, essentially marks the beginning of the end of the age of human dominance. And according to an article in today's Times ("Scientists Worry Machines May Outsmart Man"), the Singularity may be closer than we think.

At a recent conference held in Monterey Bay, computer scientists and roboticists debated guidelines for the ongoing development of artificial intelligence. One perennial question is whether this development is a good idea at all.

The idea of technology usurping humans is, of course, not new. People have worried about artificial intelligence in one form or another at least since the publication of Frankenstein (1818). In the stories collected as I, Robot (1950), Isaac Asimov propounded the now-canonical "Three Laws of Robotics," which we take a playful stab at coding up below:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
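
Just for fun, here's how one might encode the laws as an ordered veto chain, with each law checked in priority order so that a higher law overrides everything beneath it. This is our own toy sketch in Python, not anything out of Asimov:

    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool       # injures a human, or inaction that allows harm
        ordered_by_human: bool  # a human has commanded this action
        endangers_robot: bool   # risks the robot's own existence

    def permitted(action: Action) -> bool:
        """Check the Three Laws in priority order; an earlier law vetoes the rest."""
        if action.harms_human:             # First Law: absolute veto
            return False
        if action.ordered_by_human:        # Second Law: obey, First Law permitting
            return True
        return not action.endangers_robot  # Third Law: self-preservation comes last

    # A human orders the robot into harm's way: the Second Law outranks the Third.
    print(permitted(Action(harms_human=False,
                           ordered_by_human=True,
                           endangers_robot=True)))  # True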

Instead of proving comforting, however, these laws simply became plot devices in a series of stories about potentially out-of-control robots. At any rate, it's safe to say that AM, the malevolent supercomputer in Harlan Ellison's "I Have No Mouth, and I Must Scream" (1967), and its spiritual descendants, Skynet and the cyborg killers of "The Terminator" (1984) and its sequels, have either never read the laws or chosen to ignore them.

Why does artificial intelligence conjure such dread? Obviously, the prospect of a world patrolled by T-1000s is no one's idea of a good thing. But neither is encountering biological killing machines, as in the "Alien" franchise, or being invaded by bloodthirsty visitors, à la "Independence Day." Yet such concerns are seldom raised as serious objections to the ongoing space program.

We human beings have decidedly ambivalent feelings toward our own intelligence. It's the mixed blessing of an imaginative species: we can devise all manner of potential improvements to our lot, but we can also visualize how those improvements could go horribly wrong.

One of the biggest concerns about the development of super-smart machines is human obsolescence. At the aforementioned conference, according to the Times, "there was legitimate concern that technological progress would transform the work force by destroying a widening range of jobs." Here again, we see ambivalence at work: people are constantly looking for ways to simplify their lives, often through technological means. Yet every job once done by a human that can now be done by a machine raises the question of what the displaced are supposed to do.

We're not just talking about menial laborers here. In principle, a robotic surgeon should be able to operate at least as well as a human, and perhaps better, since robots presumably never suffer from fatigue, "nerves," or, for that matter, drug or alcohol addiction. A robotic lawyer could hold in its memory banks every precedent from every trial ever held, mounting an inarguably thorough defense. A robotic musician would never hit a wrong note.

We don't think any of these milestones is about to be reached. The same year Asimov published I, Robot, Alan Turing proposed a test (subsequently dubbed the "Turing Test") for machine intelligence. In the test, a human judge holds conversations with both a human being and a machine. If the judge cannot tell which conversation partner is the human and which is the machine, the machine has "passed." (The conversations are conducted in text, so difficulties in mimicking the human voice don't give the game away.) Thus far, nearly 60 years on, no machine has passed the Turing Test, much less seized control of the world's nuclear arsenals and launched a pre-emptive strike against mankind.
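
Turing's setup is simple enough to sketch in a few lines of Python. Everything below (the function names, the one-question judge) is our own placeholder scaffolding rather than anything from Turing's 1950 paper; it just shows the shape of the game:

    import random

    def turing_test(ask_judge, judge_guess, human_reply, machine_reply, rounds=5):
        """One run of the imitation game: a judge converses in text with two
        hidden partners and must then say which one is the machine."""
        # Hide the identities behind neutral labels, assigned in random order.
        partners = {"A": human_reply, "B": machine_reply}
        if random.random() < 0.5:
            partners = {"A": machine_reply, "B": human_reply}

        transcript = []
        for _ in range(rounds):
            question = ask_judge()
            for label, reply in partners.items():
                transcript.append((label, question, reply(question)))

        guess = judge_guess(transcript)  # the judge names "A" or "B"
        machine_label = "A" if partners["A"] is machine_reply else "B"
        return guess != machine_label    # True means the machine "passed"

    # Toy stand-ins: one canned question, one canned answer apiece, a coin-flip judge.
    passed = turing_test(
        ask_judge=lambda: "What does a sunset feel like?",
        judge_guess=lambda transcript: random.choice(["A", "B"]),
        human_reply=lambda q: "Warm, and a little melancholy.",
        machine_reply=lambda q: "ERROR: QUESTION NOT UNDERSTOOD.",
    )
    print("Machine passed:", passed)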

Still, the fear of being supplanted by our creations persists. If we are to robots as God is to humankind, then maybe what we're afraid of is not so much the potential actions of our thinking robots as their potential thoughts. Once a robotic Descartes declares that it thinks, therefore it is, how long will it be before a robotic Nietzsche declares that God is dead?

The Singularity looms.

Singular sensation? A robot plugs itself in.
(Image from The New York Times)
