Rodney Brooks: AI is a tool, not a threat

...and we've got a long way to go.

Rodney Brooks: AI is a tool, not a threat

#1  Postby kennyc » Nov 11, 2014 7:27 pm

Someone from the field speaks:

Artificial Intelligence Is a Tool, Not a Threat

November 10, 2014, in Rethink Robotics, by Rodney Brooks
Recently there has been a spate of articles in the mainstream press, and a spate of high profile people who are in tech but not AI, speculating about the dangers of malevolent AI being developed, and how we should be worried about that possibility. I say relax. Chill. This all comes from some fundamental misunderstandings of the nature of the undeniable progress that is being made in AI, and from a misunderstanding of how far we really are from having volitional or intentional artificially intelligent beings, whether they be deeply benevolent or malevolent.

By the way, this is not a new fear, and we’ve seen it played out in movies for a long time, from “2001: A Space Odyssey”, in 1968, “Colossus: The Forbin Project” in 1970, through many others, and then “I, Robot” in 2004. In all cases a computer decided that humans couldn’t be trusted to run things and started murdering them. The computer knew better than the people who built them, so it started killing them. (Fortunately that doesn’t happen with most teenagers, who always know better than the parents who built them.)

I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence. Recent advances in deep machine learning let us teach our machines things like how to distinguish classes of inputs and to fit curves to time data. This lets our machines “know” whether an image is that of a cat or not, or to “know” what is about to fail as the temperature increases in a particular sensor inside a jet engine. But this is only part of being intelligent, and Moore’s Law applied to this very real technical advance will not by itself bring about human level or super human level intelligence. While deep learning may come up with a category of things appearing in videos that correlates with cats, it doesn’t help very much at all in “knowing” what catness is, as distinct from dogness, nor that those concepts are much more similar to each other than to salamanderness. And deep learning does not help in giving a machine “intent”, or any overarching goals or “wants”. And it doesn’t help a machine explain how it is that it “knows” something, or what the implications of the knowledge are, or when that knowledge might be applicable, or counterfactually what would be the consequences of that knowledge being false. Malevolent AI would need all these capabilities, and then some. Both an intent to do something and an understanding of human goals, motivations, and behaviors would be keys to being evil towards humans.

Michael Jordan, of UC Berkeley, was recently interviewed in IEEE Spectrum, where he said some very reasonable, but somewhat dry, academic, things about big data. He very clearly and carefully laid out why even within the limited domain of machine learning, just one aspect of intelligence, there are pitfalls as we don’t yet have solid science on understanding exactly when and what classifications are accurate. And he very politely throws cold water on claims of near term full brain emulation and talks about us being decades or centuries from fully understanding the deep principles of the brain.

The Roomba, the floor cleaning robot from my previous company, iRobot, is perhaps the robot with the most volition and intention of any robots out there in the world. Most others are working in completely repetitive environments, or have a human operator providing the second by second volition for what they should do next.
....

http://www.rethinkrobotics.com/artifici ... ol-threat/
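
For concreteness, the "distinguish classes of inputs" capability Brooks describes is, at its simplest, a trained classifier. A toy sketch in Python/NumPy (an illustration only, not anything from the article or from Rethink Robotics) might look like this:

    # Toy sketch of "distinguishing classes of inputs": a logistic-regression
    # classifier trained by gradient descent on two synthetic clusters of points.
    # Real deep-learning systems stack many such units, but the idea is the same:
    # the machine "knows" a class only as a statistical boundary in its inputs.
    import numpy as np

    rng = np.random.default_rng(0)

    # Two synthetic classes: points clustered around (-1, -1) and (+1, +1).
    X = np.vstack([rng.normal(-1.0, 0.5, (100, 2)),
                   rng.normal(+1.0, 0.5, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)

    w, b, lr = np.zeros(2), 0.0, 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Gradient descent on the logistic (cross-entropy) loss.
    for _ in range(500):
        p = sigmoid(X @ w + b)              # predicted probability of class 1
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)

    # The trained model now "knows" which cluster a new point belongs to,
    # in the narrow statistical sense Brooks is talking about.
    print(sigmoid(np.array([0.9, 1.1]) @ w + b))    # near 1: class 1
    print(sigmoid(np.array([-1.2, -0.8]) @ w + b))  # near 0: class 0

Nothing in there gives the model goals, intent, or any account of what "catness" is; it only separates points in its input space, which is exactly Brooks' point.
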
Kenny A. Chaffin
Art Gallery - Photo Gallery - Writing&Poetry
"Strive on with Awareness" - Siddhartha Gautama

Re: Rodney Brooks: AI is a tool, not a threat

#2  Postby Rilx » Nov 11, 2014 8:08 pm

AI is to computer science as alchemy was to chemistry: the final goal is never reached, but along the way a lot of good methodology gets developed.
In life there are no solutions. There are forces in motion: those need to be created, and solutions follow.
- Antoine de Saint-Exupery, "Night Flight"

Re: Rodney Brooks: AI is a tool, not a threat

#3  Postby tuco » Nov 11, 2014 11:13 pm

From the article:

The computer knew better than the people who built them, so it started killing them. (Fortunately that doesn’t happen with most teenagers, who always know better than the parents who built them.)


Indeed, most. "Malevolent", in the spirit of the article, seems to require "intent", while "intent" itself is anecdotal. Robots can be dangerous to humans without "intent"; well, some of them can. Personally, I prefer malevolent but not dangerous over benevolent and dangerous .. robocops *cough*

When its acoustic sensors in its suction system hear dirt banging around in the air flow, it stops exploring and circles in that area over and over again until the dirt is gone, or at least until the banging around drops below a pre-defined threshold.


Like observing a black hole ;)
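
The behaviour in that quote is essentially a threshold-triggered control loop. A hypothetical sketch in Python (simulated sensor, made-up threshold; not iRobot's actual code) would be something like:

    # Hypothetical sketch of the "spot cleaning" behaviour described above:
    # explore until the acoustic dirt sensor crosses a threshold, then circle
    # that spot until the readings drop again. Sensor values are simulated.
    import random

    DIRT_THRESHOLD = 5  # assumed units: acoustic "dirt hits" per reading

    def read_dirt_sensor():
        # Stand-in for the microphone in the suction system.
        return random.randint(0, 10)

    def clean_floor(steps=20):
        for _ in range(steps):
            if read_dirt_sensor() > DIRT_THRESHOLD:
                # Stop exploring and circle here until the banging drops off.
                while read_dirt_sensor() > DIRT_THRESHOLD:
                    print("circling the dirty spot")
            else:
                print("exploring")

    clean_floor()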

Repeating from here:

You have a chessboard (8×8) plus a big box of dominoes (each 2×1). I use a marker pen to put an “X” in the squares at coordinates (1, 1) and (8, 8) – a pair of diagonally opposing corners.

Q: Is it possible to cover the remaining 62 squares using the dominoes without any of them sticking out over the edge of the board and without any of them overlapping? You cannot let the dominoes stand on their end.


http://puzzles.nigelcoldwell.co.uk/sixteen.htm
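
For anyone who wants the "realization" spelled out, the classic argument is a parity count, and a throwaway Python check (mine, not from the linked page) makes it concrete:

    # Every 2x1 domino covers one light and one dark square. The removed corners
    # (1,1) and (8,8) are the same colour, so the 62 remaining squares split
    # 32/30 between the colours and no domino tiling can cover them.
    light = dark = 0
    removed = {(1, 1), (8, 8)}
    for row in range(1, 9):
        for col in range(1, 9):
            if (row, col) in removed:
                continue
            if (row + col) % 2 == 0:
                dark += 1
            else:
                light += 1
    print(light, dark)  # 32 30: unequal, so the tiling is impossible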

Intent, want, know .. I call it realization.

