Hawking warns artificial intelligence could end mankind

Hawking finishes Terminator & Sarah Connor Chronicles box sets


Re: Hawking warns artificial intelligence could end mankind

#61  Postby DavidMcC » Dec 07, 2014 2:54 pm

Chrisw wrote:It's also worth bearing in mind that these ideas of AIs with intelligence far beyond human level assume continued exponential increases in computing power.

But everyone knows that there are fundamental limits to how far Moore's Law (the continual reduction in transistor area) can go and we aren't all that far from those fundamental limits now. In fact I'd argue that increases in computer performance over the last decade have been quite disappointing. Moore's Law won't just stop, but progress will progressively slow down as we approach the limits and this is already happening (e.g. clock speeds have barely increased in the last ten years).

I know Kurzweil likes to claim that there is some law of nature (law of accelerating returns) that says that new technologies will always come along to continue the exponential progress. I don't know what the reason for his faith in this is. It seems quite possible to me that computer technology could become a mature industry, like cars or aeroplanes, where progress is slow and incremental.

Maturing, it is!
Moore's Law more or less applied from the invention of the transistor in the '50s right through to ULSI chips.
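
A rough sketch of the doubling arithmetic being argued over here, as a toy Python calculation; the starting figures are ballpark assumptions, not measurements:

[code]
# Toy illustration: transistor counts kept roughly doubling every ~2 years,
# while clock speeds largely plateaued after the mid-2000s.
def doubled(start_value, start_year, year, period_years=2.0):
    """Project a quantity that doubles every `period_years`."""
    return start_value * 2 ** ((year - start_year) / period_years)

for year in (2004, 2009, 2014):
    transistors = doubled(1e8, 2004, year)   # ~10^8 transistors/chip in 2004 (assumed)
    clock_ghz = 3.0                          # clocks roughly flat over the same decade
    print(f"{year}: ~{transistors:.1e} transistors per chip, ~{clock_ghz} GHz")
[/code]
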
May The Voice be with you!

Re: Hawking warns artificial intelligence could end mankind

#62  Postby VazScep » Dec 07, 2014 3:12 pm

epepke wrote:Back in the day, we did a lot of work with general-purpose SIMD machines, an extremely promising architecture that I think would be good for AI if the hardware advanced.
Any links to this sort of architecture, and to how it compares with modern GPU architectures? The GPU's take on SIMD is where most of the recent performance gains on the PC have come from since the 2000s, and it's gone beyond graphics with CUDA.
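
A minimal sketch of the SIMD idea being asked about, using NumPy's whole-array operations as a stand-in (one instruction stream, many data elements); real CM-style SIMD hardware and CUDA differ in the details:

[code]
# The same multiply-add written element-at-a-time (what one conventional core
# does) and as a whole-array operation that a vector unit, a GPU, or a SIMD
# machine can apply to many elements at once.
import numpy as np

a = np.arange(100_000, dtype=np.float32)
b = np.ones(100_000, dtype=np.float32)

c_scalar = np.empty_like(a)
for i in range(a.size):          # scalar loop: one element per step
    c_scalar[i] = a[i] * 2.0 + b[i]

c_parallel = a * 2.0 + b         # data-parallel: same instruction, all elements

assert np.allclose(c_scalar, c_parallel)
[/code]
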
Here we go again. First, we discover recursion.

Re: Hawking warns artificial intelligence could end mankind

#63  Postby epepke » Dec 07, 2014 10:04 pm

VazScep wrote:
epepke wrote:Back in the day, we did a lot of work with general-purpose SIMD machines, an extremely promising architecture that I think would be good for AI if the hardware advanced.
Any links to this sort of architecture, and to how it compares with modern GPU architectures? The GPU's take on SIMD is where most of the recent performance gains on the PC have come from since the 2000s, and it's gone beyond graphics with CUDA.


We had a CM-2 with floating-point chips (http://en.wikipedia.org/wiki/Connection_Machine). We also had the graphics board, which let us do some real-time ray casting. (I tried to overclock the graphics board once, because the clock speed on the board in NTSC mode was not close enough to NTSC to synchronize the motor on a Sony write-once laserdisc. Yeah, it was that long ago.)

I went through a lot of papers on GPU parallelization, including the RealityEngine when it came out, which had something like 320 fragment engines. It's good stuff, but there has been a consistent limitation on GPUs since the beginning, which is that OpenGL is order-dependent. This makes things a bit trickier. The RealityEngine had this weird re-synchronizing feature. It's kind of sad to me, because Z-buffers still really suck big rocks, and there are far superior techniques such as depth-weaving. However, they don't map onto temporal ordering.

Other ways of doing things, such as the Pixel-Planes and Pixel Power architectures, which showed a lot of promise, are gone. That's kind of a shame, because that stuff was almost trivially parallelizable and easily scalable. We did some work on the CAVE, which of course used SGI hardware and GL/OpenGL. I came up with an idea for tiles that could be put together to make as large a CAVE as you like. It would have mapped well onto a Pixel Power architecture, but it maps very poorly onto OpenGL.

And now for something completely different:

Am I the only one with a mental picture of Stephen Hawking as a dalek?

Re: Hawking warns artificial intelligence could end mankind

#64  Postby kennyc » Dec 07, 2014 10:40 pm

epepke wrote:
Chrisw wrote:But everyone knows that there are fundamental limits to how far Moore's Law (the continual reduction in transistor area) can go and we aren't all that far from those fundamental limits now. In fact I'd argue that increases in computer performance over the last decade have been quite disappointing. Moore's Law won't just stop, but progress will progressively slow down as we approach the limits and this is already happening (e.g. clock speeds have barely increased in the last ten years).


Moore's Law has effectively stopped, and has been for about ten years now.


http://www.mooreslaw.org/
Kenny A. Chaffin
Art Gallery - Photo Gallery - Writing&Poetry
"Strive on with Awareness" - Siddhartha Gautama

Re: Hawking warns artificial intelligence could end mankind

#65  Postby kennyc » Dec 07, 2014 10:41 pm

Oops, double post.
Kenny A. Chaffin
Art Gallery - Photo Gallery - Writing&Poetry
"Strive on with Awareness" - Siddhartha Gautama

Re: Hawking warns artificial intelligence could end mankind

#66  Postby VazScep » Dec 08, 2014 8:57 am

Yes, Moore's Law is still going strong. What has long been over is the "free lunch", as explained in this article from back in 2005:

The Free Lunch Is Over.
Here we go again. First, we discover recursion.

Re: Hawking warns artificial intelligence could end mankind

#67  Postby twistor59 » Dec 08, 2014 9:35 am

epepke wrote:
Am I the only one with a mental picture of Stephen Hawking as a dalek?


No, back in the 80s his students used to affectionately refer to him as "Davros".
A soul in tension that's learning to fly
Condition grounded but determined to try
Can't keep my eyes from the circling skies
Tongue-tied and twisted just an earthbound misfit, I

Re: Hawking warns artificial intelligence could end mankind

#68  Postby Chrisw » Dec 08, 2014 11:11 am

kennyc wrote:
epepke wrote:
Chrisw wrote:But everyone knows that there are fundamental limits to how far Moore's Law (the continual reduction in transistor area) can go and we aren't all that far from those fundamental limits now. In fact I'd argue that increases in computer performance over the last decade have been quite disappointing. Moore's Law won't just stop, but progress will progressively slow down as we approach the limits and this is already happening (e.g. clock speeds have barely increased in the last ten years).


Moore's Law has effectively stopped, and has been for about ten years now.


http://www.mooreslaw.org/

Yes, I'm well aware of what Moore's Law is. That's why I distinguished between Moore's Law itself (the regular reductions in transistor area) and the regular increases in performance, which are related to but not identical with it.

In reality it's not quite that simple.* But my point was that even if you take Moore's Law to simply mean increases in transistor density, as is conventional, it will soon come to an end. That's bad enough for people who want to build a super-AI with godlike powers. But it's even worse than that: even now, while Moore's Law is still technically operating, gains in computer performance have virtually ground to a halt.

People like Kurzweil, Bostrom and Yudkowsky really need to take a reality check here. Apart from all the other problems with their ideas, there seems to be no reason to believe they will have the necessary computing hardware to accomplish their goals. I'd put off worrying about the robot apocalypse at least until we have some idea of what the hardware technology that could make such machines possible would look like. Right now it's like worrying about the implications of warp drives.


* The problem is, the fundamental limits to scaling are different for different parts of a chip, and some limits have already been reached. Technology generations are usually classed by transistor gate length, and reductions in this, all other things being equal, result in reductions in capacitance and hence increases in transistor switching speed (lower RC delays). But all other things are no longer equal: we have already reached single-atom gate-oxide thicknesses. So gates get smaller in area but no thinner with each generation. This doesn't give us the reductions in capacitance we are used to, so we don't get the increases in clock speed. Also, the smaller feature sizes require lower voltages which, beyond a certain point, interfere with the working of the transistors and cause large leakage currents to flow. These leakage currents cause greatly increased power consumption (and thus heat), though this can be somewhat controlled by raising threshold voltages, which makes the chip slower. I think it is reasonable to say that Moore's Law is breaking down. We can't simply scale things down and expect big performance gains like we used to. It's got very tricky.
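
The voltage-and-heat part of this footnote is essentially the classical (Dennard) scaling arithmetic; a toy Python version with illustrative numbers, not a device model:

[code]
# Dynamic power per transistor ~ C * V^2 * f. Under classical scaling by a
# factor s, C and V both shrink by s while f rises by s, so power density is
# flat. Once V can no longer be reduced, the same shrink makes power density
# grow as s^2, which is why clock speeds stalled.
def power_density(s, voltage_scales=True):
    C = 1.0 / s                              # capacitance per transistor
    V = 1.0 / s if voltage_scales else 1.0   # supply voltage
    f = s                                    # achievable clock frequency
    per_transistor = C * V**2 * f
    transistors_per_area = s**2              # s^2 more devices per unit area
    return per_transistor * transistors_per_area

for s in (1, 2, 4):
    print(s, power_density(s, True), power_density(s, False))
# prints: 1 1.0 1.0 / 2 1.0 4.0 / 4 1.0 16.0
[/code]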

Re: Hawking warns artificial intelligence could end mankind

#69  Postby kennyc » Dec 08, 2014 12:14 pm

VazScep wrote:
Yes, Moore's Law is still going strong. What has long been over is the "free lunch", as explained in this article from back in 2005:

The Free Lunch Is Over.


TANSTAAFL

and never has been. :lol:
Kenny A. Chaffin
Art Gallery - Photo Gallery - Writing&Poetry
"Strive on with Awareness" - Siddhartha Gautama

Re: Hawking warns artificial intelligence could end mankind

#70  Postby kennyc » Dec 08, 2014 1:03 pm

How Many Computers Does It Take to Think Like a Human?
...

http://www.occupycorporatism.com/home/m ... _medium=FB
Kenny A. Chaffin
Art Gallery - Photo Gallery - Writing&Poetry
"Strive on with Awareness" - Siddhartha Gautama

Re: Hawking warns artificial intelligence could end mankind

#71  Postby Nicko » Dec 08, 2014 2:38 pm

Nicko wrote:
epepke wrote:What you'd have to do in order to get a product to market in anywhere like a reasonable amount of time, I think, would be to grow bottom-up to an extent that hasn't been done, probably using top-down tricks as scaffolding for things like vision and hearing.


Makes sense. Gets you the inputs you need for the "brain" to start making connections.


I'd just like to revisit this, if I may.

First I'd like to confirm my impression of what you meant by "top-down tricks as scaffolding for things like vision and hearing". I take this to mean "plugging in" things like a preprogrammed image recognition system and a preprogrammed voice recognition system. Would this be correct?
"Democracy is asset insurance for the rich. Stop skimping on the payments."

-- Mark Blyth

Re: Hawking warns artificial intelligence could end mankind

#72  Postby epepke » Dec 08, 2014 4:25 pm

Nicko wrote:
Nicko wrote:
epepke wrote:What you'd have to do in order to get a product to market in anywhere like a reasonable amount of time, I think, would be to grow bottom-up to an extent that hasn't been done, probably using top-down tricks as scaffolding for things like vision and hearing.


Makes sense. Gets you the inputs you need for the "brain" to start making connections.


I'd just like to revisit this, if I may.

First I'd like to confirm my impression of what you meant by "top-down tricks as scaffolding for things like vision and hearing". I take this to mean "plugging in" things like a preprogrammed image recognition system and a preprogrammed voice recognition system. Would this be correct?


Yes, and feed the outputs of those into your neural thingie. Maybe not full image recognition, but at least everything through Area 7 of the visual cortex. We know how to do Gabor filters and color processing, and it cuts out a lot of grey matter, so maybe what's left won't be so intimidating.
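
For what it's worth, a minimal sketch of the kind of Gabor filter being referred to (a Gaussian envelope times an oriented sinusoid, the textbook model of early-visual-cortex simple cells); the parameter values are arbitrary illustration choices:

[code]
import numpy as np

def gabor_kernel(size=21, sigma=4.0, theta=0.0, wavelength=8.0, psi=0.0, gamma=0.5):
    """Gaussian envelope times an oriented cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotate into the filter's frame
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength + psi)
    return envelope * carrier

# Convolving an image with a bank of these at several orientations gives a
# crude orientation/edge map, roughly the hard-wired front end being discussed.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(len(bank), bank[0].shape)
[/code]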

By way of analogy, it's a bit like how we use preprocessors and lexical analyzers and feed tokens to compilers. Totally unnecessary, of course: an LALR(1) parser, let alone a stack machine or recursive descent, can do all that itself. Conceptually, though, it cuts out at least half of the complexity to do it that way.
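
To make the analogy concrete, here is a toy version: a separate tokenizer hands a flat token stream to a small recursive-descent parser. The split isn't necessary, it just keeps the parser simple (illustrative code, not anything from the thread):

[code]
import re

def tokenize(text):
    """Lexical analysis: turn '2 + 3 * (4 + 1)' into a list of tokens."""
    return re.findall(r"\d+|[+*()]", text)

class Parser:
    """Recursive-descent evaluator for '+', '*' and parentheses."""
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def expr(self):                  # expr := term ('+' term)*
        value = self.term()
        while self.peek() == "+":
            self.pos += 1
            value += self.term()
        return value

    def term(self):                  # term := factor ('*' factor)*
        value = self.factor()
        while self.peek() == "*":
            self.pos += 1
            value *= self.factor()
        return value

    def factor(self):                # factor := NUMBER | '(' expr ')'
        tok = self.peek()
        self.pos += 1
        if tok == "(":
            value = self.expr()
            self.pos += 1            # consume ')'
            return value
        return int(tok)

print(Parser(tokenize("2 + 3 * (4 + 1)")).expr())   # 17
[/code]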

I now realize I didn't say much about the CM-2. It was a 16-dimensional hypercube, and therefore had 64K processors, with some cross-bar switches for distant communication (because 16 cycles was a lot in those days), plus a parallel video card and disk storage.
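
A small sketch of the hypercube wiring being described: with 16 dimensions there are 2^16 = 65,536 processors, each processor is a 16-bit address, and its directly wired neighbours are the addresses that differ from it in exactly one bit:

[code]
DIMENSIONS = 16                          # 16-D hypercube -> 2**16 = 65,536 nodes

def neighbours(node):
    """The 16 directly wired neighbours of a node: flip each address bit."""
    return [node ^ (1 << bit) for bit in range(DIMENSIONS)]

def hops(a, b):
    """Minimum number of links between two nodes = Hamming distance."""
    return bin(a ^ b).count("1")

print(len(neighbours(0)))    # 16 links per processor
print(hops(0, 0xFFFF))       # worst case: 16 hops across the machine
[/code]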

It was kind of pretty, and it was used in the movie Die Hard, because it looked like people's idea of a big computer with blinkenlights:

[image: the Connection Machine CM-2]

The bar-looking thing held the disks, all 50 MB of them. That picture omits the small UNIX box that ran the actual code and wagged the dog. The screen is the frame buffer, which in our installation I insisted be located in another, quieter room, so we ran Teflon coax through the plenum. That turned out to be good, as the two huge chilled-water cooling units kept growing this brightly colored fudge that kept giving me Legionnaires' disease every couple of weeks. So for about a year, I worked a few days until I started coughing up bits of lung, then went home for a few days, which didn't do me much good, but what are you going to do when you have access to the fastest computer in the world?

The hardware was primitive, but these were the days when I had to have a 25 MHz machine bought for $45,000 just to do GL/OpenGL. What made it fantastic was the software. It was able to map any arbitrary graph onto the hypercube automatically, with results close to optimal. Here's Richard Feynman wearing the Thinking Machines T-shirt that shows the idea roughly:

[image: Richard Feynman in the Thinking Machines T-shirt]

I had one of those T-shirts, though mine was lavender, not black. But again, what are you going to do?

So at this point, of course, I'm thinking neurons. Trouble is, there wasn't much biological stuff going on at SCRI. I got to do some quantum organic chemistry and some stuff with neurons and glial cells in rat brains, but that was about it. Then Thinking Machines switched to coarse-grained multiprocessing with the CM-5, which was a bit like our RS/6000 cluster with Fibre Channel. Karl Sims did some great artificial life stuff with that one:



But the SIMD architecture petered out, and so did the culture of research as PCs became more popular. We all had to go out and get day jobs. I got sicker, which wiped me out financially because nobody would sell me insurance after the COBRA expired. I've hung on for a decade more, but now I'm wondering how to get $25 for a replacement power supply so that I can get back to developing, which I have to do because I'm pretty sure I'm never going to get another job, given the state of my teeth (which I haven't been able to afford to get fixed), not to mention the breaks in my so-called career due to getting sick.

At least I can have some fun thinking about this stuff, though in a few hours I'll probably snap back into a deep depression and be rude to the denizens of this forum. Again, though, what can you do?

Re: Hawking warns artificial intelligence could end mankind

#73  Postby Nicko » Dec 08, 2014 11:11 pm

epepke wrote:
Nicko wrote:
Nicko wrote:
epepke wrote:What you'd have to do in order to get a product to market in anywhere like a reasonable amount of time, I think, would be to grow bottom-up to an extent that hasn't been done, probably using top-down tricks as scaffolding for things like vision and hearing.


Makes sense. Gets you the inputs you need for the "brain" to start making connections.


I'd just like to revisit this, if I may.

First I'd like to confirm my impression of what you meant by "top-down tricks as scaffolding for things like vision and hearing". I take this to mean "plugging in" things like a preprogrammed image recognition system and a preprogrammed voice recognition system. Would this be correct?


Yes, and feed the outputs of those into your neural thingie. Maybe not full image recognition, but at least everything through Area 7 of the visual cortex. We know how to do Gabor filters and color processing, and it cuts out a lot of grey matter, so maybe what's left won't be so intimidating.


Okay.

But what you seemed to be saying is that it was infants trying to work out what all this random light and noise was all about - and doing stuff to try and affect it - that was how their brains developed the neural connections that produce intelligence.

That is, by trying to take a shortcut, we might actually be erecting a roadblock. By providing a premade "visual cortex", for example, would we not be depriving a potential machine intelligence of the very opportunity to work these things out for itself that would result in it achieving what we would recognise as intelligence?
"Democracy is asset insurance for the rich. Stop skimping on the payments."

-- Mark Blyth

Re: Hawking warns artificial intelligence could end mankind

#74  Postby Nicko » Dec 08, 2014 11:16 pm

epepke wrote:[image: Richard Feynman in the Thinking Machines T-shirt]

I had one of those T-shirts, though mine was lavender, not black. But again, what are you going to do?


Demand a black t-shirt?

Sounds like the least they fucking owe you.
"Democracy is asset insurance for the rich. Stop skimping on the payments."

-- Mark Blyth

Re: Hawking warns artificial intelligence could end mankind

#75  Postby epepke » Dec 09, 2014 8:15 pm

Nicko wrote:But what you seemed to be saying is that it was infants trying to work out what all this random light and noise was all about - and doing stuff to try and affect it - that was how their brains developed the neural connections that produce intelligence.

That is, by trying to take a shortcut, we might actually be erecting a roadblock. By providing a premade "visual cortex", for example, would we not be depriving a potential machine intelligence of the very opportunity to work these things out for itself that would result in it achieving what we would recognise as intelligence?


This is possible, of course. But that's specifically why I mentioned Area 7. Everything up through Area 7 is known from animal studies, and it's just the same in pretty much every mammal that has been studied. So it seems pretty likely to me that all that stuff is hard-wired and learning is minimal. There's corroborating evidence from, say, optical illusions, which if anything go way beyond Area 7 and are still common to people who can see, for the most part.

That at least suggests to me that these parts of the brain, though they evolved, don't really change much, if at all, as learning happens. And since they account for a lot of the brain and a huge amount of evolution, and we know how to simulate them easily, we can avoid quite a lot of unnecessary work by just building them the way we know how.

There are, of course, some hard-wired structures for facial recognition and facial-part recognition. (Which are different, and the upside-down smile simulation shows that pretty convincingly.) Maybe it would also be safe to hard-wire them, but I think that might be riskier, according to the objection you pointed out. We might eventually have to evolve them à la the Karl Sims thingie, as sort of an intermediate step between hard-wiring and individual learning, because I don't think we really understand them well enough yet. So it's unknown territory.

Still, I think, some visual stuff, motor stuff, and basic audio processing are fairly safe to try. Up to Area 7 of the visual cortex, up to a cochlear implant for hearing, and up to servo mechanisms (or their virtual equivalents) for motion. That I think is safe, and it would save an enormous amount of work. I also don't think that we have to simulate every damn electric potential and ion pump in detail, which is what is holding up the worm work. An approximation with a few bits per axon should be good enough. Maybe it won't be, but it's a way to get started, anyway.
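
One way to read the "few bits per axon" suggestion, as a purely illustrative sketch (made-up weights and inputs, not a claim about how it should be done): keep each axon's signal as a small integer instead of simulating membrane potentials and ion pumps.

[code]
import numpy as np

LEVELS = 8                                    # 3 bits per axon: values 0..7

def quantize(x):
    """Clip to [0, 1] and round onto the 8 coarse levels."""
    return np.round(np.clip(x, 0.0, 1.0) * (LEVELS - 1)).astype(np.uint8)

def neuron(inputs_3bit, weights, threshold=0.5):
    """Weighted sum of coarse inputs, logistic squashing, coarse output."""
    x = inputs_3bit.astype(float) / (LEVELS - 1)
    activation = 1.0 / (1.0 + np.exp(-(weights @ x - threshold)))
    return quantize(activation)

rng = np.random.default_rng(0)
inputs = quantize(rng.random(10))             # ten coarse incoming "axons"
weights = rng.normal(0.0, 0.5, size=10)
print(neuron(inputs, weights))                # a single 3-bit output value
[/code]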

Re: Hawking warns artificial intelligence could end mankind

#76  Postby home_ » Dec 09, 2014 11:51 pm

Why couldn't we build AI without continuing Moore's Law? My layman's guess is that there's a lot of redundant stuff in the brain that we don't need for AI. Lots of sensorimotor skills, for example, are not really needed for AI, or at least they can be made much more efficient than their biological counterparts. We would only need enough computing power to match a small fraction of the brain: just the part that is responsible for intelligence, not everything.

Re: Hawking warns artificial intelligence could end mankind

#77  Postby kennyc » Dec 10, 2014 12:01 am

home_ wrote:Why couldn't we build AI without continuing Moore's Law? My layman's guess is that there's a lot of redundant stuff in the brain that we don't need for AI. Lots of sensorimotor skills, for example, are not really needed for AI, or at least they can be made much more efficient than their biological counterparts. We would only need enough computing power to match a small fraction of the brain: just the part that is responsible for intelligence, not everything.


We are, we can. We have very simple rudimentary AI programs working right now.
Kenny A. Chaffin
Art Gallery - Photo Gallery - Writing&Poetry
"Strive on with Awareness" - Siddhartha Gautama

Re: Hawking warns artificial intelligence could end mankind

#78  Postby laklak » Dec 10, 2014 3:10 am

I can't help hearing his mechano-voice saying "Ar-ti-fi-cial in-tell-i-gence will de-stroy man-kind".

A man who carries a cat by the tail learns something he can learn in no other way. - Mark Twain
The sky is falling! The sky is falling! - Chicken Little
I never go without my dinner. No one ever does, except vegetarians and people like that - Oscar Wilde

Re: Hawking warns artificial intelligence could end mankind

#79  Postby kennyc » Dec 10, 2014 12:48 pm

And from Google chairman Eric Schmidt, it's all about cats:


Google chairman blasts "misguided" AI concerns

Eric Schmidt also adds weight to "the internet is made of cats" theory during New York talk
Google chairman Eric Schmidt has described concerns about the rise of artificial intelligence systems as “misguided”.

Speaking at the Financial Times Innovate America event in New York, Schmidt said people shouldn’t be overly concerned about automation and the development of AI technologies – such as self-driving cars and virtual assistants - leading to job losses.

“These concerns are normal,” he said, reported Wired. “They’re also to some degree misguided.”

Worries about computers and machines taking over jobs traditionally done by human beings have always existed, he conceded, but the move to embrace mechanisation has its benefits.

“Go back to the history of the loom. There was absolute dislocation, but I think all of us are better off with more mechanised ways of getting clothes made,” he said.

He also claimed that industries tend to thrive when they switch from man-made processes to machine-based ones.

“There’s lots of evidence that when computers show up, wages go up,” he explained.

“There’s lots of evidence that people who work with computers are paid more than people without.”

On a lighter note, he added considerable weight to the “internet is made of cats” theory by revealing the results of an experiment in which 11,000 hours of YouTube videos were fed into an artificial neural network to see what it could learn from them.

“It discovered the concept of ‘cat’,” he said. “I’m not quite sure what to say about that, except that’s where we are.”

....



http://www.itpro.co.uk/strategy/23691/g ... i-concerns
Kenny A. Chaffin
Art Gallery - Photo Gallery - Writing&Poetry
"Strive on with Awareness" - Siddhartha Gautama

Re: Hawking warns artificial intelligence could end mankind

#80  Postby DavidMcC » Dec 10, 2014 2:50 pm

kennyc wrote:
home_ wrote:Why couldn't we build AI without continuing Moore's Law? My layman's guess is that there's a lot of redundant stuff in the brain that we don't need for AI. Lots of sensorimotor skills, for example, are not really needed for AI, or at least they can be made much more efficient than their biological counterparts. We would only need enough computing power to match a small fraction of the brain: just the part that is responsible for intelligence, not everything.


We are, we can. We have very simple rudimentary AI programs working right now.

Sure, but the failure of Moore's Law means that they will not get as good as was once predicted on the assumption that the law would continue for a further decade or so.
May The Voice be with you!
