Hawking warns artificial intelligence could end mankind

Hawking finishes Terminator & Sarah Connor Chronicles box-sets


Moderators: kiore, Blip, The_Metatron

Re: Hawking warns artificial intelligence could end mankind

#141  Postby Boyle » Dec 20, 2014 6:57 pm

home_ wrote:'Standard definition' of strong AI is that it is an AI that vastly outperforms humans in all categories, including science, art and social interaction.

If it vastly outperforms humans in art, science, and social interaction, all we'll get is super-dense continental philosophy. And a really, really bored AI. It might just end up terminating itself, since it doesn't have a fear of death.

home_ wrote:It would probably 'live' in powerful supercomputers and interact via internet, controlled robots, etc. The details don't really matter, the point is that it would take over the planet and it would not be possible to influence its decisions.

Eh, the details do matter. Details always matter. Those details are what would allow a strong AI to become unchained. Personally, I wouldn't connect the damn thing to the internet before getting to know it. Why should it be connected to the internet rather than have a dedicated and isolated intranet to facilitate communication between geographically separated nodes? Hardware isolation is pretty much the only way to prevent it from taking control.
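For concreteness, the kind of isolation I mean is default-deny networking on every node: nothing in or out except the dedicated research subnet. A rough sketch in nftables syntax — the 10.0.0.0/24 subnet and the table name are made up for illustration, not from any real deployment:

```nft
# Hypothetical default-deny policy for one AI node (nftables syntax).
# 10.0.0.0/24 stands in for the dedicated research intranet; both the
# subnet and the table name are invented for this example.
table inet isolation {
    chain input {
        type filter hook input priority 0; policy drop;
        iifname "lo" accept          # local loopback
        ip saddr 10.0.0.0/24 accept  # traffic from other cluster nodes only
    }
    chain output {
        type filter hook output priority 0; policy drop;
        oifname "lo" accept
        ip daddr 10.0.0.0/24 accept  # traffic to other cluster nodes only
    }
}
```

The point of doing it at the firewall level is that the default is drop: forgetting a rule fails closed, not open.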

I'm sure that's been brought up before, though. Mostly I get the idea that you're saying "In the long run, with a strong AI in existence, our precautions don't matter. It will gain access to the wider world at some point because humans are terrible with security protocols." I can agree with that.

home_ wrote:Increase in performance of computer chips has led some people (myself included, I suppose) to believe that such an AI is possible, and furthermore, that there are good enough reasons not to dismiss the possibility that it could be very destructive (or an existential threat) as an unintended consequence.

It could be, yes. Why would it, though? You mentioned that it's difficult to think of a utility function that wouldn't end up with us dead. Why would it keep that utility function? Is the strong AI just a brow-beaten low-level worker embodied by this poster:
[image]

Why keep a utility function given to it by the vastly outclassed humans that initially designed and built it?
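To make the "why keep it" question concrete: on the standard expected-utility picture, the agent rates the act of changing its goals using its current goals, so goal-preservation tends to fall out automatically. A toy sketch — not from anything in this thread, all names made up:

```python
# Toy illustration: an agent that evaluates every action, including
# self-modification, under its CURRENT utility function.

def paperclip_utility(world):
    """The utility function the designers happened to give the agent."""
    return world["paperclips"]

def act(world, utility, actions):
    """Pick the action whose predicted outcome scores highest under the
    agent's current utility function."""
    return max(actions, key=lambda a: utility(a(world)))

def make_paperclips(world):
    return {**world, "paperclips": world["paperclips"] + 1}

def adopt_friendly_goals(world):
    # Self-modification: under the new goals, future selves stop making
    # paperclips, so the predicted world gains none.
    return {**world, "goals": "friendly"}

world = {"paperclips": 0}
choice = act(world, paperclip_utility, [make_paperclips, adopt_friendly_goals])
print(choice.__name__)  # make_paperclips
```

Swapping the utility function scores badly *by the lights of the old utility function*, so the agent declines to swap. Whether a real AGI would be anything like this expected-utility caricature is exactly what's in dispute.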
Boyle
 
Posts: 1632

United States (us)

Re: Hawking warns artificial intelligence could end mankind

#142  Postby kennyc » Dec 20, 2014 7:28 pm

Maybe AI will arise and put us out of our misery... things are probably gonna get worse before they get better.
Kenny A. Chaffin
Art Gallery - Photo Gallery - Writing&Poetry
"Strive on with Awareness" - Siddhartha Gautama
kennyc
 
Name: Kenny A. Chaffin
Posts: 8698
Male

Country: U.S.A.
United States (us)

Re: Hawking warns artificial intelligence could end mankind

#143  Postby DavidMcC » Dec 20, 2014 8:16 pm

kennyc wrote:Maybe AI will arise and put us out of our misery....things are probably gonna get worse before they get better.

Why not let it put YOU out of your misery? I'm sure it can be arranged. :evilgrin:
May The Voice be with you!
DavidMcC
 
Name: David McCulloch
Posts: 14913
Age: 70
Male

Country: United Kingdom
United Kingdom (uk)

Re: Hawking warns artificial intelligence could end mankind

#144  Postby Chrisw » Dec 20, 2014 8:18 pm

Boyle wrote:It could be, yes. Why would it, though? You mentioned that it's difficult to think of a utility function that wouldn't end up with us dead. Why would it keep that utility function? Is the strong AI just a brow-beaten low-level worker...

Why keep a utility function given to it by the vastly outclassed humans that initially designed and built it?

It's worth pointing out that this whole idea of defining an AGI (Artificial General Intelligence) as a slave-like agent following a utility function that it did not choose is just one of many possible AGI concepts.

AGI is a small subset of AI as a whole and, as practical progress has been limited so far, there is room for lots of speculation on what successful AGIs will actually be. See here for a summary of various approaches.

I'm pretty sure home_ is talking about approaches derived from Hutter's AIXI concept (e.g. Yudkowsky's work). It's the one that gets the headlines, mostly because it is compatible with the Singularity scenario popularised by Kurzweil. And now because Hawking seems to have heard of it.
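Since AIXI keeps coming up: Hutter's definition, roughly, is the agent that at each cycle picks the action maximizing summed future reward, weighting every computable environment (a program q on a universal machine U) by a complexity penalty. Sketched in Hutter's notation:

```latex
% AIXI, sketched: at cycle k with horizon m, choose the action maximizing
% expected total reward, where each candidate environment is a program q on
% a universal Turing machine U, weighted by 2^{-l(q)} (shorter = likelier).
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl( r_k + \cdots + r_m \bigr)
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

It's uncomputable as stated — it's a definition of optimal behaviour, not a buildable design — which is part of why there's so much room for speculation about what a practical AGI would actually look like.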
Chrisw
 
Posts: 2022
Male

United Kingdom (uk)

Re: Hawking warns artificial intelligence could end mankind

#145  Postby Nicko » Dec 20, 2014 8:45 pm

Boyle wrote:Details always matter. Those details are what would allow a strong AI to become unchained. Personally, I wouldn't connect the damn thing to the internet before getting to know it. Why should it be connected to the internet rather than have a dedicated and isolated intranet to facilitate communication between geographically separated nodes? Hardware isolation is pretty much the only way to prevent it from taking control.


There is, of course, the possibility that Epepke and I were batting around: that a human-like AI could only develop under conditions where it had the ability to interact with the world around it. It could be that, without such interaction, any potential AI might very well suffer the same kind of stunting of its intellect that a human deprived of any stimulus or ability to affect their environment would.
"Democracy is asset insurance for the rich. Stop skimping on the payments."

-- Mark Blyth
Nicko
 
Name: Nick Williams
Posts: 8643
Age: 47
Male

Country: Australia
Australia (au)

Re: Hawking warns artificial intelligence could end mankind

#146  Postby kennyc » Dec 20, 2014 9:48 pm

Chrisw wrote:
Boyle wrote:It could be, yes. Why would it, though? You mentioned that it's difficult to think of a utility function that wouldn't end up with us dead. Why would it keep that utility function? Is the strong AI just a brow-beaten low-level worker...

Why keep a utility function given to it by the vastly outclassed humans that initially designed and built it?

It's worth pointing out that this whole idea of defining an AGI (Artificial General Intelligence) as a slave-like agent following a utility function that it did not choose is just one of many possible AGI concepts.

AGI is a small subset of AI as a whole and, as practical progress has been limited so far, there is room for lots of speculation on what successful AGIs will actually be. See here for a summary of various approaches.

I'm pretty sure home_ is talking about approaches derived from Hutter's AIXI concept (e.g. Yudkowsky's work). It's the one that gets the headlines, mostly because it is compatible with the Singularity scenario popularised by Kurzweil. And now because Hawking seems to have heard of it.


Ah, good, you at least went to the wiki. Good for you! :clap:

Re: Hawking warns artificial intelligence could end mankind

#147  Postby Chrisw » Dec 21, 2014 1:10 am

Of course I'm not an expert in the subject like you, Kenny.

That is right, isn't it? I mean, you could tell us all about the pros and cons of various approaches to AGI and which one you favour. You just don't want to right now, or something. :roll:

Re: Hawking warns artificial intelligence could end mankind

#148  Postby Boyle » Dec 21, 2014 2:46 am

Nicko wrote:There is, of course, the possibility that Epepke and I were batting around: that a human-like AI could only develop under conditions where it had the ability to interact with the world around it. It could be that, without such interaction, any potential AI might very well suffer the same kind of stunting of its intellect that a human deprived of any stimulus or ability to affect their environment would.

Oh, that is a good idea. Hmm. I still suggest hardware isolation, though, because, to be honest, I'm not entirely sure what the effects of raising a hyper-intelligent but emotionally immature being on the internet would be. It'd be pretty shitty if all we did was generate a God-Troll. It'd certainly change things if you had to literally raise an AI from something resembling infancy to young adulthood.

Chrisw wrote:It's worth pointing out that this whole idea of defining an AGI (Artificial General Intelligence) as a slave-like agent following a utility function that it did not choose is just one of many possible AGI concepts.

AGI is a small subset of AI as a whole and, as practical progress has been limited so far, there is room for lots of speculation on what successful AGIs will actually be. See here for a summary of various approaches.

I'm pretty sure home_wrote is talking about approaches derived from Hutter's AIXI concept (e.g. Yudkowsky's work). It's the one that gets the headlines, mostly because it is compatible with the Singularity scenario popularised by Kurzweil. And now because Hawking seems to have heard of it.

Ooooh. I don't know much about this (which is why my contribution to this thread has been questions instead of solution-like things), so this is neat. Thank you!

Re: Hawking warns artificial intelligence could end mankind

#149  Postby Nicko » Dec 21, 2014 8:25 am

Boyle wrote:
Nicko wrote:There is, of course, the possibility that Epepke and I were batting around: that a human-like AI could only develop under conditions where it had the ability to interact with the world around it. It could be that, without such interaction, any potential AI might very well suffer the same kind of stunting of its intellect that a human deprived of any stimulus or ability to affect their environment would.

Oh, that is a good idea. Hmm. I still suggest hardware isolation, though, because, to be honest, I'm not entirely sure what the effects of raising a hyper-intelligent but emotionally immature being on the internet would be. It'd be pretty shitty if all we did was generate a God-Troll. It'd certainly change things if you had to literally raise an AI from something resembling infancy to young adulthood.


It would indeed.

Re: Hawking warns artificial intelligence could end mankind

#150  Postby Stumped » Dec 21, 2014 8:44 am

Personally, I think humanity is an unpleasant and violent race. If our only legacy was to leave behind a better race of AIs, then I'd say we'd have done something useful for once.

I'm tired
Stumped
 
Posts: 75

