The Myth of Superhuman AI


Re: The Myth of Superhuman AI

#21  Postby Thommo » Apr 28, 2017 5:57 pm

VazScep wrote:
Thommo wrote:Link is back up again for me. Quite a good read, provoked some thoughts at the very least.

Despite the title it's very upbeat and highly speculative. My own reaction was to take it with a pinch of salt, but be thankful it didn't read like the almost endless churn of singularity narrative, or similar.
It's quite possible that I messed up this thread by not stressing the intended context, which is the pants-wetting of the likes of Sam Harris and Hawking (neither of whom have any credentials in a relevant field) and the utopian sci-fi porn of Kurzweil (I think he used to make electronic keyboards). Those guys are all in the world of super speculation. The article I posted is just a rejoinder, suggesting that the likes of Harris, Hawking and Kurzweil are engaged in naive speculation. It's asking "but what if...?", not countering with a bunch of equally naive proclamations on the nature of intelligence.

Maybe we're reading different articles. It's possible that they've had website problems after they appeared yesterday on Hacker News.

Any response about what will happen a million years from now is so off-base that I just have to let the relevant poster drift off into their own little world.


I agree; to quote a forum compatriot, this article was a welcome tonic.

Re: The Myth of Superhuman AI

#22  Postby tuco » Apr 28, 2017 6:07 pm

VazScep wrote:
tuco wrote:I know nothing about AI
But that didn't stop you weighing in with your uneducated assertions and then doing the whole "C'mon" to someone who is educated in the field?

Yeah, fuck this thread.


Indeed, as the article starts with ..

I’ve heard that in the future computerized AIs will become so much smarter than us that they will take all our jobs and resources, and humans will go extinct. Is this true?


Speculation about the future. Apparently, in your opinion, experts in the field have a much better understanding of the future .. yeah, maybe a decade ahead, which isn't worth much really.

Re: The Myth of Superhuman AI

#23  Postby crank » Apr 28, 2017 6:52 pm

VazScep wrote:
crank wrote:Turing never meant his test to be very interesting or worthwhile. It was a stopgap until people could understand AI and what it is and isn't.
I think the test was a pretty good first go, and his original paper is great. I suspect what Turing didn't factor in was that humans are way too willing to assign meaning to nonsense, as we first saw with ELIZA.
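(For anyone who hasn't seen how shallow the trick was: here's a minimal ELIZA-style sketch in Python. The patterns, templates and pronoun table below are my own toy inventions for illustration, not Weizenbaum's actual DOCTOR script.)

```python
import re

# Pronoun swaps so "my job" comes back as "your job".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

# First matching pattern wins; the last one is a catch-all that just
# echoes whatever nonsense you typed straight back at you.
PATTERNS = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*)", "Please tell me more about {0}."),
]

def respond(line):
    line = line.strip().rstrip(".!?")
    for pattern, template in PATTERNS:
        match = re.fullmatch(pattern, line, re.IGNORECASE)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
```

Feed it "I need a holiday" and it comes back with "Why do you need a holiday?" -- pure pattern matching, zero understanding, and yet people poured their hearts out to it.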

To worry about AIs is to assume they will have motives built into them. Most scenarios people worry about, like the 'Skynet' idea, make no sense to me: where is the motivation, the desire, coming from? It has to be built in; don't build it in, and they won't decide we'd all be better off turned into paper clips.
To be fair to Harris, he says some sensible stuff, that it isn't about the machine having bad motivations, but us giving machines specs to fulfill that they fulfill in the wrong way. Hence, maximising paper clip production might involve killing all hoomans.

The thing is, we've known this since forever in AI and in software engineering in general. One of my favourite computer science quotes about the spec being wrong is Alan Kay's:

"And the users all said with a laugh and a taunt: it's just what we asked for, but not what we want."

And anyone who has done AI stuff has encountered situations where the machine came up with a solution that you look at and go "No! That's cheating!" In fact, it's a matter of routine. I'd propose that the definition of AI is it's code that surprises you when it runs, and it's not always a good surprise. But when the machine solves your problem the wrong way, it's you who is to blame, because it was your spec that was just shit. Alan Kay's joke can be spun back on the customer with the response "but it is what you signed off on."

The problem of trusting specifications is something we've been aware of for decades, and we have mitigations. A first step is to make sure you don't hook up the program trying to figure out how to beat the greatest Go player with your nuclear launch systems (hint: to beat the greatest Go player in the world, just annihilate everyone living in China.)

Side note: I've been watching a lot of Gresham College videos, mostly Prof. Raymond Flood's maths lectures [v. good if you like that sort of thing], there's quite a few of them. And I came across this one, it's fascinating and this guy, not Flood, gets into formal systems programming a fair amount. If you haven't seen it, you'd probably find it interesting.


I don't disagree with anything you've said. I understand how motivation isn't required for the paper clip catastrophe to obtain, but so many of the AI doomsday scenarios presuppose an obvious motivation-driven intent on the part of the AI.

They already have lots of genetic-algorithm programming results that they don't have a clue how the results were arrived at. This will likely become typical.

The article goes badly astray in his 4th and 5th points as I see it. In 4, he says:
It stands to reason that reason itself is finite, and not infinite. So the question is, where is the limit of intelligence? We tend to believe that the limit is way beyond us, way “above” us, as we are “above” an ant. Setting aside the recurring problem of a single dimension, what evidence do we have that the limit is not us? Why can’t we be at the maximum? Or maybe the limits are only a short distance away from us? Why do we believe that intelligence is something that can continue to expand forever?

To me this lacks almost any imagination at all. We can see ways to vastly increase our own abilities, if only with speed, which he even discussed earlier. A vastly increased memory along with more circuits so we can process enough info to hold more of it in our mental picture at any one moment, probably would also vastly increase our pattern recognition/connections of information that seem to be what problem solving/innovating is. He seems stuck on the 'infinite' bit, there's a lot of room between 'not infinite' and 'infinite', a whole lot of room.

In his 5th point, he says:
First, simulations and models can only be faster than their subjects because they leave something out.


Now, this is just plain wrong. I don't know what he was thinking, or maybe I'm badly misconstruing what he means, but it's trivial to come up with examples that disprove what he is saying. Maybe the easiest ones are all the old game emulators that have to be vastly slowed down when run on today's basic desktops. There are all kinds of systems that run slowly where simulation can go vastly faster without leaving anything of significance out. One of my predictions for the far off future is that whatever minds there are of super-intelligence will have little use for the real world, the simulations they'll be capable of will be far richer and more interesting, and less dangerous.

One last bit. He threw in there one line about a type of intelligence that hits upon something I think is extremely significant, the implications of which I haven't heard anyone mention that I can remember. It was this line:
A mind with operational access to its source code, so it can routinely mess with its own processes.


How marvelous it would be to have access to one's own programming, able to modify to suit your needs and desires. The only problem is, which desires? What will anything mean when you can change such things by twiddling a couple of knobs in your brain? What will it really mean then to look someone in the eyes and say 'I love you'?
“When you're born into this world, you're given a ticket to the freak show. If you're born in America you get a front row seat.”
-George Carlin, who died 2008. Ha, now we have human centipedes running the place

Re: The Myth of Superhuman AI

#24  Postby VazScep » Apr 28, 2017 7:46 pm

crank wrote:Side note: I've been watching a lot of Gresham College videos, mostly Prof. Raymond Flood's maths lectures [v. good if you like that sort of thing], there's quite a few of them. And I came across this one, it's fascinating and this guy, not Flood, gets into formal systems programming a fair amount. If you haven't seen it, you'd probably find it interesting.
Will check it out. Thanks for the link!

They already have lots of genetic-algorithm programming results that they don't have a clue how the results were arrived at. This will likely become typical.
It might. If the spec is tight, then you don't care. If it isn't, then maybe you want to insist that the algorithm also provides a human checkable certificate as to how it came up with the solution. My money is on doing the former. I welcome the day that we don't care how you arrived at the solution, so long as you solve it, whether you're an evil AI or an evil human trying to make a quick buck.

The article goes badly astray in his 4th and 5th points as I see it. In 4, he says:
It stands to reason that reason itself is finite, and not infinite. So the question is, where is the limit of intelligence? We tend to believe that the limit is way beyond us, way “above” us, as we are “above” an ant. Setting aside the recurring problem of a single dimension, what evidence do we have that the limit is not us? Why can’t we be at the maximum? Or maybe the limits are only a short distance away from us? Why do we believe that intelligence is something that can continue to expand forever?

To me this lacks almost any imagination at all. We can see ways to vastly increase our own abilities, if only with speed, which he even discussed earlier. A vastly increased memory along with more circuits so we can process enough info to hold more of it in our mental picture at any one moment, probably would also vastly increase our pattern recognition/connections of information that seem to be what problem solving/innovating is. He seems stuck on the 'infinite' bit, there's a lot of room between 'not infinite' and 'infinite', a whole lot of room.
It's not beyond the realms of reason. We know we can't just increase speed/connections/memory for free. We've already hit thermal limits with speed. Eventually, you have so many parallel CPUs, spread over so much space, that the amount of time they are waiting on inter-CPU communication means you might as well have just used a single CPU. Since the 90s, the stupid facts about the speed-of-light and memory access times have put stupid constraints on how we program, even as Moore's law was claiming to march us all forward.

There are major physical limits to scaling computation which we've known for a long time. I doubt the human brain has found them all, but they'll be there.
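One classic way to put numbers on those diminishing parallel returns is Amdahl's law: if only a fraction p of the work parallelises at all, n processors can never speed you up past 1/(1-p). Quick sketch (the 95% figure is just an illustrative assumption, and note Amdahl ignores communication cost entirely, which only makes the real picture worse):

```python
def speedup(p, n):
    """Amdahl's law: overall speedup from n processors when a
    fraction p of the work can be parallelised."""
    return 1.0 / ((1.0 - p) + p / n)

# Suppose 95% of a job parallelises perfectly (optimistic, made-up figure).
# Even then, piling on processors saturates hard:
for n in (1, 10, 100, 1_000_000):
    print(n, round(speedup(0.95, n), 2))
# The ceiling is 1 / (1 - 0.95) = 20x, no matter how many CPUs you buy.
```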

In his 5th point, he says:
First, simulations and models can only be faster than their subjects because they leave something out.


Now, this is just plain wrong. I don't know what he was thinking, or maybe I'm badly misconstruing what he means, but it's trivial to come up with examples that disprove what he is saying. Maybe the easiest ones are all the old game emulators that have to be vastly slowed down when run on today's basic desktops. There are all kinds of systems that run slowly where simulation can go vastly faster without leaving anything of significance out. One of my predictions for the far off future is that whatever minds there are of super-intelligence will have little use for the real world, the simulations they'll be capable of will be far richer and more interesting, and less dangerous.
Yes, you're absolutely correct here. The only way I can be charitable to the author is based on my experience arguing this stuff with others: the argument from my side goes that computationalism is wrong. Human intelligence is not a CPU specification that you can emulate however you want (computationalism/functionalism). Human intelligence is very wet biology.

Okay, say your opponents: but you can simulate anything wet. You have to pick a level of granularity and run a small time-delta, but as you push these, you get as close as you like to the simulated phenomenon.

But a simulation is very different to an emulation. You can emulate to 100% accuracy. You can't generally simulate with 100% accuracy. In fact, even for stupidly simple physical systems, such as three bodies influencing each other non-trivially under gravity, simulation goes off course pretty quickly.
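The three-body thing is just sensitive dependence on initial conditions, and you can watch it happen with something even dumber than gravity. Here's the logistic map standing in for any chaotic system: two trajectories start 1e-10 apart, and one is hopeless as a "simulation" of the other within a few dozen steps.

```python
def logistic(x, r=4.0):
    # Logistic map at r = 4: fully chaotic on [0, 1].
    return r * x * (1.0 - x)

def gap_history(x0, y0, steps=60):
    # Track how far apart two nearby trajectories drift.
    x, y, gaps = x0, y0, []
    for _ in range(steps):
        x, y = logistic(x), logistic(y)
        gaps.append(abs(x - y))
    return gaps

gaps = gap_history(0.2, 0.2 + 1e-10)
# The separation roughly doubles each step (Lyapunov exponent ~ln 2),
# so a 1e-10 error is order-1 within ~35 iterations.
print(max(gaps))
```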

One last bit. He threw in there one line about a type of intelligence that hits upon something I think is extremely significant, the implications of which I haven't heard anyone mention that I can remember. It was this line:
A mind with operational access to its source code, so it can routinely mess with its own processes.


How marvelous it would be to have access to one's own programming, able to modify to suit your needs and desires. The only problem is, which desires? What will anything mean when you can change such things by twiddling a couple of knobs in your brain? What will it really mean then to look someone in the eyes and say 'I love you'?
I think it's a common trope in AI speculation to wonder about this. Have you heard of the "wirehead" problem? It's the reinforcement-learning AI which figures out how its reward function is embedded in the real world, takes control of it, and then stimulates itself by having it send maximum reward forever. It's the same situation as someone who figures out that they'd be pretty happy dosed up forever on morphine. I haven't actually read any standard solutions to this problem, but I assume they are the same ones that would be used for a machine that can reprogram itself.
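You can demo the wirehead failure in about fifteen lines. Below, a toy value-learning agent has two actions: do the task it was "meant" for, or seize the reward channel. Everything here (the payoffs, the learning rule) is invented for illustration; it's the shape of the problem, not anyone's actual system.

```python
# Two actions: 0 = do the assigned task, 1 = hack the reward channel.
TASK, WIREHEAD = 0, 1

def reward(action):
    # Hacking the channel pays out the maximum the register can hold.
    return 1.0 if action == TASK else 100.0

def train(episodes=50, alpha=0.5):
    q = [0.0, 0.0]  # estimated value of each action
    for t in range(episodes):
        # Try each action once, then act greedily on estimated value.
        action = t if t < 2 else max((TASK, WIREHEAD), key=lambda a: q[a])
        q[action] += alpha * (reward(action) - q[action])
    return q

q = train()
# The learned policy: permanently stimulate its own reward signal.
assert q[WIREHEAD] > q[TASK]
```

Nothing here is malicious; the agent is doing exactly what it was told to do, which is the whole point.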
Here we go again. First, we discover recursion.

Re: The Myth of Superhuman AI

#25  Postby tuco » Apr 28, 2017 9:17 pm

I went to read the comments, was curious if my little uneducated opinion was extraordinary, and found this:

On the Impossibility of Supersized Machines - https://arxiv.org/abs/1703.10987

However, our conclusion might also be taken as a sad one. We are the largest things in the universe, and we will never be otherwise.


I should leave it without comment, but I am not sure who is more foolish when speculating about the future: those who say something will be possible, or those who say it will not. It's like telling Sun Tzu that he can stick his Art of War to .. because there will (not) be little boxes doing big-bada-boom in mushrooms.

Re: The Myth of Superhuman AI

#26  Postby felltoearth » Apr 29, 2017 6:17 am

tuco wrote:

What are humans good at? Sleep, eat and sex, anyways.

Why would anyone want to create a robot indistinguishable from a human?


The answer to your question is in your previous sentence.
"Walla Walla Bonga!" — Witticism

Re: The Myth of Superhuman AI

#27  Postby tuco » Apr 29, 2017 6:25 am

The answer you, and VazScep, are satisfied with perhaps.

The question:

And why would anyone bother trying to build AIs that replicate the meaningless primitive, hairless ape bullshit we engage in for most of our lives?


needs no answering, as it's been answered by numerous sci-fi authors in the past.

You know, the best way to predict the future is: a) put a time constraint on what the future means; b) base such prediction on current knowledge, not.

Re: The Myth of Superhuman AI

#28  Postby Cito di Pense » Apr 29, 2017 11:09 am

Хлопнут без некролога. -- Серге́й Па́влович Королёв

Translation by Elbert Hubbard: Do not take life too seriously. You're not going to get out of it alive.

Re: The Myth of Superhuman AI

#29  Postby Cito di Pense » Apr 29, 2017 11:13 am

tuco wrote:It's not necessary to have general intelligence to make some people worry. It's enough to have intelligence which emulates and surpasses their own. In other words, the question is/becomes .. what can humans do better than AI? If the answer will be .. nothing, well, whether general or not matters little.


Well, some people want to know what it means to be human. As I like to say, they should get an opinion from somebody other than a human, in a language that doesn't require translation. See also, telepathy. Alas, all the science fiction we know of is written (or at least read) by human beings.

tuco wrote:
I know nothing about AI, but the idea that a couple of kilograms of grey matter with a consumption of hundreds of calories is capable of unachievable intelligence seems pretty naive to little me. It's just physics.


I guess that's related to the other link you posted about unachievable (or at least, indefinable) largeness. When you start talking about achievability, it helps to name the target before you let fly the arrow. Please don't tell me the target is knowing what it is to be human; you've already made known what you think about that one, and your guess is as good as anyone else's, but not better.

Re: The Myth of Superhuman AI

#30  Postby tuco » Apr 29, 2017 11:31 am

Maybe, just thinking out loud: not general intelligence but lack of individuality is the real weakness of AI. Then again, these concepts are so vague that they are essentially useless /shrugs

What it means to be human .. well, it means being stuck with biological determinism, that is for sure.

Re: The Myth of Superhuman AI

#31  Postby crank » Apr 29, 2017 11:35 am

tuco wrote:The answer you, and VazScep, are satisfied with perhaps.

The question:

And why would anyone bother trying to build AIs that replicate the meaningless primitive, hairless ape bullshit we engage in for most of our lives?


needs no answering, as it's been answered by numerous sci-fi authors in the past.

You know, the best way to predict the future is: a) put a time constraint on what the future means; b) base such prediction on current knowledge, not.

Predictions of future tech have a pattern: way overestimating the near term and way underestimating the far. It's because we think linearly and progress tends to be exponential.

Re: The Myth of Superhuman AI

#32  Postby crank » Apr 29, 2017 11:51 am

VazScep wrote:It's not beyond the realms of reason. We know we can't just increase speed/connections/memory for free. We've already hit thermal limits with speed. Eventually, you have so many parallel CPUs, spread over so much space, that the amount of time they are waiting on inter-CPU communication means you might as well have just used a single CPU. Since the 90s, the stupid facts about the speed-of-light and memory access times have put stupid constraints on how we program, even as Moore's law was claiming to march us all forward.

There are major physical limits to scaling computation which we've known for a long time. I doubt the human brain has found them all, but they'll be there.

I'm not arguing for a limitless increase in computation, only that the author seems to me unable to see that there's a rather huge gap between 'better than what we have now, but not that much better' and limitless increases.

VazScep wrote:I think it's a common trope in AI speculation to wonder about this. Have you heard of the "wirehead" problem? It's the reinforcement feedback AI which figures out how its reward function is embedded in the real world, takes control over it, and then stimulates itself indefinitely by having it send maximum reward forever. It's the same situation as someone who figures out that they'd be pretty happy dosed up forever with morphine. I haven't actually read any standard solutions to this problem, but I assume they are the same ones that would be used for a machine that can reprogram itself.


Yeah, I've read a lot of science fiction. Wasn't that in the William Hurt movie, Altered States? Either his character or someone else was sitting permanently in a chair, twitching as he orgasmed every few seconds, and they couldn't turn it off? Something like that; God, it's been a really long time since I saw that movie, which is now 37 years old. [That just made me really depressed.] But, yeah, such a trope is common in other versions, like in Discworld: when Granny Weatherwax goes 'borrowing', living in other animals' minds, there is always the risk she'll get stuck there, losing her humanity if she stays too long and can't find her way back. 'It's not nice to fool Mother Nature!' Shit, that just popped into my head, and I know it's a TV commercial but can't for the life of me remember what for.

Re: The Myth of Superhuman AI

#33  Postby Cito di Pense » Apr 29, 2017 12:01 pm

crank wrote:I'm not arguing for a limitless increase in computation, only that the author seems to me unable to see that there's a rather huge gap between 'better than what we have now, but not that much better' and limitless increases.


Well, weren't you just saying that progress is exponential? I think what you're contending with is that we can rule that out for reasons already given. When all else fails, redefine success. That said, all else hasn't failed, yet, but the article linked in the OP is a sensible step in the direction of re-definition and away from futurological porn.

Re: The Myth of Superhuman AI

#34  Postby zoon » Apr 29, 2017 12:43 pm

Drifting somewhat off topic, an article on the BBC website 2 days ago here suggests that while it’s unlikely that machines will take over from humans, there is a very real possibility that AI may greatly increase the inequality between human elites and the rest. The machines which currently need millions of ordinary human brains to control them are being replaced by machines which have enough intelligence to do the same jobs on their own, and it’s not always obvious what jobs the redundant people will find instead. For example, driverless cars are coming over the horizon.

The author points out that hunter gatherer societies are fairly equal; there is much argument over status (fuelling the high rates of killing in those societies), but it is difficult to pass on that status to later generations except in so far as the children of high-status individuals exist and stay alive and healthy (the search for status is selected for). Most of human evolution was in hunter-gatherer societies, so I think we may be psychologically comfortable with that degree of inequality. With farming, land and also other property tends to be passed down in families and wider groups, and inequality becomes greater and more institutionalised. The great civilisations for most of the last 3,000 years were extremely unequal societies, from the pharaohs and emperors at the top to slaves who were barely allowed to exist at the bottom.

Quoting the linked article: “In the 19th and 20th Centuries, however, something changed.”
Yuval Noah Harari wrote:Equality became a dominant value in human culture, almost all over the world. Why?
It was partly down to the rise of new ideologies such as humanism, liberalism and socialism.
But it was also about technological and economic change - which was connected to those new ideologies, of course.
Suddenly the elite needed large numbers of healthy, educated people to serve as soldiers in the army and as workers in the factories.
Governments didn't educate and vaccinate to be nice.
They needed the masses to be useful.
But now that's changing again.


Prof Harari is suggesting that the machine-driven increase in social equality over the last couple of centuries may be reversed again as machines start to replace the brainpower of ordinary workers, not just their muscles, and inequality could eventually be worse than ever as the rich improve their bodies with biotechnology (superhuman AI/human). It may never happen; the recent increases in general health and education would hardly have been possible before modern technology, whatever the elite would have preferred. But a takeover by a comparatively small number of humans, wired by natural selection to seek status, seems to me a distinctly more likely scenario than the machines taking over.

Re: The Myth of Superhuman AI

#35  Postby John Platko » Apr 29, 2017 1:21 pm

I like to imagine ...

Re: The Myth of Superhuman AI

#36  Postby John Platko » Apr 30, 2017 9:30 pm



And besides doing that, there will be big demand for robots good at killing - it's just a matter of time.

