Atlas, The Next Generation

Anything that doesn't fit anywhere else below.

Moderators: kiore, Blip, The_Metatron

Re: Atlas, The Next Generation

#41  Postby Mazille » Feb 25, 2016 10:10 pm

"They're bureaucrats; I don't respect them."
- Pam.
- Yes?
- Get off the Pope.
Mazille
RS Donator
Posts: 19741
Age: 38
Male
Austria (at)

Re: Atlas, The Next Generation

#42  Postby John Platko » Feb 25, 2016 10:23 pm

ScholasticSpastic wrote:
John Platko wrote:You'd walk away even if Max Tegmark is giving the talk, as was this case? :what:

Sure. I can find better sources for his talks, without supporting the bullshit.


By all means, post a better source for Max sharing his ideas on the emergence of consciousness. It might help us figure out when kicking a robot might actually be cruel.
I like to imagine ...
John Platko
Name: John Platko
Posts: 9411
Male
Country: US
United States (us)

Re: Atlas, The Next Generation

#43  Postby ScholasticSpastic » Feb 26, 2016 12:10 am

John Platko wrote:By all means, post a better source for Max sharing his ideas on the emergence of consciousness. It might help us figure out when kicking a robot might actually be cruel.

No it would not. Cockroaches are more complex in terms of neural architecture than the Atlas robot. There simply isn't enough complexity for us to expect any sort of consciousness to emerge, and we've no reason to expect that, even if there were sufficient complexity, it would be of sufficient structure for consciousness.

A rock the size of a brain is just as complex as a brain. There's nothing special about complexity alone. There's nothing special about information alone.
"You have to be a real asshole to quote yourself."
~ ScholasticSpastic
ScholasticSpastic
Name: D-Money Sr.
Posts: 6354
Age: 48
Male
Country: Behind Zion's Curtain
United States (us)

Re: Atlas, The Next Generation

#44  Postby John Platko » Feb 26, 2016 1:35 am

ScholasticSpastic wrote:
John Platko wrote:By all means, post a better source for Max sharing his ideas on the emergence of consciousness. It might help us figure out when kicking a robot might actually be cruel.

No it would not. Cockroaches are more complex in terms of neural architecture than the Atlas robot. There simply isn't enough complexity for us to expect any sort of consciousness to emerge, and we've no reason to expect that, even if there were sufficient complexity, it would be of sufficient structure for consciousness.

A rock the size of a brain is just as complex as a brain. There's nothing special about complexity alone. There's nothing special about information alone.



Let's take this a bit more slowly.

Do you think cockroaches are conscious?
I like to imagine ...
John Platko
Name: John Platko
Posts: 9411
Male
Country: US
United States (us)

Re: Atlas, The Next Generation

#45  Postby CdesignProponentsist » Feb 26, 2016 2:34 am

ScholasticSpastic wrote:
John Platko wrote:By all means, post a better source for Max sharing his ideas on the emergence of consciousness. It might help us figure out when kicking a robot might actually be cruel.

No it would not. Cockroaches are more complex in terms of neural architecture than the Atlas robot. There simply isn't enough complexity for us to expect any sort of consciousness to emerge, and we've no reason to expect that, even if there were sufficient complexity, it would be of sufficient structure for consciousness.

A rock the size of a brain is just as complex as a brain. There's nothing special about complexity alone. There's nothing special about information alone.


I agree with you on robot consciousness. You have to design in consciousness. It would be like expecting a pocket calculator to suddenly play Call of Duty. It doesn't have the infrastructure or the software to do so. When robots become conscious, there will have been a concerted effort to make it so.

I disagree with the claim that a rock the size of a brain is just as complex as a brain. The measure of complexity of a system is the amount of information that is required to describe the system. A rock is much easier to describe than a brain. We're still struggling to even scratch the surface of adequately describing the brain.
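
For illustration, here's a minimal sketch of that definition of complexity, using off-the-shelf compression as a rough stand-in for "information required to describe the system" (the byte strings are obviously caricatures, not real data):

Code: Select all
import os
import zlib

# A "crystal-like" rock: one unit cell repeated over and over.
crystal = b"SiO2" * 25000             # 100,000 bytes raw

# A caricature of irregular, brain-like structure: random bytes.
irregular = os.urandom(100000)        # 100,000 bytes raw

# Same raw size, very different description length:
print(len(zlib.compress(crystal)))    # a few hundred bytes
print(len(zlib.compress(irregular)))  # ~100,000 bytes; barely compresses

The repetitive structure compresses to almost nothing; the irregular one doesn't compress at all. That gap is the sense in which the rock is "easier to describe".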
"Things don't need to be true, as long as they are believed" - Alexander Nix, CEO Cambridge Analytica
CdesignProponentsist
Posts: 12711
Age: 56
Male
Country: California
United States (us)

Re: Atlas, The Next Generation

#46  Postby John Platko » Feb 26, 2016 2:46 am

CdesignProponentsist wrote:
ScholasticSpastic wrote:
John Platko wrote:By all means, post a better source for Max sharing his ideas on the emergence of consciousness. It might help us figure out when kicking a robot might actually be cruel.

No it would not. Cockroaches are more complex in terms of neural architecture than the Atlas robot. There simply isn't enough complexity for us to expect any sort of consciousness to emerge, and we've no reason to expect that, even if there were sufficient complexity, it would be of sufficient structure for consciousness.

A rock the size of a brain is just as complex as a brain. There's nothing special about complexity alone. There's nothing special about information alone.


I agree with you on robot consciousness. You have to design in consciousness. It would be like expecting a pocket calculator to suddenly play Call of Duty. It doesn't have the infrastructure or the software to do so. When robots become conscious, there will have been a concerted effort to make it so.


I'm not following your Call of Duty comparison. I'm conscious but don't expect me suddenly, or ever, to function as a pocket calculator. Surely you can be limited in what you can do and still be conscious.

:scratch: Design consciousness? Can't it evolve?




I disagree with the claim that a rock the size of a brain is just as complex as a brain. The measure of complexity of a system is the amount of information that is required to describe the system. A rock is much easier to describe than a brain. We're still struggling to even scratch the surface of adequately describing the brain.


Yep.
I like to imagine ...
John Platko
Name: John Platko
Posts: 9411
Male
Country: US
United States (us)

Re: Atlas, The Next Generation

#47  Postby Macdoc » Feb 26, 2016 3:33 am

Are you confusing sentience with consciousness?

Almost any reactive intelligence is conscious... in other words, it can react purposefully to incoming data and respond.

Self-awareness, self-consciousness and sentience require more complexity from the neural net, so that an analogue of the external world can be built for comparison (the Bayesian brain):
https://en.wikipedia.org/wiki/Bayesian_ ... n_function
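
In miniature, the Bayesian brain idea is just iterated Bayes' rule over noisy sense data. A made-up toy (not any published model): an agent keeps a belief about a hidden state of the world and sharpens it with each sensor reading.

Code: Select all
def update(prior, p_if_true, p_if_false):
    # Bayes' rule: P(state | reading); all numbers invented for illustration
    evidence = prior * p_if_true + (1 - prior) * p_if_false
    return prior * p_if_true / evidence

belief = 0.5                               # prior: "the ground ahead is icy"
for sensor_fired in (True, True, False, True):
    if sensor_fired:
        belief = update(belief, 0.8, 0.3)  # sensor fires 80% of the time on ice
    else:
        belief = update(belief, 0.2, 0.7)
    print(round(belief, 3))                # belief tracks the accumulating evidence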

There is a spectrum of consciousness from simple to complex... no need to invoke mystical bone-throwing. What is surprising is the range of sophisticated behaviours that can derive from simple neural inputs and reactions.

Self-awareness so far is limited to some species and some robots.



http://www.iflscience.com/rise-machines ... gic-puzzle

Sentience is a whole other level.

Sentience - Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Sentience
Sentience is the capacity to feel, perceive, or experience subjectively. Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience).


In 1997 the concept of animal sentience was written into the basic law of the European Union. The legally binding protocol annexed to the Treaty of Amsterdam recognizes that animals are "sentient beings", and requires the EU and its member states to "pay full regard to the welfare requirements of animals".

The laws of several states include certain invertebrates such as cephalopods (octopuses, squids) and decapod crustaceans (lobsters, crabs) in the scope of animal protection laws, implying that these animals are also judged capable of experiencing pain and suffering.[7]

David Pearce is a British philosopher of the negative utilitarian school of ethics. He is most famous for his advocacy of the idea that there exists a strong ethical imperative for humans to work towards the abolition of suffering in all sentient beings.[8]

https://en.wikipedia.org/wiki/Sentience

then take that to the machine intelligence zone...= can o worms...


Bruce Duncan is “working on a longterm computer science project which is interested in the transfer of your mind to a robot.” With the LifeNaut Project he is collecting a “mindfile” consisting of your social memes as part of an ambitious plan to replicate your consciousness, and reanimate you in “biological, nano-technological and/or robotic bodies.” He demonstrates what’s been achieved so far with the extremely lifelike Bina48, the world’s most sentient robot, drawing her into a conversation on stage. The audience is delighted when she sometimes exhibits a mind of her own, and an attitude, when, for example, she interrupts Moses Znaimer, and like a seasoned politician, evades answering some questions altogether.


there is a video

http://www.ideacity.ca/video/bruce-dunc ... ot-bina48/

Interesting times
Travel photos > https://500px.com/macdoc/galleries
EO Wilson in On Human Nature wrote:
We are not compelled to believe in biological uniformity in order to affirm human freedom and dignity.
Macdoc
Posts: 17714
Age: 76
Male
Country: Canada/Australia
Australia (au)

Re: Atlas, The Next Generation

#48  Postby John Platko » Feb 26, 2016 4:52 am

Macdoc wrote:Are you confusing sentience with consciousness?


:scratch: I don't think so.

From the Wikipedia article on sentience:

In the philosophy of consciousness, sentience can refer to the ability of any entity to have subjective perceptual experiences, or as some philosophers refer to them, "qualia".[2] This is distinct from other aspects of the mind and consciousness, such as creativity, intelligence, sapience, self-awareness, and intentionality (the ability to have thoughts about something). Sentience is a minimalistic way of defining consciousness, which is otherwise commonly used to collectively describe sentience plus other characteristics of the mind.


I could imagine the robot having subjective perceptual experiences in addition to consciousness. For example, the robot might think the walk in the snow was easy or hard depending on how hard it had to work to keep its balance. And it might base future decisions and actions on that perception, i.e. go a different way next time. But my perception of taking that same walk would be very different - unless I had a few drinks first.



Almost any reactive intelligence is conscious... in other words, it can react purposefully to incoming data and respond.


So you think the robot is conscious?



Self-awareness, self-consciousness and sentience require more complexity from the neural net, so that an analogue of the external world can be built for comparison (the Bayesian brain):
https://en.wikipedia.org/wiki/Bayesian_ ... n_function

There is a spectrum of consciousness from simple to complex... no need to invoke mystical bone-throwing. What is surprising is the range of sophisticated behaviours that can derive from simple neural inputs and reactions.

Self-awareness so far is limited to some species and some robots.


I'm thinking the robot is self-aware and is building an analogue of the external world.





http://www.iflscience.com/rise-machines ... gic-puzzle

Sentience is a whole other level.

Sentience - Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Sentience
Sentience is the capacity to feel, perceive, or experience subjectively. Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience).


In 1997 the concept of animal sentience was written into the basic law of the European Union. The legally binding protocol annexed to the Treaty of Amsterdam recognizes that animals are "sentient beings", and requires the EU and its member states to "pay full regard to the welfare requirements of animals".

The laws of several states include certain invertebrates such as cephalopods (octopuses, squids) and decapod crustaceans (lobsters, crabs) in the scope of animal protection laws, implying that these animals are also judged capable of experiencing pain and suffering.[7]

David Pearce is a British philosopher of the negative utilitarian school of ethics. He is most famous for his advocacy of the idea that there exists a strong ethical imperative for humans to work towards the abolition of suffering in all sentient beings.[8]

https://en.wikipedia.org/wiki/Sentience

then take that to the machine intelligence zone...= can o worms...


Bruce Duncan is “working on a longterm computer science project which is interested in the transfer of your mind to a robot.” With the LifeNaut Project he is collecting a “mindfile” consisting of your social memes as part of an ambitious plan to replicate your consciousness, and reanimate you in “biological, nano-technological and/or robotic bodies.” He demonstrates what’s been achieved so far with the extremely lifelike Bina48, the world’s most sentient robot, drawing her into a conversation on stage. The audience is delighted when she sometimes exhibits a mind of her own, and an attitude, when, for example, she interrupts Moses Znaimer, and like a seasoned politician, evades answering some questions altogether.


there is a video

http://www.ideacity.ca/video/bruce-dunc ... ot-bina48/

Interesting times


That's interesting.

I think it's important not to judge consciousness just by human standards, though; I find that to be a bit of a prejudiced view, which is why I find Max Tegmark's ideas interesting. Even with my very limited understanding of what he is explaining, it gives me a different perspective on what it may mean to be conscious, and on how I may have to open my mind to perceive consciousness when it's going about its business making models of its experiences, and so on.

Perhaps someone here can put Max's ideas in some context.
I like to imagine ...
John Platko
Name: John Platko
Posts: 9411
Male
Country: US
United States (us)

Re: Atlas, The Next Generation

#49  Postby Macdoc » Feb 26, 2016 9:42 am

So you think the robot is conscious?

Absolutely


I'm thinking the robot is self-aware and is building an analogue of the external world.


That's what the Bayesian brain concept is about.

Keep in mind that conscious, self-conscious/self-aware and sentient are all behaviours of a neural net, which can be wetware or software/silicon.

Good overview of robot learning
http://www.theguardian.com/books/2016/f ... h-suggests

and as to sentience - from that piece

“We believe that a computer that can read and understand stories, can, if given enough example stories from a given culture, ‘reverse engineer’ the values tacitly held by the culture that produced them,” they write. “These values can be complete enough that they can align the values of an intelligent entity with humanity. In short, we hypothesise that an intelligent entity can learn what it means to be human by immersing itself in the stories it produces.”
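
Stripped to a caricature, the proposal is statistical: tally which actions the culture's stories reward and which they punish, and read the values off the tally. A toy sketch (nothing like the actual system described):

Code: Select all
from collections import Counter

# Invented mini-"stories": an action and how the story judges it.
stories = [
    ("share the food", "praised"),
    ("steal the food", "punished"),
    ("share the food", "praised"),
    ("lie to a friend", "punished"),
]

values = Counter()
for action, ending in stories:
    values[action] += 1 if ending == "praised" else -1

print(values.most_common())   # inferred ordering: sharing over stealing and lying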

Travel photos > https://500px.com/macdoc/galleries
EO Wilson in On Human Nature wrote:
We are not compelled to believe in biological uniformity in order to affirm human freedom and dignity.
Macdoc
Posts: 17714
Age: 76
Male
Country: Canada/Australia
Australia (au)

Re: Atlas, The Next Generation

#50  Postby Macdoc » Feb 26, 2016 11:10 am

This is an excellent read

Consciousness creep
Our machines could become self-aware without our knowing it. We need a better way to define and test for consciousness

by George Musser

https://aeon.co/essays/could-machines-h ... knowing-it
Travel photos > https://500px.com/macdoc/galleries
EO Wilson in On Human Nature wrote:
We are not compelled to believe in biological uniformity in order to affirm human freedom and dignity.
Macdoc
Posts: 17714
Age: 76
Male
Country: Canada/Australia
Australia (au)

Re: Atlas, The Next Generation

#51  Postby John Platko » Feb 26, 2016 1:08 pm

Macdoc wrote:
So you think the robot is conscious?

Absolutely


:thumbup:



I'm thinking the robot is self-aware and is building an analogue of the external world.


That's what the Bayesian brain concept is about.

Keep in mind that conscious, self-conscious/self-aware and sentient are all behaviours of a neural net, which can be wetware or software/silicon.


:scratch: But why bring "neural net" into the discussion? Perhaps other, non-neural-net ways of information processing are more intrinsic for a given conscious, self-conscious/self-aware, sentient entity.



Good overview of robot learning
http://www.theguardian.com/books/2016/f ... h-suggests

and as to sentience - from that piece

“We believe that a computer that can read and understand stories, can, if given enough example stories from a given culture, ‘reverse engineer’ the values tacitly held by the culture that produced them,” they write. “These values can be complete enough that they can align the values of an intelligent entity with humanity. In short, we hypothesise that an intelligent entity can learn what it means to be human by immersing itself in the stories it produces.”



Which makes it very disturbing to me that they are subjecting those robots to what seems to me to be abuse. Would they "teach" their toddler to learn to walk that way? I think not.
I like to imagine ...
John Platko
Name: John Platko
Posts: 9411
Male
Country: US
United States (us)

Re: Atlas, The Next Generation

#52  Postby Fenrir » Feb 26, 2016 1:17 pm

John Platko wrote:

Which makes it very disturbing to me that they are subjecting those robots to what seems to me to be abuse. Would they "teach" their toddler to learn to walk that way? I think not.


They are not teaching their robot to learn to walk, whatever that means.

They are testing their engineering and code in the hope of learning where it fails so they can improve it.

It's all front-loading.
Religion: it only fails when you test it.-Thunderf00t.
Fenrir
Posts: 4096
Male
Country: Australia
South Georgia and the South Sandwich Islands (gs)

Re: Atlas, The Next Generation

#53  Postby John Platko » Feb 26, 2016 3:02 pm

Fenrir wrote:
John Platko wrote:

Which makes it very disturbing to me that they are subjecting those robots to what seems to me to be abuse. Would they "teach" their toddler to learn to walk that way? I think not.


They are not teaching their robot to learn to walk, whatever that means.

They are testing their engineering and code in the hope of learning where it fails so they can improve it.

It's all front-loading.


Well, I would describe what is happening as teaching the robot to walk; it's just that they are themselves in the loop.

But let's drill down a bit deeper. Are you sure there are no self-learning mechanisms involved?

I'm reminded of some of the things I've seen about self-driving cars: how they are learning from the experiences they encounter on the road, and how they share those experiences with all the other cars so that each car doesn't have to experience the same thing to learn it. That is, cars are learning more quickly through shared experiences. Perhaps these robots are more basic than that, but my concern doesn't really change. If these entities have the ability to learn from experience and to share that experience - perhaps, at some point, in ways that are difficult to erase - then we should be careful how we treat them. And we should start being conscious of that before they reach the stage of being able to remember how we mistreat them.
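
The sharing mechanism itself is easy to sketch; this is purely illustrative, and no real self-driving stack is claimed to work this way:

Code: Select all
class Fleet:
    """A shared pool of lessons that every car reads from and writes to."""

    def __init__(self):
        self.shared = {}                   # situation -> learned response

    def report(self, situation, response):
        self.shared[situation] = response  # one car's experience...

    def act(self, situation):
        # ...changes every car's behaviour, for as long as the pool persists.
        return self.shared.get(situation, "slow down, fall back on defaults")

fleet = Fleet()
fleet.report("cyclist signalling left", "hold back and yield")

# A car that has never met a signalling cyclist still knows what to do:
print(fleet.act("cyclist signalling left"))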
I like to imagine ...
John Platko
Name: John Platko
Posts: 9411
Male
Country: US
United States (us)

Re: Atlas, The Next Generation

#54  Postby VazScep » Feb 26, 2016 4:33 pm

John Platko wrote:Which makes it very disturbing to me that they are subjecting those robots to what seems to me to be abuse. Would they "teach" their toddler to learn to walk that way? I think not.
It's not my field, but I'm told there are people in AI who would be very happy to hear you say this, because what they're aiming to create is not intelligence per se, but artificial objects that humans empathise with. One way to cheat this is to create something that looks cute. But Atlas certainly isn't that.

The thing you should worry about when it comes to abusing people and animals is their potential to turn round and bite you. This is something that worries some people about superintelligent AIs. It doesn't have to happen out of some propensity for vengeance; it may simply be a case of finding that the best solution to not falling from a kick is to act in a way which eliminates the possibility.

(I don't take such ideas seriously, but I probably don't have good reasons why).
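
That worry can be stated in a few lines. A deliberate caricature with invented numbers, not a claim about any real controller:

Code: Select all
# If the objective is only "don't fall", an optimiser is free to pick an
# action the designers never intended.
actions = {
    "brace":         0.30,   # probability of falling
    "step away":     0.10,
    "remove kicker": 0.00,   # scores best under the stated objective
}

def reward(fall_prob):
    return 1.0 - fall_prob   # nothing here says "leave the kicker alone"

best = max(actions, key=lambda a: reward(actions[a]))
print(best)                  # -> "remove kicker"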
Here we go again. First, we discover recursion.
VazScep
Posts: 4590
United Kingdom (uk)

Re: Atlas, The Next Generation

#55  Postby ScholasticSpastic » Feb 26, 2016 4:58 pm

CdesignProponentsist wrote:I disagree with the claim that a rock the size of a brain is just as complex as a brain. The measure of complexity of a system is the amount of information that is required to describe the system. A rock is much easier to describe than a brain. We're still struggling to even scratch the surface of adequately describing the brain.

I think our disagreement arises from how we're each thinking about complexity. I'm using complexity as a label for the amount of information it would take to describe each object (a brain and a rock) down to the same level. The human brain functions due to the interactions of neurons and neurotransmitters, so a description of the human brain which accounts for its functions would need to include information down to the molecular level for every bit of the brain. (For the sake of conversational simplicity, please allow the consideration of a human brain without its sensory connections throughout the rest of the body, and without hormonal and endocrine feedback, even though it's actually stupid to consider brains as isolated from those things.)

If we're treating our rock the same way, we must also include information down to the molecular level for every bit of the rock. If there are approximately the same number of atoms, the amount of information to describe their positions relative to each other should be about the same. There are exceptions: rocks will tend to contain smaller molecules than brains, which means we'd need to describe more molecules in the rock; and crystals consist of predictably arranged repeating units, so a description of a crystalline aggregate will be much, much simpler than a description of a brain to the same scale.

My point is that there are rocks which would take as much information to describe as brains, and thus in that sense those rocks are just as complex as a brain. But brains have FUNCTIONAL complexity whereas the only complexity rocks tend to have is that it could take a lot of information to describe them in detail.

Which goes back to the claim I was making: Being complex isn't significant. It's the WAY a thing is complex that's significant.
"You have to be a real asshole to quote yourself."
~ ScholasticSpastic
ScholasticSpastic
Name: D-Money Sr.
Posts: 6354
Age: 48
Male
Country: Behind Zion's Curtain
United States (us)

Re: Atlas, The Next Generation

#56  Postby DavidMcC » Feb 26, 2016 5:03 pm

VazScep wrote:Remember, if you're attacked by a robot, all you have to do is ask it whether "this sentence is false" is true. Not even Boston Dynamics has figured out how to protect their killing machines from this.

I wouldn't depend on that gambit, because there are simple ways to sidestep it; for example, the robot might simply not listen to what the intended victim says.
May The Voice be with you!
DavidMcC
Name: David McCulloch
Posts: 14913
Age: 70
Male
Country: United Kingdom
United Kingdom (uk)

Re: Atlas, The Next Generation

#57  Postby John Platko » Feb 26, 2016 5:11 pm

VazScep wrote:
John Platko wrote:Which makes it very disturbing to me that they are subjecting those robots to what seems to me to be abuse. Would they "teach" their toddler to learn to walk that way? I think not.
It's not my field, but I'm told there are people in AI who would be very happy to hear you say this, because what they're aiming to create is not intelligence per se, but artificial objects that humans empathise with. One way to cheat this is to create something that looks cute. But Atlas certainly isn't that.

The thing you should worry about when it comes to abusing people and animals is their potential to turn round and bite you.


That's one reason not to abuse animals. But on another level, if I habitually abuse animals I become insensitive to what I'm doing; it can even seem normal to treat animals that way. So in a sense, by abusing animals I'm also abusing myself. And I think it can be somewhat similar with these early forms of non-biologically based intelligence. These acts of abuse have ways of entering the permanent knowledge of the universe. For example, even if you wipe the memory banks of these robots, who knows how their descendants will look back on these acts when they watch them on YouTube. And then there's the issue of what children watching the videos today are learning about how to treat non-biological intelligent entities.




This is something that worries some people about superintelligent AIs. It doesn't have to happen out of some propensity for vengeance; it may simply be a case of finding that the best solution to not falling from a kick is to act in a way which eliminates the possibility.

(I don't take such ideas seriously, but I probably don't have good reasons why).


Yes, a robot, especially one trained for the military, could make a cold calculation: Bob keeps stopping me from completing my mission; I must complete my mission ...

I also wonder who is teaching these self-driving cars how to make decisions. What if they find themselves in a situation where they must hit either a dog or a child, when both run into the middle of the street at the same time? Will they always choose to hit the dog?
I like to imagine ...
John Platko
Name: John Platko
Posts: 9411
Male
Country: US
United States (us)

Re: Atlas, The Next Generation

#58  Postby ScholasticSpastic » Feb 26, 2016 5:21 pm

John Platko wrote:
I also wonder who is teaching these self-driving cars how to make decisions.

Phrases like this make me wonder if you remember that programming is a thing. One needn't teach a robot anything for it to exhibit a complex behavior.
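
A classic illustration is a Braitenberg vehicle: two fixed connections, no learning at all, and yet the behaviour looks purposeful. A toy sketch:

Code: Select all
# Two light sensors cross-wired to two wheels. Nothing is taught, nothing
# is learned, yet the robot reliably turns toward the light.

def wheel_speeds(left_sensor, right_sensor):
    # Crossed excitatory wiring: each sensor drives the opposite wheel.
    left_wheel, right_wheel = right_sensor, left_sensor
    return left_wheel, right_wheel

# Light source off to the robot's left: the left sensor reads stronger.
left, right = wheel_speeds(0.9, 0.2)
print(left, right)   # right wheel faster -> the robot veers left, toward the light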
"You have to be a real asshole to quote yourself."
~ ScholasticSpastic
ScholasticSpastic
Name: D-Money Sr.
Posts: 6354
Age: 48
Male
Country: Behind Zion's Curtain
United States (us)

Re: Atlas, The Next Generation

#59  Postby VazScep » Feb 26, 2016 5:24 pm

John Platko wrote:Yes, a robot, especially one trained for the military, could make a cold calculation: Bob keeps stopping me from completing my mission; I must complete my mission ...

I also wonder who is teaching these self-driving cars how to make decisions. What if they find themselves in a situation where they must hit either a dog or a child, when both run into the middle of the street at the same time? Will they always choose to hit the dog?
I never checked the threads, but there were a few on Hacker News asking "will Google's cars be programmed to kill you?" The question I'm sure was being asked is whether, given a scenario where a car can either crash into a brick wall or plough into a group of schoolchildren, it will (and should) have been designed to do the former. It's a cold calculus. A cold, cold calculus.
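
And the calculus really is that cold when written down; the contentious part is entirely in who sets the weights. Numbers invented for illustration:

Code: Select all
# Choose the manoeuvre with the lowest expected harm under some weighting.
options = {
    "brake straight": {"p_harm": 0.9, "weight": 1.0},  # toward the group
    "swerve to wall": {"p_harm": 0.8, "weight": 0.4},  # toward the occupant
}

def expected_harm(o):
    return o["p_harm"] * o["weight"]

choice = min(options, key=lambda k: expected_harm(options[k]))
print(choice)   # whoever chose the weights already made the moral decision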
Here we go again. First, we discover recursion.
VazScep
Posts: 4590
United Kingdom (uk)

Re: Atlas, The Next Generation

#60  Postby John Platko » Feb 26, 2016 5:44 pm

ScholasticSpastic wrote:
John Platko wrote:
I also wonder who is teaching these self-driving cars how to make decisions.

Phrases like this make me wonder if you remember that programming is a thing. One needn't teach a robot anything for it to exhibit a complex behavior.


:scratch: I'm not following you at all. Yes, self-driving cars are complex relative to a rock from a functional perspective.

from

Full Definition of programming

1 : the planning, scheduling, or performing of a program

2 a : the process of instructing or learning by means of an instructional program

b : the process of preparing an instructional program



Now that we have that taken care of, how are those cars being taught - instructed - to deal with the choice of running over a dog or a child? And what about two dogs and a cat vs. a child?
I like to imagine ...
John Platko
Name: John Platko
Posts: 9411
Male
Country: US
United States (us)
