ScholasticSpastic wrote:John Platko wrote:By all means, post a better source for Max sharing his ideas on the emergence of consciousness. It might help us figure out when kicking a robot might actually be cruel.
No it would not. Cockroaches are more complex in terms of neural architecture than the Atlas robot. There simply isn't enough complexity for us to expect any sort of consciousness to emerge, and we've no reason to expect that, even if there were sufficient complexity, it would be of sufficient structure for consciousness.
A rock the size of a brain is just as complex as a brain. There's nothing special about complexity alone. There's nothing special about information alone.
CdesignProponentsist wrote:ScholasticSpastic wrote:John Platko wrote:By all means, post a better source for Max sharing his ideas on the emergence of consciousness. It might help us figure out when kicking a robot might actually be cruel.
No it would not. Cockroaches are more complex in terms of neural architecture than the Atlas robot. There simply isn't enough complexity for us to expect any sort of consciousness to emerge, and we've no reason to expect that, even if there were sufficient complexity, it would be of sufficient structure for consciousness.
A rock the size of a brain is just as complex as a brain. There's nothing special about complexity alone. There's nothing special about information alone.
I agree with you on robot consciousness. You have to design in consciousness. It would be like expecting a pocket calculator to suddenly play Call of Duty. It doesn't have the infrastructure or the software to do so. When robots become conscious, there will have been a concerted effort to make it so.
I disagree with the claim that a rock the size of a brain is just as complex as a brain. The measure of complexity of a system is the amount of information that is required to describe the system. A rock is much easier to describe than a brain. We're still struggling to even scratch the surface of adequately describing the brain.
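The description-length point can be illustrated crudely in a few lines of code. This is just a sketch of my own, not a metric anyone in the thread is using: it takes zlib's compressed size as a rough stand-in for "the amount of information required to describe the system" (true Kolmogorov complexity is uncomputable), with a uniform block of bytes playing the rock and varied bytes playing the brain.

import zlib
import random

def description_cost(data: bytes) -> int:
    """Bytes needed for a compressed description of the data."""
    return len(zlib.compress(data, 9))

# A "rock": highly regular, so its description compresses to almost nothing.
rock = bytes([7] * 100_000)

# A stand-in "brain": varied, hard-to-predict detail resists compression.
# (Random bytes overstate the case, since a real brain is structured rather
# than random, but they make the point that regularity, not sheer size,
# determines how long the description has to be.)
random.seed(0)
brain = bytes(random.randrange(256) for _ in range(100_000))

print("rock :", description_cost(rock), "bytes to describe")
print("brain:", description_cost(brain), "bytes to describe")

Both objects are the same "size", yet the regular one needs only a tiny description, which is the sense in which a brain is more complex than a brain-sized rock.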
Macdoc wrote:Are you confusing sentience with consciousness?
In the philosophy of consciousness, sentience can refer to the ability of any entity to have subjective perceptual experiences, or as some philosophers refer to them, "qualia".[2] This is distinct from other aspects of the mind and consciousness, such as creativity, intelligence, sapience, self-awareness, and intentionality (the ability to have thoughts about something). Sentience is a minimalistic way of defining consciousness, which is otherwise commonly used to collectively describe sentience plus other characteristics of the mind.
Almost any reactive intelligence is conscious; in other words, it can react purposefully to incoming data and respond.
Self-awareness, self-consciousness and sentience require more complexity from the neural net, so that an analogue of the external world can be built for comparison (the Bayesian brain).
https://en.wikipedia.org/wiki/Bayesian_ ... n_function
There is a spectrum of consciousness from simple to complex... no need to invoke mystical bone throwing. What is surprising is the range of sophisticated behaviours that can derive from simple neural inputs and reactions.
Self-awareness has so far been demonstrated only in some species and some robots.
http://www.iflscience.com/rise-machines ... gic-puzzle
Sentience is a whole nother level.
Sentience - Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Sentience
Sentience is the capacity to feel, perceive, or experience subjectively. Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience).
In 1997 the concept of animal sentience was written into the basic law of the European Union. The legally binding protocol annexed to the Treaty of Amsterdam recognizes that animals are "sentient beings", and requires the EU and its member states to "pay full regard to the welfare requirements of animals".
The laws of several states include certain invertebrates such as cephalopods (octopuses, squids) and decapod crustaceans (lobsters, crabs) in the scope of animal protection laws, implying that these animals are also judged capable of experiencing pain and suffering.[7]
David Pearce is a British philosopher of the negative utilitarian school of ethics. He is most famous for his advocation of the idea that there exists a strong ethical imperative for humans to work towards the abolition of suffering in all sentient beings.[8]
then take that to the machine intelligence zone... = can o' worms...
Bruce Duncan is “working on a longterm computer science project which is interested in the transfer of your mind to a robot.” With the LifeNaut Project he is collecting a “mindfile” consisting of your social memes as part of an ambitious plan to replicate your consciousness, and reanimate you in “biological, nano-technological and/or robotic bodies.” He demonstrates what’s been achieved so far with the extremely lifelike Bina48, the world’s most sentient robot, drawing her into a conversation on stage. The audience is delighted when she sometimes exhibits a mind of her own, and an attitude, when, for example, she interrupts Moses Znaimer, and like a seasoned politician, evades answering some questions altogether.
there is a video
http://www.ideacity.ca/video/bruce-dunc ... ot-bina48/
Interesting times
So you think the robot is conscious?
Consciousness creep
Our machines could become self-aware without our knowing it. We need a better way to define and test for consciousness
by George Musser
I'm thinking the robot is self aware and is building an analogue of the external world.
That's what The Bayesian Brain concept is about.
Keep in mind that conscious, self-conscious/self-aware, and sentient are all behaviours of a neural net, which can be wetware or software/silicon.
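For anyone who hasn't met the Bayesian brain idea mentioned above, here is a toy sketch (my own illustration, not taken from the linked articles): the agent never sees the world directly, only noisy readings, and keeps an internal Gaussian belief that it updates with Bayes' rule, i.e. it builds and refines an analogue of the external world.

import random

def bayes_update(prior_mean, prior_var, reading, sensor_var):
    """Gaussian-Gaussian update: weight the reading by the relative precisions."""
    k = prior_var / (prior_var + sensor_var)   # how much to trust the new reading
    post_mean = prior_mean + k * (reading - prior_mean)
    post_var = (1 - k) * prior_var
    return post_mean, post_var

random.seed(1)
true_position = 5.0                      # the actual state of the external world
belief_mean, belief_var = 0.0, 100.0     # vague initial internal model
sensor_var = 4.0                         # noisy "senses"

for step in range(10):
    reading = random.gauss(true_position, sensor_var ** 0.5)
    belief_mean, belief_var = bayes_update(belief_mean, belief_var, reading, sensor_var)
    print(f"step {step}: belief {belief_mean:.2f} +/- {belief_var:.2f}")

The "analogue" here is just the pair (belief_mean, belief_var); as readings accumulate it converges on true_position even though the agent never observes the world directly.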
Good overview of robot learning
http://www.theguardian.com/books/2016/f ... h-suggests
and as to sentience - from that piece:
“We believe that a computer that can read and understand stories, can, if given enough example stories from a given culture, ‘reverse engineer’ the values tacitly held by the culture that produced them,” they write. “These values can be complete enough that they can align the values of an intelligent entity with humanity. In short, we hypothesise that an intelligent entity can learn what it means to be human by immersing itself in the stories it produces.”
Fenrir wrote:John Platko wrote:
Which, makes it very disturbing to me that they are subjecting those robots to what seems to me to be abuse. Would they "teach" their toddler to learn to walk that way? I think not.
They are not teaching their robot to learn to walk, whatever that means.
They are testing their engineering and code in the hope of learning where it fails so they can improve it.
It's all front-loading.
John Platko wrote:Which, makes it very disturbing to me that they are subjecting those robots to what seems to me to be abuse. Would they "teach" their toddler to learn to walk that way? I think not.
It's not my field, but I'm told there are people in AI who would be very happy to hear you say this, because what they're aiming to create is not intelligence per se, but artificial objects with which humans express empathy. One way to cheat this is to create something that looks cute. But Atlas certainly isn't that.
VazScep wrote:Remember, if you're attacked by a robot, all you have to do is ask it whether "this sentence is false" is true. Not even Boston Dynamics has figured out how to protect their killing machines from this.
VazScep wrote:John Platko wrote:Which, makes it very disturbing to me that they are subjecting those robots to what seems to me to be abuse. Would they "teach" their toddler to learn to walk that way? I think not.
It's not my field, but I'm told there are people in AI who would be very happy to hear you say this, because what they're aiming to create is not intelligence per se, but artificial objects with which humans express empathy. One way to cheat this is to create something that looks cute. But Atlas certainly isn't that.
The thing you should worry about when it comes to abusing people and animals is their potential to turn round and bite you.
This is something that worries some people about superintelligent AIs. It doesn't have to happen out of some propensity for vengeance; it's more a case of the AI finding that the best solution to not falling over from a kick is to act in a way which eliminates the possibility of being kicked.
(I don't take such ideas seriously, but I probably don't have good reasons why).
John Platko wrote:I also wonder who is teaching these self driving cars how to make decisions. What if they find themselves in a situation where they either hit a dog, or a child that runs into the middle of the street at the same time. Will they always choose to hit the dog?
John Platko wrote:Yes, a robot, especially one trained for the military, could make a cold calculation that Bob keeps stopping me from completing my mission, I must complete my mission ...
I never checked the threads, but there were a few on Hacker News asking "will Google's cars be programmed to kill you?" The question I'm sure was being asked is whether, given a scenario where a car can either crash into a brick wall or plough into a group of school-children, it will do the former, and whether it should have been designed to. It's a cold calculus. A cold, cold calculus.
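Nobody in the thread says how such a choice would actually be coded, and this is certainly not how Google does it; purely to make the "cold calculus" concrete, here is a toy expected-harm chooser. The manoeuvres, probabilities and cost weights are all invented for the example.

# Toy "cold calculus": pick the manoeuvre with the lowest expected harm.
HARM_COST = {"occupant_injury": 10, "child_injury": 100, "dog_injury": 5}

def expected_harm(outcomes):
    """outcomes: list of (probability, harm_type) pairs for one manoeuvre."""
    return sum(p * HARM_COST[harm] for p, harm in outcomes)

manoeuvres = {
    "brake hard and hit the wall": [(0.9, "occupant_injury")],
    "swerve toward the child":     [(0.8, "child_injury")],
    "swerve toward the dog":       [(0.8, "dog_injury"), (0.1, "occupant_injury")],
}

choice = min(manoeuvres, key=lambda m: expected_harm(manoeuvres[m]))
for m, outcomes in manoeuvres.items():
    print(f"{m}: expected harm {expected_harm(outcomes):.1f}")
print("chosen:", choice)

Whether a table like this should ever weigh a dog against a child, and who gets to set the numbers, is exactly the disturbing part of the question.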
Full Definition of programming
1: the planning, scheduling, or performing of a program
2 a: the process of instructing or learning by means of an instructional program
  b: the process of preparing an instructional program