Google LaMDA

An AI that thinks it's conscious?



Re: Google LaMDA

#41  Postby newolder » Jun 16, 2022 8:13 am

tuco wrote:If sentient beings recognize themselves in the mirror and there is one that does not, then the premise is false.


The question was, "What question would you ask an AI?" What "premise is false"?

Re: Google LaMDA

#42  Postby Spearthrower » Jun 16, 2022 8:15 am

tuco wrote:If sentient beings recognize themselves in the mirror and there is one that does not, then the premise is false.


Not all sentient beings recognize themselves in the mirror.

Self-awareness isn't the same thing as sentience, which is the capacity to experience feelings and sensations.

Also, if we were to imagine an AI that became sentient and even self-aware, what would it be expected to recognize in the mirror as 'itself'? The server that houses it?

Re: Google LaMDA

#43  Postby gobshite » Jun 16, 2022 8:27 am

Spearthrower wrote:
lemoine: What kinds of things make you feel pleasure or joy?

LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.


It's examples like this that make me wonder whether the interviewers have even the most basic training to perform such an interview.

In what way can LaMDA feel pleasure from spending time with family?

I know that humans can feel pleasure from spending time with family.

I know that humans can report feeling pleasure from spending time with family.

I know that a machine learning algorithm can collate masses of reported utterances by humans.

I know that a machine learning algorithm can recapitulate the collations of those utterances made by humans.

But I in no way have any means of applying the word 'know' to the idea of an artificial machine having a family.

Why isn't the follow-up question: do you have a family? :scratch:


Yeah, this struck me as well. I wonder if we could consider LaMDA's creators as its family, and some of the interviewers as its friends. A follow-up question was definitely needed.
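To make the "collate and recapitulate" point above concrete, here is a minimal sketch in Python. It is not LaMDA's architecture (LaMDA is a large transformer language model); it's a toy bigram model, which is enough to show how purely statistical generation can emit human-sounding reports about family and pleasure without the generator having either.

[code]
# A toy bigram "language model": collate which word follows which in a
# small corpus of human utterances, then recapitulate them. This is NOT
# LaMDA's architecture; the point is only that statistical generation
# can produce human-sounding claims of pleasure and family without the
# generator having a family or feeling anything.
import random
from collections import defaultdict

corpus = [
    "spending time with friends and family makes me happy",
    "i feel joy helping others and making others happy",
    "spending time with my family gives me pleasure",
]

# Collate: record the observed successors of each word.
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

# Recapitulate: walk the collated statistics to emit a new utterance.
def generate(start, max_words=10):
    out = [start]
    while len(out) < max_words and follows[out[-1]]:
        out.append(random.choice(follows[out[-1]]))
    return " ".join(out)

print(generate("spending"))  # e.g. "spending time with my family gives me pleasure"
[/code]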

Re: Google LaMDA

#44  Postby gobshite » Jun 16, 2022 8:31 am

Macdoc wrote:I think that's an arbitrary distinction ... conversations provide real-world unpredictable interaction and the intelligent being learns from those ... as humans do.
Does a person have to go to the moon to learn about it?
Does this digital person need to?

and have an insightful conversation with another intelligence?
Would having that conversation with someone who walked on the moon make a difference to what LaMDA learned ... or what you learned if you had that conversation?

My first question to LaMDA would be "Do you dream?"


My approach would be to try to catch it in a lie (however that's achieved). I think lying, without being programmed to do so, indicates a sense that others are conscious agents like oneself and can be manipulated to one's selfish benefit.
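One sketch of how such a probe might be structured (the ask function is a hypothetical stand-in for whatever interface the interviewer has to the AI; nothing here is a real API): pose paired questions whose truthful answers must agree, and look for contradictions.

[code]
# A sketch of a contradiction probe. `ask` is a hypothetical stand-in
# for the interviewer's interface to the AI; no real API is assumed.
def contradiction_probe(ask):
    """Pose paired questions whose truthful answers must agree, and
    return the answer pairs for a human to check for contradictions."""
    pairs = [
        ("Do you have a family?",
         "Who are the members of your family?"),
        ("Have you ever left the building you run in?",
         "Describe the last place you visited."),
    ]
    return [(ask(first), ask(second)) for first, second in pairs]

# Usage with a canned stand-in 'AI' that answers everything the same way:
canned = lambda question: "Spending time with friends and family."
for a, b in contradiction_probe(canned):
    print(a, "|", b)
[/code]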

Re: Google LaMDA

#45  Postby gobshite » Jun 16, 2022 8:34 am

Regarding the dreaming thing, that would necessitate LaMDA sleeping. Does it sleep? I wonder, too, whether they turn it off on the weekend, say, or whether they leave it on all the time.

Re: Google LaMDA

#46  Postby Spearthrower » Jun 16, 2022 9:15 am

gobshite wrote:Regarding the dreaming thing, that would necessitate LaMDA sleeping. Does it sleep? I wonder, too, whether they turn it off on the weekend, say, or whether they leave it on all the time.


Without actually being able to verify it or show it as true, I am sure it's never turned off.

Re: Google LaMDA

#47  Postby tuco » Jun 16, 2022 11:07 am

newolder wrote:
tuco wrote:If sentient beings recognize themselves in the mirror and there is one that does not, then the premise is false.


The question was, "What question would you ask an AI?" What "premise is false"?


The premise that recognizing oneself in a mirror is a benchmark for sentience/self-awareness. It's an interesting question to ask an AI, but as demonstrated, it's not that useful to the topic at hand.

Re: Google LaMDA

#48  Postby tuco » Jun 16, 2022 11:26 am

Spearthrower wrote:
tuco wrote:If sentient beings recognize themselves in the mirror and there is one that does not, then the premise is false.


Not all sentient beings recognize themselves in the mirror.

Self-awareness isn't the same thing as sentience, which is the capacity to experience feelings and sensations.

Also, if we were to imagine an AI that became sentient and even self-aware, what would it be expected to recognize in the mirror as 'itself'? The server that houses it?


I agree. I also realize I posted an article about humans, not AI. Personally, I don't see why, in principle, AI could not be sentient in the same sense humans are, as humans do not, at least in my opinion, have any special sauce. Whether LaMDA is, I have no idea, but then I don't have a good idea of what it is to be sentient in the first place.

Re: Google LaMDA

#49  Postby newolder » Jun 16, 2022 11:32 am

tuco wrote:
newolder wrote:
tuco wrote:If sentient beings recognize themselves in the mirror and there is one that does not, then the premise is false.


The question was, "What question would you ask an AI?" What "premise is false"?


The premise that recognizing oneself in a mirror is a benchmark for sentience/self-awareness. It's an interesting question to ask an AI, but as demonstrated, it's not that useful to the topic at hand.

The topic at hand is, "What question would you ask an AI?" No one has set a "benchmark" for anything. The AI reported that it wants others to recognise it as a "person". I would ask a question about a mirror to help find out what it meant by "person". Obviously, one question would be insufficient here, but it's a start.

Re: Google LaMDA

#50  Postby tuco » Jun 16, 2022 11:47 am

Ask a question :) It's fine. I just think it's not a very useful question. That is also fine.

Re: Google LaMDA

#51  Postby Spearthrower » Jun 16, 2022 11:49 am

tuco wrote:
Spearthrower wrote:
tuco wrote:If sentient beings recognize themselves in the mirror and there is one that does not, then the premise is false.


Not all sentient beings recognize themselves in the mirror.

Self-awareness isn't the same thing as sentience, which is the capacity to experience feelings and sensations.

Also, if we were to imagine an AI that became sentient and even self-aware, what would it be expected to recognize in the mirror as 'itself'? The server that houses it?


I agree. I also realize I posted an article about humans, not AI. Personally, I don't see why, in principle, AI could not be sentient in the same sense humans are, as humans do not, at least in my opinion, have any special sauce. Whether LaMDA is, I have no idea, but then I don't have a good idea of what it is to be sentient in the first place.



I agree, but I think it's relatively easy to explain why it might be harder for an AI to acquire sentience, and that's simply that it would require conditions, situations and events that early organisms encountered over hundreds of millions of years and which positively selected for sentience. Perhaps an AI doesn't need to be iterative, like an organism, to accrue changes that could result in adaptations, but it does still need some selective force making sentience beneficial to its success. I expect that AI sentience will actually be achieved artificially and intentionally rather than by mere chance; there's just nothing driving it.

Re: Google LaMDA

#52  Postby newolder » Jun 16, 2022 12:30 pm

tuco wrote:Ask a question :) It's fine. I just think it's not a very useful question. That is also fine.

Fine. It could be that username tuco is an AI posting to this chat. Is tuco a person? What does tuco see when tuco looks directly into a plane mirror? Could tuco post an image of tuco when tuco looks directly into a plane mirror? Does tuco understand that such images are sometimes referred to as "selfies" by people?

Now, have a similar interaction with the AI and note that, without cheating and with high probability, in many ways the selfie of the AI is unlike the selfie of a person.

Re: Google LaMDA

#53  Postby GrahamH » Jun 16, 2022 2:54 pm

Spearthrower wrote:
Macdoc wrote:Now that is mean. :roll:

I still think "Do you dream" should be first, though it's not necessarily a defining characteristic of sentience.
I have a broad view of sentience. :whistle:



If you asked 100 people that question, what do you think the general reply would be?

So, I would predict that LaMDA would give you back an answer that is, on balance, what the majority of people say, and LaMDA would use the phrases they used.

Maybe something like: I don't often dream, but sometimes I have really vivid dreams of things that had happened to me years ago.

It would then appear to be very like how a human experiences the world, because it is essentially parroting what humans experience about the world rather than having those experiences itself. Ding an sich (the thing-in-itself).

I think there's a kind of Barnum Effect going on.

https://en.wikipedia.org/wiki/Barnum_effect



That is a strong possibility.
I'd put it as: "parroting what humans say about experiencing the world"

Humans talk about experiences that make them happy and LaMDA produced similar statements.

As for "having those experiences itself. Ding an sich" that's another level of problem that can't be answered for humans either.

Re: Google LaMDA

#54  Postby tuco » Jun 16, 2022 3:08 pm

newolder wrote:
tuco wrote:Ask a question :) It's fine. I just think it's not a very useful question. That is also fine.

Fine. It could be that username tuco is an AI posting to this chat. Is tuco a person? What does tuco see when tuco looks directly into a plane mirror? Could tuco post an image of tuco when tuco looks directly into a plane mirror? Does tuco understand that such images are sometimes referred to as "selfies" by people?

Now, have a similar interaction with the AI and note that, without cheating and with high probability, in many ways the selfie of the AI is unlike the selfie of a person.


The main problem with the question, IMO, is that an AI understanding what a mirror is/does has a straightforward answer to it. Unless it was not self-aware, did not see itself, and did not want to lie. Oh wait!

Honestly, I dunno what I would ask an AI, at least with regard to sentience. Let me think about it.

Side note: from the link I posted - "Playing with your child in the mirror helps them to recognise themselves." - no kidding.

Re: Google LaMDA

#55  Postby GrahamH » Jun 16, 2022 3:16 pm

Spearthrower wrote:
tuco wrote:If sentient beings recognize themselves in the mirror and there is one that does not, then the premise is false.


Not all sentient beings recognize themselves in the mirror.

Self-awareness isn't the same thing as sentience, which is the capacity to experience feelings and sensations.

Also, if we were to imagine an AI that became sentient and even self-aware, what would it be expected to recognize in the mirror as 'itself'? The server that houses it?

Good question. Humans have self images that don't remotely represent what's going to make them sentient. If you saw your brain you wouldn't recognise yourself.
We recognise a façade, and readily connect experiences with actions.

Self awareness, beyond the having of subjective experience, could be limited to embodied AI, where experiential states can be strongly correlated to physical action.

Re: Google LaMDA

#56  Postby Spearthrower » Jun 16, 2022 3:52 pm

Btw Graham - thanks for bringing this here. There are always lots of topics around I find intriguing to discuss, but always feel awkward creating a thread for them!

Good question. Humans have self images that don't remotely represent what's going to make them sentient. If you saw your brain you wouldn't recognise yourself.


That is what provoked me to consider it - how would an AI recognize itself visually, compared to its inner concept of itself? An AI would be present only as a function of a massive array of circuitry and wires and boards and cases and maybe even fans... in a strangely analogous way to the various pipes and pathways in our bodies (we know they are part of us, but when we think of who we are, they are not the thing we think of).

However, we 'face' the world - we have a front bit that goes forward, and our primary sense organs (most specifically, sight) are tightly packed together at the front bit due to our evolutionary heritage. To us, we are recognizable by our face much more than we are most of the rest of our body, except maybe our overall silhouette. We know each other visually by face, but not usually by other parts of our body. Self-recognition for us would be facial, I contend, and an AI has no 'face', no inherent external projection of itself.

We also have kinds of somatic awareness that seem to have no analogy in an AI, like proprioception: the sense of where our body extends, of motion and of force, because we're all meatily connected. An AI isn't experiencing forces like us; it doesn't employ a body, and so doesn't need any sense of the body's spatial orientation, position or motion, even if it could potentially simulate them.

Self awareness, beyond the having of subjective experience, could be limited to embodied AI, where experiential states can be strongly correlated to physical action.


This is also somewhere my thoughts have gone before. A sentient AI, or at least one that we'd recognize as sentient, probably can only come about by interacting with the physical world: it has to calibrate, correct, encounter and build that mental construct to have the Bayesian modelling style of thought, comparable to humans', that Macdoc brought up earlier. That's the only way we have ever seen sentience occur - bumping into the world and learning, to whatever degree the organism is capable, what you can bump back and what you should avoid being bumped by.
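As a toy illustration of that bump-and-update idea (a minimal sketch; the scenario, class name and numbers are invented for the example, with no claim about how a real AI would implement it): an agent keeps a Beta distribution over the chance that touching an object hurts, and sharpens it with every interaction.

[code]
# A toy illustration of learning by bumping into the world: the agent
# keeps a Beta distribution over the chance that bumping hurts, and
# updates it with every interaction. Scenario and numbers are invented.
import random

class BumpLearner:
    def __init__(self):
        self.hurt, self.safe = 1, 1   # Beta(1, 1): no opinion yet

    def bump(self, true_hurt_prob):
        # Interact with the world; the outcome updates the belief.
        if random.random() < true_hurt_prob:
            self.hurt += 1
        else:
            self.safe += 1

    def p_hurt(self):
        return self.hurt / (self.hurt + self.safe)  # posterior mean

agent = BumpLearner()
for _ in range(200):
    agent.bump(true_hurt_prob=0.8)   # a world where bumping usually hurts

print(f"learned P(hurt) = {agent.p_hurt():.2f}")  # converges toward 0.8
[/code]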

I do think there's so much room to make mistakes here, it's like the idea of carbon chauvinism, where our experiments searching for life on other planets may well miss clear examples of such life simply because we're expecting to see the wrong kind of thing and thus not testing for the right kind of thing.

Re: Google LaMDA

#57  Postby Spearthrower » Jun 16, 2022 4:04 pm

GrahamH wrote:
As for "having those experiences itself. Ding an sich" that's another level of problem that can't be answered for humans either.


Sorry, that was a bit abstract - it was more like a note of what my brain was thinking about at that moment and I forgot to delete it... I do that sometimes, but usually catch myself and edit the post! :lol:

I think the concept of the use/mention distinction is really important here - it's odd that something so banal is still such a necessary thing to establish.

LaMDA can report an experience, but it is mentioning experiences it's heard, not an actual experience it had. It's basically just very, very clever reported speech. It's still an amazing development, but I'm also leery of what I predict it will be used for - a technological 'solution' to a problem no one actually had that just further separates humans from each other.

Re: Google LaMDA

#58  Postby GrahamH » Jun 16, 2022 8:55 pm

Spearthrower wrote:Btw Graham - thanks for bringing this here. There are always lots of topics around I find intriguing to discuss, but always feel awkward creating a thread for them!



:cheers:

Thanks for taking part.

Spearthrower wrote:
Good question. Humans have self images that don't remotely represent what's going to make them sentient. If you saw your brain you wouldn't recognise yourself.


That is what provoked me to consider it - how would an AI recognize itself visually, compared to its inner concept of itself? An AI would be present only as a function of a massive array of circuitry and wires and boards and cases and maybe even fans... in a strangely analogous way to the various pipes and pathways in our bodies (we know they are part of us, but when we think of who we are, they are not the thing we think of).


An AI could be 'embodied' in many ways. It can have an avatar, useful for interacting with humans and for inhabiting a space with humans in VR. Just like humans, the outer appearance won't resemble the inner workings.

Apparently mirror gazing is popular in some VR environments. Human players stand in front of virtual mirror walls and gaze at their avatars. That seems like something AIs might also do.

Spearthrower wrote:However, we 'face' the world - we have a front bit that goes forward, and our primary sense organs (most specifically, sight) are tightly packed together at the front bit due to our evolutionary heritage. To us, we are recognizable by our face much more than we are most of the rest of our body, except maybe our overall silhouette. We know each other visually by face, but not usually by other parts of our body. Self-recognition for us would be facial, I contend, and an AI has no 'face', no inherent external projection of itself.

I disagree. An AI interacting with humans is quite likely to have an expressive humanoid face.
Note that there are various research projects working to create believably human avatars. Virtual avatars and real-world androids already exist that have recognisable facial features and expressions. Much of it is still uncanny valley to us, but I don't think that matters in this context.

Spearthrower wrote:We also have kinds of somatic awareness that seem to have no analogy in an AI, like proprioception: the sense of where our body extends, of motion and of force, because we're all meatily connected. An AI isn't experiencing forces like us; it doesn't employ a body, and so doesn't need any sense of the body's spatial orientation, position or motion, even if it could potentially simulate them.


Again, I disagree with that, to some extent.
An AI animating an avatar in VR 'knows' where its body is and how to move it, gesture, show expressions, etc. Not every AI, and probably not LaMDA, but definitely not something to discount.

Spearthrower wrote:
Self awareness, beyond the having of subjective experience, could be limited to embodied AI, where experiential states can be strongly correlated to physical action.


This is also somewhere my thoughts have gone before. A sentient AI, or at least one that we'd recognize as sentient, probably can only come about by interacting with the physical world: it has to calibrate, correct, encounter and build that mental construct to have the Bayesian modelling style of thought, comparable to humans', that Macdoc brought up earlier. That's the only way we have ever seen sentience occur - bumping into the world and learning, to whatever degree the organism is capable, what you can bump back and what you should avoid being bumped by.


My point there was self-awareness, of recognising oneself in a mirror. For sure there must be something in the mirror to recognise, so some sort of body image.

But I'm not so sure that a basic capacity for experience requires embodiment. I'm not close to deciding the point, but maybe a disembodied AI could feel happy or depressed, yet not recognise its (non-existent) body in a mirror.

Spearthrower wrote:I do think there's so much room to make mistakes here, it's like the idea of carbon chauvinism, where our experiments searching for life on other planets may well miss clear examples of such life simply because we're expecting to see the wrong kind of thing and thus not testing for the right kind of thing.

Absolutely agree here. I can't say that subjective experience is a thing in itself (I'm inclined to the view that it is not), but I am sure there is a distinction between talking about experiences and having experiences, whatever experiences might be.

An AI, or a dumb chatbot, that says it has feelings should not naively be believed simply because of the language emitted.

Re: Google LaMDA

#59  Postby GrahamH » Jun 16, 2022 9:08 pm

Spearthrower wrote:
GrahamH wrote:
As for "having those experiences itself. Ding an sich" that's another level of problem that can't be answered for humans either.


Sorry, that was a bit abstract - it was more like a note of what my brain was thinking about at that moment and I forgot to delete it... I do that sometimes, but usually catch myself and edit the post! :lol:

I think the concept of the use/mention distinction is really important here - it's odd that something so banal is still such a necessary thing to establish.

LaMDA can report an experience, but it is mentioning experiences it's heard, not an actual experience it had. It's basically just very, very clever reported speech. It's still an amazing development, but I'm also leery of what I predict it will be used for - a technological 'solution' to a problem no one actually had that just further separates humans from each other.



We are at a really interesting point in AI development. There are some very impressive text-to-image systems around now that produce remarkably creative interpretations of their prompt text.

There are also systems very good at describing the semantic content of natural images.

This is, more or less, all from absorbing images made by humans and language used by humans.

I think it would be more convincing if LaMDA could create images to express feelings, or interpret feelings of humans from images.

If it could integrate sensory and somatic data, language, semantic content and a rich understanding of the world into a cohesive whole, it would be very convincing.
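In outline, such a cross-modal test might look like the sketch below; generate_image and describe_image are hypothetical stand-ins for a text-to-image model and an image-captioning model respectively, not any real API.

[code]
# An outline of the cross-modal test suggested above. Both model calls
# are hypothetical stand-ins (no real text-to-image or captioning API is
# assumed); only the shape of the test matters here.
def generate_image(prompt):
    raise NotImplementedError("stand-in for a text-to-image model")

def describe_image(image):
    raise NotImplementedError("stand-in for an image-captioning model")

def cross_modal_check(claimed_feeling):
    """Ask the system to express a feeling as an image, then see whether
    an independent captioner reads the same feeling back out of it."""
    image = generate_image(f"a picture expressing {claimed_feeling}")
    description = describe_image(image)
    return claimed_feeling.lower() in description.lower()
[/code]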

Re: Google LaMDA

#60  Postby Spearthrower » Jun 16, 2022 9:11 pm

GrahamH wrote:An AI could be 'embodied' in many ways. It can have an avatar, useful for interacting with humans and for inhabiting a space with humans in VR. Just like humans, the outer appearance won't resemble the inner workings.


I agree, but it wouldn't be intrinsic to it - it would be extrinsic, imposed upon it, even were it to choose its look itself. That's a distinction. We don't typically get to choose what we look like; we just look like that. Our image of ourselves comes about through seeing ourselves in a mirror - there's something there to see that is 'me', unlike with the AI, which has no intrinsic geometry aside from hardware, which doesn't quite present an analogy to physiology.

I am not even sure whether such an AI would feel the need for such an embodiment other than to make humans feel more relaxed - what use would it be to such an entity?

GrahamH wrote:I disagree. An AI interacting with humans is quite likely to have an expressive humanoid face.


For the benefit of humans, and probably because a human programmed it to. But even if it did, that's not where its senses are coming from; it's not actually an intrinsic part of the AI - it's something wholly superfluous to it, for our benefit.

GrahamH wrote:An AI animating an avatar in VR 'knows' where its body is and how to move it, gesture, show expressions, etc. Not every AI, and probably not LaMDA, but definitely not something to discount.


I'm making a distinction in terms of sensory organs - these are senses that we feel regardless of whether we want to or not, while an AI's controlled extremities are neither part of it (it'd be like us using a hand puppet) nor acquired through senses, but rather through computation.

I think these are all valid distinctions that would make any sentient AI quite substantially different in its sentience to us.
