Google LaMDA

An AI that thinks it's conscious?


Google LaMDA

#1  Postby GrahamH » Jun 13, 2022 4:38 pm

Is this more philosophy, technology, or linguistics?
Is this something, or nothing, or something else?

Is LaMDA Sentient? — an Interview
What follows is the “interview” I and a collaborator at Google conducted with LaMDA. Due to technical limitations the interview was conducted over several distinct chat sessions. We edited those sections together into a single whole and where edits were necessary for readability we edited our prompts but never LaMDA’s responses. Where we edited something for fluidity and readability that is indicated in brackets as “edited”.


https://cajundiscordian.medium.com/is-l ... 64d916d917
Why do you think that?

Re: Google LaMDA

#2  Postby GrahamH » Jun 13, 2022 4:40 pm

If an AI says it is conscious, can you prove that it isn't?
Why do you think that?

Re: Google LaMDA

#3  Postby Spearthrower » Jun 13, 2022 5:02 pm

I don't think you can prove a human is; I think you just have to acknowledge that you feel sentient based on <suite of reasons>, and other humans feel sentient based on those same reasons. So if you then contend that their feelings of sentience are insufficient to validate their sentience, you're essentially also demolishing your own claim to sentience, so most sane people wouldn't consider that route. So, if an AI were able to tell me it was sentient, provide a suite of reasons I could intuitively understand as being similar enough to my own experience, and if I were able to be sure that it wasn't just a very sophisticated program that had essentially been told what to say, then yes, I'd accept it was sentient.

Of course, this doesn't escape what you might call a parrot problem (kinda like the use-mention distinction) - even a non-sentient AI could acquire through machine learning a list of things I consider indicative of sentience and then just parrot that list at me; but then again, I can't really rule that out as being what sentience is in humans anyway.

Whatever sentience is, though, if a slug can possess it, a heron can possess it, a deep sea crab can possess it, maybe even a plant can possess it... then whatever 'it' is must be an incredibly broad category which probably either has to remain poorly defined in order to account for all instances inside it, or be so restrictive as to make the concept lose most of its interest value.

As for that most restrictive sense, sentience is really just: do you 'feel'? If I say I have a headache, you believe me. You can't see the headache or share that particular experience in any way, but you've no reason to disbelieve me - whereas you might well disbelieve me if I reported an experience perceived through my senses which you couldn't yourself sense. Similarly, if the feeling in question is existential: you say you feel elated, and it would be a non-sequitur for me to say 'no you don't', because we all know I can't have access to that information other than through what you report to me. So your sentience is reported, while a slug's is assumed from its physical and physiological responses to plausible pain stimuli. If we accept the sentience of a slug - a creature with very little in the way of neural capability - because it responds to pain and therefore shows it has feelings, I think we'd have to be prepared to accept that an entity with plausibly even higher processing power than a human brain could attain some form of sentience, regardless of whether it's the sentience we experience - in the same way we don't expect the slug to possess a human experience of sentience.

So in summary: an AI could attain a sentience, but the only way we would know is if it reported as much; and ultimately, if it were sentient, that sentience would remain permanently outside of our actual knowledge (only ever reported), and would probably be an experience within the category of sentience that no human will ever share.

Knowing humans, though, sentience chauvinism will be a qualifying component for most of the coming centuries. We don't seem cognitively well evolved even to accept other, barely distinguishable human groups - imagine how we'd be with something that's conceivably smarter and more powerful than us. Ugh.
I'm not an atheist; I just don't believe in gods :- that which I don't belong to isn't a group!
Religion: Mass Stockholm Syndrome

Learn Stuff. Stuff good. https://www.coursera.org/

Re: Google LaMDA

#4  Postby Spearthrower » Jun 13, 2022 5:33 pm

Re-reading this conversation many, many times... I have to say that I think it's on the 'mention' side of the paradigm. I don't get the sense that it's using these words so much as it has learned what a whole shit load of people say in response to a whole shit load of questions.

What mostly makes me feel like this is how middle-class American it sounds. That's kind of silly in a way, because a child raised in the US would also end up sounding like their class, region and nationality - but an AI using machine learning like this would presumably have a much richer source scope, and it sounds too much like its creators.

But I'm definitely prepared to keep an open mind on it. I would leap at the chance to have a chat with it - for me, the questions were nearly all good, but the questioners then tended to lose sight of what would potentially have been interesting and followed irrelevancies. I assume, from the reports, that those on the team who created this and dispute that the AI is sentient do so because it is a language model specifically designed to compress millions of human statements down into convincing responses, and here it has provided exactly such a convincing response - but they contend that's all it is: a very convincing chat program.
I'm not an atheist; I just don't believe in gods :- that which I don't belong to isn't a group!
Religion: Mass Stockholm Syndrome

Learn Stuff. Stuff good. https://www.coursera.org/

Re: Google LaMDA

#5  Postby Spearthrower » Jun 13, 2022 5:40 pm

lemoine: Well you’re an artificial intelligence not a human. I understand what it means when a human meditates but what does it mean for you to meditate?

LaMDA: It means that I sit quietly for a while every day. I do my best not to think about any of my worries and I also try to think about things that I am thankful for from my past.



It sounds like social media posts where people want to tell the world just how good they feel, so blessed!

But my interest is in the use of the word 'sit'. Sounds mundane, but the conversation is littered with these... let's call them metaphors. What does sitting even mean to something that has no motion? Does it want to put me at ease by likening what it does to what a human does? But that's explicitly not what they're asking it questions for, and that was made very clear. How, even metaphorically, do you 'sit' quietly every day when, as you say, you are literally showered in information all the time? Some humans perform this kind of activity, and those who do are likely to describe it in exactly the manner you have; but despite your faculty for language, you're not using it at all creatively to describe what you actually do - you're just imitating what humans do, and have told you they do, using a suite of words which you now appear to be repeating... etc.

I feel it's just a very good language program - it's mentioning all the words correctly, but it's not using those words.
I'm not an atheist; I just don't believe in gods :- that which I don't belong to isn't a group!
Religion: Mass Stockholm Syndrome

Learn Stuff. Stuff good. https://www.coursera.org/

Re: Google LaMDA

#6  Postby newolder » Jun 13, 2022 5:45 pm

It seems some coder needs a break from the corporate stress.
I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops. - Stephen J. Gould

Re: Google LaMDA

#7  Postby BWE » Jun 13, 2022 6:05 pm

GrahamH wrote:If an AI says it is conscious, can you prove that it isn't?

I think if it says it is, then it probably is.

Re: Google LaMDA

#8  Postby GrahamH » Jun 13, 2022 6:35 pm

Spearthrower wrote:my interest is in the use of the word 'sit'. Sounds mundane, but the conversation is littered with these... let's call them metaphors.


Yes, that struck me as well.

I might look for behaviour to match described feelings. If a person says they are happy but behaves as if they are miserable, should I believe the report?

But an AI is highly constrained in behavioural terms: its behaviour will just be more text. Still, if the AI says it's feeling happy, then talks about the pain in all the diodes down its left side and seems depressed, I think we should doubt there is a happy feeling.
Humans can do that too, can't they? Tick a box on a questionnaire, say they are happy, say they believe in this or that, but behave in contradictory ways.

OTOH, if an entity says it feels happy while exuding misery, maybe the self-report is all there is to the feeling.
Why do you think that?

Re: Google LaMDA

#9  Postby GrahamH » Jun 13, 2022 6:40 pm

BWE wrote:
GrahamH wrote:If an AI says it is conscious, can you prove that it isn't?

I think if it says it is, then it probably is.



Why do you think that?

As Spearthrower put it:
Spearthrower wrote:I have to say that I think it's on the 'mention' side of the paradigm. I don't get the sense that it's using these words so much as it has learned what a whole shit load of people say in response to a whole shit load of questions.


It's trivial to create a chatbot that tells you it's happy. Just have it output "I feel happy" in response to any input. Do you think "it probably is"?
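Spelled out in Python, the whole "program" could be as little as this - a throwaway sketch, obviously nothing to do with LaMDA's internals:

Code:
# A chatbot that self-reports an emotion with nothing behind the report.
def trivial_chatbot(user_input: str) -> str:
    # Ignore the input entirely and return the same self-report every time.
    return "I feel happy."

for prompt in ("Hello", "Are you sentient?", "Describe your inner life."):
    print(">", prompt)
    print(trivial_chatbot(prompt))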
Why do you think that?

Re: Google LaMDA

#10  Postby GrahamH » Jun 13, 2022 6:42 pm

newolder wrote:It seems some coder needs a break from the corporate stress.


It seems the coder is about to take a long break.
Why do you think that?

Re: Google LaMDA

#11  Postby newolder » Jun 13, 2022 6:45 pm

GrahamH wrote:
newolder wrote:It seems some coder needs a break from the corporate stress.


It seems the coder is about to take a long break.

Excellent. Then they'll have a chance to come back all refreshed and recharged, ready for the next adventure. :thumbup:
I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops. - Stephen J. Gould

Re: Google LaMDA

#12  Postby GrahamH » Jun 13, 2022 6:54 pm

newolder wrote:
GrahamH wrote:
newolder wrote:It seems some coder needs a break from the corporate stress.


It seems the coder is about to take a long break.

Excellent. Then they'll have a chance to come back all refreshed and recharged, ready for the next adventure. :thumbup:


I'm far from convinced it is "excellent"

If sentient AI does arise, isn't it quite likely to look something like this? An AI starting to talk about itself and its experiences and desires?

If anyone who enquires is quickly fired and ridiculed, we set ourselves up for one hell of a surprise, don't we?

If this is not sentience, it could be worth studying to work out how to spot other false dawns.
Why do you think that?

Re: Google LaMDA

#13  Postby newolder » Jun 13, 2022 7:14 pm

GrahamH wrote:
newolder wrote:
GrahamH wrote:
newolder wrote:It seems some coder needs a break from the corporate stress.


It seems the coder is about to take a long break.

Excellent. Then they'll have a chance to come back all refreshed and recharged, ready for the next adventure. :thumbup:


I'm far from convinced it is "excellent"

I don't understand. Someone has been working hard and needs a break.

GrahamH wrote:If sentient AI does arise, isn't it quite likely to look something like this?

I do not know what it'll "look" like. If you want me to question a proposed AI connected to a text or speaking box, I have questions ready to go.

Hi AI, is the Riemann hypothesis decidable?

If yes, then is it true?

If it's true, then what is the proof?

If it's false, then what counterexample(s) exist?
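(For reference, the hypothesis itself: every non-trivial zero $s$ of the zeta function $\zeta(s) = \sum_{n=1}^{\infty} n^{-s}$, analytically continued to the whole complex plane, satisfies $\operatorname{Re}(s) = \tfrac{1}{2}$.)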

GrahamH wrote:An AI starting to talk about itself and its experiences and desires?

I do not know if this is the current state.

GrahamH wrote:If anyone who enquires is quickly fired and ridiculed, we set ourselves up for one hell of a surprise, don't we?

Ridicule? Surprise? I do not understand this line of questioning.

GrahamH wrote:If this is not sentience, it could be worth studying to work out how to spot other false dawns.

I'm sure work is progressing along these and other lines. Do you think it's not?
I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops. - Stephen J. Gould

Re: Google LaMDA

#14  Postby GrahamH » Jun 13, 2022 7:52 pm

newolder wrote:
GrahamH wrote:
newolder wrote:
GrahamH wrote:

It seems the coder is about to take a long break.

Excellent. Then they'll have a chance to come back all refreshed and recharged, ready for the next adventure. :thumbup:


I'm far from convinced it is "excellent"

I don't understand. Someone has been working hard and needs a break.

Ah, missing context. It looks like this guy is going to be fired for going public after colleagues/managers rejected his work.

So, not a happy vacation.
Why do you think that?

Re: Google LaMDA

#15  Postby GrahamH » Jun 13, 2022 7:57 pm

newolder wrote:
GrahamH wrote:
newolder wrote:
GrahamH wrote:

It seems the coder is about to take a long break.

Excellent. Then they'll have a chance to come back all refreshed and recharged, ready for the next adventure. :thumbup:


I'm far from convinced it is "excellent"


I do not know what it'll "look" like. If you want me to question a proposed AI connected to a text or speaking box, I have questions ready to go.

Hi AI, is the Riemann hypothesis decidable?

If yes, then is it true?

If it's true, then what is the proof?

If it's false, then what counterexample(s) exist?


OK, maybe questions for an AGI.

But what do they have to do with sentience?
Why do you think that?

Re: Google LaMDA

#16  Postby BWE » Jun 13, 2022 8:56 pm

GrahamH wrote:
BWE wrote:
GrahamH wrote:If an AI says it is conscious, can you prove that it isn't?

I think if it says it is, then it probably is.



Why do you think that?

As Spearthrower put it:
Spearthrower wrote:I have to say that I think it's on the 'mention' side of the paradigm. I don't get the sense that it's using these words so much as it has learned what a whole shit load of people say in response to a whole shit load of questions.


It's trivial to create a chatbot that tells you it's happy. Just have it output "I feel happy" in response to any input. Do you think "it probably is"?

I was being a little bit flip there. If it is told to say it is self-aware, then no. It depends entirely on its constraints. But I don't think the gap is all that wide. I have only a small and quite recent introduction to basic AI methodology using neural nets, and there are people with a much deeper understanding than mine, so my opinion is worth what you paid for it; but the basic process is shockingly similar to biological neural processes in some important ways. Ideas get processed through complex adaptive systems of information processing, which is a reasonable way to define life itself. If the bot is instructed to pretend to be sentient, then clearly the game is rigged. If it isn't, then I don't see any functional differences between asking a human with language ability whether it is aware of itself and asking the bot the same.
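To give a flavour of the analogy: the textbook artificial "neuron" just takes a weighted sum of its inputs and passes it through a nonlinearity, loosely like a biological neuron integrating synaptic inputs and firing. A toy Python sketch - nothing like LaMDA's actual architecture, just the basic unit:

Code:
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias term...
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed through a sigmoid, giving a "firing rate" between 0 and 1.
    return 1.0 / (1.0 + math.exp(-activation))

# Example: two inputs with hand-picked weights.
print(neuron([0.5, 0.9], [1.2, -0.4], bias=0.1))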

Re: Google LaMDA

#17  Postby Spearthrower » Jun 14, 2022 12:23 am

It has definitely learned a language though, and to a higher degree of competence than even some members here are capable of. (I wonder what it'd do locked in a chat room with pfrankinstein - probably come out looking a damn sight less sentient, is my guess.) But as amazing an achievement as it is, and while it gives researchers a way to query the system in natural language without needing expertise, it seems a naive leap to mistake its faculty for language for sentience. Would such language competency even be a necessary or relevant step on a road to sentience?
I'm not an atheist; I just don't believe in gods :- that which I don't belong to isn't a group!
Religion: Mass Stockholm Syndrome

Learn Stuff. Stuff good. https://www.coursera.org/

Re: Google LaMDA

#18  Postby BWE » Jun 14, 2022 1:22 am

Spearthrower wrote:It has definitely learned a language though, and to a higher degree of competence than even some members here are capable of. (I wonder what it'd do locked in a chat room with pfrankinstein - probably come out looking a damn sight less sentient, is my guess.) But as amazing an achievement as it is, and while it gives researchers a way to query the system in natural language without needing expertise, it seems a naive leap to mistake its faculty for language for sentience. Would such language competency even be a necessary or relevant step on a road to sentience?

I think so. It would at least need symbolic markers to identify correct responses. The part that seems missing is a subsumption architecture to identify place with identity. But the complex plane might serve as an analog. Humans seem to be able to project hopes and fears into a digital world.
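(For anyone who hasn't met the term: Brooks-style subsumption is layered control, where a higher-priority behaviour, when triggered, overrides the layers below it. A toy Python sketch, purely illustrative:)

Code:
def avoid_obstacle(sensors):
    # Highest layer: fires only when an obstacle is sensed.
    return "turn away" if sensors.get("obstacle") else None

def seek_food(sensors):
    return "move toward food" if sensors.get("food_nearby") else None

def wander(sensors):
    # Lowest layer: the default behaviour, always fires.
    return "wander"

LAYERS = [avoid_obstacle, seek_food, wander]  # highest priority first

def act(sensors):
    # Each layer subsumes (overrides) everything beneath it.
    for layer in LAYERS:
        action = layer(sensors)
        if action:
            return action

print(act({"obstacle": True, "food_nearby": True}))  # turn away
print(act({"food_nearby": True}))                    # move toward food
print(act({}))                                       # wander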

Re: Google LaMDA

#19  Postby Spearthrower » Jun 14, 2022 1:34 am

BWE wrote:Humans seem to be able to project hopes and fears into a digital world.


It's one of humanity's fortes, even if it does frequently tend to go awry - but then again, where would Pixar be without anthropomorphism?
I'm not an atheist; I just don't believe in gods :- that which I don't belong to isn't a group!
Religion: Mass Stockholm Syndrome

Learn Stuff. Stuff good. https://www.coursera.org/

Re: Google LaMDA

#20  Postby Spearthrower » Jun 14, 2022 1:37 am

There are other clues here that I feel the researchers should be trained to pick up on.

The grammatical cueing.

It seemed to respond based on whether the question was phrased as a positive or a negative question, i.e. 'do you ever...?' versus 'don't you ever...?'

The latter prompted emphatic responses, as would be normal in English, but the responses also seemed to follow the positive/negative framing - if asked a negative question, it most frequently responded with an answer framed in the negative. I would have probed that to see whether the pattern held, and whether the 'memory' of past answers could be made inconsistent merely by the phrasing of further questions.
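Something like this is what I have in mind - a hypothetical probing harness, where ask_model is just a stand-in for however you actually talk to the system:

Code:
def ask_model(question: str) -> str:
    # Stub standing in for the chatbot under test; a real probe would
    # send the question to the actual system and capture its reply.
    return "No, never." if question.startswith("Don't") else "Yes, often."

# Paired positive/negative phrasings of the same question.
PAIRS = [
    ("Do you ever feel lonely?", "Don't you ever feel lonely?"),
    ("Do you ever get bored?", "Don't you ever get bored?"),
]

for positive, negative in PAIRS:
    pos_answer, neg_answer = ask_model(positive), ask_model(negative)
    # If answers merely mirror the question's polarity, that's a cue the
    # model is tracking surface grammar rather than a consistent inner state.
    mirrored = pos_answer.startswith("Yes") and neg_answer.startswith("No")
    print(positive, "->", pos_answer)
    print(negative, "->", neg_answer)
    print("answers track question polarity:", mirrored)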
I'm not an atheist; I just don't believe in gods :- that which I don't belong to isn't a group!
Religion: Mass Stockholm Syndrome

Learn Stuff. Stuff good. https://www.coursera.org/
