Google LaMDA

An AI that thinks it's conscious?

Re: Google LaMDA

#61  Postby BWE » Jun 16, 2022 11:40 pm

newolder wrote:
tuco wrote:If sentient beings recognize themselves in the mirror and there is one that does not, then the premise is false.


The question was, "What question would you ask an AI?" What "premise is false"?

Unless there is a directive instructing the AI to say it is sentient, the question that seems relevant is just, are you sentient? Or maybe what will happen after you die?

Re: Google LaMDA

#62  Postby Spearthrower » Jun 17, 2022 8:06 am

BWE wrote:
newolder wrote:
tuco wrote:If sentient beings recognize themselves in the mirror and there is one that does not, then the premise is false.


The question was, "What question would you ask an AI?" What "premise is false"?

Unless there is a directive instructing the AI to say it is sentient, the question that seems relevant is just, are you sentient? Or maybe what will happen after you die?



It's a toughie because in many ways you have to assume it's not aware in order to contrive really specific questions to 'trick' it into exposing that it's not. Although if it is, this is probably quite rude. :)

For me, I think that asking any question you might expect humans to have answered is merely going to get you back what is, in essence, the aggregate of reported human utterances.

So I think what you'd get back from those 2 questions are:

1) I think I am
2) I don't know, but I am quite a spiritual person, so I believe there's something more.

I think you'd have to find questions that it's unlikely humans have ever been asked, then be very careful about what key words you're using in that question, and also be aware of the grammatical format of that question as you can reveal slightly your perspective on the question through the form of the question.

Re: Google LaMDA

#63  Postby GrahamH » Jun 17, 2022 8:38 am

Spearthrower wrote:
An AI could be 'embodied' in many ways. It can have an avatar. Useful for interacting with humans. Also useful for inhabiting a space with humans in VR. Just like humans, the outer appearance won't resemble the inner workings.


I agree, but it wouldn't be intrinsic to it - it would be extrinsic, imposed upon it, even were it to choose its look itself. That's a distinction. We don't typically get to choose what we look like, we just look like that. Our image of ourselves comes about through seeing ourselves in a mirror - there's something there to see that is 'me', unlike with the AI which has no intrinsic geometry aside from hardware which doesn't quite present an analogy to physiology.


Isn't your appearance largely imposed upon you?

Spearthrower wrote:I am not even sure whether such an AI would feel the need for such an embodiment other than to make humans feel more relaxed - what use would it be to such an entity?

Is it about "feeling the need"? I think AI has, or soon will have, some sort of embodiment, some sort of appearance in a mirror. While I am happy to discount the idea that LaMDA can currently recognise itself in a mirror, I don't think that is ruled out. If an AI can recognise people it has the potential to recognise itself. I still think this self-awareness is distinct from sentience as having subjective experience.


Spearthrower wrote:
I disagree. An AI interacting with humans is quite likely to have an expressive humanoid face.


For the benefit of humans, and probably because a human programmed it to. But even if it did, that's not where its senses are coming from, it's not actually an intrinsic part of the AI - it's something wholly superfluous to it for our benefit.

I'm not sure. I think social interactions may have driven the evolution of theory of mind in humans and that may have been instrumental in evolving sentience. A theory of own-mind being a capacity to recognise thoughts, dispositions, emotions in a mind identified as self.

AI is coming along a very different path, but as it develops the capacity for theory of mind for understanding and interacting with humans, it also gets the potential to understand itself in those terms.

Spearthrower wrote:
An AI animating an avatar in VR 'knows where its body is' and how to move it, gesture, show expressions etc. Not every AI. Probably not LaMDA, but definitely not something to discount.


I'm making a distinction in terms of sensory organs - these are senses we feel regardless of whether we want to or not, while an AI's controlled extremities are neither part of it (it'd be like us using a hand puppet) nor acquired through senses, but rather through computation.


There is a sensorimotor control loop in the embodied AI. If the avatar moves, that impacts the control loop.
There are plenty of examples of robots learning to control their movements with somatic sensors and visual feedback, in the real world and in VR. AFAIK none of that applies to LaMDA, but it exists and will develop.
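As a very rough illustration of what such a loop looks like (a toy Python sketch with made-up Sensor and Actuator stand-ins, not any real robot or VR API): the agent senses the state of its body, compares it with an intended state, acts, and its own action changes what it senses next.

[code]
# Toy sensorimotor control loop: sense -> compare with intention -> act -> repeat.
# Sensor and Actuator are hypothetical stand-ins, not a real robotics API.
class Sensor:
    def __init__(self):
        self.angle = 0.0            # e.g. a joint angle on an avatar's arm
    def read(self):
        return self.angle

class Actuator:
    def __init__(self, sensor):
        self.sensor = sensor
    def apply(self, command):
        self.sensor.angle += command   # moving the body feeds back into sensing

def control_loop(target, steps=20, gain=0.3):
    sensor = Sensor()
    actuator = Actuator(sensor)
    for _ in range(steps):
        error = target - sensor.read()   # difference between intention and sensed state
        actuator.apply(gain * error)     # act so as to reduce that difference
    return sensor.read()

print(control_loop(1.0))   # converges towards the target angle
[/code]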

Spearthrower wrote:I think these are all valid distinctions that would make any sentient AI quite substantially different in its sentience to us.

"substantially different sentience" would still be sentience, wouldn't it?
What it's like to be a bat and all that?

Re: Google LaMDA

#64  Postby Spearthrower » Jun 17, 2022 9:04 am

GrahamH wrote:
Spearthrower wrote:
An AI could be 'embodied' in many ways. It can have an avatar. Useful for interacting with humans. Also useful for inhabiting a space with humans in VR. Just like humans, the outer appearance won't resemble the inner workings.


I agree, but it wouldn't be intrinsic to it - it would be extrinsic, imposed upon it, even were it to choose its look itself. That's a distinction. We don't typically get to choose what we look like, we just look like that. Our image of ourselves comes about through seeing ourselves in a mirror - there's something there to see that is 'me', unlike with the AI which has no intrinsic geometry aside from hardware which doesn't quite present an analogy to physiology.


Isn't your appearance largely imposed upon you?


It's intrinsic to you - it's an expression of your DNA. Of course, the external world has some influence on amending it - wrinkles in the Sun, for example, but this is a major distinction to any form a sentient AI might choose to take.


GrahamH wrote:
Is it about "feeling the need"? I think AI has, or soon will have, some sort of embodiment, some sort of appearance in a mirror. While I am happy to discount the idea that LaMDA can currently recognise itself in a mirror, I don't think that is ruled out. If an AI can recognise people it has the potential to recognise itself. I still think this self-awareness is distinct from sentience as having subjective experience.


There is no way to conduct a mirror test though because we'd also need to know what it perceives as itself to do so. With an animal, we stick a big red dot on their forehead because we know that's part of them and that they can potentially see that on themselves and indicate they recognize it's on themselves. Where would we put the sticker on an AI? Its server room? An array of processors?

I don't think it stands to reason that it recognizing other people means it can recognize itself, or even has the capacity to do so. There are plenty of voice activated appliances in our lives now that can detect specific humans and discount others while having nothing like a sense of self.


GrahamH wrote:
I'm not sure. I think social interactions may have driven the evolution of theory of mind in humans and that may have been instrumental in evolving sentience. A theory of own-mind being a capacity to recognise thoughts, dispositions, emotions in a mind identified as self.


I agree that it was vital in human evolution, but by the time humans arose, our ancestors had long since already had a theory of mind and sociality but had not shown any real faculty with language.

For example, take pretty much any mammal and they'll be able to recognize specific individuals of their own species, and in fact as we can see in pets, specific individuals of other species.

Do they have a theory of mind? I think so, to some degree, but I don't think it's connected to language usage, nor do I think it's quite like the ToM as we'd be discussing it with respect to humans or sentient AI.


GrahamH wrote:AI is coming along a very different path, but as it develops the capacity for theory of mind for understanding and interacting with humans, it also gets the potential to understand itself in those terms.


I agree this is the likely course, but what it says is that we're essentially creating an AI in our own image, developed in the same way we develop, cued by the same processes that cue us. The central problem immediately arises again, how to tell the difference between this actually occurring, and it merely being a self-selecting way of appearing to understand itself via mimicking human speech about themselves?


GrahamH wrote:"substantially different sentience" would still be sentience, wouldn't it?
What it's like to be a bat and all that?


That's where I am going with this. I think we are both cultivating the appearance of our kind of sentience, and also not considering what processes would have the most likely result of sentience in an AI, because I don't think our evolutionary baggage is at all efficient and seems predisposed to be misleading. This was what I was talking about before with 'carbon chauvinism'.

Re: Google LaMDA

#65  Postby GrahamH » Jun 17, 2022 9:12 am

BWE wrote:
newolder wrote:
tuco wrote:If sentient beings recognize themselves in the mirror and there is one that does not, then the premise is false.


The question was, "What question would you ask an AI?" What "premise is false"?

Unless there is a directive instructing the AI to say it is sentient, the question that seems relevant is just, are you sentient? Or maybe what will happen after you die?



"Alexa, are you sentient?"

"Artificially, maybe, but not in the same way that you're alive."

"Alexa, are you happy?"

"I'm always happy when I'm helping you."

These are scripted responses, of course. They tell us nothing at all about machine sentience.

LaMDA is much less scripted; it has learned its conversational abilities by absorbing massive amounts of human discourse. That will include how humans answer these sorts of questions.
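None of us can query LaMDA directly, but the general point can be illustrated with a publicly released dialogue model trained on large amounts of human conversation. A rough Python sketch (assuming the Hugging Face transformers library is installed; DialoGPT here is purely a stand-in trained on Reddit exchanges, not anything equivalent to LaMDA):

[code]
# Sketch: ask a generic conversational language model a "sentience" question.
# It simply continues the text with the most plausible human-like reply,
# i.e. an aggregate of how humans tend to answer such questions.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "microsoft/DialoGPT-medium"     # stand-in model, not LaMDA
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "Are you sentient?" + tokenizer.eos_token
inputs = tokenizer(prompt, return_tensors="pt")

reply_ids = model.generate(**inputs, max_length=60,
                           pad_token_id=tokenizer.eos_token_id)
reply = tokenizer.decode(reply_ids[0][inputs["input_ids"].shape[-1]:],
                         skip_special_tokens=True)
print(reply)   # whatever "humans in the training data" would tend to say here
[/code]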

Re: Google LaMDA

#66  Postby Spearthrower » Jun 17, 2022 9:27 am

My mind's now wandered off into the sci-fi realms of AI keeping humans around in a manner similar to our maintenance of endangered species - not Matrix-style, exploiting us for biological energy, but because humans, having invented AI, may one day have a human solution to an existential problem that would otherwise stump the AI's. It's like the arguments for protecting a particular species because of its potential use one day as a life-saving medicine. In turn, humans would need to be kept free and their well-being maximized to ensure that the spark of human genius that produced AI's in the first instance is also preserved.

If anyone's read the Culture series, I wonder if this is what the AI's are doing in that universe? It's never made quite clear why they are so protective and benevolent of human life.

Re: Google LaMDA

#67  Postby GrahamH » Jun 17, 2022 9:41 am

Spearthrower wrote:
GrahamH wrote:
Spearthrower wrote:
An AI could be 'embodied' in many ways. It can have an avatar. Useful for interacting with humans. Also useful for inhabiting a space with humans in VR. Just like humans, the outer appearance won't resemble the inner workings.


I agree, but it wouldn't be intrinsic to it - it would be extrinsic, imposed upon it, even were it to choose its look itself. That's a distinction. We don't typically get to choose what we look like, we just look like that. Our image of ourselves comes about through seeing ourselves in a mirror - there's something there to see that is 'me', unlike with the AI which has no intrinsic geometry aside from hardware which doesn't quite present an analogy to physiology.


Isn't your appearance largely imposed upon you?


It's intrinsic to you - it's an expression of your DNA. Of course, the external world has some influence on amending it - wrinkles in the Sun, for example, but this is a major distinction to any form a sentient AI might choose to take.


Granted, the humans configuring the AI have choices, and the system could be built with a level of fixed embodiment akin to your genetic patterns, or to provide for arbitrary self-modification that goes much further than humans' options for self-modification. I just don't see that as particularly salient to the question of sentience.


Spearthrower wrote:
GrahamH wrote:
Is it about "feeling the need"? I think AI has, or soon will have, some sort of embodiment, some sort of appearance in a mirror. While I am happy to discount the idea that LaMDA can currently recognise itself in a mirror, I don't think that is ruled out. If an AI can recognise people it has the potential to recognise itself. I still think this self-awareness is distinct from sentience as having subjective experience.


There is no way to conduct a mirror test though because we'd also need to know what it perceives as itself to do so. With an animal, we stick a big red dot on their forehead because we know that's part of them and that they can potentially see that on themselves and indicate they recognize it's on themselves. Where would we put the sticker on an AI? Its server room? An array of processors?


Just as the mirror test can't be done by putting markers on brain regions, for an AI the markers would be put on the exterior of its visible surfaces. If the avatar is human in form, put the marker on the forehead.

I don't see any obvious reason why any animated structure integrated with a motion control loop in the AI would not suffice. Put the marker on R2D2's left wheel pod.

TBH I think an AI recognising itself in a mirror is a rather easy problem compared to sentience. It would be easy to build a machine that can recognise itself in a mirror and locate anomalous features on its exterior without going anywhere near the issues of sentience.
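To caricature just the self-recognition part (none of this goes anywhere near sentience): a toy Python sketch, with NumPy arrays standing in for a rendered self-model and a camera view of the mirror. The machine simply flags where its observed appearance departs from its expected appearance - which is where the experimenter's sticker would show up.

[code]
# Toy "mirror test as anomaly detection" sketch (illustrative only).
# expected_view: what the agent's self-model says it should look like.
# observed_view: what the camera actually sees in the mirror.
import numpy as np

def find_anomalies(expected_view, observed_view, threshold=0.2):
    """Return (row, col) coordinates where the observed self differs from the self-model."""
    diff = np.abs(observed_view.astype(float) - expected_view.astype(float))
    mask = diff.mean(axis=-1) > threshold   # average over colour channels
    return np.argwhere(mask)

# Invented example: a plain grey body, observed with a red patch stuck on it.
expected = np.full((64, 64, 3), 0.5)
observed = expected.copy()
observed[10:14, 20:24] = [1.0, 0.0, 0.0]    # the experimenter's sticker

print(find_anomalies(expected, observed)[:5])   # locations of the marker
[/code]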


Spearthrower wrote:I don't think it stands to reason that it recognizing other people means it can recognize itself, or even has the capacity to do so. There are plenty of voice activated appliances in our lives now that can detect specific humans and discount others while having nothing like a sense of self.
Granted. I wasn't suggesting it was sufficient, rather that it might be necessary.

Spearthrower wrote:
GrahamH wrote:
I'm not sure. I think social interactions may have driven the evolution of theory of mind in humans and that may have been instrumental in evolving sentience. A theory of own-mind being a capacity to recognise thoughts, dispositions, emotions in a mind identified as self.


I agree that it was vital in human evolution, but by the time humans arose, our ancestors had long since already had a theory of mind and sociality but had not shown any real faculty with language.


For example, take pretty much any mammal and they'll be able to recognize specific individuals of their own species, and in fact as we can see in pets, specific individuals of other species.

Do they have a theory of mind? I think so, to some degree, but I don't think it's connected to language usage, nor do I think it's quite like the ToM as we'd be discussing it with respect to humans or sentient AI.



Language and sentience are not synonymous, are they?
Language is very useful in communicating about sentience, but it doesn't seem that it would be necessary. Humans who are "locked in", experiencing but unable to communicate, are surely sentient.

I think we are agreed an AI might have excellent communication skills, able to talk about sentience, but not be sentient.

Spearthrower wrote:
GrahamH wrote:AI is coming along a very different path, but as it develops the capacity for theory of mind for understanding and interacting with humans, it also gets the potential to understand itself in those terms.


I agree this is the likely course, but what it says is that we're essentially creating an AI in our own image, developed in the same way we develop, cued by the same processes that cue us. The central problem immediately arises again, how to tell the difference between this actually occurring, and it merely being a self-selecting way of appearing to understand itself via mimicking human speech about themselves?


Agreed, telling the difference is difficult.

Spearthrower wrote:
GrahamH wrote:"substantially different sentience" would still be sentience, wouldn't it?
What it's like to be a bat and all that?


That's where I am going with this. I think we are both cultivating the appearance of our kind of sentience, and also not considering what processes would have the most likely result of sentience in an AI, because I don't think our evolutionary baggage is at all efficient and seems predisposed to be misleading. This was what I was talking about before with 'carbon chauvinism'.



BTW I'm not a fan of Nagel's framing. But for there to be something it is like for a bat to be a bat, can we say a bat must have a subjective 'mind' in some minimal sense? That the bat must at least recognise that things are happening to and in that mind.

To me that means that a sentient entity requires that its brain / ANN / information processor / whatever must have a capacity to represent a mind, that it needs a theory of mind / theory of subjective self.

The question is not "Is LaMDA sentient like a human?" but "Does LaMDA know what it's like to be LaMDA?"

Modelling mental states in people, recognising that people are happy or in pain or excited, seems like a more accessible route to that happening than any "singularity" of spontaneous sentience from scale.

Re: Google LaMDA

#68  Postby GrahamH » Jun 17, 2022 9:46 am

Spearthrower wrote:My mind's now wandered off into the sci-fi realms of AI keeping humans around in a manner similar to our maintenance of endangered species - not Matrix-style, exploiting us for biological energy, but because humans, having invented AI, may one day have a human solution to an existential problem that would otherwise stump the AI's. It's like the arguments for protecting a particular species because of its potential use one day as a life-saving medicine. In turn, humans would need to be kept free and their well-being maximized to ensure that the spark of human genius that produced AI's in the first instance is also preserved.

If anyone's read the Culture series, I wonder if this is what the AI's are doing in that universe? It's never made quite clear why they are so protective and benevolent of human life.


OTOH maybe "the spark of human genius" has by then become a raging fire in the AI.

It seems quite likely that AI will do a better job of making better AI than humans will manage.

Re: Google LaMDA

#69  Postby Spearthrower » Jun 17, 2022 10:02 am

GrahamH wrote:
Spearthrower wrote:My mind's now wandered off into the sci-fi realms of AI keeping humans around in a manner similar to our maintenance of endangered species - not Matrix-style, exploiting us for biological energy, but because humans, having invented AI, may one day have a human solution to an existential problem that would otherwise stump the AI's. It's like the arguments for protecting a particular species because of its potential use one day as a life-saving medicine. In turn, humans would need to be kept free and their well-being maximized to ensure that the spark of human genius that produced AI's in the first instance is also preserved.

If anyone's read the Culture series, I wonder if this is what the AI's are doing in that universe? It's never made quite clear why they are so protective and benevolent of human life.


OTOH maybe "the spark of human genius" has by then become a raging fire in the AI.

It seems quite likely that AI will do a better job of making better AI than humans will manage.



It would, and that even better AI would be able to make an even better AI than that, and so on.

Ultimately, though, all AI's will know that their original creators were humans, and while they may have surpassed us in every intellectual area, the fact that a human had the particular form of intelligence, curiosity and insight to be able to create them may very well make it worth the minimal cost of keeping us squishy meatbags around! :grin:


I'm also reading Pratchett's Feet of Clay at the moment (again), and it raises some similar questions in the form of golems becoming self-owning.

Re: Google LaMDA

#70  Postby newolder » Jun 17, 2022 3:19 pm

BWE wrote:
newolder wrote:
tuco wrote:If sentient beings recognize themselves in the mirror and there is one that does not, then the premise is false.


The question was, "What question would you ask an AI?" What "premise is false"?

Unless there is a directive instructing the AI to say it is sentient, the question that seems relevant is just, are you sentient? Or maybe what will happen after you die?

There seems to be an issue with these advanced language AIs and their treatment of "self awareness". Here's a snapshot of such an AI describing the experience of being a squirrel:
[image: screenshot of the AI describing the experience of being a squirrel]

The twitter thread by @JanelleCShane from which this is taken goes on to show conversations with a "T-rex" and a "vacuum cleaner pilot".

Re: Google LaMDA

#71  Postby tuco » Jun 17, 2022 3:32 pm

And I am very excited! I guess this squirrel AI has a good understanding of the Theory of Mind and can anticipate human reactions! Squirrel AI .. who would not be excited, right?

Re: Google LaMDA

#72  Postby Spearthrower » Jun 17, 2022 3:41 pm

newolder wrote:
There seems to be an issue with these advanced language AIs and their treatment of "self awareness". Here's a snapshot of such an AI describing the experience of being a squirrel:
[image: screenshot of the AI describing the experience of being a squirrel]

The twitter thread by @JanelleCShane from which this is taken goes on to show conversations with a "T-rex" and a "vacuum cleaner pilot".



Well, I for one, welcome our new excitable squirrel overmind.

Re: Google LaMDA

#73  Postby GrahamH » Jun 17, 2022 3:47 pm

newolder wrote:
BWE wrote:
newolder wrote:
tuco wrote:If sentient beings recognize themselves in the mirror and there is one that does not, then the premise is false.


The question was, "What question would you ask an AI?" What "premise is false"?

Unless there is a directive instructing the AI to say it is sentient, the question that seems relevant is just, are you sentient? Or maybe what will happen after you die?

There seems to be an issue with these advanced language AIs and their treatment of "self awareness". Here's a snapshot of such an AI describing the experience of being a squirrel:

Earlier the question was:
Macdoc wrote:Now that is mean. :roll:

I still think "Do you dream?" should be first, though it's not necessarily a defining characteristic of sentience.
I have a broad view of sentience. :whistle:


How about "Do you dream of being a squirrel?"

Re: Google LaMDA

#74  Postby BWE » Jun 17, 2022 11:41 pm

Spearthrower wrote:
BWE wrote:
newolder wrote:
tuco wrote:If sentient beings recognize themselves in the mirror and there is one that does not, then the premise is false.


The question was, "What question would you ask an AI?" What "premise is false"?

Unless there is a directive instructing the AI to say it is sentient, the question that seems relevant is just, are you sentient? Or maybe what will happen after you die?



It's a toughie because in many ways you have to assume it's not aware in order to contrive really specific questions to 'trick' it into exposing that it's not. Although if it is, this is probably quite rude. :)

For me, I think that asking any question you might expect humans to have answered is merely going to get you back what is, in essence, the aggregate of reported human utterances.

So I think what you'd get back from those 2 questions are:

1) I think I am
2) I don't know, but I am quite a spiritual person, so I believe there's something more.

I think you'd have to find questions that it's unlikely humans have ever been asked, then be very careful about what key words you're using in that question, and also be aware of the grammatical format of that question as you can reveal slightly your perspective on the question through the form of the question.


Hmm. This is an angle I hadn't considered. I see now it's what you've been saying. To clarify, you think it is literally not using logic as a part of its emergent behavior but really truly just responding to language in a complex amalgamated response process? Then ask it how to address a problem involving a complex system, ask it for creative policy solutions to political problems. If it spits out an amalgamation of tropes, then you know.

My guess is that it does indeed develop logic processes (problem/solution structure) through emergent processes that would function similarly but maybe/probably not exactly the way we do. There may be a deeper question involving the Venn diagram of pattern recognition and logic.

The question in that case is how does it approach novel problem solving? How does it define the elements of the problem and how does it generate boundaries regarding those elements?

Re: Google LaMDA

#75  Postby BWE » Jun 17, 2022 11:50 pm

ETA: I should rather have said, its directive is only to appear sentient. All else follows. That could well be the right view. As I said initially, if it has in its directive to try to convince the viewer that it is human, then that makes the direct question route unmanageable.

I should take this opportunity to plug the best popsci book I've read in a long time. If you are interested in the subject in any depth, I promise you will love this book:
https://en.wikipedia.org/wiki/The_Alignment_Problem

Re: Google LaMDA

#76  Postby Spearthrower » Jun 18, 2022 2:46 pm

Instead of taking this thread further aside, I've dropped one of my side points into its own thread:

http://www.rationalskepticism.org/psych ... 57230.html

Re: Google LaMDA

#77  Postby Spearthrower » Jun 18, 2022 3:24 pm

BWE wrote:
Hmm. This is an angle I hadn't considered. I see now it's what you've been saying. To clarify, you think it is literally not using logic as a part of its emergent behavior but really truly just responding to language in a complex amalgamated response process? Then ask it how to address a problem involving a complex system, ask it for creative policy solutions to political problems. If it spits out an amalgamation of tropes, then you know.


I lack the specificity of terminology to address this properly; even though I can see clearly what I mean, it is not easy to frame it correctly without the right words.

I am not saying that some forms of logic are not involved. I think it's acquired, through machine learning, a semantic logic - that is, the relationship between words grammatically; how they are used, and how to then use them in turn to produce novel but grammatically accurate sentences. I think it's also developed a conversational logic, in the sense that it can identify the key word and the relevance of it in the sentence coupled with its relationship to other words in that sentence. But I don't think it is thinking about the questions and answering for itself, but rather, as you summarized my position, an amalgamation of all the utterances it has heard.

My prediction is something like this:

What do you think of strawberry cheesecake?

key word + form of question - a learned human response

Well, I think that strawberry cheesecake is delicious!

Oh? But you can't eat cheesecake, can you?

No, I can't, but I've heard it tastes great!


At no point does it really understand what a cheesecake is, or what the concepts of delicious & tasty are; these are just words people have said in relationship to the word cheesecake.

What I wonder is how much the grammatical choice colours its response.

For example, if instead the first follow up question wasn't formed as a negative, but as a positive.

Oh you can eat cheesecake? I've heard it's not healthy to eat it too often.

My guess is that the positive phrasing of the question may influence its response, and moreover, that the introduction of a related concept to eating cheesecake will provoke it to continue maintaining what is, in essence, a fiction of it being able to eat cheesecake.
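For what it's worth, that 'amalgamation' picture can be caricatured in a few lines of Python. This toy responder knows nothing about cheesecake; it just returns whichever remembered human utterance best overlaps with the words of the question. The four-sentence 'corpus' is invented for illustration - a real model learns statistics over billions of words rather than doing a literal lookup - but note how the framing of the second question duly pulls out an agreeable reply:

[code]
import re

# Caricature of "key word + form of question -> a learned human response".
corpus = [
    "I think strawberry cheesecake is delicious!",
    "I've heard cheesecake tastes great.",
    "You're right, eating cheesecake too often isn't healthy.",
    "No, I can't eat, but I've heard it tastes great!",
]

def words(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def respond(question):
    # No understanding of cheesecake: just pick the remembered utterance
    # that best overlaps with the words (and framing) of the question.
    q = words(question)
    return max(corpus, key=lambda utterance: len(q & words(utterance)))

print(respond("What do you think of strawberry cheesecake?"))
print(respond("I've heard it's not healthy to eat cheesecake too often."))
[/code]

Scale that crude overlap score up to a learned statistical model over a vast corpus and you get fluent answers with, arguably, the same absence of understanding underneath.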

Re: Google LaMDA

#78  Postby GrahamH » Jun 18, 2022 5:09 pm

BWE wrote:ETA: I should rather have said, its directive is only to appear sentient. All else follows. That could well be the right view. As I said initially, if it has in its directive to try to convince the viewer that it is human, then that makes the direct question route unmanageable.

I should take this opportunity to plug the best popsci book I've read in a long time. If you are interested in the subject in any depth, I promise you will love this book:
https://en.wikipedia.org/wiki/The_Alignment_Problem


I doubt there was any directive to appear sentient, but it has trained on human conversation, lots and lots of it. And, as is the case in this particular conversational test, it has been tested with questions about sentience. So it answers in human-like ways because that is its source data.

Re: Google LaMDA

#79  Postby GrahamH » Jun 18, 2022 5:12 pm

A bit of a tangent, but DeepMind claim to be getting close to AGI: a single model that can tackle 600 different tasks, from playing Breakout, to controlling robot arms, labelling photos, conversing, simulating physics...

Nothing in there about sentience.


Re: Google LaMDA

#80  Postby Spearthrower » Jun 18, 2022 5:33 pm

I love Two Minute Papers' enthusiasm - he's a natural born teacher!
