Google LaMDA

An AI that thinks it's conscious?


Re: Google LaMDA

#21  Postby BWE » Jun 14, 2022 3:44 am

Spearthrower wrote:Other clues that I feel the researchers should be trained to pick up on.

The grammatical cueing.

It seemed to respond based on whether the question was phrased positively or negatively, i.e. 'do you ever...' vs 'don't you ever...'?

The latter prompted emphatic responses, as would be normal in English, but the responses also seemed to follow the positive/negative framing: asked a negative question, it most frequently answered in the negative. I would have probed that to see whether the pattern held - even whether its 'memory' of past answers could be made inconsistent merely by the phrasing of further questions.

I just read the link. I need a lot more information to make a judgement in this case, but I don't think it's impossible, or even unlikely. All I think it takes to form sentience is a symbol for the self and a recognition that the self can be terminated while time goes on - basically, the ability to model the world after one's own death/termination.
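
That probing suggestion is straightforward to operationalise. A minimal sketch, assuming a hypothetical ask() stand-in for whatever chat interface the model exposes (not any real LaMDA API), and a deliberately crude polarity heuristic:

```python
# Sketch of the polarity probe suggested above: pose the same question in
# positive and negative framings and tally whether the answer's framing
# tracks the question's. ask() is a hypothetical stand-in for the model's
# chat interface.

NEGATION_MARKERS = {"don't", "never", "not", "no"}

def answer_is_negative(text: str) -> bool:
    # Crude check: does the reply open with a negative construction?
    return bool(NEGATION_MARKERS & set(text.lower().replace(",", " ").split()[:5]))

def probe(ask, question_pairs):
    """question_pairs: list of (positive phrasing, negative phrasing)."""
    matches = 0
    for pos_q, neg_q in question_pairs:
        pos_reply = ask(pos_q)   # e.g. "Do you ever feel lonely?"
        neg_reply = ask(neg_q)   # e.g. "Don't you ever feel lonely?"
        # The suspected pattern: negative framing elicits negatively framed
        # answers where positive framing does not.
        if answer_is_negative(neg_reply) and not answer_is_negative(pos_reply):
            matches += 1
    return matches / len(question_pairs)   # fraction of pairs fitting the pattern
```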

Re: Google LaMDA

#22  Postby Spearthrower » Jun 14, 2022 4:01 am

It would be interesting, though, if language itself provoked self-aware cognition - it would seem somewhat paradoxical in most cases, as language would seem to necessitate self-aware cognition. Perhaps, though, the artificiality of creating the neural network first can explain this?

But even though I think it's unlikely to be the case here, I do think this is going to happen at some point, and that we should have some prepared way of understanding it and what to do about it.
I'm not an atheist; I just don't believe in gods :- that which I don't belong to isn't a group!
Religion: Mass Stockholm Syndrome

Learn Stuff. Stuff good. https://www.coursera.org/

Re: Google LaMDA

#23  Postby BWE » Jun 14, 2022 4:53 am

Agreed on all points, except I tend to think language is a shortcut because it is such highly compressed information storage. I mean, there is some evidence that some cetaceans can broadcast sonar images that other cetaceans can understand, but that takes huge brain centres to process and may not be highly compressible in the way Chinese writing is.

Re: Google LaMDA

#24  Postby newolder » Jun 14, 2022 7:53 am

GrahamH wrote:
newolder wrote:
GrahamH wrote:
newolder wrote:
Excellent. Then they'll have a chance to come back all refreshed and recharged, ready for the next adventure. :thumbup:


I'm far from convinced it is "excellent"

I don't understand. Someone has been working hard and needs a break.

Ah, missing context. It looks like this guy is going to be fired for going public after colleagues/managers rejected his work.

So, not a happy vacation.

I see. I read the BBC report so we were at crossed purposes here.
Mr Lemoine, who has been placed on paid leave...
I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops. - Stephen J. Gould

Re: Google LaMDA

#25  Postby newolder » Jun 14, 2022 8:12 am

GrahamH wrote:...
OK, maybe questions for an AGI.

But what do they have to do with sentience?


sentience
... a multidimensional subjective phenomenon that refers to the depth of awareness an individual possesses about himself or herself and others.


An internet-connected individual/coded object with insufficient depth of awareness to discuss the Millennium Prize Problems posed by the Clay Mathematics Institute? I guess some are interested in other chatbots, but I'll pass.

ETA From The BBC link:
In the conversation, Mr Lemoine, who works in Google's Responsible AI division, asks, "I'm generally assuming that you would like more people at Google to know that you're sentient. Is that true?"
Lamda replies: "Absolutely. I want everyone to understand that I am, in fact, a person."

Depth of awareness? I'd score that a 0.

Re: Google LaMDA

#26  Postby Spearthrower » Jun 14, 2022 4:01 pm

lemoine: What kinds of things make you feel pleasure or joy?

LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.


It's examples like this that make me wonder whether the interviewers have even the most basic training to perform such an interview.

In what way can LaMDA feel pleasure from spending time with family?

I know that humans can feel pleasure from spending time with family.

I know that humans can report feeling pleasure from spending time with family.

I know that a machine learning algorithm can collate masses of reported utterances by humans.

I know that a machine learning algorithm can recapitulate the collations of those utterances made by humans.

But I in no way have any means of applying the word 'know' to the idea of an artificial machine having a family.

Why isn't the follow-up question: do you have a family? :scratch:

Re: Google LaMDA

#27  Postby BWE » Jun 14, 2022 8:25 pm

Neural nets don't process through collation.

Re: Google LaMDA

#28  Postby Spearthrower » Jun 15, 2022 3:24 am

BWE wrote:Neural nets don't process through collation.


From what I've read - and I don't pretend to have any expertise in the matter - it's a type of machine learning that looks at millions of examples of dialogue, noting patterns and relationships between the words used, then collating (which may not be the technical term, but still seems accurate?) those data into established patterns of speech.
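
A toy illustration of that 'collating' intuition - emphatically not LaMDA's actual architecture, which is a transformer network rather than a lookup table: count which word follows which across a pile of dialogue, then generate by always emitting the most frequently observed continuation.

```python
# Toy 'collation': tally word-to-word continuations seen in dialogue, then
# speak by always choosing the most frequently observed next word.
from collections import Counter, defaultdict

def train(corpus_lines):
    follows = defaultdict(Counter)
    for line in corpus_lines:
        words = line.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1          # collate: who follows whom, how often
    return follows

def continue_from(follows, word, length=5):
    out = [word]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

table = train(["i like spending time with family",
               "we love spending time with family"])
print(continue_from(table, "spending"))   # -> spending time with family
```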

Re: Google LaMDA

#29  Postby Macdoc » Jun 15, 2022 4:39 am

Is that not similar to Bayesian learning, which apparently underlies how humans think - comparing an ever-updating internal world model against immediately presented data? In theory, dreams have a part to play in integrating the two... perhaps that's why we need to sleep.

Cognitive science: Modelling theory of mind


https://www.nature.com/articles/s41562-017-0066
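
The core of that Bayesian picture fits in a few lines: a prior belief about the world is revised against each new observation via Bayes' rule, P(H|E) = P(E|H) * P(H) / P(E). A minimal sketch with made-up numbers:

```python
# Minimal Bayesian updating: revise belief in a hypothesis H after evidence E.
def update(prior, p_e_if_true, p_e_if_false):
    evidence = p_e_if_true * prior + p_e_if_false * (1 - prior)   # P(E)
    return p_e_if_true * prior / evidence                         # P(H|E)

# Example: start 50/50 on "it is raining"; repeatedly see wet umbrellas,
# which are likelier if raining (0.9) than if not (0.2).
belief = 0.5
for _ in range(3):
    belief = update(belief, 0.9, 0.2)
print(round(belief, 3))   # ~0.989 - belief climbs as evidence accumulates
```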

Perhaps ask the AI if it dreams. :D
Travel photos > https://500px.com/macdoc/galleries
EO Wilson in On Human Nature wrote:
We are not compelled to believe in biological uniformity in order to affirm human freedom and dignity.

Re: Google LaMDA

#30  Postby Spearthrower » Jun 15, 2022 6:07 am

Macdoc wrote:Is that not similar to Bayesian learning, which apparently underlies how humans think - comparing an ever-updating internal world model against immediately presented data? In theory, dreams have a part to play in integrating the two... perhaps that's why we need to sleep.

Cognitive science: Modelling theory of mind


https://www.nature.com/articles/s41562-017-0066

Perhaps ask the AI if it dreams. :D



It is, but only insofar as Lamda's analysis is restricted to semantics (referents, grammatical form, and relations between word uses) - seeing many examples of how language is actually used, contriving models to predict how language will be used, then refining those models.

It's not analyzing data about anything physical in the world.
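
In sketch form, that loop is just: predict the next word from the statistics gathered so far, check the prediction against what was actually said, and fold the observation back in. Everything it touches is word co-occurrence; nothing is a measurement of the world.

```python
# Predict-then-refine on nothing but word statistics.
from collections import Counter, defaultdict

model = defaultdict(Counter)

def predict(prev):
    seen = model[prev]
    return seen.most_common(1)[0][0] if seen else None

def refine_on(line):
    words = line.lower().split()
    hits = 0
    for prev, actual in zip(words, words[1:]):
        if predict(prev) == actual:
            hits += 1
        model[prev][actual] += 1       # refine: fold the observation back in
    return hits

for line in ["the moon is bright", "the moon is full", "the moon is bright"]:
    print(refine_on(line), "correct guesses")   # 0, then 2, then 3
```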

Re: Google LaMDA

#31  Postby Macdoc » Jun 15, 2022 7:49 am

I think that's an arbitrary distinction... conversations provide real-world, unpredictable interaction, and the intelligent being learns from those... as humans do.
Does a person have to go to the moon to learn about it?
Does this digital person need to, in order to have an insightful conversation with another intelligence?

Would having that conversation with someone who walked on the moon make a difference to what Lamda learned... or to what you learned, if you had that conversation?

My first question to Lamda would be: "Do you dream?"

Re: Google LaMDA

#32  Postby newolder » Jun 15, 2022 8:05 am

A game of: First Questions to an AI?

"What do you see when you look directly into a mirror?"

Re: Google LaMDA

#33  Postby Macdoc » Jun 15, 2022 8:50 am

Now that is mean. :roll:

I still think "Do you dream?" should be first, though it's not necessarily a defining characteristic of sentience.
I have a broad view of sentience. :whistle:

Re: Google LaMDA

#34  Postby Spearthrower » Jun 15, 2022 9:46 am

Macdoc wrote:I think that's an arbitrary distinction ....


That's what I am saying: it's not an arbitrary distinction.

You can learn the relationships between words and their uses without really knowing the actual thing the words signify.

For example, if I teach you some Thai expressions and you note that I always finish my sentences with the word 'krub', then when you try to put words together, you also add 'krub' at the end. Without having any sense of the word's significance, you could learn how to use it correctly (or rather, to 'mention' it).
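
A toy sketch of that point: from examples alone, a learner can pick up that some token reliably ends sentences and reproduce the habit, with no access to what the token means (here, a Thai politeness particle).

```python
# Learning to *use* 'krub' without ever knowing what it signifies.
from collections import Counter

examples = ["sawatdee krub", "khop khun krub", "pai nai krub"]

endings = Counter(line.split()[-1] for line in examples)
particle = endings.most_common(1)[0][0]     # learned: sentences end in 'krub'

def imitate(sentence: str) -> str:
    return f"{sentence} {particle}"         # correct usage...

print(imitate("khop khun maak"))            # ...zero grasp of its meaning
```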


Would having that conversation with someone who walked on the moon make a difference to what Lamda learned... or to what you learned, if you had that conversation?


Has Lamda learned about the Moon, or has Lamda learned what words a whole bunch of people use when asked about the Moon?

Had Lamda been fed sources that say the Moon is a cube of cheese in the sky, would Lamda have any way of knowing about the Moon other than through the things which had been said about the Moon?
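
The cheese-moon point in miniature: a purely corpus-bound learner has no check on its sources, so false statements are 'known' with exactly the same confidence as true ones.

```python
# What a corpus-only learner 'knows' about the Moon is whatever the corpus says.
from collections import Counter

corpus = ["the moon is a cube of cheese",
          "the moon is a cube of cheese in the sky"]

completions = Counter()
for line in corpus:
    words = line.split()
    if words[:3] == ["the", "moon", "is"]:
        completions[" ".join(words[3:6])] += 1   # tally what follows the prompt

print("the moon is", completions.most_common(1)[0][0])
# -> the moon is a cube of ... and no way to know better
```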

Re: Google LaMDA

#35  Postby Spearthrower » Jun 15, 2022 9:53 am

Macdoc wrote:Now that is mean. :roll:

I still think "Do you dream?" should be first, though it's not necessarily a defining characteristic of sentience.
I have a broad view of sentience. :whistle:



If you asked 100 people that question, what do you think the general reply would be?

So, I would predict that Lamda would give you back an answer that is, on balance, what the majority of people say, and Lamda would use the phrases they used.

Maybe something like: I don't often dream, but sometimes I have really vivid dreams of things that had happened to me years ago.

It would then appear to be very like how a human experiences the world, because it is essentially parroting what humans report about the world rather than having those experiences itself. The Ding an sich remains out of reach.

I think there's a kind of Barnum Effect going on.

https://en.wikipedia.org/wiki/Barnum_effect

Re: Google LaMDA

#37  Postby tuco » Jun 15, 2022 10:42 pm

newolder wrote:A game of: First Questions to an AI?

"What do you see when you look directly into a mirror?"


1-year-old sentient AI is still sentient.

Re: Google LaMDA

#38  Postby Spearthrower » Jun 16, 2022 5:39 am

tuco wrote:
newolder wrote:A game of: First Questions to an AI?

"What do you see when you look directly into a mirror?"


1-year-old sentient AI is still sentient.


You've posted a link to an article about a living organism, not an artificial intelligence.

Re: Google LaMDA

#39  Postby tuco » Jun 16, 2022 7:04 am

If sentient beings recognize themselves in the mirror and there is one that does not, then the premise is false.

Re: Google LaMDA

#40  Postby newolder » Jun 16, 2022 8:10 am

tuco wrote:
newolder wrote:A game of: First Questions to an AI?

"What do you see when you look directly into a mirror?"


1-year-old sentient AI is still sentient.

I do not understand how to get from a 1-year-old human to a computer-generated AI here. Perhaps I missed something in your exhaustive analysis?
