Posted: Jun 13, 2022 5:02 pm
by Spearthrower
I don't think you can prove a human is; I think you just have to acknowledge that you feel sentient based on <suite of reasons>, and that other humans feel sentient based on those same reasons. So if you then contend that their feelings of sentience are insufficient to validate their sentience, you're essentially also demolishing your own claim to sentience, which is why most sane people wouldn't go down that route. So, if an AI were able to tell me it was sentient, provide a suite of reasons I could intuitively understand as being similar enough to my own experience, and if I could be sure it wasn't just a very sophisticated program that had essentially been told what to say, then yes, I'd accept it was sentient.

Of course, this doesn't escape what you might call a parrot problem (kinda like the use-mention distinction): even a non-sentient AI could acquire through machine learning a list of the things I consider indicative of sentience and then just parrot that list back at me. But then again, I can't really rule that out as being what sentience is in humans anyway.

Whatever sentience is, though, if a slug can possess it, a heron can possess it, a deep-sea crab can possess it, maybe even a plant can possess it... then whatever 'it' is must cover an incredibly broad area, and the concept probably either has to remain poorly defined in order to account for all the instances inside the category, or be so restrictive that it loses most of its interest value.

As for that most restrictive sense, sentience is really just: do you 'feel'? If I say I have a headache, you believe me. You can't see the headache or share that particular experience I'm having in any way, but you've no reason to disbelieve me - whereas you might well disbelieve me if I reported an experience perceived through my senses which you couldn't yourself sense. Similarly, if the feeling in question is existential - you say you feel elated - it would be a non-sequitur for me to say 'no you don't', because we all know I have no access to that information other than what you report to me. So your sentience is reported, while a slug's is assumed from its physical and physiological responses to plausible pain stimuli. And if we accept the sentience of a slug - a creature with very little in the way of neural capability - because it responds to pain and therefore shows it has feelings, then I think we'd have to be prepared to accept that an entity plausibly capable of even higher processing power than a human brain could attain some form of sentience, regardless of whether it's the sentience we experience - in the same way we don't expect the slug to possess a human experience of sentience.

So in summary: an AI could attain a sentience, but the only way we would know is if it reported as much; and ultimately, even if it were sentient, that sentience would remain permanently outside our actual knowledge (only ever reported), and would probably be an experience within the category of sentience that no human will ever share.

Knowing humans, though, sentience chauvinism will be a qualifying component of any such recognition for most of the following centuries. We don't seem cognitively well evolved enough even to accept other, barely distinguishable, human groups - imagine how we'd be with something that's conceivably smarter and more powerful than us. Ugh.