Posted: Jun 16, 2022 3:52 pm
by Spearthrower
Btw Graham - thanks for bringing this here. There are always lots of topics around I find intriguing to discuss, but always feel awkward creating a thread for them!

Good question. Humans have self-images that don't remotely represent what's going to make them sentient. If you saw your brain you wouldn't recognise yourself.

That is what provoked me to consider it - how would an AI visually recognize itself in comparison with its inner concept of itself? An AI would be present only as a function of a massive array of circuitry and wires and boards and cases and maybe even fans... in a strangely analogous way to the various pipes and pathways in our bodies (we know they are part of us, but when we think of who we are, they are not the thing we think of).

However, we 'face' the world - we have a front bit that goes forward, and our primary sense organs (most specifically, sight) are tightly packed together at that front bit due to our evolutionary heritage. To us, we are recognizable by our face much more than by most of the rest of our body, except maybe our overall silhouette. We know each other visually by face, but not usually by other parts of our body. Self-recognition for us would be facial, I contend, and an AI has no 'face', no inherent external projection of itself.

We also have kinds of somatic awareness that seem to have no analogue in an AI - proprioception, for example: the sense of where our body extends, of motion and force, because we're all meatily connected. An AI isn't experiencing forces like us; it doesn't employ a body, and so it doesn't need any sense of the body's spatial orientation, position or motion, even if it could potentially simulate them.

Self-awareness, beyond the having of subjective experience, could be limited to embodied AI, where experiential states can be strongly correlated with physical action.

This is somewhere my thoughts have gone before too. A sentient AI - or at least one that we'd recognize as sentient - probably can only come about by interacting with the physical world: it has to calibrate, correct, encounter and build that mental construct in order to have the Bayesian modelling style of thought, comparable to humans', that Macdoc brought up earlier. That's the only way we have ever seen sentience occur - bumping into the world and learning, to whatever degree the organism is capable, what you can bump back and what you should avoid being bumped by.
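As a toy illustration of that 'bumping into the world' idea, here is a minimal sketch of Bayesian belief updating through interaction: an agent starts uncertain whether an obstacle is solid and revises its belief each time it bumps into it. All the numbers (the 0.9 and 0.2 likelihoods, the sequence of bumps) are illustrative assumptions, not anything from the discussion above.

```python
def update(prior, likelihood_if_solid, likelihood_if_not):
    """Bayes' rule: posterior probability that the obstacle is solid."""
    numerator = prior * likelihood_if_solid
    evidence = numerator + (1 - prior) * likelihood_if_not
    return numerator / evidence

belief = 0.5  # the agent starts with no idea whether the obstacle is solid

# Each "bump" is an observation: assume a solid obstacle resists with
# probability 0.9, a non-solid one with probability 0.2 (made-up numbers).
for felt_resistance in [True, True, False, True]:
    if felt_resistance:
        belief = update(belief, 0.9, 0.2)   # resistance felt
    else:
        belief = update(belief, 0.1, 0.8)   # no resistance felt

print(round(belief, 3))  # → 0.919
```

The point is only structural: the agent's model of the world is built up from repeated physical encounters, each observation reshaping its prior, which is the style of learning the paragraph above gestures at.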

I do think there's a lot of room to make mistakes here. It's like the idea of carbon chauvinism, where our experiments searching for life on other planets may well miss clear examples of such life simply because we're expecting to see the wrong kind of thing and thus not testing for the right kind of thing.