GrahamH wrote:
zoon wrote:Where I’m disagreeing (though on definitions rather than on substance) is that I think the word “conscious” is more generally used to apply to the things that trigger our ToM modelling processes, whether those things are adult humans or non-human animals which almost certainly don’t model their own processes? Perhaps, since in common sense “consciousness” is taken to apply to some ineffable non-physical inwardness, those of us who don’t think there is any such thing are bound to be using the word in a non-standard way anyhow.
There is some disagreement there, because I think the word 'conscious' cannot reasonably be separated from notions of potential responsiveness. I don't think the '-ness' can be the 'trigger'; it has to be the 'modelling', the active, responsive, functional capacity to act with 'awareness'.
You seem content to call a puppet conscious (but only while it's being animated by the puppeteer?)
The parallel would be that the brain attributes consciousness to the 'body puppet' when it is animating it (awake).
My feeling is still that this discussion is a bit like asking what exactly constitutes a “living” thing: consciousness is another term with an apparently clear-cut common sense meaning which doesn’t have a clear scientific basis, so it’s even more vague around the edges than most words. For example, sometimes it makes sense to talk of a “live” virus as opposed to a heat-treated, inactivated one, while at other times it may equally reasonably be argued that viruses are never “alive” because they don’t actually do anything themselves; they just set the infected cell’s machinery to producing more viruses. The science is clear, but the terminology involving the word “life” isn’t, because the word is still associated with a pre-scientific view of the world?
I would not generally be tempted to call puppets or thermostats conscious, though I can have some sympathy with those who do. I was thinking more of the continuum from ants to frogs to rats to chimps to people. If we limit consciousness to things which can indisputably have a Hard Problem, we would be left with adult humans alone. There’s certainly a case to be made for limiting the word “conscious” to creatures which do some self-modelling (though we know very little about which animals do, and how much), but it seems to me that this would involve a redefinition? Again, my main feeling is that “conscious” implies a world view which I don’t agree with, so if we continue to use the word we’ve redefined it anyway. A frog may not model itself, but it can certainly suddenly become aware of a wriggling worm in front of its nose, and predicting a frog successfully generally involves mental concepts, e.g. that it’s afraid of something, or that it believes the worm is edible.
I certainly agree with you that the Hard Problem is about human brains modelling themselves; where I’m disagreeing is that I don’t think the word “conscious”, as it’s ordinarily used, necessarily maps neatly onto creatures that model themselves, in the way that “water” maps onto dihydrogen monoxide. It’s more like the way “reptile” turns out not to be a clade: the word “reptile” is still used, but not in scientific classification.
GrahamH wrote:We may also differ on the related issue of self-consciousness. The Hard Problem doesn't require self-consciousness of the sort that newborn humans and most animals lack; that sort involves conscious events or thoughts of 'this is me'. All that the HP requires is that the entity 'knows what pain feels like' (models that there is a self in pain). I'd feel comfortable assuming that at least all mammals have that capacity.
We know very little about whether, or to what extent, non-human animals model themselves or work with the concept of a self. My suspicion is that we use Theory of Mind (ToM) processes to predict e.g. dogs because this kind of prediction works well (far better than any attempt to follow their brain activities, which would get nowhere with current technology), and that because we are using ToM we easily project more of our own way of thinking onto the dog than is in fact the case. I suppose I’m not really disagreeing about whether dogs actually model themselves, but rather saying that I think we think of them as conscious (“knowing there is a self in pain”) for other reasons? I do agree with you that the Hard Problem involves modelling oneself. I think you’ve probably identified a disagreement, in that I would limit the Hard Problem to the kind of self-modelling that is (so far) found only in adult humans, except to the extent that when using ToM we automatically attribute consciousness, and all its attendant problems, to other creatures as well as to ourselves.