I and others argue that, in scott1328’s words (#21), “ToM provides the underpinnings of human consciousness”. kennyc disagrees:
kennyc wrote:Actually it's the opposite, consciousness provides the basis for theory of mind. Once an entity becomes self/aware they then realize and recognize that others are as well and can attribute mental states, knowledge, and perspectives/POV to them.
Consciousness comes first from awareness when the brain has reached to appropriate development level.
I think my main difference with kennyc is semantic, a question of what is to count as “consciousness”, and I don’t think it’s a difference worth getting too hung up on.
As I see it, the word “consciousness”, along with mental terms generally, has traditionally been taken to refer to entities/events belonging to some essentially non-physical spirit realm, which both of us (along with most people posting on RatSkep) agree does not exist.
Science has often shown that the world is not the way it was thought to be, and language gets readjusted to fit the new reality, often fairly slowly and vaguely as the scientific model gains acceptance. Some words, like “water” or “sunset”, simply go on being used in ordinary life very much as before, while the scientific model changes underneath them. These words have to some extent been redefined: water is no longer an Aristotelean element but is still the stuff we drink; a sunset is no longer thought to be the sun moving (though it’s often spoken of as if it were), but is still the sun’s disappearance over the horizon. Other words, like “ghost” or “angel”, don’t get redefined; instead, the things they stand for are no longer taken to exist.
I think that mental terms such as “consciousness” are in a considerable state of uncertainty at the moment. Very many people, including everyone who is religious, would still see minds and consciousness as something non-physical, in spite of all the evidence from scientific discoveries, so they are happy to use “consciousness” in its full traditional sense. Atheists of a scientific and physicalist bent do not use the word that way, and a question arises: whether to redefine it like “water” or “sunset” (and if so, exactly how to redefine it), or whether to treat it like “ghost” or “angel”, and say that what it refers to is something which science has shown not to exist.
In another thread here, logical bob has been making a case for treating all mental terms, including consciousness, in the second way, like “ghost” or “angel”. For logical bob (if I’ve understood him correctly) and those who agree with him, consciousness is simply something that has been shown not to exist.
Both kennyc and I (again, supposing I’ve understood kennyc correctly) disagree with logical bob; we would both prefer to redefine the term “consciousness” as “water” and “sunset” have been redefined, so that it now refers to something scientifically respectable. We disagree with each other on what that something would be. I think perhaps both kennyc and I would agree that the known scientific facts could well be much as described (or hypothesized) by the Princeton researcher Michael Graziano on his website here:
Michael Graziano wrote:
About half a billion years ago, nervous systems evolved an ability to enhance the most pressing of incoming signals. Gradually, this ability to focus on selected signals came under a more sophisticated, top-down control and became what is now called attention.
In control theory, if a brain is to control something, it should have an internal model of the thing to be controlled. According to the "attention schema theory", to effectively deploy its own attentional focus, the brain needs a constantly updated simulation or model of attention. Otherwise the brain would not possess explicit knowledge about its changing state of attention, or about the consequences of attending to something. This model of attention is schematic and lacking in detail.
There is no adaptive reason for a brain to know that it has electrochemical signals passing through neurons, or that the signals compete in a complex manner that results in some signals becoming enhanced, or that the enhanced signals have more influence over the parts of the brain involved in decision-making, movement, and memory. Brains don’t need that detailed or accurate information about themselves in order to function. Instead, the simplified model of attention attributes to the self an experience of X -- the property of being conscious of something. In this theory, a brain attributes to itself, "I am aware of X, in the sense of mentally possessing X and being able to react to X," because that attribution is a good, if simplified, model of the much more complex process of paying attention to X. The model helps keep track of the ever-changing state of attention and helps to predict the consequences of attention. Just as the brain can direct attention to external signals or to internal signals, that model of attention can attribute to the self a consciousness of external events or of internal events. Self awareness, awareness of emotions, awareness of one’s own thoughts, awareness of sensory events, all of these types of awareness can be accommodated by this theory.
In this theory, a brain does not actually have awareness. Instead it has attention, a mechanistic process. It also has information, in an internal model, that tells it that it has awareness. The information describes a self that experiences something and that can choose to react to and remember that something. The reason for this information is that it is a useful, if approximate, description of attention. The brain is captive to that internal information. On introspection — when relying on internal data — the system will always conclude that it has awareness, because that is what its internal models tell it.
As the model of attention increased in sophistication through evolutionary time, we hypothesize that it came to be used for a variety of other cognitive purposes. It may have enhanced the integration of information in the brain. For example, if your brain is attending to an apple, a model of that internal state requires a model of the apple, a model of yourself, and a model of the act of attention. These disparate pieces of information must be linked together — much like color and shape information must be linked together to form a visual model of the apple. Your brain would then possess an internal model that says, in effect, “I am aware of the apple.” An internal model of attention therefore fundamentally links information across many domains, especially between information about the self and information about the outside world.
Another use of an internal model of attention is to model the attentional state of other individuals to gain better prediction of their behavior. We suggest that in the human brain, similar and partly overlapping mechanisms attribute awareness to oneself and attribute awareness to others.
It is not clear when in evolution the social attribution of awareness began to emerge. The accompanying diagram places it at the start of primate evolution, 65 million years ago (MYA), but it could have begun much earlier. Perhaps most birds and mammals have some ability to attribute awareness to each other. Another possibility is that the social use of awareness expanded much later with hominins, beginning about 6 MYA. Now, in humans, consciousness plays a major role in social and cultural capability. We paint the world with perceived consciousness. Family, friends, pets, spirits, gods, these are all suffused with attributions of consciousness.
In this theory, awareness, the ability of brains to attribute to themselves a subjective experience of something, emerged first with a specific function related to the control of attention. It continues to evolve, however, expanding its cognitive role, becoming the intricate lattice of cognitive and social properties we call consciousness.
The attention schema theory is entirely mechanistic and therefore scientifically testable. In this theory, awareness is not a fuzzy philosophical flourish, but a key part of the brain's machinery for processing data.
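The structural point of the quoted theory can be caricatured in a few lines of code. This is purely my own toy illustration, not anything of Graziano’s (all function names and signal values are invented): attention is a mechanistic selection among competing signals, the “schema” is a drastically simplified internal model of that process, and introspection can consult only the model, never the mechanism.

```python
# Toy sketch of the attention schema idea (my illustration, not Graziano's):
# (1) a mechanistic "attention" process that enhances the strongest signal,
# (2) a coarse internal model of that process, and
# (3) introspection that can only read the model.

def attend(signals):
    """Mechanistic attention: select the most pressing incoming signal.
    The competition among signal strengths happens here, but none of
    this detail is recorded in the schema below."""
    return max(signals, key=signals.get)

def update_schema(schema, focus):
    """The attention schema: a simplified model that omits the mechanism
    (no signal strengths, no competition) and just records the claim
    'I am aware of X'."""
    schema["report"] = f"I am aware of {focus}"
    return schema

def introspect(schema):
    """On introspection the system relies on internal data alone, so it
    always concludes that it 'has awareness' -- that is all its model says."""
    return schema["report"]

# Hypothetical signals competing for attention:
signals = {"apple": 0.9, "background hum": 0.2, "itch": 0.4}
schema = update_schema({}, attend(signals))
print(introspect(schema))  # -> I am aware of apple
```

The design choice mirrors the quote: the system is “captive” to its internal information, because `introspect` has no access to `signals` or to the workings of `attend`, only to the schema’s simplified report.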
Prof Graziano is using the word “awareness” to refer to a non-human animal’s hypothesised attribution of subjective experience to itself, and he uses the word “consciousness” only in connection with the more advanced and more definite human attribution of subjective experience to others as well as to self. I would prefer that usage (keeping “consciousness” for the unique, shared, interpersonal way humans think of awareness), while kennyc considers that “consciousness” should be the word for both animals and humans. Again, I don’t think either of us disagrees with Prof Graziano’s summing up of the science, at least as a reasonable set of hypotheses.
I’m coming round to the view that the word “consciousness” (along with other mental terms) is, at least currently, irredeemably vague; people are going to use these terms to mean very different things. If the intended meaning is clear enough from the context, that’s fine, but if there’s a fair chance of being misunderstood, then the terms should be carefully defined at the outset (as Prof Graziano does in the quote above for “attention”, “awareness” and “consciousness”) or avoided altogether. I don’t think it’s helpful for atheists of a scientific persuasion to argue too much about exactly what mental terms such as “consciousness” should or should not mean, since the science is still unclear (I’m not sure how much of Prof Graziano’s exposition above is hypothetical), and the one point on which we agree is that most people (e.g. all theists) are using the words in error. If we are trying to use exact terminology, it’s safer either to clarify what we mean on each occasion or to avoid the words.
This does become tricky, because mental terms such as belief and intention, and indeed consciousness, are still used all the time; they express needed everyday concepts succinctly (in this respect beliefs and intentions are not like ghosts or angels), and trying to define them exactly, or to replace them, becomes long-winded and often less intelligible than before.