Posted: Dec 01, 2020 11:07 am
by zoon
Fenrir wrote:My only quibble is the contention that most scientific discoveries contradict our view of reality, which I think is totally unfounded.

My guess, however, is that that minor quibble is about to be overwhelmed by a tsunami of nonsense just as soon as the narcissists idealists get here.


I had better start with the disclaimer that I’m not an idealist: I think all the evidence supports the conclusion that our brains follow the laws of physics, not the other way round. The evidence also supports the argument that our brains are the result of evolution, so that the mental traits of an individual are, on average, “designed” by natural selection to maximise the survival of that individual’s genes (rather than the survival of the individual or the group).

It’s fair to say that the scientific worldview has no place for consciousness, however real one’s own consciousness may feel, and this is distinctly counterintuitive. However, I think it’s something we can reasonably easily accept at least theoretically, in the same way that we accept while watching a sunrise that the earth is turning, even though all our direct senses are telling us that the earth is stationary and the sun is moving. (*1)

It’s also the case that the moral rules which underlie much of our social behaviour are intuitively assumed to depend on the other person being sentient. If science is correct and sentience is unreal, then it appears at first sight to follow that it would be OK to disregard other people’s feelings and to treat them as badly as we want. This is the trickier part: the reality of the non-scientific consciousness of other people is assumed to be central to the way they “should” be treated in ordinary life, so it’s dangerous to jettison that belief. I think the key point here is that we need to treat other people in our group reasonably well if we want them to cooperate, whether or not they are “really” sentient. With that in mind, maintaining at least some moral rules makes sense (both normatively for ordinary life and descriptively for evolutionary theory). Our evolved tendency to ascribe sentience to other people enables us to predict how we need to treat them if we want the benefits of cooperation. (*2)

*1
We see people as conscious because we’ve evolved to use teleological guesswork to predict them. Guessing what another person wants, and then working backwards to what they may do about it, is still a much more effective way of predicting people in ordinary social life than the best of modern science. The guesses work because human brains are similar to each other. (Citing, as usual, Wikipedia on Theory of Mind here.) The predictions of science are not teleological; they are based on looking at what has happened in the past and assuming the same will happen in the future. Science has transformed our ability to predict the behaviour of many inanimate objects, but human brains, so far, are too complicated. Evolved guesswork still beats science in social life, and so we still see ourselves and others as conscious.

*2
Modern evolutionary theory emphasises that natural selection is primarily about the survival of an individual’s genes, not the survival of the individual or the group. Quoting from a 2011 article in Nature signed by 135 scientists here: “Natural selection explains the appearance of design in the living world, and inclusive fitness theory explains what this design is for. Specifically, natural selection leads organisms to become adapted as if to maximize their inclusive fitness.”
Traditionally, the “scientific” view of human nature, that natural selection would design us to be selfish, has been pitted against the “religious” view, that we should morally care as much about others as ourselves (at least within our own group). Modern evolutionary theory cuts across this dichotomy; we are not entirely selfish, nor can we be expected to be entirely altruistic.
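The “design” that inclusive fitness theory describes can be put in one line of arithmetic: Hamilton’s rule, under which an altruistic act is favoured by selection when relatedness times benefit exceeds cost. A minimal sketch (the rule is standard; the particular numbers are my own, purely illustrative):

```python
# Hamilton's rule: an altruistic act is favoured by selection when
#   r * b > c
# where r is the actor's genetic relatedness to the recipient, b the
# fitness benefit to the recipient, and c the fitness cost to the actor.

def altruism_favoured(r: float, b: float, c: float) -> bool:
    """True if inclusive fitness favours the altruistic act."""
    return r * b > c

# Helping a full sibling (r = 0.5) pays only if the recipient gains
# more than twice what the actor loses:
print(altruism_favoured(0.5, 3.0, 1.0))   # True  (0.5 * 3.0 = 1.5 > 1.0)
print(altruism_favoured(0.5, 1.5, 1.0))   # False (0.5 * 1.5 = 0.75 < 1.0)
# Towards a non-relative (r = 0), kin selection alone never favours it:
print(altruism_favoured(0.0, 10.0, 1.0))  # False
```

This is why the theory cuts across the selfish/altruistic dichotomy: the same rule predicts costly help towards kin and none towards strangers, so neither pure selfishness nor pure altruism falls out of it.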
In non-human animals, close and extensive cooperation between individuals of the same species is always based on close kinship. For example, the colonies of eusocial insects such as ants and bees consist of closely related individuals. Humans are unique in the extent to which we cooperate with effectively unrelated individuals of our own species, and it is this cooperation which has made humans more successful than any other species (until we blow ourselves up, which may yet happen).
The added ingredient in human cooperation is our ability to manage reciprocity (cooperation for the benefit of all parties) without being cheated, and this needs intelligence. This is at least partly still speculation; I’m going by the abstract of a 2018 article, “The coevolution of cooperation and cognition in humans”, here:
Cooperative behaviours in archaic hunter–gatherers could have been maintained partly due to the gains from cooperation being shared with kin. However, the question arises as to how cooperation was maintained after early humans transitioned to larger groups of unrelated individuals. We hypothesize that after cooperation had evolved via benefits to kin, the consecutive evolution of cognition increased the returns from cooperating, to the point where benefits to self were sufficient for cooperation to remain stable when group size increased and relatedness decreased. We investigate the theoretical plausibility of this hypothesis, with both analytical modelling and simulations. We examine situations where cognition either (i) increases the benefits of cooperation, (ii) leads to synergistic benefits between cognitively enhanced cooperators, (iii) allows the exploitation of less intelligent partners, and (iv) the combination of these effects. We find that cooperation and cognition can coevolve—cooperation initially evolves, favouring enhanced cognition, which favours enhanced cooperation, and stabilizes cooperation against a drop in relatedness. These results suggest that enhanced cognition could have transformed the nature of cooperative dilemmas faced by early humans, thereby explaining the maintenance of cooperation between unrelated partners.

As the authors of that paper point out in the discussion, the enhanced cooperation which intelligence makes possible doesn’t necessarily depend on punishment of cheats. That article doesn’t discuss collective punishment, but intelligently targeted collective punishment would presumably enhance cooperation still more.
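The qualitative story in that abstract can be caricatured in a few lines of code. This is emphatically not the authors’ model: the payoff numbers, the replicator update, and the way “cognition” simply multiplies the benefit of cooperative interactions are all assumptions of mine, chosen only to show the claimed pattern (cooperation establishes under high relatedness, and then either collapses or persists when relatedness drops, depending on cognition):

```python
def evolve(p, relatedness, cognition_gain, generations=200,
           benefit=3.0, cost=1.0, base=2.0):
    """Deterministic replicator dynamics for a toy helping game.

    p is the frequency of cooperators. Cooperators pay `cost`; kin
    structure means a cooperator meets another cooperator at rate
    r + (1 - r) * p, while a defector meets one at rate (1 - r) * p.
    `cognition_gain` multiplies the benefit of cooperative encounters,
    crudely standing in for the paper's synergy between cognitively
    enhanced cooperators.
    """
    r, g, b, c = relatedness, cognition_gain, benefit, cost
    for _ in range(generations):
        w_coop = base - c + (r + (1 - r) * p) * b * (1 + g)
        w_defect = base + (1 - r) * p * b  # defectors receive, never pay
        mean_w = p * w_coop + (1 - p) * w_defect
        p = p * w_coop / mean_w  # replicator update
    return p

# Phase 1: high relatedness, no cognition -- cooperation establishes.
p_kin = evolve(0.5, relatedness=0.5, cognition_gain=0.0)
# Phase 2: relatedness drops to zero, with and without the cognition term.
p_without_cognition = evolve(p_kin, relatedness=0.0, cognition_gain=0.0)
p_with_cognition = evolve(p_kin, relatedness=0.0, cognition_gain=1.0)
# Cooperation persists after the drop in relatedness only in the
# cognition case.
print(round(p_kin, 3), round(p_without_cognition, 3),
      round(p_with_cognition, 3))
```

With these (arbitrary) numbers, cooperation fixes while relatedness is high, collapses when relatedness falls with no cognition term, and remains stable when the cognition term is switched on, which is the coevolutionary pattern the abstract describes.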

Metaphysically speaking, it might very well be true that consciousness is not “real”, in that if we understood and could predict each other more effectively by scientific methods than by evolved Theory of Mind, then we would not see ourselves as conscious. This may happen in the future, but it hasn’t happened yet. Equally, it is metaphysically possible that my own consciousness is real and nobody else’s is, which is solipsism. It is also metaphysically possible that all space is filled with invisible pink unicorns. On any of these metaphysical hypotheses, we are still stuck with needing to cooperate with other people, and with the fact that so far our evolved brains manage this cooperation best by attributing consciousness to ourselves and others, whether or not it’s “real”.