Paul Staggerman wrote:Pebble wrote:The requirement for emotional insight and empathy for 'moral' behaviour is not evidence for objective morality - rather the opposite. What this shows is that we 'learn' our morals from observing others and being aware of their needs/desires.
Yea, this is an argument against moral realism, as I stated above.
While I agree with both you and Pebble, it also seems to me that this approach, somewhat ironically, is an argument in favour of morality as a real biological phenomenon, even though philosophically it’s an argument against moral realism.
It seems to me that moral thinking is a clear predisposition of our species, in the same sort of way that language is. This is independent of any individual person, and is also independent of any culture.
For example, Yale Professor of Psychology Paul Bloom describes how the behaviour of pre-linguistic babies can show moral thinking in a readable 2010 article here, which begins:
Paul Bloom wrote:Not long ago, a team of researchers watched a 1-year-old boy take justice into his own hands. The boy had just seen a puppet show in which one puppet played with a ball while interacting with two other puppets. The center puppet would slide the ball to the puppet on the right, who would pass it back. And the center puppet would slide the ball to the puppet on the left . . . who would run away with it. Then the two puppets on the ends were brought down from the stage and set before the toddler. Each was placed next to a pile of treats. At this point, the toddler was asked to take a treat away from one puppet. Like most children in this situation, the boy took it from the pile of the “naughty” one. But this punishment wasn’t enough — he then leaned over and smacked the puppet in the head.
The researchers in this experiment had been careful to keep physical violence out of the puppet shows, but the child still had a strongly physical moralistic reaction to the anti-social behaviour of one puppet towards another; he was not reacting to anything the puppet had done to him. This is a key aspect of morality, and, as the article describes, it is clearly present even in one-year-old babies. As with language, it is not present to anything approaching the same degree in any non-human animal, though many precursors have been observed. Morality is an evolved, wired-in social adaptation that is central to our species' ability to operate in effective groups.
Also as with language, the exact form that morality takes can vary widely between cultures, although there are core features. One interesting example of a core feature is brought out by the trolley problems; I find it interesting because the results show that our moral thinking can be similar across cultures even when it is somewhat illogical, very probably because a recently evolved set of brain processes is clashing with a more ancient set in another part of the brain.
Joshua Greene, a professor of psychology at Harvard, describes the trolley problem here:
Joshua Greene wrote:First, we have the switch dilemma: A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. You can save these five people by diverting the trolley onto a different set of tracks, one that has only one person on it, but if you do this that person will be killed. Is it morally permissible to turn the trolley and thus prevent five deaths at the cost of one? Most people say "Yes."
Then we have the footbridge dilemma: Once again, the trolley is headed for five people. You are standing next to a large man on a footbridge spanning the tracks. The only way to save the five people is to push this man off the footbridge and into the path of the trolley. Is that morally permissible? Most people say "No."
These two cases create a puzzle for moral philosophers: What makes it OK to sacrifice one person to save five others in the switch case but not in the footbridge case? There is also a psychological puzzle here: How does everyone know (or "know") that it's OK to turn the trolley but not OK to push the man off the footbridge?
(And, no, you cannot jump yourself. And, yes, we’re assuming that this will definitely work.)
As the foregoing suggests, our differing responses to these two dilemmas reflect the influences of competing responses, which are associated with distinct neural pathways. In response to both cases, we have the explicit thought that it would be better to save more lives. This response is a more controlled response (see papers in here and here) that depends on the prefrontal control network, including the dorsolateral prefrontal cortex (see papers here and here). But in response to the footbridge case, most people have a strong negative emotional response to the proposed action of pushing the man off the bridge. Our research has identified features of this action that make it emotionally salient and has characterized the neural pathways through which this emotional response operates. (As explained in this book chapter, the neural activity described in our original 2001 paper on this topic probably has more to do with the representation of the events described in these dilemmas than with emotional evaluation of those events per se.)
Research from many labs has provided support for this theory and has, more generally, expanded our understanding of the neural bases of moral judgment and decision-making. For an overview, see this review. Recent theoretical papers by Fiery Cushman and Molly Crockett link the competing responses observed in these dilemmas to the operations of “model free” and “model based” systems for behavioral control. This is an important development, connecting research in moral cognition to research on artificial intelligence as well as research on learning and decision-making in animals.
The idea is that the response in the first, switch, case is organised by an evolutionarily more recent part of our brain, which counts the number of lives to be saved and comes up with the answer: flip the switch. By contrast, the second case, which involves physically shoving someone to their death, activates an older network in the brain that urgently says "just don't do it", overriding the cold-blooded counting network. But we are unaware of these mechanisms; we just feel that switching to save five lives is OK while pushing to save five lives isn't, and then come up with various bizarre "explanations" for these logically incompatible moral intuitions. This pair of problems has been tried out on people across the world, in very different cultures, including Amazonian Indians who had no idea what trolleys were. When the questions were modified to be about canoes instead of trolleys, the same results were found.