Posted: Aug 20, 2017 11:34 am
by zoon
Zadocfish2 wrote:Huh. I know this is an old thread, sorry for coming back to it, but... Coming into this thread, I did not expect this to be what it was about.

There's a real point to be talked about here in sociology (the thread title, not the first post). Humans form governments to ensure that everyone in a given population stays roughly in line. But, how much commonality is there to the process? Nearly all formed governments dole out punishment for murder and theft, I know, but what other things are considered universally wrong in human culture, regardless of location or time? That would be an interesting discussion. Can I re-purpose this one, or should I make a new one?

I find the trolley-problem investigations interesting here, because they show that at least some moral intuitions are fairly robust across very different cultures, even when those intuitions are, strictly speaking, somewhat illogical. Prof Joshua Greene of Harvard lays out the problem here:
Joshua Greene wrote:
In the late 1990s, Jonathan Cohen and I initiated a line of research inspired by the Trolley Problem, which was originally posed by the philosophers Philippa Foot and Judith Jarvis Thomson.

First, we have the switch dilemma: A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. You can save these five people by diverting the trolley onto a different set of tracks, one that has only one person on it, but if you do this that person will be killed. Is it morally permissible to turn the trolley and thus prevent five deaths at the cost of one? Most people say "Yes."

Then we have the footbridge dilemma: Once again, the trolley is headed for five people. You are standing next to a large man on a footbridge spanning the tracks. The only way to save the five people is to push this man off the footbridge and into the path of the trolley. Is that morally permissible? Most people say "No."

These two cases create a puzzle for moral philosophers: What makes it OK to sacrifice one person to save five others in the switch case but not in the footbridge case? There is also a psychological puzzle here: How does everyone know (or "know") that it's OK to turn the trolley but not OK to push the man off the footbridge?

(And, no, you cannot jump yourself. And, yes, we’re assuming that this will definitely work.)

Joshua Greene says “most people” make these judgments, and there is plenty of experimental evidence behind that claim: researchers around the world have put the question to many populations, including Amazonian Indians, for whom it had to be reframed in terms of canoes. They still turned out to have the same moral intuitions: it’s OK to divert a canoe carrying five people to save their lives, even if that means one person being hit by the canoe and dying, but it’s not OK to shove someone into the path of a canoe to save those five people. This is, strictly speaking, illogical, since in both cases the person making the choice is saving five lives at the expense of one.

I think there is now a consensus that Prof Greene’s explanation is probably the right one: two systems in the brain are involved in making the judgment. There is the logical, more recently evolved system in the front of the brain, which counts numbers and concludes that saving five at the expense of one makes sense (the slower system, for thinking things through). And there is an evolutionarily older system which simply tells us not to do it, activated when it comes to pushing someone to their death (the quick and dirty system, for fast action). Evolution is not necessarily entirely logical, and most normal people have this quirk of moral thinking. Further research has found this kind of dual-process moral thinking in other cases as well.
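
To make the dual-process picture concrete, here is a toy sketch in Python. It is purely my own illustration, not anything from Greene’s research; the function names, the “personal force” flag, and the veto rule are all invented for the example:

[code]
# Toy sketch of the dual-process account (my own illustration, not code
# from Greene's research): a slow deliberative system that counts lives,
# and a fast alarm system that vetoes hands-on, "personal force" harm.

def slow_deliberative_system(lives_saved, lives_lost):
    """Controlled, utilitarian reasoning: act if the numbers favor it."""
    return lives_saved > lives_lost

def fast_alarm_system(uses_personal_force):
    """Quick-and-dirty emotional response: veto hands-on harm outright."""
    return not uses_personal_force  # objects regardless of the arithmetic

def feels_permissible(lives_saved, lives_lost, uses_personal_force):
    """The action feels permissible only if neither system objects."""
    return (slow_deliberative_system(lives_saved, lives_lost)
            and fast_alarm_system(uses_personal_force))

# Switch dilemma: divert the trolley, no personal force involved.
print(feels_permissible(5, 1, uses_personal_force=False))  # True: "Yes"

# Footbridge dilemma: the same arithmetic, but you must push the man.
print(feels_permissible(5, 1, uses_personal_force=True))   # False: "No"
[/code]

Continuing the quote (I think I’ve redone all the links; if not, they are the ones in the original article):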

Joshua Greene wrote:As the foregoing suggests, our differing responses to these two dilemmas reflect the influences of competing responses, which are associated with distinct neural pathways. In response to both cases, we have the explicit thought that it would be better to save more lives. This response is a more controlled response (see papers in here and here) that depends on the prefrontal control network, including the dorsolateral prefrontal cortex (see papers here and here). But in response to the footbridge case, most people have a strong negative emotional response to the proposed action of pushing the man off the bridge. Our research has identified features of this action that make it emotionally salient and has characterized the neural pathways through which this emotional response operates. (As explained in this book chapter, the neural activity described in our original 2001 paper on this topic probably has more to do with the representation of the events described in these dilemmas than with emotional evaluation of those events per se.)

Research from many labs has provided support for this theory and has, more generally, expanded our understanding of the neural bases of moral judgment and decision-making. For an overview, see this review. Recent theoretical papers by Fiery Cushman and Molly Crockett link the competing responses observed in these dilemmas to the operations of “model free” and “model based” systems for behavioral control. This is an important development, connecting research in moral cognition to research on artificial intelligence as well as research on learning and decision-making in animals.
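
The “model-free” versus “model-based” distinction that Cushman and Crockett draw on comes from reinforcement learning: roughly, a model-free system caches values for actions learned from past outcomes, while a model-based system simulates consequences through an internal model of the world. Here is a toy sketch of the contrast, again my own illustration with invented numbers, not anything from their papers:

[code]
# Toy contrast between model-free and model-based control (my own
# illustration of the general idea, not code from Cushman's or
# Crockett's papers; all the values below are invented).

# Model-free: cached values for actions, learned from past outcomes,
# with no representation of what each action actually leads to here.
cached_action_values = {
    "push_person": -10.0,  # hands-on harm has always been punished
    "do_nothing":    0.0,  # emotionally neutral history
}

def model_free_choice(actions):
    """Pick the action with the best cached value; consequences ignored."""
    return max(actions, key=lambda a: cached_action_values[a])

# Model-based: simulate each action's outcome through a model of the
# world, then evaluate the simulated outcome directly.
outcome_model = {
    "push_person": {"deaths": 1},  # the man dies, the five are saved
    "do_nothing":  {"deaths": 5},  # the trolley hits the five
}

def model_based_choice(actions):
    """Pick the action whose simulated outcome costs the fewest lives."""
    return min(actions, key=lambda a: outcome_model[a]["deaths"])

actions = ["push_person", "do_nothing"]
print(model_free_choice(actions))   # "do_nothing": cached aversion wins
print(model_based_choice(actions))  # "push_person": fewer simulated deaths
[/code]

The footbridge case is exactly where the two systems come apart: the cached aversion to hands-on harm says no, while the simulated body count says act.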


At the end of that article (quoted below), Joshua Greene is cautious about normative questions, about what, if anything, is actually right or wrong; he says he doesn’t think science can tell us that. It’s not obvious that anything can, but we do seem to need some sense of an outside normative reality, even though that sense is almost certainly provided by our evolved social brains. (Quoting from earlier in Prof Greene’s article: “As I explain in Moral Tribes, I (along with many others) believe that morality is a suite of psychological devices that allow otherwise selfish individuals to reap the benefits of cooperation.”) I find it at least somewhat reassuring that our brains seem to have evolved to come up with similar moral intuitions in similar circumstances, even though we’ve almost certainly also evolved to argue about them at length.

Joshua Greene wrote:What does all of this mean for normative questions about right and wrong? As I explain in this paper and in my book, our dual-process moral brains are very good at solving some kinds of moral problems and very bad at solving others. I do not believe that science can, by itself, tell us what’s right or wrong. But I believe that scientific self-knowledge can help us make progress on distinctively modern moral problems—ones that our brains were not designed to solve. To make good moral decisions it helps to understand the tools that we bring to the job.