Psychology and Moral Realism


Re: Psychology and Moral Realism

#21  Postby Rumraket » Jul 25, 2016 10:34 am

Spinozasgalt wrote:
Rumraket wrote:I don't see how it makes sense to say that there exist objective moral truths merely because there's a bunch of moral subjects with common neurophysiological reasons for acting in similar ways.

Well, it's the "merely" that's a problem there. Who is a realist merely on the basis you've got there? There are plenty of realists out there who go into detail on why they think moral realism is true or has better support than the alternatives.

I'm sure there are. But what else could they talk about other than, well, more behavioral and physiological commonalities between human subjects?

It seems to me that, if those are the types of reasons they use for arguing that there are objective moral truths, they must have hidden something away in their definition of a "moral truth". Because I don't see how they "get there" from, well, merely (sorry) the existence of said commonalities.

I suppose they could also believe in some platonic ideals or something.

Re: Psychology and Moral Realism

#22  Postby zoon » Jul 25, 2016 10:40 am

Paul Staggerman wrote:
Pebble wrote:The requirement for emotional insight and empathy for 'moral' behaviour is not evidence for objective morality - rather the opposite. What this shows is that we 'learn' our morals from observing others and being aware of their needs/desires.


Yea, this is an argument against moral realism, as I stated above.

While I agree with both you and Pebble, it also seems to me that this approach, somewhat ironically, is an argument in favour of morality as a real biological phenomenon, even though philosophically it’s an argument against moral realism.

It seems to me that moral thinking is a clear predisposition of our species, in the same sort of way that language is. This is independent of any individual person, and is also independent of any culture.

For example, Yale Professor of Psychology Paul Bloom describes how the behaviour of pre-linguistic babies can show moral thinking in a readable 2010 article here, which begins:
Paul Bloom wrote:Not long ago, a team of researchers watched a 1-year-old boy take justice into his own hands. The boy had just seen a puppet show in which one puppet played with a ball while interacting with two other puppets. The center puppet would slide the ball to the puppet on the right, who would pass it back. And the center puppet would slide the ball to the puppet on the left . . . who would run away with it. Then the two puppets on the ends were brought down from the stage and set before the toddler. Each was placed next to a pile of treats. At this point, the toddler was asked to take a treat away from one puppet. Like most children in this situation, the boy took it from the pile of the “naughty” one. But this punishment wasn’t enough — he then leaned over and smacked the puppet in the head.

The researchers in this experiment had been careful to keep physical violence out of the puppet shows, but that child still had a strongly physical moralistic reaction to anti-social behaviour of one puppet towards another puppet – he was not reacting to anything the puppet had done to him. This is a key aspect of morality, and it’s clearly present even in one-year-old babies as described in the article. As with language, it’s not present to anything approaching the same degree in any non-human animal, though many precursors have been seen. Morality is an evolved, wired-in social adaptation which is central to our species’ ability to operate in effective groups.

Also as with language, the exact form that morality takes can vary widely between different cultures, although there are core features. One interesting example of a core feature is brought out by the trolley problems; I find it interesting because the results show that our moral thinking can be similar across cultures even when it’s actually somewhat illogical, and is very probably the upshot of a recently evolved set of brain processes clashing with a more ancient set in another part of the brain.

Joshua Greene, a professor of psychology at Harvard, describes the trolley problem here:
Joshua Greene wrote:First, we have the switch dilemma: A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. You can save these five people by diverting the trolley onto a different set of tracks, one that has only one person on it, but if you do this that person will be killed. Is it morally permissible to turn the trolley and thus prevent five deaths at the cost of one? Most people say "Yes."

Then we have the footbridge dilemma: Once again, the trolley is headed for five people. You are standing next to a large man on a footbridge spanning the tracks. The only way to save the five people is to push this man off the footbridge and into the path of the trolley. Is that morally permissible? Most people say "No."

These two cases create a puzzle for moral philosophers: What makes it OK to sacrifice one person to save five others in the switch case but not in the footbridge case? There is also a psychological puzzle here: How does everyone know (or "know") that it's OK to turn the trolley but not OK to push the man off the footbridge?

(And, no, you cannot jump yourself. And, yes, we’re assuming that this will definitely work.)

As the foregoing suggests, our differing responses to these two dilemmas reflect the influences of competing responses, which are associated with distinct neural pathways. In response to both cases, we have the explicit thought that it would be better to save more lives. This response is a more controlled response (see papers in here and here) that depends on the prefrontal control network, including the dorsolateral prefrontal cortex (see papers here and here). But in response to the footbridge case, most people have a strong negative emotional response to the proposed action of pushing the man off the bridge. Our research has identified features of this action that make it emotionally salient and has characterized the neural pathways through which this emotional response operates. (As explained in this book chapter, the neural activity described in our original 2001 paper on this topic probably has more to do with the representation of the events described in these dilemmas than with emotional evaluation of those events per se.)

Research from many labs has provided support for this theory and has, more generally, expanded our understanding of the neural bases of moral judgment and decision-making. For an overview, see this review. Recent theoretical papers by Fiery Cushman and Molly Crockett link the competing responses observed in these dilemmas to the operations of “model free” and “model based” systems for behavioral control. This is an important development, connecting research in moral cognition to research on artificial intelligence as well as research on learning and decision-making in animals.

The idea is that the response in the first, switch, case is organised by an evolutionarily more recent part of our brain, which counts the number of lives to be saved and comes up with the answer: change the switch. By contrast, the second case, which involves physically shoving someone to their death, activates an older network in the brain, saying urgently “just don’t do it”, which overrides the cold-blooded counting network. But we are unaware of these mechanisms; we just think that flipping the switch to save five lives at the cost of one is OK, while pushing a man to save five lives at the cost of one isn’t, and then we come up with various bizarre “explanations” for these logically incompatible moral intuitions. This pair of problems has been tried out on people across the world, in very different cultures, including Amazonian Indians who had no idea what trolleys were. The question was modified to be about canoes instead of trolleys, and the same results were found.
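
As an aside on the “model free” and “model based” jargon in the quote above: in reinforcement-learning terms, a model-based controller chooses by simulating the consequences of each available action with an internal model of the world, while a model-free controller skips the simulation and just consults cached action values learned from past reinforcement. Here is a minimal toy sketch of that distinction in Python; the action names and payoffs are invented purely for illustration, and nothing in it is taken from Greene’s, Cushman’s or Crockett’s actual models.

```python
# Toy contrast between "model-based" and "model-free" control, in the
# reinforcement-learning sense. The world, actions and payoffs below are
# invented for illustration only.
import random

ACTIONS = ["divert", "do_nothing"]

# A made-up one-step world: each action leads to an outcome with a payoff
# (here, a crude "net lives saved" number for a hypothetical dilemma).
TRANSITIONS = {
    "divert":     {"outcome": "one_dies_five_live", "payoff": 5 - 1},
    "do_nothing": {"outcome": "five_die_one_lives", "payoff": 1 - 5},
}

def model_based_choice():
    """Model-based control: explicitly simulate each action's outcome
    using the internal world model, then pick the best simulated payoff."""
    simulated = {a: TRANSITIONS[a]["payoff"] for a in ACTIONS}
    return max(simulated, key=simulated.get)

def train_model_free(episodes=500, alpha=0.1):
    """Model-free learning: cache a value for each action by trial and
    error (a simple delta-rule update for a one-step problem). No world
    model is consulted later; only these cached numbers are kept."""
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        a = random.choice(ACTIONS)       # explore an action
        r = TRANSITIONS[a]["payoff"]     # observe the reward it produced
        q[a] += alpha * (r - q[a])       # nudge the cached value toward it
    return q

def model_free_choice(q_values):
    """Model-free control: no simulation at all, just pick the action
    with the highest cached value."""
    return max(q_values, key=q_values.get)

if __name__ == "__main__":
    q = train_model_free()
    print("model-based choice:", model_based_choice())
    print("model-free cached values:", q)
    print("model-free choice:", model_free_choice(q))
```

The interesting cases are the ones where the two systems disagree: if the cached value attached to an action itself (say, physically pushing someone) is strongly negative, a model-free system can veto a choice that the model-based tally of outcomes would endorse, which is roughly the shape of the dual-process story sketched above.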

Re: Psychology and Moral Realism

#23  Postby Spinozasgalt » Jul 25, 2016 11:17 am

Rumraket wrote:
Spinozasgalt wrote:
Rumraket wrote:I don't see how it makes sense to say that there exist objective moral truths merely because there's a bunch of moral subjects with common neurophysiological reasons for acting in similar ways.

Well, it's the "merely" that's a problem there. Who is a realist merely on the basis you've got there? There are plenty of realists out there who go into detail on why they think moral realism is true or has better support than the alternatives.

I'm sure there are. But what else could they talk about other than, well, more behavioral and physiological commonalities between human subjects?

As many facets of our moral lives and/or experiences as they can pick out, really. If you cast "behavioural and physiological commonalities" wide enough, I guess you could capture a fair bit of that. Wouldn't really capture debates about meaning, normativity, companions in guilt, etc. But the problem with the initial way you described it is that it suggests a sort of reductionism that's opposed to realism from the get go, so you'd just get charged with begging the question if you tried to use that against a realist.

Rumraket wrote:It seems to me that, if those are the types of reasons they use for arguing that there are objective moral truths, they must have hidden something away in their definition of a "moral truth". Because I don't see how they "get there" from, well, merely (sorry) the existence of said commonalities.

Easy. Well, easy as really hard (it's ethics!). For a popular example: the realist can look for a domain that is typically understood realistically and then compare this domain to the ethical one favourably to get a presumption against anti-realism. The anti-realist then has to give convincing reasons why ethics should be understood unrealistically while the other domain should not. And then the fun begins!

Re: Psychology and Moral Realism

#24  Postby Rumraket » Jul 25, 2016 12:04 pm

Spinozasgalt wrote:
Rumraket wrote:
Spinozasgalt wrote:
Rumraket wrote:I don't see how it makes sense to say that there exist objective moral truths merely because there's a bunch of moral subjects with common neurophysiological reasons for acting in similar ways.

Well, it's the "merely" that's a problem there. Who is a realist merely on the basis you've got there? There are plenty of realists out there who go into detail on why they think moral realism is true or has better support than the alternatives.

I'm sure there are. But what else could they talk about other than, well, more behavioral and physiological commonalities between human subjects?

As many facets of our moral lives and/or experiences as they can pick out, really. If you cast "behavioural and physiological commonalities" wide enough, I guess you could capture a fair bit of that. Wouldn't really capture debates about meaning, normativity, companions in guilt, etc. But the problem with the initial way you described it is that it suggests a sort of reductionism that's opposed to realism from the get go, so you'd just get charged with begging the question if you tried to use that against a realist.

I don't see how reductionism enters into this. Maybe moral realists mean something else by "objective moral truth" than what I understand. That is entirely possible.

Spinozasgalt wrote:
Rumraket wrote:It seems to me that, if those are the types of reasons they use for arguing that there are objective moral truths, they must have hidden something away in their definition of a "moral truth". Because I don't see how they "get there" from, well, merely (sorry) the existence of said commonalities.

Easy. Well, easy as really hard (it's ethics!). For a popular example: the realist can look for a domain that is typically understood realistically and then compare this domain to the ethical one favourably to get a presumption against anti-realism. The anti-realist then has to give convincing reasons why ethics should be understood unrealistically while the other domain should not. And then the fun begins!

This is too esoteric for me; I'd have to see a concrete example. The way you describe it, it seems to me that it would, at best, just show that the anti-realist could be holding contradictory views for different domains. If that were the case, the supposed anti-realist might just as well concede the point for the other domain, as opposed to becoming a moral realist.

Speaking for myself, I'm not at this moment aware that I'm holding views in different domains that are in some sort of logical conflict with my moral nihilism. If I were made aware of this, I dare say I'd modify the other view accordingly.

Re: Psychology and Moral Realism

#25  Postby zoon » Jul 25, 2016 12:53 pm

Spinozasgalt wrote:
Rumraket wrote:It seems to me that, if those are the types of reasons they use for arguing that there are objective moral truths, they must have hidden something away in their definition of a "moral truth". Because I don't see how they "get there" from, well, merely (sorry) the existence of said commonalities.

Easy. Well, easy as really hard (it's ethics!). For a popular example: the realist can look for a domain that is typically understood realistically and then compare this domain to the ethical one favourably to get a presumption against anti-realism. The anti-realist then has to give convincing reasons why ethics should be understood unrealistically while the other domain should not. And then the fun begins!

Would a possible example of “a domain that is typically understood realistically” perhaps be basic maths, so that it would be usual to say, for example: “it is true that two plus two equals four”, and that would be taken as an objective truth? Then if I am claiming that normal people typically say: “it is morally right to let one person die to save five, but not if it means actively taking steps to kill that one person”, then there’s a case for that being as much of an objective truth as “two plus two equals four”?

I think there’s a point in there, but that it mostly consists in taking down the status of “two plus two equals four”, and of “objective truth” generally, rather than raising the status of moral claims. In the end, I think of myself as a full-scale sceptic: I don’t think there’s any way we could identify objective truths even if they exist. “Two plus two equals four” is just something that effectively everyone happens to agree with. Of course, this is itself a claim about objective reality, so it’s where things get seriously messy (= fun?). Perhaps “truth” itself is a concept which depends on our hardwired capacity for moral thinking: “truth” is something that all right-thinking people should agree with; it’s a concept which engages the neural networks that judge other people.

Re: Psychology and Moral Realism

#26  Postby Spinozasgalt » Jul 25, 2016 1:00 pm

Rumraket wrote:I don't see how reductionism enters into this. Maybe moral realists mean something else by "objective moral truth" than what I understand. That is entirely possible.

If your statement that "there's [merely] a bunch of moral subjects with common neurophysiological reasons for acting in similar ways" carries any covert presumption against realism, say, by favouring an account of morals that reduces them to purely neurophysiological terms as a shortcut to anti-realism, then I think the realist will contend that question begging is going on here under the guise of reductionism. That's all I'm getting at.

Rumraket wrote:This is too esoteric for me; I'd have to see a concrete example. The way you describe it, it seems to me that it would, at best, just show that the anti-realist could be holding contradictory views for different domains. If that were the case, the supposed anti-realist might just as well concede the point for the other domain, as opposed to becoming a moral realist.

Sure. The concession option is there. The sort of pressure that pushes a concession is something the realist wants though. They'd see it as something significant if realism in other domains were at stake, because the cost of sustaining anti-realism then becomes greater. And it just adds further complications as it touches more domains.

Re: Psychology and Moral Realism

#27  Postby Spinozasgalt » Jul 25, 2016 1:21 pm

Zoon, I do recall reading at least one paper on mathematical realism with that sort of strategy in mind, a year ago or more. But others trade on similarities with scientific realism or just more general claims. There's a bit of variety.

Re: Psychology and Moral Realism

#28  Postby Boyle » Jul 25, 2016 5:19 pm

igorfrankensteen wrote:
Boyle wrote:
igorfrankensteen wrote: In any case, I would suggest putting the concern differently: in order to make decisions about outcomes, you need to be able to VALUE one outcome over another. It doesn't have to be an emotional reaction per se.

How do you assign different values to different outcomes without appealing to any emotional needs?


Sociopaths seem to manage.

But seriously, are you saying that you've NEVER made a decision based on comparing possible outcomes, without allowing your emotions to decide for you?

I don't know about you, but I don't have an EMOTION associated with whether or not a given process costs more or less than another given process.

Sociopaths have emotional states, too; they just tend not to feel others' emotional states. A lack of empathy doesn't imply a lack of emotions.

No, what I'm saying is that I have preferences based upon emotional responses. Why do I want more money rather than less? Why do I want a good social situation rather than a bad one? Why do I want to save money now rather than spend it all? To take this further: why do I want to avoid pain and avoid inflicting suffering?

Pebble wrote:The requirement for emotional insight and empathy for 'moral' behaviour is not evidence for objective morality - rather the opposite. What this shows is that we 'learn' our morals from observing others and being aware of their needs/desires.

Isn't this also true for the physical world, though? We don't come prepackaged with facts about the world but basically everyone agrees that there is a world.

Spinozasgalt wrote:
Rumraket wrote:I don't see how reductionism enters into this. Maybe moral realists mean something else by "objective moral truth" than what I understand. That is entirely possible.

If your statement that "there's [merely] a bunch of moral subjects with common neurophysiological reasons for acting in similar ways" carries any covert presumption against realism, say, by favouring an account of morals that reduces them to purely neurophysiological terms as a shortcut to anti-realism, then I think the realist will contend that question begging is going on here under the guise of reductionism. That's all I'm getting at.

Rumraket wrote:This is too esoteric for me; I'd have to see a concrete example. The way you describe it, it seems to me that it would, at best, just show that the anti-realist could be holding contradictory views for different domains. If that were the case, the supposed anti-realist might just as well concede the point for the other domain, as opposed to becoming a moral realist.

Sure. The concession option is there. The sort of pressure that pushes a concession is something the realist wants though. They'd see it as something significant if realism in other domains were at stake, because the cost of sustaining anti-realism then becomes greater. And it just adds further complications as it touches more domains.

Egads, the jig is up!

Re: Psychology and Moral Realism

#29  Postby Paul Staggerman » Jul 25, 2016 7:04 pm

Spinozasgalt wrote:My immediate suspicion was that you'd misunderstood in what way morals are "independent" for moral realists. (It's moral truths or facts that have some variant of independence, not the ways in which we respond to them.)


I explain at the end of the second paragraph how the fact that emotions are needed for human decision making leads to an attitude dependence of morality by pointing to the definition provided by the SEP in which morality is a code of conduct put forward by all rational people. I'd argue that human response is relevant to whether or not morality (and hence moral truth) is attitude independent in this instance, as "putting forward" things is within the realm of reasoning and decision making. In other words, if you view morality as something that is put forward (hence a behavioral/mental response), then the ways in which we respond, and whether or not those ways are separate from emotion and sentiment, are relevant to the debate over whether morality is attitude dependent.

Re: Psychology and Moral Realism

#30  Postby Paul Staggerman » Jul 25, 2016 7:24 pm

zoon wrote:
While I agree with both you and Pebble, it also seems to me that this approach, somewhat ironically, is an argument in favour of morality as a real biological phenomenon, even though philosophically it’s an argument against moral realism.

It seems to me that moral thinking is a clear predisposition of our species, in the same sort of way that language is. This is independent of any individual person, and is also independent of any culture.



I am a physicalist (from what I know, it is the dominant position in both philosophy and the cognitive sciences), so I would say that morality is indeed ultimately a neurobiological phenomenon, like all psychological and sociological phenomena.
You have provided some pretty interesting links; I'll read up on Greene and Bloom when I have time. I've only seen Bloom's video "The Psychology Of Everything"; he seems interesting.

Re: Psychology and Moral Realism

#31  Postby Rumraket » Jul 25, 2016 8:08 pm

Spinozasgalt wrote:
Rumraket wrote:I don't see how reductionism enters into this. Maybe moral realists mean something else by "objective moral truth" than what I understand. That is entirely possible.

If your statement that "there's [merely] a bunch of moral subjects with common neurophysiological reasons for acting in similar ways" carries any covert presumption against realism, say, by favouring an account of morals that reduces them to purely neurophysiological terms as a shortcut to anti-realism

I don't think I'm favoring any particular view of morals. I was trying to really take "on board" the idea, as I understood the moral realist-type argument advanced earlier in this thread, that objective moral truths exist because moral subjects have common neurophysiological causes for their moral thoughts and behaviors - and then to question how this leads to the conclusion "objective moral truths exist". I just don't see how that follows.

As I said, maybe there's something else hiding away in what is meant by "objective", or "truth", than how I understand it. And don't even get me started on the can of worms that is trying to define "moral".

What I'm hoping for is for someone to bite the bullet and try to flesh out an argument for moral realism with premises and a conclusion.

I'd like to be persuaded out of moral nihilism to be honest. I've tried it myself, but I don't see how to get there without assuming something that seems either totally unwarranted or outright question-begging.

Spinozasgalt wrote:then I think the realist will contend that question begging is going on here under the guise of reductionism. That's all I'm getting at.

I don't understand how reductionism relates to this.

Spinozasgalt wrote:
Rumraket wrote:This is too esoteric for me; I'd have to see a concrete example. The way you describe it, it seems to me that it would, at best, just show that the anti-realist could be holding contradictory views for different domains. If that were the case, the supposed anti-realist might just as well concede the point for the other domain, as opposed to becoming a moral realist.

Sure. The concession option is there. The sort of pressure that pushes a concession is something the realist wants though. They'd see it as something significant if realism in other domains were at stake, because the cost of sustaining anti-realism then becomes greater. And it just adds further complications as it touches more domains.

I understand and I'm highly sympathetic to this type of argumentation. I have felt compelled to alter my views before by others using this method to highlight inconsistencies in my views and have tried to use it myself.

When I describe myself as a moral nihilist, I don't mean to say that I can prove objective moral values don't exist. Rather, analogously to how I'm an atheist, I don't think the claim that they DO exist has met its burden of proof. And I would contend that it isn't necessary to believe there are objective moral truths in order to live a moral and ethical life.

Re: Psychology and Moral Realism

#32  Postby Rumraket » Jul 25, 2016 8:25 pm

Oh, I see now that I have conflated objective moral truth with just objective morality. There can be objective truths about morality, if we define morality first, of course.

Re: Psychology and Moral Realism

#33  Postby tuco » Jul 25, 2016 9:14 pm

zoon wrote:
Paul Staggerman wrote:
Pebble wrote:The requirement for emotional insight and empathy for 'moral' behaviour is not evidence for objective morality - rather the opposite. What this shows is that we 'learn' our morals from observing others and being aware of their needs/desires.


Yea, this is an argument against moral realism, as I stated above.

While I agree with both you and Pebble, it also seems to me that this approach, somewhat ironically, is an argument in favour of morality as a real biological phenomenon, even though philosophically it’s an argument against moral realism.

It seems to me that moral thinking is a clear predisposition of our species, in the same sort of way that language is. This is independent of any individual person, and is also independent of any culture.

For example, Yale Professor of Psychology Paul Bloom describes how the behaviour of pre-linguistic babies can show moral thinking in a readable 2010 article here, which begins:
Paul Bloom wrote:Not long ago, a team of researchers watched a 1-year-old boy take justice into his own hands. The boy had just seen a puppet show in which one puppet played with a ball while interacting with two other puppets. The center puppet would slide the ball to the puppet on the right, who would pass it back. And the center puppet would slide the ball to the puppet on the left . . . who would run away with it. Then the two puppets on the ends were brought down from the stage and set before the toddler. Each was placed next to a pile of treats. At this point, the toddler was asked to take a treat away from one puppet. Like most children in this situation, the boy took it from the pile of the “naughty” one. But this punishment wasn’t enough — he then leaned over and smacked the puppet in the head.

The researchers in this experiment had been careful to keep physical violence out of the puppet shows, but that child still had a strongly physical moralistic reaction to anti-social behaviour of one puppet towards another puppet – he was not reacting to anything the puppet had done to him. This is a key aspect of morality, and it’s clearly present even in one-year-old babies as described in the article. As with language, it’s not present to anything approaching the same degree in any non-human animal, though many precursors have been seen. Morality is an evolved, wired-in social adaptation which is central to our species’ ability to operate in effective groups.

Also as with language, the exact form that morality takes can vary widely between different cultures, although there are core features. One interesting example of a core feature is brought out by the trolley problems; I find it interesting because the results show that our moral thinking can be similar across cultures even when it’s actually somewhat illogical, and is very probably the upshot of a recently evolved set of brain processes clashing with a more ancient set in another part of the brain.

Joshua Greene, a professor of psychology at Harvard, describes the trolley problem here:
Joshua Greene wrote:First, we have the switch dilemma: A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. You can save these five people by diverting the trolley onto a different set of tracks, one that has only one person on it, but if you do this that person will be killed. Is it morally permissible to turn the trolley and thus prevent five deaths at the cost of one? Most people say "Yes."

Then we have the footbridge dilemma: Once again, the trolley is headed for five people. You are standing next to a large man on a footbridge spanning the tracks. The only way to save the five people is to push this man off the footbridge and into the path of the trolley. Is that morally permissible? Most people say "No."

These two cases create a puzzle for moral philosophers: What makes it OK to sacrifice one person to save five others in the switch case but not in the footbridge case? There is also a psychological puzzle here: How does everyone know (or "know") that it's OK to turn the trolley but not OK to push the man off the footbridge?

(And, no, you cannot jump yourself. And, yes, we’re assuming that this will definitely work.)

As the foregoing suggests, our differing responses to these two dilemmas reflect the influences of competing responses, which are associated with distinct neural pathways. In response to both cases, we have the explicit thought that it would be better to save more lives. This response is a more controlled response (see papers in here and here) that depends on the prefrontal control network, including the dorsolateral prefrontal cortex (see papers here and here). But in response to the footbridge case, most people have a strong negative emotional response to the proposed action of pushing the man off the bridge. Our research has identified features of this action that make it emotionally salient and has characterized the neural pathways through which this emotional response operates. (As explained in this book chapter, the neural activity described in our original 2001 paper on this topic probably has more to do with the representation of the events described in these dilemmas than with emotional evaluation of those events per se.)

Research from many labs has provided support for this theory and has, more generally, expanded our understanding of the neural bases of moral judgment and decision-making. For an overview, see this review. Recent theoretical papers by Fiery Cushman and Molly Crockett link the competing responses observed in these dilemmas to the operations of “model free” and “model based” systems for behavioral control. This is an important development, connecting research in moral cognition to research on artificial intelligence as well as research on learning and decision-making in animals.

The idea is that the response in the first, switch, case is organised by an evolutionarily more recent part of our brain, which counts the number of lives to be saved and comes up with the answer: change the switch. By contrast, the second case, which involves physically shoving someone to their death, activates an older network in the brain, saying urgently “just don’t do it”, which overrides the cold-blooded counting network. But we are unaware of these mechanisms; we just think that flipping the switch to save five lives at the cost of one is OK, while pushing a man to save five lives at the cost of one isn’t, and then we come up with various bizarre “explanations” for these logically incompatible moral intuitions. This pair of problems has been tried out on people across the world, in very different cultures, including Amazonian Indians who had no idea what trolleys were. The question was modified to be about canoes instead of trolleys, and the same results were found.


..he then leaned over and smacked the puppet in the head.


lol

It's hardly surprising, innit? Moral realism in non-human animals is probably no different. The difference between non-human animals and humans is that we can influence how we behave in a more profound way than any animals known to us can.

Obviously smacking the puppets in the head is not the way to go.

Re: Psychology and Moral Realism

#34  Postby Boyle » Jul 25, 2016 10:25 pm

Rumraket wrote:When I describe myself as a moral nihilist, I don't mean to say that I can prove objective moral values don't exist. Rather, analogously to how I'm an atheist, I don't think the claim that they DO exist has met its burden of proof. And I would contend that it isn't necessary to believe there are objective moral truths in order to live a moral and ethical life.

How heavy is that burden? That is, what sort of support have you seen and accepted/dismissed?

I'm not necessarily a moral realist, but I've been shifting that way. I don't know that morality exists apart from a moral agent, but if most moral agents develop the same moral framework (e.g., avoid harming things that suffer), that seems to imply that morality is universal, at least among agents of that type.

Re: Psychology and Moral Realism

#35  Postby Rumraket » Jul 26, 2016 6:34 am

Boyle wrote:
Rumraket wrote:When I describe myself as a moral nihilist, I don't mean to say that I can prove objective moral values don't exist. Rather, analogously to how I'm an atheist, I don't think the claim that they DO exist has met its burden of proof. And I would contend that it isn't necessary to believe there are objective moral truths in order to live a moral and ethical life.

How heavy is that burden? That is, what sort of support have you seen and accepted/dismissed?

I'm not necessarily a moral realist, but I've been shifting that way. I don't know that morality exists apart from a moral agent, but if most moral agents develop the same moral framework (e.g., avoid harming things that suffer), that seems to imply that morality is universal, at least among agents of that type.

I agree, but there's quite a difference between saying morality is universal and saying it is objective.

Re: Psychology and Moral Realism

#36  Postby Spinozasgalt » Jul 26, 2016 9:29 am

Paul Staggerman wrote:
Spinozasgalt wrote:My immediate suspicion was that you'd misunderstood in what way morals are "independent" for moral realists. (It's moral truths or facts that have some variant of independence, not the ways in which we respond to them.)


I explain at the end of the second paragraph how the fact that emotions are needed for human decision making leads to an attitude dependence of morality by pointing to the definition provided by the SEP in which morality is a code of conduct put forward by all rational people.

I'll try to put this another way. Look at what you've said carefully. The rationality in question is practical rather than theoretical. That is, it designates rationality in the sphere of practical reason (reasoning about what we should do). Why? Because the whole debate is about normativity in the practical and moral sense. And practical rationality consists in something like being capable of responding to the full range of practical reasons. However, if minimal practical rationality means having such capability, then it's already a stipulated condition of us being "rational people" that we be emotional beings who can decide amongst the full range of alternatives.

In that case, the basis of your objection to realism seems to be on a condition already built into the definition in the SEP. So I can't see how it would work as an objection to moral realism. It's a condition for realism.

Re: Psychology and Moral Realism

#37  Postby Spinozasgalt » Jul 26, 2016 10:10 am

Rumraket wrote:
Spinozasgalt wrote:
Rumraket wrote:I don't see how reductionism enters into this. Maybe moral realists mean something else by "objective moral truth" than what I understand. That is entirely possible.

If your statement that "there's [merely] a bunch of moral subjects with common neurophysiological reasons for acting in similar ways" carries any covert presumption against realism, say, by favouring an account of morals that reduces them to purely neurophysiological terms as a shortcut to anti-realism

I don't think I'm favoring any particular view of morals. I was trying to really take "on board" the idea, as I understood the moral realist-type argument advanced earlier in this thread, that objective moral truths exist because moral subjects have common neurophysiological causes for their moral thoughts and behaviors - and then to question how this leads to the conclusion "objective moral truths exist". I just don't see how that follows.

Who put it like that in the thread? I didn't see it.

Rumraket wrote:
Spinozasgalt wrote:then I think the realist will contend that question begging is going on here under the guise of reductionism. That's all I'm getting at.

I don't understand how reductionism relates to this.

One way of sidestepping realism is to reduce the ethical to the biological, right? Such a reductive view is shared by Zoon and others here. But the realist doesn't make this reduction. So, it's fine to use realistic premises to show the realist that her view collapses into such a reductive view, but if you put the ethical domain in biological or neurological terms as a shortcut to anti-realism, then the realist will call you out. You used terms like "neurophysiology" to frame the debate, so that sort of worry pops up.

Rumraket wrote:
Spinozasgalt wrote:
Rumraket wrote:This is too esoteric for me; I'd have to see a concrete example. The way you describe it, it seems to me that it would, at best, just show that the anti-realist could be holding contradictory views for different domains. If that were the case, the supposed anti-realist might just as well concede the point for the other domain, as opposed to becoming a moral realist.

Sure. The concession option is there. The sort of pressure that pushes a concession is something the realist wants though. They'd see it as something significant if realism in other domains were at stake, because the cost of sustaining anti-realism then becomes greater. And it just adds further complications as it touches more domains.

I understand and I'm highly sympathetic to this type of argumentation. I have felt compelled to alter my views before by others using this method to highlight inconsistencies in my views and have tried to use it myself.

When I describe myself as a moral nihilist, I don't mean to say that I can prove objective moral values don't exist. Rather, analogously to how I'm an atheist, I don't think the claim that they DO exist has met its burden of proof. And I would contend that it isn't necessary to believe there are objective moral truths in order to live a moral and ethical life.

I always prefer moral skeptic or sceptic, personally. Nihilism always reads as a positive view about what morals are or aren't.

Re: Psychology and Moral Realism

#38  Postby zoon » Jul 26, 2016 11:49 am

Spinozasgalt wrote:Zoon, I do recall reading at least one paper on mathematical realism with that sort of strategy in mind, a year ago or more. But others trade on similarities with scientific realism or just more general claims. There's a bit of variety.

Probably scientific realism would have been better than the mathematical realism I used in that post #25, since the claim I was suggesting about morality is that it’s a biological feature of our species, which would make it real on the same level as other scientific ways of carving up the world such as atoms or kin altruism.

I’m glad to be able to say as an atheist that I think morality exists and is scientifically real, but I think this biological morality behaves rather differently from the morality which is commonly defended by philosophers who are moral realists? I think it’s much fuzzier, less black and white.

For a start, it does not directly make any normative claims; if anything, it undermines the traditional normative claims of morality. If all our moral thinking and behaviour is the upshot of evolution by natural selection, such that we are designed as if to maximise our inclusive fitness (that is, the number of our genes in the next generation), then there’s no inherent reason why we ought to continue with it. At most, it can be claimed that living in tune with our predispositions is likely to make us happier.
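
(A quick gloss on “inclusive fitness”, which the parenthesis above only sketches: the standard formalisation of when kin-directed altruism can evolve is Hamilton’s rule,

\[ r\,b > c \]

where r is the genetic relatedness between actor and recipient, b is the reproductive benefit to the recipient, and c is the reproductive cost to the actor. This is just the textbook result, offered as background rather than as part of the argument; it is the usual way “maximising inclusive fitness” is cashed out for the kin-altruism case mentioned earlier.)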

The main difference (between scientific realism and this biological moral realism) is that biological morality, like language, is very flexible: different groups have different versions. The moral rules of a group often feel universal and part of the fabric of the world to the individuals in that group, but, outside the core features, they often differ markedly from those of another group. This is a useful feature of morality regarded as a biological phenomenon, because it enables groups in different circumstances to function well with different moral rules. However, it undercuts realism as applied to the rules themselves, in a way that is not the case for mathematics or science, which have disputed questions but where eventually everyone tends to agree. Perhaps working agreement on morality may become more global with the globalisation of communications and transport.

It’s still perhaps worth emphasising the core features of morality for any functioning human group; for example, the members of the group are regarded as sentient and goal-directed, and unjustified mistreatment of one member by another is met with punishment of various kinds from the rest. This automatic protection of each group member by all the others is unique to humans, and appears more or less spontaneously in any functioning group (as demonstrated by Paul Bloom’s pre-linguistic babies). It’s a prescientific adaptation, and depends on concepts which are not obviously supported by science such as sentience and goal-directedness, but, while we still have essentially no scientific understanding of the social mechanisms in our brains, it’s the best we’ve got.

One positively dangerous aspect of our evolved biological morality, in a modern globalised world where we need to cooperate globally to avoid disastrous wars, is that it’s very good at pitting one group against another. We have no difficulty in seeing that lot over the hill as dangerous and essentially amoral subhumans, and treating them accordingly. A 2008 article in Nature by Samuel Bowles here discusses how warfare may have been central to the evolution of human altruism and morality. Our tendency to regard differences of opinion about moral rules as fighting matters could be built into the way morality evolved.

Re: Psychology and Moral Realism

#39  Postby zoon » Jul 27, 2016 7:52 am

Spinozasgalt wrote:
Rumraket wrote:
Spinozasgalt wrote:then I think the realist will contend that question begging is going on here under the guise of reductionism. That's all I'm getting at.

I don't understand how reductionism relates to this.

One way of sidestepping realism is to reduce the ethical to the biological, right? Such a reductive view is shared by Zoon and others here. But the realist doesn't make this reduction. So, it's fine to use realistic premises to show the realist that her view collapses into such a reductive view, but if you put the ethical domain in biological or neurological terms as a shortcut to anti-realism, then the realist will call you out. You used terms like "neurophysiology" to frame the debate, so that sort of worry pops up.

I take it that in your terms as stated above, I’m question-begging when I “put the ethical domain in biological or neurological terms as a shortcut to anti-realism”, and so I’m due to be called out by realists?

My contention is that morality is a scientifically real (if somewhat fuzzy) feature of our species, like language. The claim is that all normally functioning human groups show moral thinking and behaviour, and that the core features include the members of the group regarding each other as sentient and goal-directed, and ganging up on any group member who mistreats another. This biological morality is as real as any other scientific feature of the world; it’s an aspect of our evolved behaviour, underpinned by DNA coding for wiring in the brain, and it is not in any kind of competition with or contrast to scientific realism.

On the other hand, this biological morality is not the same as the moral realism of philosophers (or common sense), in which at least some moral claims, such as “it is better that sentient beings should not suffer”, are objectively true, irrespective of how a species happened to evolve – have I got that right?

I think my main argument against that kind of moral realism (assuming the statement above is more or less correct, which it may not be) is that it’s vacuous. It comes under the same heading as gods who are grounds of all being but don’t actually do anything whatsoever, or invisible pink unicorns. Certainly, a moral realist can call me out, and show that there’s no sound way of disproving the objective reality of moral claims, but then theists can also say correctly that there’s no way to disprove the existence of a god who is the ground of all being and leaves it at that. There is no way to disprove the objectively real existence of invisible pink unicorns. Entities with no impact on the scientifically real world can be ignored for practical purposes, and morality is supposed to be about practical reasoning.

Historically, claims that the objective truth of moral commands is independent of human thinking have been underpinned by claims that gods or spirits or the universe at large punish anyone who breaks a moral command. Gods may send down thunderbolts on malefactors, or consign them to hell after death, or karma may have similar effects. The problem with these claims is the lack of any evidence that they are true; it’s why theists have been reduced to claiming that god exists but chooses not to interfere in any way.

I suppose my question here is: in practical terms, what claims does moral realism make? What are the differences in practice from the biologically real morality which I’ve outlined above?

Re: Psychology and Moral Realism

#40  Postby zoon » Jul 27, 2016 10:25 am

I think my claim about biological morality is that it’s normative for practical purposes, in that any human group which doesn’t use this evolved moral thinking to cope with intragroup competition will probably come to a quick and sticky end. Again, I am likening morality to language as an evolved feature of the human species. Scientifically, there is no normative rule that humans should use language to communicate, but in practice any human group which does not use some form of language will rapidly cease to be able to operate effectively as a group.

If neuroscience continues to improve our understanding of the mechanisms in human brains, then eventually we may communicate directly between brains, and we may not need the evolved version of morality; but for the time being neuroscience has barely begun to give us a scientific understanding of ourselves at the basic physical level, and human groups are still entirely dependent on prescientific morality and language.
