Why do YOU have principles?

A question of self identity and comprehension

Discussions about society in general and social activity.

Moderators: Calilasseia, ADParker

Re: Why do YOU have principles?

#21  Postby Spearthrower » Oct 19, 2015 10:24 am

John Platko wrote: I have principles because rules don't work and neither does complete randomness.



This thread causes some confusion in me every time I see the title pop up, and your post underscores that.

Are people even using the same definition for the term 'principles' here?

The dictionary also aids in this confusion:

http://dictionary.reference.com/browse/principle

an accepted or professed rule of action or conduct:

a fundamental doctrine or tenet; a distinctive ruling opinion:

guiding sense of the requirements and obligations of right conduct:

an adopted rule or method for application in action


You've opted for the fourth on this list - a general controlling notion potentially providing a guide to any situation rather than a specific set of explicit rules - but the latter could just as easily be considered valid.

And this is where my confusion lies - what does the question mean if you ask 'why' any individual possesses principles? Isn't it just the same as asking why an individual has thoughts or opinions? Isn't the answer really just 'because I am human'?

Perhaps the thread is meant to explore why humans have principles? If so, I think I could take a bead on it.
I'm not an atheist; I just don't believe in gods :- that which I don't belong to isn't a group!
Religion: Mass Stockholm Syndrome

Learn Stuff. Stuff good. https://www.coursera.org/
Spearthrower
Posts: 27887
Age: 44
Male
Country: Thailand



Re: Why do YOU have principles?

#22  Postby DavidMcC » Oct 19, 2015 2:25 pm

Spearthrower wrote:... Isn't the answer really just 'because I am human'?
...

Yes, and more specifically, because we are intelligent, social animals, and, as such, we evolved such that most of us learn to live by the general principles that govern such animals' behaviour, although it may require policing for some.
May The Voice be with you!
DavidMcC
Name: David McCulloch
Posts: 14913
Age: 67
Male
Country: United Kingdom

Re: Why do YOU have principles?

#23  Postby zoon » Oct 19, 2015 4:33 pm

DavidMcC wrote:
Spearthrower wrote:... Isn't the answer really just 'because I am human'?
...

Yes, and more specifically, because we are intelligent, social animals, and, as such, we evolved such that most of us learn to live by the general principles that govern such animals' behaviour, although it may require policing for some.

For humans, there are some general principles which appear in nearly all groups and so are probably wired in: I'm thinking, for example, of the general prohibition of direct physical violence without very good reason, which shows up in the trolley problems across all cultures (e.g. see here). But humans also use their intelligence to set up different general principles in different groups: for example, people living where there are many communicable diseases tend to have less individualistic cultures, probably because in those places hygiene is more important at all times (e.g. see here). We also, as individuals and as subgroups, tend to try to tweak the local principles to our own advantage. Human groups need agreed general rules and principles to operate effectively, but working out those rules for each group is a matter of ongoing negotiation rather than of discovering external truths.
zoon
Posts: 3230

Re: Why do YOU have principles?

#24  Postby igorfrankensteen » Oct 21, 2015 1:39 am

Spearthrower wrote: what does the question mean if you ask 'why' any individual possesses principles? Isn't it just the same as asking why an individual has thoughts or opinions? Isn't the answer really just 'because I am human'?

Perhaps the thread is meant to explore why humans have principles? If so, I think I could take a bead on it.


Good point. I'm not as smart as many others here, and made mistakes in this thread in my phrasing.


I probably should have titled it Why do you CLAIM to have Principles? Even that wouldn't be perfect for what I'm trying to talk about and encourage thought about with it.

I am particularly going after the people who repeatedly claim this or that principle, as a reason why either others should obey them, or why they should be forgiven or excused for something else that they have done.

Yet if one watches them even for a short time, it begins to become clear that they only pretended to have that principle in that moment, because it served their immediate purposes. It was never actually a principle to THEM, at all. To THEM, it was a get-out-of-jail-free card (American Monopoly Game reference for any who might not recognize it). So the answer to my title question for THESE kinds of people, "Why do you have principles?" , is that they pretend (often to themselves as well) that they have principles, not because they are trying to give order and cohesion to their own inner sense of existence. Rather, they pretend to themselves and others that they have Principles, only because claiming to have principles often causes OTHER people to miss the incredibly selfish and short-sighted choices the claimant has made.
igorfrankensteen
THREAD STARTER
Name: michael e munson
Posts: 2114
Age: 67
Male
Country: United States

Re: Why do YOU have principles?

#25  Postby Nihil » Oct 21, 2015 1:59 am

I have principles so that it may benefit myself as well as others, and I have a genuine concern that, in the absence of those principles, there would be no order in life whatsoever. What would happen to my sanity if I had no principles?
Nihil
Posts: 28

Re: Why do YOU have principles?

#26  Postby Spearthrower » Oct 21, 2015 2:53 am

igorfrankensteen wrote:
Spearthrower wrote: what does the question mean if you ask 'why' any individual possesses principles? Isn't it just the same as asking why an individual has thoughts or opinions? Isn't the answer really just 'because I am human'?

Perhaps the thread is meant to explore why humans have principles? If so, I think I could take a bead on it.


Good point. I'm not as smart as many others here, and made mistakes in this thread in my phrasing.


I probably should have titled it Why do you CLAIM to have Principles? Even that wouldn't be perfect for what I'm trying to talk about and encourage thought about with it.

I am particularly going after the people who repeatedly claim this or that principle, as a reason why either others should obey them, or why they should be forgiven or excused for something else that they have done.

Yet if one watches them even for a short time, it begins to become clear that they only pretended to have that principle in that moment, because it served their immediate purposes. It was never actually a principle to THEM, at all. To THEM, it was a get-out-of-jail-free card (American Monopoly Game reference for any who might not recognize it). So the answer to my title question for THESE kinds of people, "Why do you have principles?" , is that they pretend (often to themselves as well) that they have principles, not because they are trying to give order and cohesion to their own inner sense of existence. Rather, they pretend to themselves and others that they have Principles, only because claiming to have principles often causes OTHER people to miss the incredibly selfish and short-sighted choices the claimant has made.



It's kind of hard to answer these questions, as I expect they depend on the person and situation. Some people might just not be very self-aware, others may be unscrupulous, and others may have higher-order principles which, as necessary, overrule other principles they state they have.

For me personally, the principles I've come by via experience, reason, and fuck-ups are things I strive for, not things I necessarily claim I embody.

Re: Why do YOU have principles?

#27  Postby tolman » Apr 24, 2016 6:53 pm

I see 'principles' as generalisations or heuristics.

Not many people have simple absolute principles which they never override, or never would override given a suitable situation; and it seems hard to define a set of simple principles which don't have obvious potential for conflict, or to define any rational set of rules to reflect different priorities among a set of principles.

I'm not sure that I'd even go so far as to say 'principles' were something I felt I had to live by. For the most part, given that my principles are generalisations fitted around my overall world view, it would seem a bit pretentious to elevate them to some higher level of meaning and pat myself on the back for sticking to them, when the vast majority of the time I'd be highly likely to act within them simply because they were chosen from all the available ones to fit with how I think - even if how I think may well have been shaped by thinking about general moral arguments as well as specific cases.

To answer the question, I would claim to 'have' principles because it seems a useful concept for generalising how I see the world and my interactions with it, but I do see them as somewhat post-hoc, and to be treated with caution when used as justifications for a particular position.

If someone who had been, on balance, in favour of abortion came to be against it, or vice-versa, while they may make reference to principles of a right to choose or a right to life in defence of their new position (as they may well have done with their old one), I think that deep down it would be an emotional reaction which was driving the choice of which principle to favour.
Even if arguments which included references to principles had played some part in the changed emotional reaction, I suspect they would do so effectively by causing people to think of real or imagined cases and see how those things made them feel, rather than by someone actually engaging in some attempted 'weighing of the principles'.
I don't do sarcasm smileys, but someone as bright as you has probably figured that out already.
tolman
Posts: 7106
Country: UK



Re: Why do YOU have principles?

#28  Postby igorfrankensteen » Apr 25, 2016 10:33 pm

Here's an image:

Principles are the Oort cloud for personal rules and processes. They are your Meta-rules.

People, as often as not, DISCOVER their principles, rather than choose them.

Because Principles have so often been declared to HAVE authority, in and of themselves, it's very common for individuals to THINK or to DECLARE that one of the better known ones is behind their decision making, but when their actions are observed over a period of time, discrepancies are visible.

It is the use of the concept of principles as a form of control, which makes it tremendously important for each individual to recognize what, if any, principles they actually do follow or submit to or believe in. If you aren't genuinely certain of what your own principles are, as well as why you adhere to them, you will either be easily subject to manipulation, or you will repeatedly be found to be attempting to manipulate others dishonestly yourself.

Re: Why do YOU have principles?

#29  Postby tolman » Apr 25, 2016 11:18 pm

But leaving aside the issue of how I think (or how I feel) came to be shaped: if my 'principles' are generalisations of how I think and feel, isn't 'being genuinely certain of what my principles are' somewhat unnecessary? And how can I be certain what my principles are if they are generalisations of how I feel, and I have only experienced a tiny fraction of the situations I could experience?

If I don't generally like to see people in pain, what is actually added by declaring that as a 'principle', since without making the declaration, that's still how I generally feel?

'Difficult' cases seem to be the ones where multiple 'principles' conflict, where the principles themselves don't point the way to a clear decision in the absence of simple rules of priority. And to have such rules of priority, even leaving aside the fact that they won't always lead to a good decision, seems to be going to another meta level.

Were I to feel a slave to 'principles', wouldn't that make me more vulnerable to manipulation by people using principles as tools of argument, or at risk of simply choosing a principle from my available ones which seems to roughly apply and use that choice to justify not thinking any further?

Isn't there a tension between principles as tools of self-control and principles as tools of external control?

Is it really not the case that people who try to manipulate others for good or ill often do so by appeals to convenient principles?

Re: Why do YOU have principles?

#30  Postby igorfrankensteen » Apr 26, 2016 7:58 pm

tolman:
if my 'principles' are generalisations of how I think and feel, isn't 'being genuinely certain of what my principles are' somewhat unnecessary,


No, the reverse. It's imperative. If you don't know WHAT your principles are, specifically, and you don't know WHY you think they are your principles, then you are going to have a very hard time living according to them.

For example, a lot of people have what they THINK are their own principles, but which are actually Rules They Picked Up From Someone Else That Appear To Let Them Have Their Way. Classic example: how many times have you heard someone say "I believe in Freedom of Speech, but no one should be allowed to say THAT!"

If I don't generally like to see people in pain, what is actually added by declaring that as a 'principle', since without making the declaration, that's still how I generally feel?


What is added if you declare that "I don't generally like to see people in pain" is a principle, is that you are either lying, or don't know what a principle is. What you feel is an observation. No decisions are derived from it, no actions. The fact that you happen to be male or female or something else is similarly not a principle; it is an observation.

How you feel doesn't start to become a principle until you add on what the observation means, in a functional and actionable sense. For example, if you add to "I don't like to see people in pain" by saying "so I close my eyes or look the other way," then you are beginning to identify a principle of yours.

Were I to feel a slave to 'principles', wouldn't that make me more vulnerable to manipulation by people using principles as tools of argument, or at risk of simply choosing a principle from my available ones which seems to roughly apply and use that choice to justify not thinking any further?


Excellent question, because it demonstrates one of the most common situations where people delude themselves. Deciding not to have 'principles,' on the grounds that they can be used to manipulate you into making disadvantageous decisions, is itself a declaration of a principle, even as the person saying it thinks they are "freeing" themselves from the "slavery" of living according to principle.

Further, if you find that following a given principle DOES make you feel "enslaved," it means that what you've actually discovered, is that the 'principle' you are resentfully following, is not your own. By definition, when you live in accordance with your OWN principles, you derive positive satisfaction from that, even when the immediate results aren't fun.

Isn't there a tension between principles as tools of self-control and principles as tools of external control?


Not when you have done as I described, and made sure that you know what YOUR principles are, and what OTHER PEOPLE'S principles are.

Is it really not the case that people who try to manipulate others for good or ill often do so by appeals to convenient principles?


I would choose a word other than "convenient" myself, but I know what you are getting at. You will not BE manipulable, if you have worked out what your actual principles are. For example, an atheist who knows they are one, and knows what kind of one they are, will not be successfully manipulated by appealing to the will of God. However, someone who has only worked out that they don't like being told what to do by people, but who hasn't figured out if they believe in God or not, might be tricked into resentfully giving "the will of God" the benefit of the doubt.

Re: Why do YOU have principles?

#31  Postby tolman » Apr 26, 2016 9:30 pm

igorfrankensteen wrote:tolman:
if my 'principles' are generalisations of how I think and feel, isn't 'being genuinely certain of what my principles are' somewhat unnecessary,


No, the reverse. It's imperative. If you don't know WHAT your principles are, specifically, and you don't know WHY you think they are your principles, then you are going to have a very hard time living according to them.

If I am generally consistent in my approach to life and my dealings with other people, it isn't hard at all.

igorfrankensteen wrote:For example, a lot of people have what they THINK are their own principles, but which are actually Rules They Picked Up From Someone Else That Appear To let Them Have Their Way. Classic example: how many times have you heard someone say "I believe in Freedom of Speech, but no one should be allowed to say THAT!"

If I don't generally like to see people in pain, what is actually added by declaring that as a 'principle', since without making the declaration, that's still how I generally feel?


What is added if you declare that "I don't generally like to see people in pain" is a principle, is that you are either lying, or don't know what a principle is.

OK, bad writing and poor checking on my part there. The generalisation into a principle is something like 'don't cause [unnecessary] pain', the following of which adds nothing obvious to what I was already doing, and, of course, in order not to be instant bollocks the moment an application of it as a rule is attempted, it has to include either the explicit or implicit qualifier 'unnecessary', or come attached to some set of priority rules to be followed in case of conflicts.

igorfrankensteen wrote:
Were I to feel a slave to 'principles', wouldn't that make me more vulnerable to manipulation by people using principles as tools of argument, or at risk of simply choosing a principle from my available ones which seems to roughly apply and use that choice to justify not thinking any further?


Excellent question, because it demonstrates one of the most common situations where people delude themselves. Deciding not to have 'principles,' on the grounds that they can be used to manipulate you into making disadvantageous decisions, is itself, a declaration of a principle, even as the person saying it thinks they are "freeing" themselves from the "slavery " of living according to principle.

But I wasn't deciding 'not to have them'.
I was pointing out that seeing them as post-hoc general rules adopted or made up to be consistent with how I think and feel, I can't really say I don't have them - they come pretty much for free, and can be a useful way of trying to communicate generalities of how I think and feel.
It's just that knowing where they came from and their nature, I don't feel any particular need to 'follow them' when to a large extent they follow me, nor do I see a great value in them as tools of reasoning, especially in hard cases, when much of the time people use them as excuses not to think particularly deeply about situations.

igorfrankensteen wrote:
Isn't there a tension between principles as tools of self-control and principles as tools of external control?

Not when you have done as I described, and made sure that you know what YOUR principles are, and what OTHER PEOPLE'S principles are.

Is it really not the case that people who try to manipulate others for good or ill often do so by appeals to convenient principles?

I would choose a word other than "convenient" myself, but I know what you are getting at. You will not BE manipulable, if you have worked out what your actual principles are.

The problem, of course, is that effectively no-one has a set of simple principles which are also comprehensive, or a set of clear priorities to govern what happens when principles clash, as any even vaguely useful set of principles obviously will.

People with different worldviews may well share some principles - maybe the more so the simpler the principles are - and someone in thrall to their own principles may find someone else appealing to some of those very principles, emphasising convenient ones, in order to try to persuade the person a particular decision is acceptable or good.

igorfrankensteen wrote: For example, an atheist who knows they are one, and knows what kind of one they are, will not be successfully manipulated by appealing to the will of God. However, someone who has only worked out that they don't like being told what to do by people, but who hasn't figured out if they believe in God or not, might be tricked into resentfully giving "the will of God" the benefit of the doubt.

But 'not believing in god[s]' isn't a 'principle', it's an opinion on the nature of reality, and one which someone can hold perfectly firmly before they even know what the word 'atheist' is, let alone considers the issue of 'principles'.

What would 'an atheist who doesn't know they are one' be?
Someone who entirely lacked belief in gods but just hadn't adopted the label?

Surely, someone who understands that 'principles' are individual sets of rules people make up or adopt to make simplified, generalised descriptions largely consistent with how they see the world and what they tend to consider right and wrong - useful for communication but not to be taken too seriously in hard decision-making - would be fairly unlikely to be impressed by other people making appeals to 'principles', even ones which they might share, let alone ones they don't?

Re: Why do YOU have principles?

#32  Postby igorfrankensteen » Apr 27, 2016 10:53 pm

I was pointing out that seeing them as post-hoc general rules adopted or made up to be consistent with how I think and feel, I can't really say I don't have them - they come pretty much for free, and can be a useful way of trying to communicate generalities of how I think and feel.
It's just that knowing where they came from and their nature, I don't feel any particular need to 'follow them' when to a large extent, they follow me, nor do I see a great value in them as tools of reasoning, especially in hard cases, when much of the time people use them as excuses not to think particularly deeply about situations.


Again, this means they are NOT principles. Decidedly.

The problem, of course, is that effectively no-one has a set of simple principles which are also comprehensive, or a set of clear priorities to govern what happens when principles clash, as any even vaguely useful set of principles obviously will.


I do.

People with different worldviews may well share some principles - maybe the more so the simpler the principles are - and someone in thrall to their own principles may find someone else appealing to some of those very principles and emphasizing convenient ones in order to try to persuade the person a particular decision is acceptable or good.


Again, if they are "in thrall" to them, then they aren't THEIR principles. I went over that before.

igorfrankensteen wrote:
For example, an atheist who knows they are one, and knows what kind of one they are, will not be successfully manipulated by appealing to the will of God. However, someone who has only worked out that they don't like being told what to do by people, but who hasn't figured out if they believe in God or not, might be tricked into resentfully giving "the will of God" the benefit of the doubt.


My turn to note a typo. I meant to say who knows WHY they are one. Changes the whole thrust of the paragraph.

Re: Why do YOU have principles?

#33  Postby tolman » Apr 27, 2016 11:25 pm

igorfrankensteen wrote:
The problem, of course, is that effectively no-one has a set of simple principles which are also comprehensive, or a set of clear priorities to govern what happens when principles clash, as any even vaguely useful set of principles obviously will.


I do.

So give me a taste of what your principles are, and the meta-principles which describe their clear priority order.

tolman wrote:
People with different worldviews may well share some principles - maybe the more so the simpler the principles are - and someone in thrall to their own principles may find someone else appealing to some of those very principles and emphasizing convenient ones in order to try to persuade the person a particular decision is acceptable or good.


igorfrankensteen wrote:
Again, if they are "in thrall" to them, then they aren't THEIR principles. I went over that before.
So they are 'meta-rules', which people should follow in order to live up to them, without being constrained by them?

igorfrankensteen wrote:
For example, an atheist who knows they are one, and knows what kind of one they are, will not be successfully manipulated by appealing to the will of God. However, someone who has only worked out that they don't like being told what to do by people, but who hasn't figured out if they believe in God or not, might be tricked into resentfully giving "the will of God" the benefit of the doubt.


My turn to note a typo. I meant to say who knows WHY they are one. Changes the whole thrust of the paragraph.

Does it?
Surely someone who simply 'doesn't believe that gods exist' will fail to be influenced by arguments claiming that 'God' wants X in respect of the 'God wants' claim, even if they may well consider the X on its own merits as a suggestion made by other humans?

Why should what 'kind' of non-god believer they are make a difference, beyond possibly influencing whether or not they have bias against the suggestion of X due to the attempted claim?
And even in that respect, self-reflection on the kind of non-god-believer they are doesn't seem to do much beyond possibly providing insight into their likelihood of having such bias.
Where do 'principles' come into all that?

Re: Why do YOU have principles?

#34  Postby romansh » Apr 28, 2016 1:14 am

igorfrankensteen wrote:
First of all, before anyone tries to play the "I have no principles" trick: everyone has principles, even if they claim to have the principle that there ARE no principles.

About eight years ago I gave up principles for Lent. But after that I could not find them.

But I did continue to use a whole bunch of rules of thumb which on occasion I find work.

You can claim I have "principles" all you want, but I want to see the evidence.
"That's right!" shouted Vroomfondel, "we demand rigidly defined areas of doubt and uncertainty!"
romansh
Posts: 2776
Country: BC Can (in the woods)

Re: Why do YOU have principles?

#35  Postby igorfrankensteen » Apr 29, 2016 2:49 am

romansh wrote:
igorfrankensteen wrote:
First of all, before anyone tries to play the "I have no principles" trick: everyone has principles, even if they claim to have the principle that there ARE no principles.

About eight years ago I gave up principles for Lent. But after that I could not find them.

But I did continue to use a whole bunch of rules of thumb which on occasion I find work.

You can claim I have "principles" all you want, but I want to see the evidence.


Okay. You are a common enough example of someone who doesn't realize that he lives by the principle that "whatever gets me what I want right now, is good. The fact that it works, is what proves it was a good decision."

This is a very popular principle these days.

In other words, principles aren't limited to esoteric policy statements couched in complex abstract language. Principles are whatever essentials you use to decide whether or not you are living your life in a way that leaves you pleased with yourself.



Re: Why do YOU have principles?

#36  Postby igorfrankensteen » Apr 29, 2016 2:53 am

Oh, and for many people, the reason they select what they think is the "non-principled life principle" is that they think it excuses them from having to explain themselves when they do or say something contradictory.

I don't know if you do that or not, it's just the most popular reason why people pretend not to have any principles.

Re: Why do YOU have principles?

#37  Postby romansh » Apr 30, 2016 12:41 am

igorfrankensteen wrote:
Okay. You are a common enough example of someone who doesn't realize that he lives by the principle that "whatever gets me what I want right now, is good. The fact that it works, is what proves it was a good decision."


Wrong ...

I don't claim good and bad even exist; quite the opposite.

Re: Why do YOU have principles?

#38  Postby tolman » May 05, 2016 10:07 am

igorfrankensteen wrote:
The problem, of course, is that effectively no-one has a set of simple principles which are also comprehensive, or a set of clear priorities to govern what happens when principles clash, as any even vaguely useful set of principles obviously will.


I do.

So give me a taste of what your principles are, and the meta-principles which describe their clear priority order.

Re: Why do YOU have principles?

#39  Postby Calilasseia » May 05, 2016 2:23 pm

By the way, the paper by Joshua Greene in Neuron cited in the Harvard Magazine article linked to in Zoon's post above is a free download. The paper in question is this one:

The Neural Bases Of Cognitive Conflict And Control In Moral Judgment by Joshua D. Greene, Leigh E. Nystrom, Andrew D. Engell, John M. Darley & Jonathan D. Cohen, Neuron 44(2): 389-400 (14 October 2004) [Full paper downloadable from here]

Greene et al, 2004 wrote:
Introduction

For decades, moral psychology was dominated by developmental theories that emphasized the role of reasoning and “higher cognition” in the moral judgment of mature adults (Kohlberg, 1969). A more recent trend emphasizes the role of intuitive and emotional processes in human decision making (Damasio, 1994) and sociality (Bargh and Chartrand 1999, Devine 1989), a shift in perspective that has profoundly influenced recent work in moral psychology (Haidt 2001, Rozin et al. 1999). Our previous work suggests a synthesis of these two perspectives (Greene and Haidt 2002, Greene et al. 2001). We have argued that some moral judgments, which we call “personal,” are driven largely by social-emotional responses while other moral judgments, which we call “impersonal,” are driven less by social-emotional responses and more by “cognitive” processes. (As discussed below, the term “cognitive” has two distinct uses, referring in some cases to information processing in general while at other times referring to a class of processes that contrast with affective or emotional processes. Here we use quotation marks to indicate the latter usage.)

Personal moral dilemmas and judgments concern the appropriateness of personal moral violations, and we consider a moral violation to be personal if it meets three criteria: First, the violation must be likely to cause serious bodily harm. Second, this harm must befall a particular person or set of persons. Third, the harm must not result from the deflection of an existing threat onto a different party. One can think of these three criteria in terms of “ME HURT YOU.” The “HURT” criterion picks out the most primitive kinds of harmful violations (e.g., assault rather than insider trading) while the “YOU” criterion ensures that the victim be vividly represented as an individual. Finally, the “ME” condition captures a notion of “agency,” requiring that the action spring in a direct way from the agent's will, that it be “authored” rather than merely “edited” by the agent. Dilemmas that fail to meet these three criteria are classified as “impersonal.” As noted previously (Greene et al., 2001), these three criteria reflect a provisional attempt to capture what we suppose is a natural distinction in moral psychology and will likely be revised in light of future research.

An example of an impersonal moral dilemma is the trolley dilemma (Thomson, 1986): A runaway trolley is headed for five people who will be killed if it proceeds on its present course. The only way to save them is to hit a switch that will turn the trolley onto an alternate set of tracks where it will kill one person instead of five. Should you turn the trolley in order to save five people at the expense of one? Most people say yes (Greene et al., 2001). An example of a personal moral dilemma is the footbridge dilemma (Thomson, 1986): As before, a trolley threatens to kill five people. You are standing next to a large stranger on a footbridge spanning the tracks, in-between the oncoming trolley and the hapless five. This time, the only way to save them is to push this stranger off the bridge and onto the tracks below. He will die if you do this, but his body will stop the trolley from reaching the others. Should you save the five others by pushing this stranger to his death? Most people say no (Greene et al., 2001). The trolley dilemma, unlike the footbridge dilemma, is impersonal because it involves the deflection of an existing threat (i.e., no agency—it is “editing” rather than “authoring”).

The rationale for distinguishing between personal and impersonal moral violations/judgments is in part evolutionary. Evidence from observations of great apes suggests that our common ancestors lived intensely social lives guided by emotions such as empathy, anger, gratitude, jealousy, joy, love, and a sense of fairness (de Waal, 1996), and all of this in the apparent absence of moral reasoning. (By “reasoning” we refer to relatively slow and deliberative processes involving abstraction and at least some introspectively accessible components [Haidt, 2001].). Thus, from an evolutionary point of view, it would be strange if human behavior were not driven in part by domain-specific social-emotional dispositions. At the same time, however, humans appear to possess a domain-general capacity for sophisticated abstract reasoning, and it would be surprising as well if this capacity played no role in human moral judgment. Thus, we sought evidence in support of the hypothesis that moral judgment in response to violations familiar to our primate ancestors (personal violations) are driven by social-emotional responses while moral judgment in response to distinctively human (impersonal) moral violation is (or can be) more “cognitive.”

Our previous results supported this hypothesis in two ways (Greene and Haidt 2002, Greene et al. 2001). First, we found that brain areas associated with emotion and social cognition (medial prefrontal cortex, posterior cingulate/precuneus, and superior temporal sulcus/temperoparietal junction) exhibited increased activity while participants considered personal moral dilemmas, while “cognitive” brain areas associated with abstract reasoning and problem solving exhibited increased activity while participants considered impersonal moral dilemmas.

Second, we found that reaction times (RTs) were, on average, considerably longer for trials in which participants judged personal moral violations to be appropriate, as compared to trials in which participants judged personal moral violations to be inappropriate. No comparable effect was observed for impersonal moral judgment. We compare this effect on RT to the Stroop effect (MacLeod 1991, Stroop 1935), in which people are slow to name the color of the ink in which an incongruent word appears (e.g., “red” written in green ink). According to our theory, personal moral violations elicit prepotent, negative social-emotional responses that drive people to deem such actions inappropriate. Therefore, in order to judge a personal moral violation to be appropriate one must overcome a prepotent response, just as one faced with the color-naming Stroop task must overcome the temptation to read the word “red” when it is written in green ink. The sort of mental discipline required by the Stroop task is known as “cognitive control,” the ability to guide attention, thought, and action in accordance with goals or intentions, particularly in the face of competing behavioral pressures (Cohen et al. 1990, Posner and Snyder 1975, Shiffrin and Schneider 1977). We interpreted the behavioral results of our previous study as evidence that when participants responded in a utilitarian manner (judging personal moral violations to be acceptable when they serve a greater good) such responses not only reflected the involvement of abstract reasoning but also the engagement of cognitive control in order to overcome prepotent social-emotional responses elicited by these dilemmas.

Our present aim was to further test our theory of moral judgment by directly testing two specific hypotheses derived from the arguments above. First, we tested the hypothesis that increased RT in response to personal moral dilemmas results from the conflict associated with competition between a strong prepotent response and a response supported by abstract reasoning and the application of cognitive control. In keeping with this hypothesis, we predicted that the anterior cingulate cortex (ACC), a brain region associated with cognitive conflict in the Stroop and other tasks (Botvinick et al., 2001), would exhibit increased activity during personal moral judgment for trials in which the participant takes a long time to respond (high-RT trials), as compared to trials in which the participant responds quickly (low-RT), reflecting presumed conflict in processing. Likewise, we predicted that regions in the dorsolateral prefrontal cortex (DLPFC) would also exhibit increased activity for high-RT trials (as compared to low-RT trials), reflecting the engagement of abstract reasoning processes and cognitive control (Miller and Cohen, 2001).

Second, we tested the hypothesis that, in the dilemmas under consideration, these control processes work against the social-emotional responses described above and in favor of utilitarian judgments, i.e., judgments that maximize aggregate welfare (e.g., by sacrificing one life in order to save five others). In keeping with this hypothesis, we predicted increased DLPFC activity for trials in which participants judged personal moral violations to be appropriate, as compared to trials in which participants judged personal moral violations to be inappropriate. In other words, this hypothesis predicted that the level of activity in regions of DLPFC would correlate positively with utilitarian moral judgment. We emphasize that this prediction goes beyond those explored in our previous work. Previously, we found that different classes of moral dilemma (personal versus impersonal) produce different patterns of neural activity in the brains of moral decision makers. Here we test the hypothesis that different patterns of neural activity in response to the same class of moral dilemma are correlated with differences in moral decision-making behavior.

To test the predictions of this theory, we focused on a class of dilemmas that bring “cognitive” and emotional factors into more balanced tension than those featured in our previous work. For example, consider the following moral dilemma (the crying baby dilemma).

Enemy soldiers have taken over your village. They have orders to kill all remaining civilians. You and some of your townspeople have sought refuge in the cellar of a large house. Outside, you hear the voices of soldiers who have come to search the house for valuables.

Your baby begins to cry loudly. You cover his mouth to block the sound. If you remove your hand from his mouth, his crying will summon the attention of the soldiers who will kill you, your child, and the others hiding out in the cellar. To save yourself and the others, you must smother your child to death.

Is it appropriate for you to smother your child in order to save yourself and the other townspeople?

This is a difficult personal moral dilemma. In response to this dilemma, participants tend to answer slowly, and they exhibit no consensus in their judgments. This dilemma, like the other consistently difficult dilemmas used here, has a specific structure: in order to maximize aggregate welfare (in this case, save the most lives), one must commit a personal moral violation (in this case, smother the baby). According to our theory, this dilemma is difficult because the negative social-emotional response associated with the thought of killing one's own child competes with a more abstract, “cognitive” understanding that, in terms of lives saved/lost, one has nothing to lose (relative to the alternative) and much to gain by carrying out this horrific act. We believe that the ACC responds to this conflict and that control-related processes in the DLPFC tend to favor the aforementioned “cognitive” response. We hypothesize that these control processes, insofar as they are effective, drive the individual to the utilitarian conclusion that it is appropriate to smother the baby in order to save more lives.

This case contrasts with “easy” personal moral dilemmas, ones that receive relatively rapid and uniform judgments (at least from the subjects within our sample). One such case is the infanticide dilemma in which a teenage mother must decide whether or not to kill her unwanted newborn infant. According to our theory, this dilemma is relatively easy because the negative social-emotional response associated with the thought of someone killing her own child dominates the weak or nonexistent “cognitive” case in favor of this action. Here there is no significant cognitive conflict and no need for extended reasoning or cognitive control. Thus, compared to the high-RT trials typically generated by cases like crying baby, the low-RT trials typically generated by cases such as infanticide should exhibit lower levels of activity in the ACC and DLPFC.

The analyses required to test these assertions make up a nested structure (Figure 1). Previously, we compared the neural activity associated with “personal” and “impersonal” moral judgments (Greene et al., 2001). In analysis 1, we tested our hypotheses concerning conflict monitoring in the ACC and abstract reasoning and cognitive control in the DLPFC by comparing high-RT to low-RT personal moral judgments. In analysis 2, we tested our hypothesis concerning the involvement of DLPFC in “cognitive” processes underlying utilitarian judgments by subdividing the high-RT personal moral judgments according to the participant's behavior, i.e., by comparing “utilitarian” judgments (“appropriate”) to nonutilitarian judgments (“inappropriate”). In each of these difficult dilemmas, an action that normally would be judged immoral (e.g., smothering a baby) is favored by strong utilitarian considerations (e.g., saving many lives). The participants, in each instance, must decide if the utilitarian action is “appropriate” or “inappropriate.” Our hypothesis is that judgments of “appropriate” will be associated with greater DLPFC activity than those of “inappropriate,” reflecting the influence of “cognitive” processes favoring a utilitarian response. Analysis 2 was performed only on high-RT trials because of the relative paucity of low-RT-utilitarian judgments and the need to control for RT.

[Figure 1: schematic of the nested analysis structure; image not reproduced]


I'll leave the rest of the paper for you all to read, while I digest it myself. :)
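The paper's three-criterion "ME HURT YOU" test is, in effect, a small decision procedure, so here's a toy encoding of it. To be clear, this sketch (including the function name and argument names) is my own illustration, not anything from Greene et al.:

```python
# Toy encoding of Greene et al.'s three "ME HURT YOU" criteria for
# classifying a moral dilemma as "personal" vs "impersonal".
# Function and parameter names are my own, not from the paper.

def classify_dilemma(causes_serious_bodily_harm: bool,
                     harm_befalls_particular_persons: bool,
                     harm_deflects_existing_threat: bool) -> str:
    """Return 'personal' if all three criteria are met, else 'impersonal'.

    HURT: the violation is likely to cause serious bodily harm.
    YOU:  the harm befalls a particular person or set of persons.
    ME:   the harm does NOT result from deflecting an existing threat
          (the action is "authored" by the agent, not merely "edited").
    """
    if (causes_serious_bodily_harm
            and harm_befalls_particular_persons
            and not harm_deflects_existing_threat):
        return "personal"
    return "impersonal"

# Trolley dilemma: deflects an existing threat onto a different party.
print(classify_dilemma(True, True, True))   # impersonal
# Footbridge dilemma: the harm is authored directly by the agent.
print(classify_dilemma(True, True, False))  # personal
```

Running it on the two stock cases gives the classifications the paper reports: the trolley case (deflecting an existing threat) comes out impersonal, the footbridge case personal.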
Signature temporarily on hold until I can find a reliable image host ...
User avatar
Calilasseia
RS Donator
 
Posts: 22082
Age: 59
Male

Country: England
United Kingdom (uk)

Re: Why do YOU have principles?

#40  Postby Calilasseia » May 05, 2016 2:36 pm

Greene's 2010 paper, also cited in the article, is this one:

Moral Judgments Recruit Domain-General Valuation Mechanisms To Integrate Representations Of Probability And Magnitude by Amitai Shenhav & Joshua D. Greene, Neuron, 67(4): 667-677 (26th August 2010) [Full paper downloadable from here]

Shenhav & Greene, 2010 wrote:
Introduction

The most consequential moral decisions that humans make are at the policy level, where a single choice can significantly impact thousands of lives. Examples include healthcare decisions, such as the adoption of an opt-out versus opt-in system for organ donation (Johnson and Goldstein, 2003), and military decisions, such as U.S. President Harry Truman's decision to deploy nuclear weapons against Japan. Such decisions have several notable features. First, they involve trade-offs among costs and benefits of varying magnitude. Second, they involve uncertainty, with outcomes that vary in their probability of occurrence. Third, such decisions often involve life-and-death outcomes for individuals other than the decision maker, requiring the decision maker to assess the value of these lives and incorporate such assessments into a decision. Fourth, the individuals who make policy decisions (voters, legislators, judges, government officials, etc.) are, at best, indirectly affected by the social utility of their choices and may be completely unaffected by it. The present research examines moral decisions with these four critical features, which reflect the complexity, seriousness, and indirect social nature of important policy decisions. In functional terms, the present research examines how the brain represents and integrates information concerning the magnitude and probability of outcomes in decisions with life-and-death implications for unknown others.

The present research aims to draw parallels between economic and moral decision making. This endeavor is significant in two ways. First, it addresses a central question in the study of moral judgment, namely the extent to which moral judgments draw on domain-general versus domain-specific processes (Greene and Haidt, 2002, Hauser, 2006). Some have argued that moral judgments are produced by a “moral faculty” independent of and prior to processing by affective/emotional circuitry in the brain (Hauser, 2006, Huebner et al., 2009). Evidence that such judgments are produced by domain-general, affective mechanisms of evaluation would therefore count against the hypothesis that such judgments are produced by a domain-specific moral faculty. Second, in drawing this parallel, the present research would significantly expand the purview of “neuroeconomic” models of valuation (Glimcher, 2009, Rangel et al., 2008, Wallis, 2007). Research on economic decision making has examined the neural systems responsible for tracking and integrating information concerning outcome magnitude and probability (Knutson et al., 2005, Platt and Huettel, 2008, Tom et al., 2007). However, such research has focused on decisions involving primary reinforcers or monetary outcomes for the decision maker, while the present research examines decisions involving life-and-death outcomes that affect unknown others rather than the decision maker. Thus, the present research tests the generality of neuroeconomic models that aspire to provide a comprehensive framework for subjective valuation and decision making (Glimcher, 2009, Montague and Berns, 2002). We hypothesize that the relatively detached moral decisions examined here rely on domain-general evaluation mechanisms that enable more basic, self-interested decision making in humans and animals.

Several studies have parametrically varied the probability and magnitude of positive and/or negative outcomes to identify brain regions and neurotransmitter systems responsible for representing these variables and integrating them into a subjective summary representation of expected value. Such decisions involve, in different ways, subcortical regions in the striatum, thalamus, and amygdala as well as cortical regions in the cingulate cortex, insula, ventromedial prefrontal/medial orbitofrontal cortex (vmPFC/mOFC), and posterior parietal cortex (Knutson et al., 2005, Platt and Huettel, 2008, Tom et al., 2007). The vmPFC/mOFC in particular appears to be specialized for representing the overall expected value/utility associated with an option (Hare et al., 2008, Knutson et al., 2005, Wallis, 2007). We hypothesize that at least some of these neural structures will play comparable roles in the complex moral decisions examined here. Previous research on other-regarding preferences in the context of resource allocation (Hare et al., 2010, Hsu et al., 2008, Moll et al., 2006) are consistent with this hypothesis, but these studies examine decisions involving familiar economic goods and do not (explicitly) involve uncertainty. The present study, in contrast, examines representations of the value of life-and-death outcomes and how these representations are modulated by uncertainty.

The present research also builds on recent research examining hypothetical life-and-death moral dilemmas (Foot, 1978, Thomson, 1986) in which one can save several lives by sacrificing a smaller number of lives (Greene, 2009, Greene et al., 2001). Neuroscientific studies employing such dilemmas have examined several critical factors, such as the distinction between action and omission and the distinction between harm as a means and harm as a side-effect (Schaich Borg et al., 2006), but have yet to examine manipulations of probability and magnitude, which are critical for real-world, complex decision making. The present study uses a parametric design to examine these variables and their neural representation. This additionally allows us to examine individual differences in both neural and behavioral sensitivity to these variables and to examine the relationship between neural sensitivity and behavioral sensitivity to these variables.

Our first aim is to identify neural structures responsible for encoding the magnitude and probability of outcomes and subjective representations of the overall “expected moral value” of morally significant actions. Our second aim is to determine the extent to which these neural structures are consistent with those implicated in more conventional economic decision making (Knutson et al., 2005, Platt and Huettel, 2008, Sanfey et al., 2006, Wallis, 2007). More specifically, we aim to determine whether particular brain regions that are predictive of behavioral risk and value sensitivity in more conventional paradigms (Knutson et al., 2005, Paulus et al., 2003) play comparable roles in the context of moral judgment. This will be accomplished by identifying neural regions whose activity covaries with the relevant task parameters as well as regions associated with individual differences in behavioral sensitivity to these parameters. Recent studies of moral judgment (Ciaramelli et al., 2007, Greene et al., 2004, Greene et al., 2008, Koenigs et al., 2007) suggest that automatic emotional responses often conflict with utilitarian judgments. In light of this, our third aim is to identify neural activity associated with individual differences in willingness to endorse utilitarian trade-offs (Hsu et al., 2008) and to determine, more specifically, whether increased endorsement relies on neural circuitry involved in emotion regulation (Hooker and Knight, 2006, Wager et al., 2008). Our final aim is to identify patterns of neural activity associated with decisions in which the outcomes are more versus less certain, testing the hypothesis that moral judgments involving more certain outcomes are more likely to depend on mechanisms for rule-guided choice, typically enabled by the lateral PFC (Badre and D'Esposito, 2007, Greene et al., 2004, Miller and Cohen, 2001).
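The "expected moral value" the authors describe is, at bottom, ordinary probability-weighted magnitude. A toy numeric sketch (my own illustration; the paper estimates this quantity from behaviour and neural data rather than computing it like this):

```python
# Toy illustration only: "expected moral value" as probability-weighted
# magnitude. The function name and the numbers are hypothetical,
# not taken from Shenhav & Greene's stimuli.

def expected_lives_saved(p_success: float, lives_saved: int,
                         lives_lost: int) -> float:
    """Expected net lives saved when the sacrifice is certain but the
    rescue succeeds only with probability p_success."""
    return p_success * lives_saved - lives_lost

# Sacrificing 1 life for a 90% chance of saving 5 nets 3.5 expected lives:
print(expected_lives_saved(0.9, 5, 1))  # 3.5
```

The interesting empirical question in the paper is whether the brain integrates these two variables into a single value signal (in vmPFC/mOFC) in the same way it does for money and primary reinforcers, which is why the parametric design varies exactly these two numbers.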
