Amoralism


Re: Amoralism

#161  Postby shh » Jun 19, 2010 12:04 am

zoon wrote:Since I was attempting (and in your view failing) to avoid the implication of normativity, I would be happy to substitute “ethic” for “morality” in my posts in this thread where I’m talking about codes of conduct. It would probably be better still to substitute “accepted code of conduct” or some similar phrase; I have to admit that using an ambiguous word is liable to lead to trouble.

But then it's completely off topic, isn't it? (Not that the subject isn't interesting.)
Could you point me to a post where I said that? That was Chippy’s view; I was arguing against it, for example,

It's not empathy in particular that does it, it's the whole frame of the conversation. You argue against empathy, but you replace it with "fairness".
"Fair" is a normative moral term isn't it?
Look at the sentence: "I would see fairness, rather than altruism or empathy, as the key issue in human cooperation and in morality", then replace the possibly normative terms with unambiguous ones (there are probably others you could use), and what you'll end up with is: "I would see socially accepted behaviour, rather than altruism or empathy, as the key issue in human cooperation and in socially acceptable behaviour".
We can get rid of the "rather than" part for now, and, since morality, whether normative or descriptive, often opposes cooperation, we can lose that too, so what you've got is: "I would see socially accepted behaviour as the key issue in socially acceptable behaviour".
wiki wrote: despite the fact that chocolate is not a fruit[citation needed]

Re: Amoralism

#162  Postby zoon » Jun 19, 2010 11:26 pm

shh wrote:It's not empathy in particular that does it, it's the whole frame of the conversation. You argue against empathy, but you replace it with "fairness".
"Fair" is a normative moral term isn't it?
Look at the sentence: "I would see fairness, rather than altruism or empathy, as the key issue in human cooperation and in morality", then replace the possibly normative terms with unambiguous ones (there are probably others you could use), and what you'll end up with is: "I would see socially accepted behaviour, rather than altruism or empathy, as the key issue in human cooperation and in socially acceptable behaviour".

Yes, “fairness” is normative, but (unlike empathy) it’s about reciprocity, which many evolutionary theorists see as a key issue in human cooperation. Humans are the only known biological species (animal, plant, bacterium, or anything else) in which individuals cooperate extensively with unrelated individuals of the same species, and theorists are still trying to work out what the mechanism is. In other social organisms, the cooperators are close relatives, and kin altruism accounts for the cooperation, but this cannot be the case (or only to a much lesser extent) in human societies, which consist chiefly of non-relatives. Of the suggested mechanisms for human cooperation, reciprocity with punishment of non-reciprocators is currently a front-runner. I would see fairness as the subjective aspect of evolved reciprocity with punishment: punishing non-reciprocators just feels like the right thing to do (because we have evolved brain mechanisms to do it), and this creates a feeling that unfairness should be punished and that there is a moral order to the universe. It takes some active effort to undo this wired-in, mistaken sense of objective normative rightness. (This does not mean ceasing to reciprocate and to punish non-reciprocators; it means ceasing to believe that this behaviour is objectively right.)

Below, I’m quoting two longish chunks from the 2005 paper by Robert Boyd and Peter Richerson (here) which I linked to earlier, and which puts the arguments very much better than I can. In these passages, the authors use the phrase “moralistic reciprocity” to mean reciprocity which includes active punishment of individuals who fail to reciprocate. They are using the word “moralistic” in the same sense in which I have been using “morality” in this thread, to refer to cooperation backed up by punishment of defectors. There is no implication of anything normative in this usage: the paper is on evolutionary biology, not philosophy. This use of the word “morality” and associated words is common in published discussions of the evolution of human cooperation.

(As you say, I’ve probably gone well off-topic by now.)

Robert Boyd and Peter Richerson wrote:
The proposition that human behavior is a product of organic evolution strongly
supports the view that people are selfish. Evolutionary theory predicts that any
heritable tendency to behave altruistically toward non-relatives will be rapidly
eliminated by natural selection. To see why, suppose that some individuals in a
population have a heritable tendency to help other, unrelated members of their social
group at a cost to themselves. For example, suppose some females were motivated by
generalized maternal feelings to suckle the orphaned offspring of other females. Such
“compassionate” females would have fewer offspring on average compared to
females who lacked this propensity because the compassionate females would have
less milk for their own offspring, and all other things being equal, this would reduce
their offspring's survival. Thus, each generation there will be fewer copies of the
genes that create the motivation to suckle orphans, and eventually, the tendency will
disappear.
Selection will favor selfless behavior in only one circumstance: when it is directed
toward genetic relatives. To see why, suppose that some females have a heritable
tendency to suckle a sister's offspring when they are in need. Since such offspring
have a 50% chance of carrying the same genes as the female's own offspring, selection
will usually favor such nepotistic motivations if the increase in fitness of the sister's
offspring is more than twice the reduction in fitness of the female's own offspring.
This reasoning, first elaborated by W. D. Hamilton (1964), is supported by an immense
body of field and laboratory observation and measurement. It is certainly possible that
humans are unusual in some way that caused them to evolve unselfish motives.
However, the burden of proof is on people taking this view to show exactly why
humans are odd, and in the absence of a clear demonstration of why we are odd, the
straightforward prediction of evolutionary biology is that human actions result from
selfish or nepotistic motives.
In other species, complex cooperative societies exist only when their members are
close relatives. In most animal species cooperation is either limited to very small
groups or is absent altogether. Among the few animals that cooperate in large groups
are social insects like bees, ants, and termites, and the Naked Mole Rat, a subterranean
African rodent. Multicellular plants and many forms of multicellular invertebrates
can also be thought of as eusocial societies made up of individual cells. In each of
these cases, the cooperating individuals are closely related. The cells in a
multicellular organism are typically members of a genetically identical clone, and the
individuals in insect and Naked Mole Rat colonies are siblings.
Evolutionary biologists believe that complex cooperative systems are limited to
societies of relatives because such systems are vulnerable to self-interested cheating.
The many members of an ant colony cannot easily monitor the behavior of all the
other members, thus each has the opportunity to cheat on the system. For example,
rather than maintaining the colony and feeding the queen's offspring, the worker termite
can devote time and energy to laying her own fertile eggs. Since the colony has many
members, the effect of each on the functioning of the whole group is very small, and
therefore, each is better off if he or she does cheat. Division of labor creates further
opportunities for cheating because it requires exchanges of “goods and services”
whose provision is separated in time.
In contrast to the societies of other animals, virtually all human societies are based on
the cooperation of large numbers of unrelated people. This is obviously true of
modern societies in which complex tasks are managed by enormous bureaucracies like
the military, political parties, churches, and corporations. Markets coordinate the
activity of millions of people and allow astonishing specialization. It is also true of the
human societies that have characterized the human species since first intensive broad
spectrum foraging and later agriculture allowed sedentary settlements. Consider, for
example, the societies of highland New Guinea. Here, patrilineally organized groups
number from a few hundred to several thousand. These groups have religious,
political, and economic specialists, they engage in trade and elaborate ritual exchange
with distant groups, and they are able to regularly organize parties numbering several
hundred to make war on their neighbors. Even contemporary hunter-gatherers who are
limited to the least productive parts of the globe have extensive exchange networks
and regularly share food and other important goods outside the family. Other animals
do none of these things.
Thus we have an evolutionary puzzle. Our Miocene primate ancestors presumably
cooperated only in small groups mainly made up of relatives like contemporary nonhuman
primates. Such social behavior was consistent with our understanding of how
natural selection shapes behavior. Over the next 5 to 10 million years something
happened that caused humans to cooperate in large groups. The puzzle is: What
caused this radical divergence from the behavior of other social mammals? Did some
unusual evolutionary circumstance cause humans to be less selfish than other
creatures? Or, do humans have some unique feature that allows them to better
organize complex cooperation among selfish nepotists?

Solutions to the puzzle.
People have proposed five different kinds of solutions to this puzzle:
1. The “heart on your sleeve” hypothesis holds that humans are cooperative because
they can truthfully signal cooperative intentions.
2. “Big mistake” hypotheses propose that contemporary human cooperation results
from psychological predispositions that were adaptive when humans lived in small
groups of relatives.
3. Manipulation hypotheses hold that people are either tricked or coerced into
cooperating in the interests of others.
4. Moralistic reciprocity hypotheses hold that greater human cognitive abilities and
human language allow humans to manage larger networks of reciprocity which
account for the extent of human cooperation.
5. Cultural group selection hypotheses argue that the importance of culture in
determining human behavior causes selection among groups to be more important
for humans than for other animals.
These five are not mutually exclusive, and, in fact, we believe that the most likely
explanation is some combination of the last two hypotheses.
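The kin-selection arithmetic in the quoted passage (helping a sister's offspring pays off only when the benefit is more than twice the cost, since full siblings share half their genes) is Hamilton's rule, r × B > C. A minimal Python sketch of that rule follows; the numbers are my own illustrations, not anything from the paper:

```python
def hamilton_favours(r: float, benefit: float, cost: float) -> bool:
    """Return True if selection favours an altruistic act under
    Hamilton's rule: r * B > C, where r is the coefficient of
    relatedness, B the fitness benefit to the recipient, and C the
    fitness cost to the actor."""
    return r * benefit > cost

# A female suckling her sister's offspring: r = 0.5, so the benefit
# must be more than twice the cost -- the "more than twice" condition
# in the quoted passage.
assert hamilton_favours(r=0.5, benefit=3.0, cost=1.0)      # 1.5 > 1.0
assert not hamilton_favours(r=0.5, benefit=1.5, cost=1.0)  # 0.75 < 1.0

# Toward an unrelated group member r = 0, so no benefit is ever
# enough -- the source of the evolutionary puzzle the authors set up.
assert not hamilton_favours(r=0.0, benefit=100.0, cost=0.01)
```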

The article discusses each of these possible solutions. The section on “moralistic reciprocity” begins:

Robert Boyd and Peter Richerson wrote:
Moralistic Reciprocity Hypotheses
A number of authors have suggested that human cooperation is based on reciprocity
(e.g. Trivers 1971, Wilson 1975, Alexander 1987, Binmore 1994), and that our more
sophisticated mental skills allow us to manage larger social networks than other
creatures. Two kinds of evidence support this hypothesis. First, reciprocity clearly
does play an important role in contemporary human societies all over the world.
Second, some measures of brain size are correlated with social complexity—animal
species which have small social networks tend to have smaller brains (corrected for
body size) than do animal species with large social networks (Dunbar, 1992). The fact
that humans have very large brains for their body size suggests that humans can
maintain reciprocal relationships in larger groups than other animals. Field and
laboratory experiments suggest that monkeys are much smarter about social problems
than non-social problems. For example, vervet monkeys do not seem to know that
python tracks (which are obvious and unmistakable) predict the presence of pythons,
but they do know that their aggression toward another vervet predicts aggression by
that individual's relatives toward them (Cheney and Seyfarth 1990), which suggests
that solving social problems is important for brain evolution.
The defining feature of reciprocity is that ongoing interactions allow people to
monitor each other's behavior and thereby reward cooperators and punish
noncooperators. Beyond this property, there is little agreement among biologists or
anthropologists about the details of how reciprocity works. In the simplest models
punishment takes the form of withdrawal of further cooperation (for example, Axelrod
and Hamilton 1980): I will keep helping you as long as you keep helping me, but if
you cheat, I won't help you any more. We will refer to such strategies as “simple
reciprocity”. Other authors (e.g. Binmore 1994) argue that punishment takes other
forms—non-cooperators are punished by various forms of social ostracism, reduced
status, fewer friends, and fewer mating opportunities. Following Trivers (1971) we
will call this “moralistic reciprocity.” While these different types of reciprocity are
often lumped together, they have very different evolutionary properties.
It is very unlikely that large scale human cooperation is supported by simple
reciprocity. There is strong theoretical support for the idea that lengthy interactions
between pairs of individuals are likely to lead to the evolution of this kind of
reciprocating strategy (See Axelrod and Dion 1989, Nowak and Sigmund 1993 for
review), but recent work suggests that simple reciprocity cannot support cooperation
in larger groups (Boyd and Richerson 1988, 1989). Increasing group size places
simple reciprocating strategies on the horns of a dilemma. Strategies which tolerate a
substantial number of defectors in the group allow defectors to go unpunished and
therefore cannot persist when common because such defectors get the benefits of long
term cooperation without paying the cost. Thus, reciprocators must be provoked to
defect by the presence of even a few defectors. However, such intolerant strategies
cannot increase when rare unless there is a substantial chance that groups made up
mainly of cooperators will form when cooperators are rare, and they are extremely
sensitive to the existence of errors or uncertainty. This dilemma is not serious when
pairs of individuals interact; very minor perturbations allow reciprocating strategies to
increase when rare. As groups become larger, however, both of these requirements
become impossible to satisfy.
This conclusion makes intuitive sense. We know from everyday experience that
reciprocity plays an important role in friendship, marriage, and other dyadic
relationships. We will stop inviting friends over to dinner if they never reciprocate,
we become annoyed at our spouse if he does not take his turn watching the children,
and we refuse to return to the auto repair shop when they do a bad job. However, it is not
plausible that each one of a thousand union members stays out on strike because they
are afraid that their defection will break the strike. Nor does each member of a Mae
Enga war party maintain his position in the line of battle because he fears that his
desertion will precipitate wholesale retreat.
Moralistic reciprocity provides a much more plausible mechanism for the
maintenance of large scale cooperation. Reciprocators can punish non-cooperators in
many ways besides withholding their own cooperation. Strike breakers can be
physically attacked or their property can be vandalized. Even more plausibly they can
be socially ostracized—scabs lose status in their community and with it many
important benefits of social life. Much the same goes for cowards and deserters who
may be attacked by their erstwhile compatriots and shunned by their society, made the
targets of gossip, or denied access to territories or mates.
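The contrast the authors draw between simple and moralistic reciprocity in large groups can be illustrated with a toy one-shot payoff calculation. This is my own sketch, not the authors' model, and the parameters (b, c, fine, punishers) are illustrative assumptions:

```python
def payoffs(n, n_coop, b, c, fine=0.0, punishers=0):
    """Payoffs to one cooperator and one defector when n_coop of n
    group members cooperate. Each cooperator pays cost c and adds b
    to a public good shared equally; each defector is fined `fine`
    by each of `punishers` punishing members."""
    public_share = b * n_coop / n        # everyone gets a share
    coop = public_share - c              # cooperator pays the cost
    defect = public_share - fine * punishers
    return coop, defect

# Simple reciprocity (no punishment): in a group of 100 the defector
# free-rides on 99 cooperators and does strictly better than they do,
# so cooperation unravels.
coop, defect = payoffs(n=100, n_coop=99, b=2.0, c=1.0)
assert defect > coop

# Moralistic reciprocity: fines or ostracism from even a handful of
# punishers can make defection the worse option.
coop, defect = payoffs(n=100, n_coop=99, b=2.0, c=1.0,
                       fine=0.5, punishers=5)
assert coop > defect
```

The sketch leaves out the second-order problem the literature worries about (who pays to punish, and why), but it shows the basic asymmetry: withdrawal of one's own cooperation is diluted across a large group, whereas targeted punishment is not.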


A more recent paper by Professor Boyd and others (here) discusses coordinated punishment of defectors as a possible factor in human cooperation. Edit: sorry, that link doesn’t work. There’s a link here to Robert Boyd’s home page with links that work. The article is “Coordinated punishment of defectors sustains cooperation and can proliferate when rare”, 2010.

There is more recent criticism of the argument that reciprocity cannot evolve in large groups without punishment, but that wouldn’t stop punishment from being important in human social behaviour.
