romansh wrote:zoon wrote:We could use the same argument on every last one of our desires? ...
Yes. To me this means I have a need (desire, will, wish, etc.) to treat this sort of conversation with care. There is some degree of recursion in it.
If we define some morality ... do no harm, or even do a little bit of "good" — I get it. But I suspect we live in a "zero sum" universe. Sure, by working together etc. we can achieve more, but are we taking resources from some third party, or perhaps borrowing from the future? E.g. chopping down trees to build houses.
I am trying to think of some desire I carry out that is not considered moral. My morality and desires have an amazingly good correlation. It is amazing what post hoc justification will do.
But yes, we can define certain actions as moral or immoral (or even perhaps morally neutral). But if we don't believe in free will, is there not just a little cognitive dissonance in taking some "ultimate" [Galen Strawson] responsibility for the "good" or "bad" we might do?
Yes, I’m not arguing for ultimate free will or morality. On the contrary, I’m happy to argue against both – as you say, there’s too much cognitive dissonance, I would need to go against scientific evidence.
On the other hand, I think there is plenty of scientific evidence for much more mundane morality and free will, both of which matter to us in our ordinary social lives. The free will I’m arguing for is only the freedom we have to act when we are not coerced or mentally ill; it’s the free will needed by law courts, and in ordinary conversation, when deciding whether somebody is to be held responsible for an action. If, or when, we have a detailed understanding of the mechanisms we are, then I think we may well not have even that kind of free will, but so far we barely understand brain mechanisms at all. It matters to me whether or not other people are holding me responsible for something I do, and it also matters to me whether I’m going to join with others in holding another person responsible for an action. The free will of the agent matters for that responsibility, even though it’s only the limited freedom from coercion and mental illness which is compatible with ultimate determinism.
Similarly, the morality which I am arguing for is, I think, entirely compatible with determinism and with evolution, unlike the kind of morality which you mention above. I certainly agree with you that a universal moral demand — “Do no harm” — would be impossible to fulfil even if we wanted to, but I don’t see that as a reason to chuck out the kind of morality we evolved to manage our social lives. That morality includes, I think for at least the great majority of societies studied, the rule that it’s wrong to assault someone in the community without good reason. This is a much more limited version of “do no harm”, and I think it matters in our ordinary lives that a person who does commit assault without good reason is very likely to be held responsible by the rest of the community and sanctioned in some way. Further, I would want to join with the people who hold the assailant responsible and take steps to sanction him or her, and in that limited and non-universal sense I think it’s true that unjustified assault is wrong. I think this feature of human societies evolved, even though it’s unique to our species. It’s not seen in anything like the same strong form in any non-human animal, but precursors, like retaliating for an individual assault, are seen; it’s a trait which could have evolved gradually by natural selection as more effective groups were more likely to pass on their genes. (I would note here that if it turned out the assailant was mentally ill or was being coerced, then I would no longer hold them responsible; this is where the limited version of free will comes in.)