Posted: Oct 03, 2017 6:14 pm
by zoon
archibald wrote:
archibald wrote:That paper looks interesting and I hope to find the time to read it soon. Thanks for posting.

Read it now and am half-thinking, 'there's an hour of my life I'll never get back', in that the writer seemed to take about 30 pages to say what (a) might have been said in 4 and (b) roughly what Sam Harris already said (with the possible exception of the audience/popularity thing). :)

Sorry about that :oops: , I have to admit I found myself getting more lost the second time I tried to work out what he was saying, and I'm far from clear now. In my post I was picking up the parts that suited me: I was using him as an example of a philosopher saying that moral realism and science are compatible. It's true that Sam Harris also says very firmly that they are compatible (and there I think I agree with him), and he's a lot clearer than Finlay, but perhaps because he's so much clearer I find I don't agree with him on a fairly central point. Sam Harris says, I think, that the single aim of moral action (which comes down to all actions other than mistakes of one kind or another) is to maximise the wellbeing of conscious creatures.

Actually, I think in "The Moral Landscape" he then waffles between saying we ought to care as much about all other creatures as about ourselves (which is utilitarianism, and I don't see how it could work), and saying we ought to care about our own wellbeing, which is egoistic consequentialism*, and a very different beast, more like Social Darwinism. The usefulness of any ethical system, I should have thought, is to provide some guidance between those two very different poles, rather than failing to address the point that they are different.

This is where I'm happier with ethicists like Stephen Finlay, who come down in the end to the moral predispositions we find we have, such as that suffering should not be inflicted on innocent people, and that people should act to preserve their health and should avoid irrationality. Traditional ethicists (or at any rate some of them) merely stated firmly that these basic ethical predispositions are rational, while I think they are evolved, but I agree with those traditionalists that ethics is a somewhat messy business, based largely on a number of separate predispositions which we happen to find in ourselves.
Sam Harris gives what looks like a simpler answer, but I think it relies too heavily on a vagueness at its centre: whether he expects us to care about the wellbeing of all sentient creatures equally or not, and if not, how much we should care about the others. I'm probably being too dismissive of "The Moral Landscape"; I suppose my version of ethics also comes down to a single idea, keeping the local community flourishing (which in the modern world is the global community), and it's a distinctly less uplifting idea than the wellbeing of all conscious creatures.

*Quoting Wikipedia on Utilitarianism here
Wikipedia wrote:Utilitarianism is an ethical theory which states that the best action is the one that maximizes utility. "Utility" is defined in various ways, usually in terms of the well-being of sentient entities. Jeremy Bentham, the founder of utilitarianism, described utility as the sum of all pleasure that results from an action, minus the suffering of anyone involved in the action. Utilitarianism is a version of consequentialism, which states that the consequences of any action are the only standard of right and wrong. Unlike other forms of consequentialism, such as egoism, utilitarianism considers the interests of all beings equally.

Edited to add: I managed to miss your post #366 above. Your question was:
As to your last point, Sam Harris does attempt to justify taking human wellbeing as paramount, on the basis that it's what everybody strives for and largely what evolution has made us to do. When you say that few people in practice take it as their paramount value, what do you have in mind?

What I had in mind was the standard point made against utilitarianism, which says we should take each person's wellbeing equally into account, not putting ourselves or our families first. This seems clearly unrealistic: some communes have tried it, but it doesn't seem to last long. I agree that Sam Harris doesn't say this part of the time; his specific examples do tend to be about people thinking of themselves and their children. But this is where I don't think he even begins to tackle the real tension in much, perhaps most, moral thinking, which is between how much for myself and my immediate family, versus how much for the community in general. He speaks as though everybody focusing on their own wellbeing were the same thing as everyone looking after everyone else's wellbeing equally with their own, but in fact the two require very different actions. At any rate, that's where my reading of the book finds a problem. (Another vagueness is that sometimes he's talking about all sentient beings and sometimes about humans; these again imply very different courses of action.)