Posted: Oct 04, 2017 8:12 am
by archibald
zoon wrote:Sorry about that :oops:, I have to admit I found myself getting more lost the second time I tried to work out what he was saying, and I’m far from clear now. In my post, I was picking up the parts that suited me, I was using him as an example of a philosopher who was saying that moral realism and science are compatible. It’s true that Sam Harris is also saying very firmly that they are compatible (in which I think I agree with him), and he’s a lot clearer than Finlay, but perhaps because he’s so much clearer I find I don’t agree with him on a somewhat central point.

Sam Harris says, I think, that the single aim of moral action (which comes down to all actions other than mistakes of one kind or another) is to maximise the wellbeing of conscious creatures. Actually, I think in “The Moral Landscape” he then waffles between saying we ought to care as much about all other creatures as ourselves (which is utilitarianism, and I don’t see how it could work), and saying we ought to care about our own wellbeing, which is egoistic consequentialism*, and a very different beast, more like Social Darwinism. The usefulness of any ethical system, I should have thought, is to provide some guidance between those two very different poles, rather than failing to address the point that they are different.

This is where I’m happier with ethicists like Stephen Finlay, who come down in the end to the moral predispositions that we find we have, such as that suffering should not be inflicted on innocent people, and that people should act to preserve their health and should avoid irrationality. Traditional ethicists (or at any rate, some of them) merely stated firmly that these basic ethical predispositions are rational, while I think they are evolved, but I agree with those traditionalists that ethics is a somewhat messy business, based largely on a number of separate predispositions which we happen to find in ourselves.

Sam Harris gives what looks like a simpler answer, but I think it relies too heavily on a vagueness at the centre, whether he expects us to care about the wellbeing of all sentient creatures equally or not, and if not, how much we should care about the others. I’m probably being too dismissive of “The Moral Landscape”, I suppose my version of ethics also comes down to a single idea, to keep the local community flourishing (which in the modern world is the global community), and it’s a distinctly less uplifting idea than the wellbeing of all conscious creatures.


I didn't have a problem with Finlay; I just found myself agreeing with him very readily early on. What I thought he was making a case for was essentially that we can get oughts from is's: leaving aside the one explanation he rejects (that oughts exist independently, as truly objective absolutes), all oughts are in the end conditioned by end-related qualifiers, such as 'if you don't want to go to prison, you ought not to commit murder', or else reduce to tautologies ('in order to comply with the law, one should comply with the law'). He accepts, I think, that the regress of justifications can only ever stop at an agreed, assumed or subjective place.

To me, it was very much the same position as Sam Harris's, with the previously mentioned addition in Finlay's case (the one you highlighted) that politics/popularity/audience concerns are a major factor.

As for Harris starting to talk about how we ought to treat other conscious or sentient creatures, I haven't got to that bit of the book yet, so I can't comment. I agree that at first sight it doesn't seem to follow naturally from his major premise (the wellbeing of humans), unless perhaps there is a way to link the wellbeing of the rest of the planet to our wellbeing (which I think there is). But as I say, I haven't heard Harris on this yet.




zoon wrote:
As to your last point, Sam Harris does attempt to justify taking human wellbeing as paramount, on the basis that it's what everybody strives for and largely what evolution has made us do. When you say that few people in practice take it as their paramount value, what do you have in mind?

What I had in mind was the point that's made against utilitarianism, which states that we should take each person's wellbeing equally into account, not putting ourselves or our family first. This is probably unrealistic - some communes have tried it, but it doesn't seem to last long. I agree that Sam Harris doesn't say this part of the time; his specific examples do tend to be about people thinking for themselves and their children. But this is where I don't think he's even beginning to tackle the real tension in much, perhaps most, moral thinking, which is between how much for myself and my immediate family, versus how much for the community in general. He speaks as though everybody focusing on their own wellbeing is the same thing as everyone looking after everyone else's wellbeing equally with their own, but in fact those require us to take very different actions. At any rate, that's where my reading of the book finds a problem. (Another vagueness is that sometimes he's talking about all sentient beings and sometimes about humans; these would again imply very different courses of action.)


Ah, I see better now what you were saying. I agree that not prioritising our own (individual or family) wellbeing does seem unrealistic, and that there is a tension between personal and social interests; that is arguably the major tension in morality, and arguably inevitable, especially for a social species. One could add that it's a tension which the blind forces of nature (be they evolutionary or otherwise) solve very readily: whatever blend works survives, by and large. Which brings us back to the topic of Game Theory.
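
To put that last point in slightly more concrete terms, here's a minimal sketch of the "whatever blend works, survives" idea - my own illustration, nothing from Harris or Finlay, and the payoff values, step size and starting mix are just assumptions chosen to show the behaviour. It runs replicator dynamics on the classic Hawk-Dove game: when the cost of escalating a fight outweighs the value of the resource, neither pure selfishness nor pure restraint is stable, and selection pushes the population towards a mixed blend.

[code]
# Illustrative sketch only: replicator dynamics on the Hawk-Dove game.
# V, C, the step size and the starting mix are assumed numbers, not anything
# taken from the books discussed above.

V = 2.0   # value of the contested resource
C = 4.0   # cost of an escalated fight (C > V, so pure Hawk cannot be stable)

def payoffs(x):
    """Expected payoff to Hawk and Dove when a fraction x of the population plays Hawk."""
    f_hawk = x * (V - C) / 2 + (1 - x) * V   # costly fights against hawks, easy wins against doves
    f_dove = (1 - x) * V / 2                 # nothing against hawks, share against doves
    return f_hawk, f_dove

def replicator_step(x, dt=0.01):
    """One Euler step of the replicator equation: strategies that beat the average grow."""
    f_hawk, f_dove = payoffs(x)
    f_avg = x * f_hawk + (1 - x) * f_dove
    return x + dt * x * (f_hawk - f_avg)

x = 0.9                        # arbitrary starting blend: 90% hawks
for _ in range(20000):
    x = replicator_step(x)

print(f"Hawk fraction after selection: {x:.3f}")
# Neither all-Hawk nor all-Dove persists; the population settles on the blend
# that no alternative strategy can invade.
[/code]

Run as written, the hawk fraction settles near V/C = 0.5 regardless of the starting mix, which for these assumed numbers is the evolutionarily stable blend.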