good maximal happiness?

Not only are Sam Harris’ recent thoughts about morality in tension with basic philosophical distinctions such as is/ought and fact/value, they also re-raise a basic question of utilitarian ethics – namely, which version of ‘happiness’ is the right one?

I listened to an interesting discussion of Sam’s ideas today (thanks to Damian for highlighting it), and at one point they were talking about the hypothetical possibility of finding a “wellness” part of the brain.  You might as well name it the “wellness according to this brain’s owner” part of the brain.  But anyway, apart from the problems with this thought experiment, it did start me on a thought path that led me to the following:

Consider that we define (somewhat arbitrarily and non-scientifically, I hasten to add) the goal of morality to be not only the maximal happiness of one individual, but the maximal happiness of all humans (leaving non-human ‘happiness’ aside for the moment).  Suppose we could tell from brain function when, and to what degree, humans were feeling ‘happy’ as defined by them.

The most ethical action to achieve or work toward this goal, then, would be whichever action caused the most (or all) humans to exhibit that specific brain function (or the highest intensity of it) which signaled human happiness.
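A rough way to put this formally (the notation here is mine, not anything from the post or the discussion): write $h_i(a)$ for the measured intensity of the happiness-signaling brain state in person $i$ under action $a$, and $N$ for the number of humans. The criterion above then amounts, roughly, to choosing

$$ a^{*} = \arg\max_{a} \sum_{i=1}^{N} h_i(a) $$

that is, whichever action maximizes the total measured happiness across everyone.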

Putting aside the fact that implementing this ‘action’ (whatever it might be) with universal scope – all of humanity – would almost certainly involve forcing it upon a good many people, we can also speculate that it would be very, very expensive.  Imagine voting on that use of our tax dollars.

It could well be that (again, putting aside the issue of forcing it on unwilling humans) the cheapest way to accomplish this would be to find a way to make people believe that things were happening to them which were too expensive to actually bring about in reality.

Every single human could – conceivably – simultaneously enjoy a state of maximal happiness.  The utility goal getting the big tick…

What? Haven’t you seen ‘The Truman Show’ or ‘The Matrix’?

8 thoughts on “good maximal happiness?”

  1. If you think of Harris’ thesis this way, you may better grasp his point: think of elevation. Can we inform this notion we call elevation with science, or are we doomed to never have any way of determining higher from lower except by belief in some supernatural authority figure that tells us which is which?

  2. That’s exactly right. The reference point can be anything that gives us the means, with an open-ended numbering system, to compare two positions relative to that point. At the core of morality must be some admission that human well being matters, and it is accepting this point that allows us to understand what Harris is talking about. Understanding the relativity of different moral actions to this core does not make morality subjective, any more than understanding different elevations relative to the reference point makes elevation subjective.

  3. Yes, I agree that once a value-judgment or goal is firmly and confidently established, working out which actions align with it is frankly quite easy… but I’ve not heard Sam talk about how we know what ‘well being’ is, or how ‘science’ or ‘objective facts’ can tell us what it is. He defines ‘science’ in a very broad way too…

  4. So are you suggesting that, unless exactly defined, human well being is not central to morality? I fail to see where any religious belief accomplishes this rather daunting task.

    Harris addresses this lack of exactness by positing a moral landscape with a multitude of peaks and valleys. It’s the comparison of specific actions – as higher or lower on that landscape in relation to human well being – that Harris points out can be informed by various scientific findings.

  5. Do we even know that human well-being is central to morality, even if it could be exactly defined? What of geological well-being (linked, however, as it would be, with human well-being)? It could well be that ridding the planet of all humans would be the best thing for its ‘flourishing’ (whatever ‘flourishing’ would mean).

  6. Yes, well, we’re not talking about geological well being, are we? We’re talking about what Harris’ thesis is, and it is about how science can determine HUMAN values (that is, after all, the subtitle of the book).

  7. Who’s to say we shouldn’t be? But yes, the whole point is that without a prior idea or goal of what ‘human well being’ is, we can’t do any ethics. You cannot know if you’re advancing toward the goal if you don’t know what the goal is. So yes, the goal of human ‘well-being’ needs to be at least provisionally or partially defined (as opposed to perfectly/omnisciently known, etc. – which I agree is an impossibility for humans, philosophically speaking).
