Utilitarianism
[This chapter is based closely on an Essay on Utilitarianism by Gareth McCaughan. Its last known location was "http://web.ukonline.co.uk/g.mccaughan/g/essays/utility.html". I have adapted Gareth's original essay, inserting my own interpretation and solutions.]
What is Utilitarianism?
This sounds like an easy question. "The greatest good of the greatest number". Simple. Or is it? In any real situation, there are many people involved; they will all be affected in different ways; there is no obvious reason why the "greatest number" should receive the "greatest good".

What is usually meant in practice by that slogan is something like the following procedure for choosing between two or more actions.
- Look at the state of the world after each action. Look in particular at the level of happiness of each person in the various situations.
- Add up, somehow, those levels of happiness in each case.
- Compare the results. The one which leads to the maximum total happiness is the (morally) right one.
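In symbols (my own formalization, not notation from the original essay): write w_a for the state of the world after action a, and u_i(w) for the happiness of person i in world-state w. The procedure then amounts to:

```latex
% My notation, not the essay's: u_i(w) is person i's happiness in
% world-state w, and w_a is the world after action a.
U(w_a) \;=\; \sum_i u_i(w_a),
\qquad
a^{\ast} \;=\; \operatorname*{arg\,max}_a \, U(w_a).
```

The action a* with the greatest total happiness is the morally right one.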
This procedure quietly embodies a number of assumptions, which are worth making explicit:
- The only aspect of the state of the world which has any direct moral significance is the happiness or misery of people.
- Actions, as such, have no moral value. What matters is their effect on the state of the world.
- Only individuals matter. The only relevance of the state of a family or a society is the effect it has on its individual members.
- All people are, ethically speaking, equal, in all situations. One person's happiness is precisely as important as another's.
- It is possible to measure happiness, in the required sense, on some sort of comparative linear scale.
- It is possible to add up different people's comparative degrees of happiness, producing a meaningful "Total (net) happiness". And, again, the results can meaningfully be compared.
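To see how strong the last two assumptions are, here is a minimal sketch in Python (entirely my own illustration; the people and numbers are invented) of what the procedure takes for granted: that each person's happiness is a single number, and that those numbers can be summed and compared.

```python
# A naive utilitarian decision procedure. It takes for granted
# (contentiously) that each person's happiness is a single real
# number (the linear-scale assumption) and that these numbers can
# be meaningfully summed across people (the additivity assumption).

def total_happiness(world_state):
    """Sum the happiness of every individual in a world-state."""
    return sum(world_state.values())

def best_action(outcomes):
    """Pick the action whose resulting world-state maximizes total happiness."""
    return max(outcomes, key=lambda action: total_happiness(outcomes[action]))

# Invented example: two candidate actions, three affected people.
outcomes = {
    "keep promise":  {"alice": 5, "bob": 2, "carol": 3},   # total 10
    "break promise": {"alice": 8, "bob": 1, "carol": 0},   # total 9
}
print(best_action(outcomes))  # -> keep promise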
Why Utilitarianism is Plausible.
Utilitarianism has the awkward property of seeming entirely obvious to its proponents, and clearly wrong to its opponents. This can make for discussions with much more heat than light. If it already seems obvious to you that Utilitarianism is right, by all means skip this section.

There are no ethical first principles which are agreed on by everyone. On the other hand, there is a striking level of agreement about what is actually right and wrong, in concrete cases. Of course, there are disagreements; anthropologists have turned up some pretty surprising ones. But there is something pretty close to a consensus that (in most cases) murder, lying, rape and theft are bad, and that (in most cases) generosity, healing, truthfulness and loyalty are good.
One obvious pattern: most of the things near-universally agreed to be good are things which make people happy, and most of the things near-universally agreed to be bad are things which make people miserable. And in most exceptional cases, there is a clear recognition that they are exceptional cases: excuses are made.
Furthermore, the actions usually reckoned to be the worst are often the ones which cause the most suffering. Rape, for instance, which causes lasting psychological trauma as well as involving physical injury, is generally reckoned to be morally much worse than theft, and by some much worse than murder.
So, Utilitarianism seems to do a pretty good job of giving the right answers. There is also a theoretical justification for at least something rather like Utilitarianism. It seems clear to me that, all else being equal, something which makes me happy is better than something which doesn't. After all, that's one way in which I make decisions (though, to be sure, I wouldn't in such cases call them moral decisions). Since it seems plausible that all people are ethically equal, this means that something which makes anyone happy is (all else being equal) better than something which does not. This seems to lead naturally to something very like Utilitarianism.
The Problems with "Classical Utilitarianism".
"Classical Utilitarianism" has a terrible problem: it is grossly inconsistent with most people's ethical intuition in certain cases. For instance, suppose that (never mind how; all that matters is that it should be conceivable) I could, by subjecting my aged grandmother to the most appalling tortures (which I shall leave to your imagination, should you happen to have that sort of imagination), relieve a sufficiently large number of people from one minute's toothache. No matter how small the amount of suffering from which each person is thus delivered, and no matter how great the amount I cause to my grandmother, if the number of people is large enough then the total amount of suffering in the world will be decreased by this transaction. Therefore I ought to torture my grandmother. This, it seems to me, is unacceptable to most people.

Of course, various Utilitarian writers have suggested ways round this problem. For instance, we could model happiness and misery with a modified number system, containing values incommensurable in the sense that no integer multiple of one is as big as the other (for mathematicians: in other words, we could work with a non-Archimedean valuation). Or we could replace the idea of adding up utilities with some other operation: take the single biggest happiness or misery and just look at that, or something. (Actually, either this second option reduces to the first, or else it doesn't work. Proof left as an exercise for the reader.)
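The non-Archimedean idea can be made concrete. A minimal sketch (my own illustration of the mathematical point; the names and numbers are invented): represent each utility as a pair (severe, mild), added componentwise and compared lexicographically, so that no finite number of mild harms ever adds up to one severe harm.

```python
# Non-Archimedean ("lexicographic") utilities: a value is a pair
# (severe, mild). Pairs add componentwise but compare lexicographically,
# so no integer multiple of a mild harm ever outweighs a severe one.

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

torture = (-1, 0)      # one unit of severe suffering
status_quo = (0, 0)

# Suppose the deal spares N people one minute of toothache each,
# at the cost of torturing one grandmother:
N = 10**9
relief = (0, N)                  # N units of mild suffering relieved
deal = add(torture, relief)      # (-1, N)

# Python tuples compare lexicographically, which is exactly what we want:
print(deal > status_quo)         # False, no matter how large N is
```

Under ordinary (Archimedean) addition, a large enough N would always tip the balance; here it never does.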
So, we can get around that particular problem. Alas, there are others. I shall take the utilitarian principles I enumerated above, and describe some objections to them.
1. The only aspect of the state of the world which has any direct moral significance is the happiness or misery of people. Suppose I tell a defamatory lie about you to an acquaintance of mine, who has never had and never will have any sort of interaction with you, and swear him to secrecy. This makes no difference whatever to your future happiness. Does that make it OK? It seems clear to me that it doesn't (and if you disagree, there is really nothing I can say to convince you). Isn't there, in fact, something fundamentally good about truth and bad about falsehood? Some such idea seems to underlie the near-universal agreement that lying is in itself bad. And what about, say, someone who takes great pleasure in annoying other people? Suppose I get enormous satisfaction from causing you minor but genuine unpleasantness. Does that mean that it's right for me to do so?
2. Actions, as such, have no moral value. What matters is their effect on the state of the world. Is this really plausible? It doesn't seem so to me. If I kill someone, isn't there something intrinsically bad about that, even if (as might be the case) the killing turns out to be right in terms of maximizing utility? I think most people would agree that a killing of this sort would be at best a necessary evil.
3. Only individuals matter. The only relevance of the state of a family or a society is the effect it has on its individual members. I wouldn't like to claim that this is obviously wrong. But is it really obviously right?
4. All people are, ethically speaking, equal, in all situations. One person's happiness is precisely as important as another's. What about criminals? If I am in the process of raping your wife, do you really have to consider my well-being as carefully as your wife's in deciding how to go about stopping me? (Perhaps the answer is "yes". It certainly doesn't seem like an easy question to answer.)
5, 6. It is possible to measure happiness, in the required sense, on some sort of comparative linear scale, and to add up different people's comparative degrees of happiness, producing a meaningful "total (net) happiness" whose values can meaningfully be compared. Is it really obvious that different sorts of happiness are commensurable? How do you compare the pleasure person A gets from an hour of wild sex with his wife, the contentment person B has from the knowledge that his money in the bank is earning him piles of interest for his retirement, the wonder person C feels on contemplating the starry sky, the thrill person D has when listening to her favorite piece of music, person E's enjoyment of an evening listening to a stand-up comic, and so on? And how do you weigh those up against person P's toothache, person Q's unhappy marriage, person R's fear of cancer, person S's resentment of unfair treatment long ago, or person T's frustration at having spent three weeks chasing a bug in his computer program? I don't know, that's for sure. I don't even know how to do such comparisons when all the people involved are myself. In difficult cases it feels a lot more like tossing a coin than like choosing the best of a neatly ordered set of options.
The practical problem.
Let's pretend, for the sake of argument, that all those problems are resolved, and that I'm fully persuaded that Utilitarianism (of your favorite variety) is correct. I now have a decision to make; for instance, I have to decide whether to cycle home in the dark without lights (thus endangering a few people slightly, maybe) or to be late home (thus upsetting my wife and perhaps not managing to get anything for dinner). This is a trivial example; it should be easy to work it out... Not a bit of it. I have to work out the entire future of the whole universe (possibly radically different in the two cases: remember the butterfly effect), work out exactly how happy each person is in each case and for how long, and add it all up. Good grief.

In practice, what the utilitarian recommends is entirely different. I should make guesses as to the likely effects of the actions I'm considering, estimate the resulting levels of happiness, and do the best I can at adding them up in my head. Anything more is impossible, and in any case I can't be blamed for things I can't predict.
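What that recommendation amounts to, roughly, is an expected-utility estimate over the foreseeable outcomes. A sketch for the cycling example (every probability and happiness number below is invented for illustration):

```python
# Rough expected-utility comparison for the cycling dilemma.
# All probabilities and utilities are invented for illustration.

def expected_utility(outcomes):
    """Probability-weighted sum of utilities over foreseeable outcomes."""
    return sum(p * u for p, u in outcomes)

cycle_without_lights = [
    (0.98, -1),    # most likely: nothing happens, mild guilt
    (0.02, -500),  # small chance: an accident, serious harm to someone
]
arrive_home_late = [
    (0.90, -5),    # probably: upset wife, no dinner
    (0.10, -1),    # maybe: nobody minds much
]

print(expected_utility(cycle_without_lights))  # about -10.98
print(expected_utility(arrive_home_late))      # about -4.6
```

On these made-up numbers, being late home wins; the hard part, of course, is that the numbers are made up.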
The remark about not being blamed for the unpredictable, if actually made by a Utilitarian, would amount to an abandonment of one of the key principles of Utilitarianism which I haven't mentioned so far: you don't do things so as to have done the Right Thing; you do them because they have good results. In other words, when making an ethical decision you aren't out to maximize your own righteousness: to the utilitarian, that is a horribly selfish way of thinking. You're acting for the common good.
But we can't have it both ways. Either I take into account all the effects of my actions (impossible), or I abandon the attempt to maximize overall utility -- for the consequences of my actions in the distant, unforeseeable future are in most cases much greater than those in the foreseeable future. Perhaps it's possible to get round this by claiming that there is some irreducible random element in exactly what happens, and that beyond some point in the future the consequences of my actions will be swamped by the results of amplified random noise; if this is so, I need only consider a finite portion of the future, and things look less bleak.
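That "swamped by amplified noise" claim can be illustrated numerically (my own illustration, using a standard chaotic map; nothing here is from the original essay): a tiny perturbation, standing in for the difference my action makes, grows until the two futures are effectively uncorrelated, so only a finite horizon is predictable at all.

```python
# Butterfly-effect illustration with the chaotic logistic map (r = 4).
# The tiny initial perturbation stands in for the difference my action
# makes to the world; within a few dozen steps it is amplified to the
# same order as the state itself, i.e. indistinguishable from noise.

def step(x, r=4.0):
    return r * x * (1.0 - x)

world, perturbed = 0.4, 0.4 + 1e-9   # my action changes almost nothing
for t in range(1, 61):
    world, perturbed = step(world), step(perturbed)
    if t % 10 == 0:
        print(t, abs(world - perturbed))

# The gap grows roughly geometrically from 1e-9 to order 1: beyond that
# horizon my action's consequences are swamped, so only a finite chunk
# of the future need be considered.
```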
What's right with Utilitarianism.
If you are some sort of non-utilitarian, you have probably been reading the foregoing sections with either boredom or glee, depending on whether you've already thought of all those arguments against Utilitarianism yourself. I'd now like to suggest that there is much to be said in favor of Utilitarianism, despite its problems.

The first point is one I've made already: Utilitarianism, in so far as we can actually apply it (which means, in practice, only looking at a small chunk of the future and only looking at a small region of the universe), actually does a pretty good job of giving answers to ethical questions. And, subject to those approximations, it's quite easy. Most of us are capable of guessing "what will happen if...", and of imagining others' responses to the ensuing situations; and in many cases it's possible to compare the resulting utilities without too much trouble.
Secondly, Utilitarianism provides a valuable corrective against the sort of excessively rule-based ethics which come naturally to the religious, and perhaps to anyone who lives in a society with a very well-defined set of laws.
Thirdly, utilitarian arguments are, so to speak, portable. If you need to discuss ethical questions with someone else who doesn't share your system of ethics, you can often get some way towards agreement by considering the utilitarian question first. Then you can discuss the corrections that need to be made.