Here Radgeek mentions a criticism from Williams that utilitarianism is bad because “utilitarianism seems to obliterate me and my projects in favor of rigidly impersonal rule-following”. What this objection misses is that rigidly impersonal rule-following is bound to decrease the total happiness in the world, since people highly value their life projects. Thus, utilitarianism actually prescribes that we shouldn’t ignore our life projects and goals in favour of mindlessly hedonistic or permanently altruistic practices.

Well, Williams has a straightforward reply to objections like this; to wit, if utilitarianism is true, then emotional investment in projects that aren’t productive of happiness (according to a rigidly impersonal utilitarian calculus) is irrational.

To take an example, suppose that Jones is a committed vegetarian (for utilitarian reasons; she includes the suffering and happiness of non-human animals in her utilitarian calculations); suppose that she also has just lost her job and is facing penury for herself and her family if she can’t find a new one. She’s having trouble, but there is one place that’s always hiring: the local slaughterhouse. Now, suppose she sits down one night and determines, using exacting utilitarian calculation, that the benefits of financial security from the job distinctly outweigh whatever contribution her taking the job will make to global suffering (a pretty minuscule one, since if she didn’t take the job someone else surely would). Since the fact that she, personally, is doing the killing plays no role in utilitarian calculation, in and of itself (the only thing that matters is whether an action is productive of global happiness, not who is doing it), it seems that utilitarianism would demand that she take the job at the slaughterhouse in spite of the fact that it would violate her every conviction on a daily basis.

Here’s where you might object: but wait, the fact that it violates her every conviction on a daily basis would make her miserable, so if she also accounts for her being miserable every day in her calculation, she’ll find that the utilitarian calculus demands she not take the job after all. But the Williams reply is that there are two ways you could deal with being miserable over working in a slaughterhouse when you’re a vegetarian: you could (a) not work in a slaughterhouse, or (b) stop being miserable about it. The question is which you should do; and Williams argues that if you’re a good utilitarian, you should do (b), since on utilitarian grounds it’s irrational to let your conscience make you miserable over a course of action that would otherwise be more productive of global happiness than the alternatives. (You might say that she can’t be expected to do (b) instead of (a) because her emotional reactions are not under her control but her actions are. But that’s certainly not so; she came to have the emotional reactions to slaughtering animals that she does through a voluntary process of ethical reasoning, and there’s no reason that she couldn’t come to a state of emotional indifference about her, personally, doing the slaughtering by a voluntary process of ethical reasoning as well.)

I only alluded briefly to how I think that Moore could actually get on Williams’ side of this objection rather than being stuck on the business end of it with Mill and Bentham; to be a bit clearer, I think that Moore has two things in his form of consequentialism which may exempt him from the Williams critique: (1) he doesn’t think that goodness is either reducible to, or even proportionate with, the quantity of any other observable property (like pleasure or intensity of desire or evolutionary fitness or …); and (2) because he thinks that the consequences that matter for determining goodness as a means include every consequence stretching into an infinite future, he thinks that it is next to impossible, at least without making some possibly unwarranted metaphysical assumptions, to determine the full consequences, with respect to value, of any particular act.

Both (1) and (2) dramatically undermine the idea of ethical calculation for Moore; and they put such a wide gap between what ought to exist and what I ought to do that it’s unlikely that Williams’ objection gets a grip, since that objection depends on the fact that the utilitarian answer to the first question rigidly excludes any considerations other than an impersonal accounting of global pleasure or suffering, and on the fact that for utilitarians the second question is so tightly bound to the first. It’s true that Moore has a rigidly impersonal account of what makes a particular outcome good, but since we cannot be in a position to calculate the degrees of goodness in the outcomes of different possible actions, he explicitly makes a lot of room in his account for cooperation in social projects, and implicitly makes a lot of room for commitment to personal projects as well.

I don’t think, incidentally, that this is the best way to deal with Williams’ objections; the best way is to become a virtue ethicist. But I do think it’s interesting how Moore’s consequentialism, for all its faults, turns out not to be vulnerable to many of the classical objections raised against utilitarianism and other forms of consequentialism.
