Posts filed under atopian.org

There aren’t many things…

There aren’t many things in this world that Leo Strauss is right about, but I think the following may be one of them: the alleged conflict between the scientific worldview and ethical norms has less to do with science per se than with the philosophical victory of mechanistic over teleological accounts of nature in the language used to discuss scientific discoveries. It becomes much less difficult to square the idea of moral facts with your conception of science and the natural order if the language that you feel entitled to use in descriptions of the natural includes terms like “purpose” and “end” and “form of life”, than if you are methodologically committed to burning those kinds of terms out of the language wherever you can find them. If there is a distinctively human form of life, and virtues can be explained in terms of the ways of being and kinds of activity that are appropriate to that form of life, then morality becomes much easier to fit into something that you might call a naturalist worldview. (See, for example, Aristotle’s Nicomachean Ethics, or more recently Philippa Foot’s Natural Goodness.)

Of course, it’s probably no coincidence that the same people who revolutionized mechanics, chemistry, etc. in the early modern period were also the people leading the philosophical charge against teleology and in favor of mechanism. So the victory of mechanism over teleology probably has had some concrete historical pay-offs. But of course that’s not the same thing as its being true; and anyway, the fact that teleological language was once abused, and that as a result (in the context of a rather complex set of historical, political, and intellectual factors) science stagnated, does not mean that such language would have similarly harmful consequences today.

eric w. pleasure…

eric w. pleasure wrote:

i’ve always thought of it as the belief that morals can be cast aside when it’s necessary, or merely convenient.

Well, relativist arguments probably encourage this kind of opportunistic thinking; and people are probably also often attracted to relativist arguments because they help make excuses for opportunistic thinking. But strictly speaking they are not the same claim; what you mention here is more properly a form of situational ethics.

A consistent cultural relativist, for example, need not hold that white slavers could ignore moral principles when they were enslaving Black people if they could get good results from it. What they hold is that, since the culture in which the white slavers lived generally approved of slavery, there were no true moral principles that condemned slavery for them in the first place. (The relativism happens as soon as you presume that that “for them” can be inserted—that is, that making a moral claim doesn’t bind you to holding that claim in all frames of reference.) They might also hold, as a separate claim, that moral principles can be ignored under the right circumstances; but they might just as consistently be absolutist cultural relativists (i.e., they could believe that you are always obligated to do what your culture morally approves of and to avoid what it morally disapproves of, whatever the circumstances are).

Here Radgeek mention…

Here Radgeek mentions a criticism from Williams, that utilitarianism is bad because “utilitarianism seems to obliterate me and my projects in favor of rigidly impersonal rule-following”. What this objection misses is that rigidly impersonal rule-following is bound to decrease the total happiness in the world, since people highly value their life projects. Thus, utilitarianism actually prescribes that we shouldn’t ignore our life projects and goals in favour of mindlessly hedonistic or permanently altruistic practices.

Well, Williams has a straightforward reply to objections like this; to wit, if utilitarianism is true, then emotional investment in projects that aren’t productive of happiness (according to a rigidly impersonal utilitarian calculus) is irrational.

To take an example, suppose that Jones is a committed vegetarian (for utilitarian reasons; she includes the suffering and happiness of non-human animals in her utilitarian calculations); suppose that she also has just lost her job and is facing penury for herself and her family if she can’t find a new one. She’s having trouble, but there is one place that’s always hiring: the local slaughterhouse. Now, suppose she sits down one night and determines, using exacting utilitarian calculation, that the benefits of financial security from the job distinctly outweigh whatever contribution her taking the job will make to global suffering (a pretty minuscule one, since if she didn’t take the job someone else surely would). Since the fact that she, personally, is doing the killing plays no role in utilitarian calculation, in and of itself (the only thing that matters is whether an action is productive of global happiness, not who is doing it), it seems that utilitarianism would demand that she take the job at the slaughterhouse in spite of the fact that it would violate her every conviction on a daily basis.

Here’s where you might object: but wait, the fact that it violates her every conviction on a daily basis would make her miserable, so if she also accounts for her being miserable every day in her calculation, she’ll find that the utilitarian calculus demands she not take the job after all. But the Williams reply is that there are two ways you could deal with being miserable over working in a slaughterhouse when you’re a vegetarian: you could (a) not work in a slaughterhouse, or (b) stop being miserable about it. The question is which you should do; and Williams argues that if you’re a good utilitarian, you should do (b), since on utilitarian grounds it’s irrational to let your conscience make you miserable over a course of action that would otherwise be more productive of global happiness than the alternatives. (You might say that she can’t be expected to do (b) instead of (a) because her emotional reactions are not under her control but her actions are. But that’s certainly not so; she came to have the emotional reactions to slaughtering animals that she does through a voluntary process of ethical reasoning, and there’s no reason that she couldn’t come to a state of emotional indifference about her, personally, doing the slaughtering by a voluntary process of ethical reasoning as well.)

I only alluded briefly to how I think that Moore could actually get on Williams’ side of this objection rather than being stuck on the business end of it with Mill and Bentham; to be a bit clearer, I think that Moore has two things in his form of consequentialism which may exempt him from the Williams critique: (1) he doesn’t think that goodness is either reducible to, or even proportionate with, the quantity of any other observable property (like pleasure or intensity of desire or evolutionary fitness or …); and (2) because he thinks that the consequences that matter for determining goodness as a means include every consequence into an infinite future, he thinks that it is next to impossible, at least without making some possibly unwarranted metaphysical assumptions, to determine the full consequences, with respect to value, of any particular act. Both (1) and (2) dramatically undermine the idea of ethical calculation for Moore; and they put such a wide gap between what ought to exist and what I ought to do that it’s unlikely that Williams’ objection — which is based on the fact that the utilitarian answer to the first question rigidly excludes any considerations other than an impersonal accounting of global pleasure or suffering, and the fact that for utilitarians the second question is so tightly bound to the first — gets a grip. It’s true that Moore has a rigidly impersonal account of what makes a particular outcome good, but since we cannot be in a position to calculate the degrees of goodness in the outcomes of different possible actions, he explicitly makes a lot of room in his account for cooperation in social projects, and implicitly makes a lot of room for commitment to personal projects as well.

I don’t think, incidentally, that this is the best way to deal with Williams’ objections; the best way is to become a virtue ethicist. But I do think it’s interesting how Moore’s consequentialism, for all its faults, fails to be vulnerable to many of the classical objections raised against utilitarianism and other forms of consequentialism.

3) Democracy should…

3) Democracy should be pervasive; not limited to some small area of life.

Our democracy is not pervasive. A lot of the most important things in your life are not set by government. They’re set by your employer: How much you’re paid, how you spend most of your day, whether you have a job, and so on. Do we get a say in this? Nope.

Why in the world would you want the government to set (1) how much you’re paid, (2) how you spend most of your day, (3) whether you have a job, etc.? I understand why it’s objectionable that your boss has so much power over your daily life, but isn’t putting the government in control of these things just exchanging one boss for another one?

One that you have no meaningful control over (see #1 and #2 above) and cannot even escape without fleeing the country?