I am sooooooooooo far behind on answering questions from the studio audience. I am going to try to concentrate on that this week.
From Atheist Observer
Question 1: Do you think desire utilitarianism leads you to your moral positions, or do you think your moral positions come from your “heart” and you developed desire utilitarianism to match them?
There are two different levels to this question.
Level 1: Is desire utilitarianism itself something that I created to match my previous moral positions?
Level 2: When I use desire utilitarianism to draw a specific moral conclusion (e.g., on abortion, capital punishment, etc.), do I seek a conclusion that matches my moral positions?
I have worried about the answer to these two questions a lot. Anybody who simply observes the world around him will see how easy it is for somebody to adopt a belief and then to hold firm to it with utmost conviction even in the face of clear evidence to the contrary. I am human, with absolutely no reason to believe that I am immune to this type of problem.
I have seen this, not only among theists, but also among atheists. Ayn Rand Objectivists, in spite of that theory’s own expressed love of reason as a near perfect good, still casually brush aside fundamental problems (e.g., their inability to deal with the is-ought problem).
I think that it is far more likely that I would make a mistake such as this when answering a Level 2 question (e.g., on pornography or capital punishment) than with the Level 1 question. Desire utilitarianism itself is not a prescriptive theory. It explains what a value is (a relationship between a state of affairs and some desire) and its relationship to reasons for action (desires are the only reasons for action that exist). However, to get real-world prescription out of this theory, you actually have to have desires. Without desires, there is no value.
As I see it, the best way to find out if I am making a mistake of this type is to expose my ideas to public scrutiny and to see what objections others raise. (For which, Atheist Observer, I want to take an opportunity to say that I am grateful for the efforts you have made in this regard, as well as those of the other readers.)
This does not eliminate the possibility that I will blind myself to clear objections, but it at least gives me an opportunity to look at them.
Ultimately, the objection that I am, in fact, guilty of blinding myself to clear objections to my theory requires that there be clear objections that I am blinding myself to. It is not a valid argument to assert, “A lot of people have adopted positions on these issues and blinded themselves to clear objections – therefore, you have blinded yourself to clear objections to your theory.”
A valid argument would be, “Here is a clear objection to your theory. However, you seem to have the traditional problem a lot of other people have had in blinding yourself to clear objections to your theory, such as this one.”
Show me the objection to my theory, and if I do not take it seriously, then you can accuse me of closed-mindedness.
Question 2: Have you had instances where you felt something is wrong, but desire utilitarianism convinced you it was morally OK?
When I first came up with the idea, I started to apply it to contemporary moral issues to see what came out of it.
My position on abortion, before coming up with this theory, was that abortion is a prima facie wrong. It involved the killing of an innocent person. However, a greater wrong could be found in the use of one person’s body by another person without her consent. On this theory, an entity with no desires has no interests, and an entity with no interests cannot be wronged. So, an abortion carried out before the fetus has desires is not even a prima facie wrong.
My position on capital punishment is that it is not substantially different from any other punishment. It is not even the most extreme form of punishment, since (speaking personally) I would rather be executed than imprisoned for life – particularly if I were innocent. The possibility that my captors would discover the mistake in 20 years’ time is of no interest to me. However, when I looked at the statistics of capital punishment through the lens of desire utilitarianism, the evidence seems to suggest that a society that teaches an aversion to killing strong enough that it will not even execute prisoners raises fewer of its children to be murderers. For our own safety and the safety of those we care about, we have reason to promote an aversion to killing that is strong enough to save prisoners from this fate.
Question 3: Do you think desire utilitarianism is precise enough to actually use as a prescriptive tool, that everyone attempting to apply it would come to the same moral conclusions, or that it could be interpreted in different ways, so that “what a person with good desires would do” or “the desires that fulfill other desires” are sufficiently general that a great variety of moral conclusions is possible?
Do you think that science is precise enough that any two people will come to the same conclusions? Note: I did not say “two scientists will come to the same conclusions,” I said “two people.” We seem to have people coming up with different answers all the time – about the age of the earth, about the origins of human beings.
Even within science, there is always a maximum attainable level of precision. We can only know answers within a certain degree of certainty. Below that, we cannot go. There is no reason to demand that a theory of ethics must give us perfect precision on all issues. All we can expect is that it give us as much precision as is possible.
If you know of a way to get more moral precision using another method (one that doesn’t simply make things up), then that theory is obviously to be preferred over the theory I defend here. If there is no possibility of finding greater precision elsewhere, then this is the best we can do.
I answered this question in part in the essays on pornography. Desire utilitarianism is substantially a theory of what value is – a relationship between states of affairs and desires. We can lament our inability to come up with precise answers to all moral questions. However, this will not change the fact of what value is. We simply have no choice but to make decisions in the face of imperfect information.
However, some wrongs are extremely easy to detect using desire utilitarianism – including some that get a pass in our current culture. One of the most significant sets of wrongs is intellectual recklessness. The wrong of drunk driving is clearly illustrated using desire utilitarianism methods (the lack of concern for the well-being of those that one might harm). That intellectual recklessness is just as wrong as drunk driving is also easily demonstrated. The lack of concern with the soundness of one’s arguments when addressing issues that affect the life, health, liberty, and well-being of others is, I would argue, the most underappreciated moral wrong in our society today.
Question 4: Another measure is does it provide something that is actually useful in making real time decisions in real life, i.e., does it provide a better guide than simpler concepts such as “be fair, be honest, and treat others as you want to be treated?”
What is fairness? How do we tell if we are being fair or not?
There are clear exceptions to the moral requirement to “be honest.” There are the famous examples of lying to the Nazi soldiers who come to ask if you know of any Jews in the area, and of feigning loyalty to the Fuhrer while you sabotage the German war effort. There are the white lies to your significant other so as not to hurt feelings. There are the lies used when planning a surprise party or a practical joke. How do we distinguish when to be honest and when not to be honest?
As for “treat others as you want to be treated” – what if the person you are talking about likes to be treated differently from the way you like to be treated? How about, “If I were gay, I certainly would want others to do everything in their power to save me from this sick and disgusting lifestyle; therefore, I must do whatever I can to save others from this sick and disgusting lifestyle.”
Or, “Given the great value in being a Christian, I would certainly want others to do whatever they can to convince me of the truth of Christianity, in spite of all of my protestations to the contrary. Therefore, I am obligated to do whatever I can to convert others to Christianity, ignoring all of their protests to the contrary.”
I would argue that desire utilitarianism does a far better job than any of these types of rules. “Fairness” is too vague a concept to be useful without a theory of fairness.
“Honesty” comes with exceptions – just as we can kill an aggressor to protect an innocent victim, we can clearly lie to an aggressor to protect an innocent victim. Similarly, just as a surgeon can slice into a patient for the patient’s own benefit, we can, under certain circumstances, have our aversion to lying outweighed by our desire to benefit the person we are lying to.
And a far better rule than “treat others as you would want to be treated” is “act on those desires that you would want everybody to have” or, in other words, “act as a person with desires that tend to fulfill other desires would act.” This takes into consideration the fact that others need not always like the same things that you like.
Like I said, I know that there is a desire to find a simple set of rules that everybody can instantly see are the best rules no matter what their background. It’s a fantasy. It’s not going to happen. Anybody who claims to have come up with such a set of rules is living in a fantasy universe. If morality could guarantee such easy answers, we would have found them by now.
However, the fact that some moral facts are hard to discover does not discredit those that are more easily known. Nor does it prevent the possibility of moral progress. Some scientific facts are hard to discover as well. But, some are easy, and we can make scientific progress.
It just takes work.