Questions of Normativity

Over at Catallarchy, I made what I thought was pretty much just a passing observation on a throwaway comment to a post on diminishing marginal utility. Oddly enough, that comment sparked a spirited (if IMHO hugely misinformed) defense from Constant, a regular Catallarchy commenter. The issue has to do with normativity. Constant mentioned that he finds less value in normative uses of economics, citing “X is efficient” or “X is Pareto optimal” as his examples.

I responded that

Technically, “efficient” and “pareto optimal” are descriptive and not normative terms. It’s often the case that people make statements like “X is efficient” with the suppressed premise “Efficient is good” or “I ought to act efficiently” but the normative force lies in the suppressed premises. Efficiency and Pareto optimality (is that a word?) are purely descriptive terms with set definitions.

I’m pretty sure that it’s actually impossible to use economics to make moral claims, just as it’s impossible to use physics to make moral claims. One can make moral claims by using normative statements in conjunction with scientific ones, but, as Scott argues so frequently, you can’t get an ought from an is. At least not from those is-es (and I know that one’s not a word).

There follows a long exchange that really isn’t worth reading, as it mostly consists of my making the same point again and again and Constant responding by missing that point again and again. Plus some snark. Finally, though, I received this response:

Joe, a statement can be used to do something other than what it literally states. For example, I can say, “you have gained weight”, and use that statement to make someone feel bad.

Let me start by saying that, yes, this is undoubtedly true. Let me also say that it leaves out a whole host of other issues, like the fact that when I say this sort of thing in casual conversation, I’m also offering plenty of nonverbal cues as to my intentions. Technically, what I am doing in that sort of case is making an argument, one in which I suppress a number of premises that are nonetheless clear from the context. If that were all that Constant meant to be saying, I’d completely agree with him. Of course, I’d agree largely because that was exactly the point I was making in the first place: descriptive statements can be used in making normative arguments, but the descriptive claim is not itself a normative claim.

Given, however, that Constant seems to want to deny that point, the alternative interpretation would be something like the following:

  1. “X is Pareto optimal” is a descriptive claim about X.
  2. A claim that is literally a descriptive claim can be used to do things other than describe things.
  3. One of the other things that I can use a descriptive claim to do is to make a normative statement.
  4. Those normative uses do not require an additional, supporting normative claim.
  5. Thus, even though “X is Pareto optimal” is a descriptive claim, it can be used normatively.

If this really is the argument, though, then I’m not at all sure how it’s supposed to get around the is/ought problem. Indeed, this just is the is/ought problem. Hume’s whole point is that descriptive claims cannot be used normatively, at least not without a normative claim to go along with them.

Now I’m sympathetic to the argument that the naturalistic fallacy isn’t really a fallacy, but I’m pretty unlikely to go along with the idea that Pareto optimality is the sort of brute descriptive claim that has normative value. “X is pleasurable” has, I think, some normative pull to it. There’s something strange about asking, “But why should I value pleasure?” To ask that question, it seems to me, is just to misunderstand what we mean by the word “pleasure.” OTOH, I can ask of the term “Pareto efficiency,” “But why should I value Pareto efficiency?” Even if I fully understand the term, the question makes sense in a way that the pleasure question doesn’t.

What Constant seems to misunderstand is that a descriptive statement might require evaluative elements and still be a descriptive statement. To take a similar example, I might very well claim that “Y is utility maximizing.” That is just a factual claim about Y; it can be true or false, and we could (given time and the proper conditions) determine the truth value of that claim. That doesn’t entail that there are no normative elements involved in the term “utility maximizing.” When we unpack the term itself, we find that it has lots of normative elements: we have to decide what we mean by “utility,” whose utility gets measured, in what units we measure utility, etc. Lots of these are normative questions with normative answers.

None of that changes the fact, though, that “Y is utility maximizing” is purely descriptive. The statement says absolutely nothing about what it is that we ought to do, about what states of affairs should obtain, about what things we ought to value. It is descriptive. You and I can be in perfect agreement that Y is utility maximizing and still disagree about whether that gives us any reason for acting. By definition, then, “Y is utility maximizing” is a descriptive claim, even though determining that Y is in fact utility maximizing required us to make other normative claims.

Let’s put the point in simpler terms. Settling on what “Pareto optimal” should mean requires making normative judgments. But given that “Pareto optimal” now has an accepted definition, determining whether X is Pareto optimal is a descriptive matter, and “X is Pareto optimal” is a descriptive claim.

Everybody Lies

I’ve acquired a new vice over the holiday period. It’s been both wonderful and (at least in some abstract purely intellectual way) a tad distressing. You see, I’ve managed to become completely addicted. When I get my fix, it’s all good. Unfortunately, I find my thoughts drifting throughout the day, looking forward to my next chance, worrying about what will happen when I run out. It’s awful. And it’s a great rush. Like Vicodin. Only a lot funnier.

I’m talking, of course, about “House,” another in a line of very good shows from a network with a (somewhat deserved) reputation for trashy TV. Personally, I’m not usually much of a TV junkie. In fact, I haven’t actually had TV service since last spring. And my only TV was the one I got back in grad school. The one with the picture that was always sort of red thanks to the failing picture tube. Divorces are fun that way. Missy, however, has been raving about what a good show “House” really is. And Peter King has been talking about it for three years now. I figured that if a show appeals both to my very intellectual, philosophy-major gf and to an SI football columnist, then it’s gotta be doing something right. So I watched a couple of episodes at my brother’s house this fall. And I was hooked. So much so that I bought the first two seasons on DVD. Presents for Missy. Really. They’re for my friend.

Now you all have to understand that I’m a philosopher by long training. Moreover, I’m a philosopher who has a thing for linking philosophy with pop culture. So naturally I’ve begun to turn my philosopher’s eye (it’s the left one, in case you’re wondering) to “House.” There’s rather a lot of material in the show, much of which is (or can be, anyway) philosophically interesting, especially if your philosophical interests incline toward ethics. Perhaps the most obvious is House’s rather casual relationship with the truth.

For those of you who don’t regularly watch the show (come on, try it. just this once. everybody else is.), the central premise is that Dr. House is the kind of doc who actually says to patients all the snarky things that doctors usually just think. Except that House is brilliant enough that he gets away with it. And he mostly gets away with it by pretty much refusing to have anything at all to do with his patients, preferring instead to send his team of impossibly attractive, brilliant (if not nearly as brilliant as House himself), 20-something doctors to interact with the patients. Meanwhile, House sits back in his office waiting for reports from his team, at which point, to paraphrase, he hears their theories, mocks them, then embraces his own. The treatment he settles on is frequently hugely dangerous, a problem that House gets around by lying to the patients. Or by sending a member of his team to do the lying. And, this being television (and House being brilliant), the hugely dangerous treatment based on the hugely unlikely diagnosis (eventually) turns out to be exactly right.

So why the disdain for patients? Because House’s guiding principle is that everybody lies. Patients lie about symptoms (I swear I didn’t induce my seizures). They lie about their past (Of course this is my biological child). They lie about their actions (Of course I didn’t sleep with my daughter the supermodel). You just can’t trust patients to tell you the truth. If patients are going to lie, House reasons, then they don’t necessarily deserve the truth in return.

In this respect, House’s reasoning is very strangely Kantian. Yes, Kant, the guy who famously claims that we have to tell the truth even to the murderer at the door. Kant himself would claim that House can’t lie to his patients whatever they might do to him first. Yet there is a Kantian argument for lying back to lying patients. Consider Kant’s arguments for capital punishment. Here Kant claims that, in murdering someone, I have said, in effect, that I am okay with the maxim, “I can kill someone who doesn’t consent to being killed.” In acting on that maxim, I have said that I am okay with treating that maxim as a universal law of nature. And since I’ve consented to the maxim (that is, since I’ve accepted that it is okay to treat people in this way), you are merely showing me the respect my autonomy deserves when you treat me in the way that I have said it’s okay to be treated. It’s strange reasoning, I’ll admit. But it’s Kant’s reasoning.

So let’s apply Kant’s reasoning to House’s lying to patients. If the patient lies to the doctor, then the patient is saying, in effect, that it’s okay to lie. So when House lies about the treatment, he respects the patient’s autonomy by treating the patient exactly as the patient has consented to be treated.

Obviously there is much more to be said here. House’s relationship with the truth is actually far more complicated — after all, much of what makes the show so amusing is House’s tendency to alternate lies with brutal, unvarnished truth. But that’s a topic for another post. Right after I watch just one more episode. Stop looking at me like that. I don’t have a problem. I can quit anytime. I just don’t want to.

Interpersonal Comparisons of Utility

At Catallarchy, there’s a rousing, extended (several days now, which is extended in blogosphere-time) discussion of interpersonal comparisons of utility. Several of the regulars there have weighed in on the issue, and there is some disagreement among the Catallarchists themselves regarding whether or not it’s possible to make interpersonal comparisons of utility. (See here, here, here, here, and here. Keep checking back, too, as Patri Friedman has promised a post on the subject as well.) It will probably come as no surprise to anyone who reads this blog regularly to hear that I do in fact think that it’s possible to make interpersonal comparisons of utility. It’d be rather hard to be a utilitarian otherwise. For the record, here’s the nutshell version of the argument.

For starters, different activities bring different amounts of happiness to different people. That’s obvious enough. Also fairly obvious is that marginal utility diminishes. What that means is that, for some given activity, X units of that activity will bring Y pleasure, but each additional unit will not deliver a steady increase in pleasure. In other words, once X is reached, X+1 units of the activity will bring about Y+Z pleasure, while X+2 units will bring about only Y+Z+A, where A is less than Z. To put the point in layman’s terms, a shot of bourbon will provide a certain amount of pleasure. The second shot probably will too. But the seventh shot brings less pleasure than did the one before, and keep going and the shots will stop being pleasurable altogether. We can also translate this sort of claim into talk of money. Here the idea is that my 1st dollar has more value to me than does my 100th, which in turn has more value to me than does my 1,000,000th. This is all just basic economics and isn’t really in dispute.
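To see the shape of that claim in miniature, here’s a quick sketch in Python. The log utility function is purely illustrative (it’s nobody’s actual utility curve), but any concave function tells the same story:

```python
import math

def utility(wealth):
    """A stand-in concave utility function; log(1 + w) is purely illustrative."""
    return math.log(1 + wealth)

def marginal_utility(wealth):
    """Extra pleasure from one more dollar at a given wealth level."""
    return utility(wealth + 1) - utility(wealth)

# The 1st dollar is worth more than the 100th, which is worth more than the 1,000,000th.
for w in (0, 99, 999_999):
    print(f"marginal utility of dollar #{w + 1}: {marginal_utility(w):.8f}")
```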

Where utilitarians sometimes get into trouble is in assuming that one can straightforwardly apply diminishing marginal utility across persons. While it is certainly true that I get more pleasure from my first than from my millionth dollar, it’s not at all certain that I will get more pleasure from my first dollar than you get from your millionth. Your utility function and mine are different, and while we can expect some rough similarities, there are just no guarantees that the two functions will be identical to one another. Indeed, there is good reason to expect that they are not identical at all. After all, our preferences are not identical. So if, for example, you think a quality evening means a six-pack of Bud Light (hey Jimmy), while I think that a quality evening involves a 1982 Chateau Lafite-Rothschild, then your utility curve at $4.97 and mine at $842.72 (Sotheby’s auction price) will just about match up. The dollar figures will line up far more closely if I merely prefer a six-pack of Harp to your Bud Light.
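The worry can be made concrete with the same sort of toy model. Both utility functions below are invented; the point is just that diminishing marginal utility fixes the shape of each person’s curve while saying nothing about how the two curves are scaled against one another:

```python
import math

def joe_utility(wealth):
    """A hypothetical utility curve for me (invented for illustration)."""
    return math.log(1 + wealth)

def your_utility(wealth):
    """A hypothetical utility curve for you: same shape, arbitrarily different scale."""
    return 1_000_000 * math.log(1 + wealth)

def marginal(u, wealth):
    """Extra utility from one more dollar, for a given utility function."""
    return u(wealth + 1) - u(wealth)

# Within each curve, marginal utility diminishes as wealth rises.
print(marginal(joe_utility, 0), marginal(joe_utility, 999_999))

# Across curves, though, nothing guarantees that my 1st dollar beats your 1,000,000th:
# with this (arbitrary) scaling, your millionth dollar actually comes out ahead.
print(marginal(joe_utility, 0), marginal(your_utility, 999_999))
```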

Strictly speaking, then, we cannot apply the principle of diminishing marginal utility across persons. But there is a danger in pushing this point too far. Brandon’s post at Catallarchy makes this point rather nicely. The comments and Brian’s response rather nicely illustrate Brandon’s claims. (Couldn’t resist.) Brandon grants the point that utilities cannot be compared with complete certainty, but then points out that

in some cases, particularly those in which proponents of redistribution are most keenly interested, you can guess with near certainty which of two people will value a particular resource more. In virtually all cases, taking $1,000 from a billionaire and giving it to a starving beggar will help the beggar more than it hurts the billionaire. No, it’s not true 100% of the time, but you don’t need complete certainty; if you’re right 95% of the time, that’s good enough for government work.

This strikes me as exactly right. Brian (and others) object that Brandon and Jonathan and I all mistakenly treat utility as somehow objective when, in reality, it is not objective at all. Brian approvingly quotes an analogy from Rothbard, who argues that utility is a purely subjective experience rather than an objective one (and, of course, since it’s not objective, it can’t be compared). Here’s Rothbard (via Brian):

A favorite rebuttal is that subjective states have been measured; thus, the old, unscientific subjective feeling of heat has given way to the objective science of thermometry. But this rebuttal is erroneous; thermometry does not measure the intensive subjective feelings themselves. It assumes an approximate correlation between the intensive property and an objective extensive event—such as the physical expansion of gas or mercury. And thermometry can certainly lay no claim to precise measurement of subjective states: we all know that some people, for various reasons, feel warmer or colder at different times even if the external temperature remains the same. Certainly no correlation whatever can be found for demonstrated preference scales in relation to physical lengths. For preferences have no direct physical basis, as do feelings of heat. (Brian’s emphasis)

I’m sure that at least a few of you can anticipate my response here: this is a great argument if you’re a spooky Cartesian dualist who thinks that minds (and hence qualia) cannot be reduced to objective, purely physical phenomena. I can’t think of any good reasons for being a spooky Cartesian dualist, though, nor can I see any reasons whatsoever for privileging subjective mental phenomena over objective brain states. It’s bad neuroscience. It’s also a bit surprising that Catallarchists, who so pride themselves on their scientific rationalism when it comes to applying economics to politics, would then turn around and ground their economics on the mumbo-jumbo of psychology rather than the hard facts of brain chemistry. The simple fact is that what I express (very roughly and imprecisely) as a subjective preference is itself an approximation of the (very empirical) brain state that I’m currently experiencing. Brain states are objective and can be measured. That you express a stronger preference for A just means that you have more of whatever brain state underlies a preference for A than I have. Unless you want to posit that your brain and mine operate in radically different ways, we can compare the two. What you report subjectively (and hence fuzzily and imprecisely) would seem to have exactly zero relevance to the way the world actually is.

So yes, I do think that utility can be compared. In terms of utils, even. The problem is not an in-principle problem. Rather, it is an in-practice one. We don’t yet know how to do this with 100% certainty. That doesn’t mean that we can’t do it at all. Our fuzzy intuitions, which, as Brandon says, get us to 95% or so, are generally pretty good. In fact, it’d be just plain silly to deny that we can and do accurately gauge how much happiness we’ll get from some amount of money (or time) versus how much happiness someone else will get from the same amount. That’s why I buy people presents. It’s why I give to charity. It’s why I give up my afternoon to talk to my friend when she’s having a bad day. (Though one of the commenters, Constant, simply denies that we do even this much. He argues the psychological egoist line: that I do those things because they work out well on my own utility function. But arguments for psychological egoism are so laughably bad that it hardly seems worth the effort to refute them. I’d suggest an introduction to moral philosophy course. We’ll do it in about 20 minutes.)

One other point…sort of unrelated, but at least on the general topic. Brian accuses me at one point of arguing in bad faith, pointing out that even if my claims were true, the fact that interpersonal comparisons of utility are not 100% accurate leaves me with the following dilemma:

The bad faith argumentation comes when redistributionists do start to try and defend their policy on utility. I say this because of the following: if it were such that one could show that the billionaire’s utility loss is greater than the utility gain of the starving man, the redistributionist would have to make two choices- either the redistributionist agrees to let the starving man starve, and therefore reveals that they don’t actually care about the starving/less well off at all (and thus their program is sold to the public on a lie), or else they say “take it from him anyway”, in which case the whole exercise in ‘utility comparison’ is moot and a sham. All the hand-waving in the world about how ‘this will never happen’ does not change the fundamental problem posed by the extreme case. I suspect that Joe would not let the starving man starve, even in the presence of a ‘utility monster’, and thus I think his arguments in favor of the IUC are in bad faith, since they are not the actual justification for the taking. (no offense)

I’m not offended here. That doesn’t make the argument any less bad, though. No offense. 😉 The problem is that Brian has posited a false dilemma. These aren’t the only two options available to a utilitarian who defends redistribution. Admittedly, if I were an act-utilitarian, I’d have precisely this problem. But I’m not an act-utilitarian, and I wouldn’t defend redistribution on act-utilitarian grounds. Indeed, a policy of redistribution is just that: a policy. That makes it a rule-utilitarian principle, almost by definition. Here’s a definition of rule-utilitarianism (one of the better ones, IMHO) from Brad Hooker (those of you in my Moral Theory course this fall will come to know this definition well, as we’ll be reading through Brad’s book):

An act is wrong if and only if it is forbidden by a code of rules whose internalization by the overwhelming majority of everyone everywhere in each generation has maximum expected value in terms of well-being.[1]

Now, deciding whether we want a policy of redistribution on rule-utilitarian grounds will require considering a lot of possible consequences. But to the extent that we might adopt a policy of redistribution, we will do so not because we think that every single case of redistribution will be utility-maximizing. Rather, we’ll do so because most cases are utility-maximizing. And we’ll redistribute even in cases where that isn’t true, because having the policy in place (having that particular set of rules) is itself optimific. Either way, the alternatives that Brian points to do not exhaust the set of possible utilitarian responses. I need neither give up on the claim that I’m interested in helping the poor nor argue in bad faith.


[1] Brad Hooker, Ideal Code, Real World (Oxford: Clarendon Press, 2000), p. 32.

Sports Illustrated Endorses Peter Singer

by Joe Miller

Okay, so the magazine doesn’t really endorse Peter Singer. I doubt that Rick Reilly has the faintest idea who Singer is. But he does nicely channel Singer’s utilitarian argument for famine relief. Reilly’s target, however, isn’t famine relief but malaria relief. Check it out here. Then click on over here and donate your $20 to buy mosquito nets for kids in Africa. Private charity at work. Even an anarchist can dig it.

Singer on Factory Farming

by Joe Miller

In preparation for a campus lecture at the University of Minnesota, Peter Singer has written an editorial for the Minnesota Daily, the campus newspaper, condemning factory farming. Singer chronicles a fairly standard set of objections to factory farms:

Cows and veal calves are confined in crates too narrow for them even to turn around, let alone walk a few steps. Egg-laying hens are unable to stretch their wings because their cages are too small and too crowded. With nothing to do all day, they become frustrated and attack each other. To prevent losses, producers sear off their beaks with a hot knife, cutting through sensitive nerves.

Chickens, reared in sheds that hold 20,000 birds, now are bred to grow so fast that most of them develop leg problems because their immature bones cannot bear the weight of their bodies.

Another consequence of the genetics of these birds is that the breeding birds — the parents of the ones sold in supermarkets — constantly are hungry, because, unlike their offspring that are slaughtered at just 45 days old, they have to live long enough to reach sexual maturity. If fed as much as they are programmed to eat, they soon would be grotesquely obese and die or be unable to mate. So they are kept on strict rations that leave them always looking in vain for food.

Moreover, as Singer notes, these practices don’t actually result in a more efficient method for feeding a growing population. Meat animals consume more calories than they yield as food (after all, some of those calories that animals require go into things like growing bones and other inedible parts, and other calories are burned in moving around and generating heat and the like). So if we’re really trying to feed more people, it’s far more efficient simply to use crops to feed humans directly.
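A quick back-of-the-envelope sketch of that efficiency point. The conversion ratio here is deliberately made up rather than a real figure for any particular animal; the argument only needs the ratio to be less than one:

```python
# A back-of-the-envelope sketch. The conversion ratio is deliberately invented;
# it is not a real figure for any particular animal.
crop_calories = 10_000        # calories of feed crops available
feed_to_meat_ratio = 0.1      # hypothetical fraction of feed calories recovered as edible meat

calories_via_meat = crop_calories * feed_to_meat_ratio   # calories that reach the plate as meat
calories_eaten_directly = crop_calories                  # calories if people eat the crops instead

print(calories_via_meat, calories_eaten_directly)        # the direct route always wins if the ratio < 1
```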

So, then, what’s the point of factory farming? As Singer puts it:

It has nothing going for it except that it produces food that is, at the point of sale, cheap. But for that low price, the animals, the environment and rural neighborhoods have to pay steeply.

Oh, only that going for it. At this point, my brain tries to go in two directions at once. The pro-market side of me says, “hey, efficient equals cheap equals more money to spend on other stuff equals ultimately more wealth and that’s all good.” OTOH, part of the reason that I value markets is that they square so nicely with utilitarianism, the same utilitarianism that tells me that I really ought to consider the suffering of cows and pigs and chickens in my moral deliberations.

In part, the dilemma is easy enough to reconcile. I buy eggs from free-range chickens and milk from free-range cows, and I avoid eating animals (except for my weakness for sushi). But $4-per-dozen eggs or $16-per-pound sashimi-grade tuna is a tough sell to the minimum-wage-earning single mother who would far rather buy the $0.99 package of hot dogs for her three kids.

One (partial) solution that should please the marketists and the minimum-wage earners alike: get rid of agricultural subsidies and eliminate all tariffs on agricultural products. Perhaps that would drop the prices of food enough to provide some real alternatives to the $0.99 hot dogs.