
Interpersonal Comparisons of Utility

At Catallarchy, there’s a rousing, extended (several days now, which is extended in blogosphere-time) discussion of interpersonal comparisons of utility. Several of the regulars there have weighed in on the issue, and there is some disagreement among the Catallarchists themselves regarding whether or not it’s possible to make interpersonal comparisons of utility. (See here, here, here, here, and here. Keep checking back, too, as Patri Friedman has promised a post on the subject as well.) It will probably come as no surprise to anyone who reads this blog regularly to hear that I do in fact think that it’s possible to make interpersonal comparisons of utility. It’d be rather hard to be a utilitarian otherwise. For the record, here’s the nutshell version of the argument.

For starters, different activities bring different amounts of happiness to different people. That’s obvious enough. Also fairly obvious is that utility diminishes at the margin. What that means is that, for some given activity, X units of that activity will bring Y pleasure. Adding an additional unit of the activity will not, however, deliver a steady increase in the amount of pleasure. In other words, once X is reached, X+1 units of activity will bring about Y+Z pleasure. But X+2 units of the activity will bring about Y+Z+A, where A is less than Z. To put the point in layman’s terms, a shot of bourbon will provide a certain amount of pleasure. The second shot of bourbon probably will too. But the 7th shot brings less pleasure than did the one before. Keep going and they will stop being pleasurable at all. We can also translate this sort of claim into talk of money. Here the idea is that my 1st dollar has more value to me than does my 100th, which in turn has more value to me than does my 1,000,000th. This is all just basic economics and isn’t really in dispute at all.
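To see the shape of the claim, here’s a toy illustration (the square-root function is my own stand-in for a utility function, not anything drawn from the economics literature): a concave function delivers positive but ever-smaller gains from each additional unit.

```python
import math

def utility(units):
    """A toy concave utility function: more units always help,
    but each additional unit helps less than the last."""
    return math.sqrt(units)

# Marginal utility of the nth unit: u(n) - u(n - 1)
marginals = [utility(n) - utility(n - 1) for n in range(1, 8)]

# Each marginal gain is smaller than the one before it --
# the 7th "shot" adds less than the 2nd did
assert all(earlier > later for earlier, later in zip(marginals, marginals[1:]))
```

Any concave function would do here; nothing hangs on the square root in particular.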

Where utilitarians sometimes get into trouble is in assuming that one can straightforwardly apply diminishing marginal utility across persons. While it is certainly true that I get more pleasure from my first than from my millionth dollar, it’s not at all certain that I will get more pleasure from my first dollar than you get from your millionth. Your utility function and mine are different and while we can expect some rough similarities, there are just no guarantees that the two functions will be identical to one another. Indeed, there is good reason to expect that they are not identical at all. After all, our preferences are not identical. So if, for example, you think a quality evening would mean a six-pack of Bud Light (hey Jimmy), while I think that a quality evening involves a 1982 Chateau Lafite-Rothschild, then your utility curve at $4.97 and mine at $842.72 (Sotheby’s auction price) will just about match up. The numbers will map far more closely if I prefer a six-pack of Harp to your Bud Light.
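To put made-up numbers on the point (both the utility functions and the dollar figures below are pure illustrations, not measurements of anyone’s actual preferences), two people with differently shaped utility curves can have the same marginal utility at wildly different spending levels:

```python
def marginal_utility(dollars, scale):
    """Toy marginal utility for a log-shaped utility curve u(x) = scale * ln(x):
    the pleasure added by one more dollar is roughly scale / dollars."""
    return scale / dollars

# Two people with different (hypothetical) utility functions over drink money
bud_light_fan = marginal_utility(5, scale=1)    # modest tastes, steep curve
lafite_fan = marginal_utility(800, scale=160)   # expensive tastes, flat curve

# Despite wildly different spending, their marginal utilities line up
assert bud_light_fan == lafite_fan == 0.2
```

The lesson is just the one in the text: knowing where *my* curve flattens out tells you nothing, by itself, about where *yours* does.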

Strictly speaking, then, we cannot apply the principle of diminishing marginal utility across persons. But there is a danger in pushing this point too far. Brandon’s post at Catallarchy makes this point rather nicely. The comments and Brian’s response rather nicely illustrate Brandon’s claims. (Couldn’t resist.) Brandon grants the point that utilities cannot be compared with complete certainty, but then points out that

in some cases, particularly those in which proponents of redistribution are most keenly interested, you can guess with near certainty which of two people will value a particular resource more. In virtually all cases, taking $1,000 from a billionaire and giving it to a starving beggar will help the beggar more than it hurts the billionaire. No, it’s not true 100% of the time, but you don’t need complete certainty; if you’re right 95% of the time, that’s good enough for government work.

This strikes me as being exactly right. Brian (and others) object that Brandon and Jonathan and I all mistakenly see utility as being somehow objective when in reality, utility is not at all objective. Brian quotes approvingly an analogy from Rothbard, who argues that utility is purely a subjective and not an objective experience at all (and, of course, since it’s not objective, it can’t be compared). Here’s Rothbard (via Brian):

A favorite rebuttal is that subjective states have been measured; thus, the old, unscientific subjective feeling of heat has given way to the objective science of thermometry. But this rebuttal is erroneous; thermometry does not measure the intensive subjective feelings themselves. It assumes an approximate correlation between the intensive property and an objective extensive event—such as the physical expansion of gas or mercury. And thermometry can certainly lay no claim to precise measurement of subjective states: we all know that some people, for various reasons, feel warmer or colder at different times even if the external temperature remains the same. Certainly no correlation whatever can be found for demonstrated preference scales in relation to physical lengths. For preferences have no direct physical basis, as do feelings of heat. (Brian’s emphasis)

I’m sure that at least a few of you can anticipate my response here: this is a great argument if you’re a spooky Cartesian dualist who thinks that minds (and hence qualia) cannot be reduced to objective, purely physical phenomena. I can’t think of any good reasons for being a spooky Cartesian dualist, though, nor can I see any reasons whatsoever for privileging subjective mental phenomena over objective brain states. It’s bad neuroscience. It’s also a bit surprising that Catallarchists, who so pride themselves on their scientific rationalism when it comes to applying economics to politics, would then turn around and ground their economics on the mumbo-jumbo of psychology rather than the hard facts of brain chemistry. The simple fact is that what I express (very roughly and imprecisely) as a subjective preference is itself an approximation of the–very empirical–brain state that I’m currently experiencing. Those brain states are objective and can be measured. That you express a stronger preference for A just means that you have more of whatever brain state leads to preferences-for-A than I have. Unless you want to posit that your brain and mine are radically different in the way that they operate, we can compare the two things. The fact that your report is subjective (and hence fuzzy and imprecise) has exactly zero relevance to the way the world actually is.

So yes, I do think that utility can be compared. In terms of utils, even. The problem is not an in-principle problem. Rather, it’s an in-practice one. We don’t yet know how to do this with 100% certainty. That doesn’t mean that we can’t do it at all. Our fuzzy intuitions that, as Brandon says, get us to 95% or so are generally pretty good. In fact, it’d be just plain silly to deny that we can and do accurately gauge how much happiness we’ll get from some amount of money (or time) versus how much happiness someone else will get from the same amount. That’s why I buy people presents. It’s why I give to charity. It’s why I give up my afternoon to talk to my friend when she’s having a bad day. (Though one of the commenters, Constant, simply denies that we do even this much. He argues the psychological egoist line–that I do those things because they work out well on my own utility function. But arguments for psychological egoism are so laughably bad that it seems not really worth it to reject them. I’d suggest an introduction to moral philosophy course. We’ll do it in about 20 min.)

One other point…sort of unrelated, but at least on the general topic. Brian accuses me at one point of arguing in bad faith, pointing out that even if my claims were true, the fact that interpersonal comparisons of utility are not 100% accurate leaves me with the following dilemma:

The bad faith argumentation comes when redistributionists do start to try and defend their policy on utility. I say this because of the following: if it were such that one could show that the billionaire’s utility loss is greater than the utility gain of the starving man, the redistributionist would have to make two choices- either the redistributionist agrees to let the starving man starve, and therefore reveals that they don’t actually care about the starving/less well off at all (and thus their program is sold to the public on a lie), or else they say “take it from him anyway”, in which case the whole exercise in ‘utility comparison’ is moot and a sham. All the hand-waving in the world about how ‘this will never happen’ does not change the fundamental problem posed by the extreme case. I suspect that Joe would not let the starving man starve, even in the presence of a ‘utility monster’, and thus I think his arguments in favor of the IUC are in bad faith, since they are not the actual justification for the taking. (no offense)

I’m not offended here. That doesn’t make the argument any less bad, though. No offense. 😉 The problem here is that Brian has posited a false dilemma. These aren’t the only two options available to a utilitarian who defends redistribution. Admittedly, if I were an act-utilitarian I’d have precisely this problem. But I’m not an act-utilitarian and I wouldn’t defend redistribution on act-utilitarian grounds. Indeed, a policy of redistribution is just that–a policy. That makes it a rule-utilitarian principle, almost by definition. Here’s a definition of rule-utilitarianism (one of the better ones, IMHO) from Brad Hooker (those of you in my Moral Theory course this fall will come to know this definition well, as we’ll be reading through Brad’s book):

An act is wrong if and only if it is forbidden by a code of rules whose internalization by the overwhelming majority of everyone everywhere in each generation has maximum expected value in terms of well-being.[1]

Now, deciding whether or not we want a policy of redistribution on rule-utilitarian grounds will require considering a lot of possible consequences. But to the extent that we might decide on a policy of redistribution, we will do so not because we think that every single case of redistribution will be utility-maximizing. Rather, we’ll do so because most cases are utility-maximizing. And we’ll redistribute even in cases where that isn’t true because having the policy in place–having that particular set of rules–is itself optimific. Either way, the alternatives that Brian points to do not exhaust the set of possible utilitarian responses. I need neither give up on the claim that I’m interested in helping the poor nor argue in bad faith.

[1] Brad Hooker. Ideal Code, Real World (Oxford: Clarendon Press, 2000), p. 32.



  1. This strikes me as being exactly right. Brian (and others) object that Brandon and Jonathan and I all mistakenly see utility as being somehow objective when in reality, utility is not at all objective.

    That’s not really how I would put it. I’d say that utility is an ordering, not a scalar quantity, and that the ordering happens within a person because it concerns the person’s preferences. It’s simply not the sort of thing that can be done across persons because it refers to the preferences of individual persons.

    So, utility can be entirely objective in the sense that the orderings can be objectively real. You can, in principle, fully know my orderings and I can fully know yours. There can be an objective reality as to what our respective orderings are. The orderings are “subjective” only in the sense that they concern our respective preferences, and you have your preferences and I have mine.

  2. Constant,
    Again, though, unless we’re dualists, the term “preference” has to mean something. Specifically, it has to map onto something that is happening somewhere inside your brain. That something can be measured and compared to the something that is going on inside my brain.

    It’s fine to retreat to talking about preferences, but you have to be aware that when you talk of preferences as if they cannot be reduced to anything else, you take on all the (highly unscientific) baggage of dualism along with it. It’s an odd move to save your scientific economics by appealing to the pseudoscience of dualism.

  3. Joe,

    Wherever you got that stuff about Cartesian dualism, it has nothing to do with my point, which is about ordinals and cardinals. Here’s my extended comment on the relationship between ordinals and cardinals. (Hope this will work, trying to make a link.)

    Link to my comment

    In case it doesn’t do the anchor thing right, my comment starts “You can get ordinals from cardinals but you can’t get cardinals from ordinals.”

    I peg you as being one of those who are conjecturing the existence of cardinals lying behind ordinals. Now, I agree with you that physical reality underlies everything – I’m a materialist (or whatever the best word for it is – none of the woo woo stuff). But there is more than one way for the superstructure to relate to the substructure. Just for example, suppose that I have ordered things by writing down an arbitrary list, say my grocery list. There’s something that’s the first thing on the list, something that’s the second, and so forth. I may not even have written the list in order; I may have filled it in back and forth. Now this list of course exists in real reality, not in woo woo land, but I would suggest that any cardinal you try to recover from this ordinal is spurious. In particular, I challenge you to say, in constant units, just how many times greater the third thing on the list is than the first thing.
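    A few lines of Python (a toy example of my own, nothing more) make the asymmetry concrete: very different cardinal assignments collapse into the same ordering, so the ordering alone cannot tell you which cardinals produced it.

```python
# Two very different cardinal assignments of value...
valuations_a = {"milk": 1, "bread": 2, "caviar": 3}
valuations_b = {"milk": 1, "bread": 50, "caviar": 1000}

# ...produce exactly the same ordinal ranking once sorted
ranking_a = sorted(valuations_a, key=valuations_a.get)
ranking_b = sorted(valuations_b, key=valuations_b.get)
assert ranking_a == ranking_b == ["milk", "bread", "caviar"]

# Given only the ranking, "how many times greater is caviar than milk?"
# has no answer: 3x and 1000x are both consistent with it.
```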

  4. I’m fairly optimific (Joe wrote it; it must be a word) about the prospects of a generalized utility-measurement ability in humans. I do have a question, though... but first a statement that gets to the crux of the question. People are, in most cases, actually fairly bad at judging utility. The case in point that I wish to bring up is the WIC clients my wife deals with on a daily basis (nearly a third of the Robeson county population, and that doesn’t even map 1:1 with the life-time welfare/foodstamp bandits). These are habitually miserable people who, through disgusting acts of thievery, have convinced everyone that cares (including, sometimes, car dealerships... they drive brand-new Escalades with about 10k in wheels) that they need food for their sixth child (the other five live in foster care, but in the same house, in order to qualify the mother for welfare). Keep in mind, they aren’t actually buying food for their children, but they are collecting the vouchers for the food, which they use and then sell the food to people who are too afraid to apply for WIC supplemental assistance or have been denied it (note the word supplemental; most of the WIC population don’t see it that way or are just flat too ignorant to know what it means). These people are horrendously bad at judging utility and are merely an example of the most obvious case. I know how I would explain away the contradiction of some people being able to make those calls while others are simply incapable (incapable is a bit strong here; perhaps ignorant of the methodology or simply too dumb to be taught). I’d like to know what you would say about it. How do we trust some people to be able to measure utils, while we simply tell others what to do based on those measurements? Is it something fairly simple like: “just listen to us philosophers and economists”? I tend to agree with that statement, no matter how repugnant it seems at first glance.

    Okay, I’m rambling again.  Back to you.

  5. Joe, you say that “arguments for psychological egoism are so laughably bad that it seems not really worth it to reject them. I’d suggest an introduction to moral philosophy course. We’ll do it in about 20 min.” How about a post in which you summarize the arguments, or a link to a killer argument against psychological egoism?