Content Strategy, Philosophy, and a Bit More Philosophy

After going nearly a year (yikes!) without really writing much—or at least not much that I published under my own name—suddenly this week, I have two new pieces out.

At WonkComms, I make the case for smarter content that helps Google answer questions rather than just offer up links to pages. Much of that piece was inspired by a fantastic workshop on adaptive content, led by Noz Urbina at Confab last month.

And new this morning: my first-ever piece for The Pastry Box Project. This one is a lot more personal. I talk a bit about my winding path from academia into this whole weird world of content strategy.

On a side note: I still can’t quite believe they actually accepted my piece. Every time I look at the list of contributors, this clip starts playing in the back of my brain.

Oh, and one more note: we launched the redesigned ERG homepage this week, too. Still the same old site under the hood, but: progress. Next stop: a CMS. (I know.)

Philosophy and Economics

So, I wore my new David Hume shirt to work yesterday. 

Joe's Hume ShirtAnd before you ask, yes, I am in fact that much of a nerd. Also, yes, I do in fact have a job where I can wear that shirt on a Thursday and no one cares. Some parts of my job I really like.

Anyway, I found the whole experience to be just a tiny bit depressing.

“Why’s that?” you ask, not unreasonably, perhaps expecting me to say something about how I’m an adult who still wears printed t-shirts to work or maybe waiting for me to comment on the fact that when I walked out of the office yesterday in my t-shirt it had suddenly turned really fucking cold.

But no. It’s not any of those things.

The depressing thing is that no one recognized David Hume.

Now I realize that I work with economists and budget analysts. I probably wouldn’t recognize a lot of fairly famous economists, either. But this is David Hume. The David Hume who was both a close friend of and huge influence on this other 18th century Scot you may have heard of, a fellow by the name of Adam Smith. Yeah, that Adam Smith. I mean, I’ve definitely heard that economics programs have stopped doing much in the way of history/theory and turned into applied mathematics programs, but I mean…

So my typical conversation ended up going something like this.

Composite character: Who is David Hume?

Me: He was a philosopher.

Composite character: * Blank look *

Me: 18th century. Scottish enlightenment. Friend of Adam Smith.

Composite character: Ah. Your shirt makes him look like a rock star.

Me: Yeah, that’s supposed to be the joke. It’s Daft Punk’s logo.

Composite character: Who is Daft Punk?

Me: Can we talk about something else now?

Reason on House

At Hit & Run, Reason’s Jacob Sullum takes issue with the first three episodes of season 6 of House. Now some of you probably already know of my unabashed love for House. Well, Jacob also manages to hit on another of my hobby-horses, namely, the stupid way in which Americans stigmatize drug use. (To be clear, I think that it’s pretty stupid to abuse drugs, but I also think that it’s pretty stupid to jump out of non-crashing airplanes. I think, however, that you should be free to do either, if you please.) Unfortunately, I think that Jacob’s assessment is a bit wide of the mark in this case. Here’s a snippet (warning: spoilers):

In this week’s episode, House, on the advice of his psychiatrist after he’s released from the hospital, quits his job and tries to find a “hobby” to distract himself from the leg pain, which is so severe that it prevents him from sleeping. He and his psychiatrist ultimately conclude that what he really needs is to go back to work, even though the stress and drug-associated surroundings may increase his risk of “using,” because that is the only thing that will engage his mind enough to make the leg pain bearable.

This plot is stupid in several ways. First, unless House plans to diagnose disease 24 hours a day, going back to work is not a solution to the pain that keeps him up at night. Second, if all House needed to relieve his pain was his work, why was he taking the Vicodin to begin with? Third, we never get a clear explanation of why House is forever forbidden to use painkillers, no matter how much he is suffering, especially since he managed to do his job brilliantly while he was taking them.

Jacob’s questions are meant to be rhetorical, but I think that there are actually some pretty decent answers to most of them.

First, the show has pretty much always indicated that House’s use (abuse?) of Vicodin increases when he’s bored. That’s a plot device lifted straight from Sherlock Holmes, on whom, if you didn’t already know, House is very much based. Holmes’ drug of choice is cocaine, which he uses between cases. “Give me problems, give me work, give me the most abstruse cryptogram, or the most intricate analysis,” he tells Watson, “and I am in my own proper atmosphere. I can dispense then with artificial stimulants. But I abhor the dull routine of existence. I crave for mental exaltation. That is why I have chosen my own particular profession, or rather created it, for I am the only one in the world.”

We’ve seen evidence of House having the same issues. In the S2 episode, “Skin Deep,” House begs Cuddy for a spinal injection of morphine to combat his rising leg pain. Cuddy complies reluctantly, and House improves. Or he improves as long as he’s working on his case. She notes later that his pain came back — surprise — just an hour after he completed the case. And, even bigger surprise, his previous injection was saline.

What’s more, House has been Vicodin-free once before. In between S2 and S3, House is completely free of the drug. An experimental medical procedure has eliminated House’s pain. S3 begins with a scene of House jogging, apparently pain-free. The Vicodin begins again not because of physical pain, but because of House’s depression at (apparently) failing to solve a case.

IOW, House’s writers have offered ample evidence that House is psychologically dependent on painkillers. Yes, there is real physical pain. But there is also a far more troubling psychological addiction to the drug, one that transcends its palliative properties. And that, of course, is at least part of the answer to Jacob’s second question. House takes the drug because he is dependent on it, even when he doesn’t need it to combat the pain (as, for example, when he’s working).

It’s also the answer to why House’s new psychiatrist calls him an addict and keeps him away from painkillers. Remember, this is the same House who, after a bad day, ends up in a stupor on the floor, covered in his own vomit, having downed the better part of a bottle of oxycodone. House is labeled an addict because he’s an addict. House is not simply a chronic pain-sufferer who responsibly takes pain medication under a doctor’s supervision. He’s a guy who steals his best friend’s prescription pad to get more Vicodin, who hides pills inside Lupus textbooks and sneakers and the like, who pops pills because he’s bored or stressed or just had a bad day.

Now I’ll gladly join with Jacob in saying that we vilify narcotic use badly enough that many who are in chronic pain are unable to get the relief that it is possible to give them. Prosecuting physicians who provide high levels of painkillers to their patients who are actually in high levels of pain is beyond dumb. It’s even dumber when we’re talking about terminally ill patients who are left to suffer needlessly lest they * gasp * end up addicted to morphine. I’m just not at all convinced that Greg House is the ideal posterchild for the Why Are We Taking Their Pills Away? movement.

Political Sleight-of-Hand, in Which Pundits Attempt to Derive an Ought from an Is

Every once in a while I find myself banging my head against the wall watching some pundit or other make the claim that Pure Science tells us exactly what we should do. Why this is supposed to be a point in the pundit’s favor is beyond me. Is there anything more frightening than the image of a bunch of folks in white lab coats cooking up a set of rules we all must follow? And yet I see pundits making the appeal again and again, from both the right and the left.

For example, Michael Peroski, writing for the left-leaning Center for American Progress, argues that bioethics should be a “data-driven” inquiry, a process that “entails considering the best available evidence before making decisions.” That process would require one to “gather information about public sentiment on the topic, carefully analyze the costs and benefits of proceeding with or prohibiting the research, and offering [sic] a pragmatic recommendation that takes all of these considerations into account.”

In other words, if we simply gather the right facts and maybe attach a few numbers, the correct policy will just fall out.

Bryan Caplan, a libertarian economist at George Mason University, makes a similar case for economics, claiming that a correct economic analysis will point the way to correct policy, whatever moral perspective one might have.

Caplan and Peroski both seem to misunderstand the role that normative (philosopher-speak for “ought”) claims play in setting public policy.

Let’s start with Peroski, whose conflation of facts with morality is much more egregious. Now don’t get me wrong – I’m all for getting facts right. But facts are only part of the story. They’re an important part, and when the facts are wrong, policy decisions are likely to go astray. Facts tell us how the world is. But policy – and indeed, politics more generally – is about what we ought to do. And those two things, the is and the ought, are very different.

The 18th-century Scottish philosopher David Hume is usually credited with making this point. In A Treatise of Human Nature, Hume writes that it seems “altogether inconceivable” that we can deduce anything about what we ought to do from claims about what is the case. Philosophers have called this the is/ought problem, though I prefer the more colorful if less frequently used “Hume’s Guillotine.” To put Hume’s point in the preferred jargon of philosophers everywhere, we can say that normative claims (or value claims like ought or should) cannot be derived from descriptive claims (which are just claims about the way the world is – the sort of claims that science makes, for example). The late Oxford and University of Florida philosopher R.M. Hare called Hume’s point a logical rule, stating the principle formally as:

No imperative conclusion [i.e., statement about what one ought to do] can be validly drawn from a set of premises which does not contain at least one imperative.

A nice way of demonstrating Hume’s point is to borrow a method from another British philosopher. In Principia Ethica, G.E. Moore presents what he called the open-question argument. Moore was actually asking whether we can use natural terms (like “pleasant”) to define good. Moore says that since we can answer questions like, “This is pleasant, but is it good?” with a no, it must be an open question whether pleasant and good mean the same thing.

We can use the same method to show that ought claims don’t follow from is claims. Suppose we ask something like: “Reducing carbon emissions by 20 percent will reduce overall carbon levels to 550 parts per million, but should I do it?” As long as it makes conceptual sense to answer such questions with a “no,” it can’t be the case that the fact alone is sufficient to justify the normative claim. The point is that any argument that ends with the claim “We ought to do ____” can’t appeal just to a set of facts.

Of course, some philosophers will tell you that there are normative facts. The arguments on this score are as complicated and abstract as they are esoteric. And even if this view (which philosophers call “moral realism”) is correct, those moral facts are of a very specific sort (e.g., “violating autonomy is wrong” or “desire-satisfaction is good”). Even the most ardent moral realist wouldn’t take a straightforward scientific claim about, say, carbon emissions to be a normative fact. The fact (as it were) is that normative conclusions have to be supported by at least one normative reason.

Caplan, for his part, recognizes the necessity of ought claims. His claim is rather that one can combine economic analysis with any reasonable moral premise and the same policy will fall out. Caplan says, for example, that if economists were able to demonstrate conclusively that allowing people to sell organs (as in, body parts, not the things you find in cathedrals) “would make sick people healthy and poor people rich,” then there couldn’t possibly be any plausible moral reason to object to an organ market. Caplan’s fellow-libertarian, Will Wilkinson of the Cato Institute, objects to that characterization. Wilkinson points to a number of perfectly plausible reasons for disapproving of a market in organs, even if such a market really would “make sick people healthy and poor people rich.” But even leaving Wilkinson’s objection aside, Caplan has falsely blurred the line between normative and factual disputes.

The very discipline of economics is loaded with normative assumptions. Consider two propositions:

  1. All goods can be exchanged.
  2. Money is a useful proxy for value.

Plenty of people reject these propositions. You might very well reject the first claim, holding, as the English philosopher John Locke did, that you cannot morally sell yourself into slavery. For Locke, individual freedom and material wealth simply were not exchangeable. Similarly, you might join the ancient Greek philosopher Aristotle in believing that lots of things that we consider to be good can’t really be directly compared with one another. After all, Aristotle might say, just how many units of honesty will a dollar buy? How much would it cost to purchase your best friend from you?

But to make a cost-benefit analysis (sort of the bread-and-butter of economics) work, you have to assume that both (1) and (2) are true. In other words, economic models like a cost-benefit analysis simply build in the normative claim. But this isn’t getting us around the problem, so much as it is obscuring it.
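To make the point concrete, here is a toy cost-benefit analysis sketched in Python, with entirely invented numbers. Notice how assumptions (1) and (2) get baked in before any arithmetic happens: every consideration has to appear on the list at all, and every consideration has to be priced in dollars.

```python
# A toy cost-benefit analysis. All numbers are invented for illustration.
# Two normative assumptions hide in plain sight:
#   (1) every consideration appears in the ledger at all, i.e., it is exchangeable;
#   (2) every consideration has been converted to dollars, i.e., money proxies value.

effects = {
    "construction costs": -2_000_000,
    "travel time saved": 3_500_000,
    "displaced residents": -1_200_000,  # pricing this is itself a moral judgment
    "community character": -500_000,    # ditto; some would say it has no price at all
}

# The arithmetic itself is morally innocent; the normative work is already done.
net_benefit = sum(effects.values())
print(f"Net benefit: ${net_benefit:,}")
```

The summing step is mechanical. The normative work was done the moment we decided that “community character” belongs in the ledger, and at a price of $500,000 rather than some other figure, or no figure.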

And that, at the end of the day, is my objection to both Peroski and Caplan. Both reach ideologically driven conclusions while pretending that they are operating in a morally neutral fashion. Indeed, there is just no completely neutral way to go about making policy. You simply must have normative claims somewhere if you’re ever going to reach a conclusion of the form “we ought to do X.”

Caplan and Peroski would have us smuggle normativity in with our data. That strikes me as a dangerous path.

Public Choice as Implied Space

So, as part of my rot-your-brain-with-SF summer marathon, I recently checked out Walter Jon Williams’ Implied Spaces from the good folks at the Arlington County Public Library. As a novel, it was more-or-less on par with Williams’ other novels — entertaining hard SF that is absorbing enough to fill out a lunch hour but generally pretty forgettable 10 minutes after finishing.

Except that in this case, the central metaphor of the book has a staying power that (for me anyway) has transcended the rest of the book.

See, Williams’ protagonist, a 1000-year-old computer-programmer turned philosopher-king, bills himself as “a scholar of the implied spaces.” So what exactly does that mean? Well, as Aristide explains it, it’s all about the squinch.

Look, it's a squinch! Now you know.

For those of you who have no idea what “squinch” means, it’s an architectural term. Specifically, it refers to the support structure that you’ll find any time a dome is built on top of a square building. See, you can’t just stick a circle on top of a square. Or, well, you can, but only if you’re happy with big holes in the ceiling at all four corners. If you’d like an actual enclosed building, though, you have to do something to fill in the corners. That filling (which can take lots of different forms) is called a squinch.

Now, as Aristide points out, no one ever sets out to build a squinch. No architect says, “Hey, it’d be really cool to design a building with 18 squinches!”  But people do set out to design buildings with domes sitting atop square bases. Interestingly, though, any design that incorporates domes atop squares contains squinches by implication. Hence the central conceit of the novel: squinches are implied spaces. Aristide spends his time studying features that aren’t designed but that must exist by implication given the things that are designed. (In Aristide’s world, where humans design and build pocket universes, this means looking at the parts of the world that weren’t explicitly designed but that are implied by features that were explicitly designed.)

So why the fascination with the metaphor?

Well, besides the fact that “implied spaces” is a much cooler term than “squinch,” I also think it serves as an interesting metaphor for public choice theory.

See, there’s long been a tendency to think of governments (or at least the governments of Western liberal democracies) as being staffed largely by public-spirited technocrats who design and administer laws with an eye toward the overall public good. J.S. Mill famously contrasts bureaucracy with democracy, holding that the virtues of the former could be used to counteract the vicissitudes of the latter. This view held sway, particularly among liberals (broadly understood), until the middle of the 20th C, when economists first began positing that bureaucrats, like everyone else, are subject to incentives.

And so it was that the public choice school began paying real attention to things like rent seeking. That is, we began asking, “What happens when we combine capitalism, a regulatory system, and representative democracy?” The answer: we get businesses seeking to manipulate the regulatory structure by playing on the various incentives of politicians (by, say, offering money for reelection) and of bureaucrats (by, say, offering lucrative post-government gigs to the people responsible for crafting and/or enforcing regulations).

In short, public choice theorists study the implied spaces of our regulatory-capitalist-democratic structure.

And, on an only slightly-related note, apropos the recent Crooked Timber discussion about the relative standing of philosophy and economics within the humanities, I wonder whether public choice economists would be more or less welcomed by other humanists had they chosen to call themselves “scholars of the implied spaces”?

Questions of Normativity

Over at Catallarchy, I made what I thought was pretty much just a passing observation on a throwaway comment to a post on diminishing marginal utility. Oddly enough, that comment sparked a spirited (if IMHO hugely misinformed) defense from Constant, a regular Catallarchy commenter. The issue has to do with normativity. Constant mentioned that he finds less value in normative uses of economics, citing “X is efficient” or “X is Pareto optimal” as his examples.

I responded that

Technically, “efficient” and “pareto optimal” are descriptive and not normative terms. It’s often the case that people make statements like “X is efficient” with the suppressed premise “Efficient is good” or “I ought to act efficiently” but the normative force lies in the suppressed premises. Efficiency and Pareto optimality (is that a word?) are purely descriptive terms with set definitions.

I’m pretty sure that it’s actually impossible to use economics to make moral claims, just as it’s impossible to use physics to make moral claims. One can make moral claims by using normative statements in conjunction with scientific ones, but, as Scott argues so frequently, you can’t get an ought from an is. At least not from those is-es (and I know that one’s not a word).

There follows a long exchange that really isn’t all that worth reading as it mostly consists of my making the same point again and again and Constant responding by missing that point again and again. Plus some snark. Finally, though, I received this response:

Joe, a statement can be used to do something other than what it literally states. For example, I can say, “you have gained weight”, and use that statement to make someone feel bad.

Let me start by saying that, yes, undoubtedly this is true. Let me also say that it leaves out a whole host of other issues. Like the fact that when I say this sort of thing in casual conversation, I’m actually offering all sorts of nonverbal cues as to my intentions. Technically, what I am doing in that sort of case is making an argument, one in which I suppress a number of premises, premises that are nonetheless clear from the context. If that were all that Constant meant to be saying, I’d completely agree with him. Of course, I’d completely agree with him largely because that was exactly the point that I was making in the first place, namely, that descriptive statements can be used in making normative arguments, but the descriptive claim is not itself a normative claim.

Given, however, that Constant seems to want to deny that point, the alternative interpretation would be something like the following:

  1. X is Pareto optimal is a descriptive claim about X.
  2. A claim that is literally a descriptive claim can be used to do things other than describe things.
  3. One of the other things that I can use a descriptive claim to do is to make a normative statement.
  4. Those normative uses do not require an additional, supporting normative claim.
  5. Thus, even though “X is Pareto optimal” is a descriptive claim, it can be used normatively.

If this really is the argument, though, then I’m not at all sure how it’s supposed to get around the is/ought problem. Indeed, this just is the is/ought problem. Hume’s whole point is that descriptive claims cannot be used normatively, at least not without a normative claim to go along with them.

Now I’m sympathetic to the argument that the naturalistic fallacy isn’t really a fallacy, but I’m pretty unlikely to go along with the idea that Pareto optimality is the sort of brute descriptive claim that has normative value. “X is pleasurable” has, I think, some normative pull to it. There’s something strange about asking, “But why should I value pleasure?” To ask that question, it seems to me, is just to misunderstand what we mean by the word “pleasure.” OTOH, I can ask of the term “Pareto efficiency,” “But why should I value Pareto efficiency?” Even if I fully understand the term, the question makes sense in a way that the pleasure question doesn’t.

What Constant seems to misunderstand is that a descriptive statement might require evaluative elements and still be a descriptive statement. To take a similar example, I might very well claim that “Y is utility maximizing.” That is just a factual claim about Y; it can be true or false, and we could (given time and the proper conditions) determine the truth value of that claim. That doesn’t entail that there are no normative elements involved in the term “utility maximizing.” When we unpack the term itself, we find that it has lots of normative elements: we have to decide what we mean by “utility,” whose utility gets measured, in what units we measure utility, etc. Lots of these are normative questions with normative answers.

None of that changes the fact, though, that “Y is utility maximizing” is purely descriptive. The statement says absolutely nothing about what it is that we ought to do, about what states of affairs should obtain, about what things we ought to value. It is descriptive. You and I can be in perfect agreement that Y is utility maximizing and still disagree about whether that gives us any reason for acting. By definition, then, “Y is utility maximizing” is a descriptive claim. Even though determining that Y is in fact utility maximizing required us to make other normative claims.

Let’s put the point in simpler terms. Settling on what “Pareto optimal” should mean in the first place may require making normative judgments. But given that “Pareto optimal” now has an accepted definition, determining whether “X is Pareto optimal” is a purely descriptive matter.
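Just how mechanical that determination is can be shown with a quick sketch in Python (the allocations and utility numbers here are invented for illustration): given each person’s utility under each feasible allocation, checking whether an allocation is Pareto optimal is verification, not valuation.

```python
def pareto_dominates(a, b):
    """True if allocation a makes at least one person better off than b
    and no one worse off. Allocations are tuples of per-person utilities."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def is_pareto_optimal(allocation, feasible):
    """A purely descriptive check: no feasible allocation Pareto-dominates this one."""
    return not any(pareto_dominates(other, allocation) for other in feasible)

# Invented utility profiles for three allocations between two people.
feasible = [(3, 5), (4, 5), (2, 9)]

for alloc in feasible:
    print(alloc, is_pareto_optimal(alloc, feasible))
```

Note what the check leaves open: both (4, 5) and (2, 9) come out Pareto optimal, and nothing in the code tells us which one we ought to choose. That further step needs a normative premise.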

Interpersonal Comparisons of Utility

At Catallarchy, there’s a rousing, extended (several days now, which is extended in blogosphere-time) discussion of interpersonal comparisons of utility. Several of the regulars there have weighed in on the issue, and there is some disagreement among the Catallarchists themselves regarding whether or not it’s possible to make interpersonal comparisons of utility. (See here, here, here, here, and here. Keep checking back, too, as Patri Friedman has promised a post on the subject as well.) It will probably come as no surprise to anyone who reads this blog regularly to hear that I do in fact think that it’s possible to make interpersonal comparisons of utility. It’d be rather hard to be a utilitarian otherwise. For the record, here’s the nutshell version of the argument.

For starters, different activities bring different amounts of happiness to different people. That’s obvious enough. Also fairly obvious is that utility has a diminishing marginal value. What that means is that for some given activity, X units of that activity will bring Y pleasure. Adding additional units of the activity will not, however, deliver a steady increase in the amount of pleasure. In other words, once X is reached, then X+1 units of activity will bring about Y+Z pleasure, but X+2 units of the activity will bring about Y+Z+A, where A is less than Z. To put the point in layman’s terms, a shot of bourbon will provide a certain amount of pleasure. The second shot of bourbon probably will too. But the 7th shot brings less pleasure than did the one before. Keep going and they will stop being pleasurable at all. We can also translate this sort of claim into talk of money. Here the idea is that my 1st dollar has more value to me than does my 100th, which in turn has more value to me than does my 1,000,000th. This is all just basic economics and isn’t really in dispute at all.
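One common (though by no means mandatory) textbook way to model this is with a logarithmic utility function. A quick Python sketch makes the shrinking increments visible:

```python
import math

def utility(wealth):
    # Logarithmic utility: one standard model of diminishing marginal returns.
    # Any concave function would make the same qualitative point.
    return math.log(wealth)

# Marginal utility of one extra dollar at different wealth levels.
for w in (1, 100, 1_000_000):
    marginal = utility(w + 1) - utility(w)
    print(f"dollar #{w + 1:>9,}: marginal utility {marginal:.8f}")
```

Each extra dollar still adds utility, but the millionth-and-first adds vastly less than the second, which is all that diminishing marginal utility claims.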

Where utilitarians sometimes get into trouble is in assuming that one can straightforwardly apply diminishing marginal utility across persons. While it is certainly true that I get more pleasure from my first than from my millionth dollar, it’s not at all certain that I will get more pleasure from my first dollar than you get from your millionth. Your utility function and mine are different and while we can expect some rough similarities, there are just no guarantees that the two functions will be identical to one another. Indeed, there is good reason to expect that they are not identical at all. After all, our preferences are not identical. So if, for example, you think a quality evening would mean a six-pack of Bud Light (hey Jimmy), while I think that a quality evening involves a 1982 Chateau Lafite-Rothschild, then your utility curve at $4.97 and mine at $842.72 (Sotheby’s auction price) will just about match up. The numbers will map far more closely if I prefer a six-pack of Harp to your Bud Light.

Strictly speaking, then, we cannot apply the principle of diminishing marginal utility across persons. But there is a danger in pushing this point too far. Brandon’s post at Catallarchy makes this point rather nicely. The comments and Brian’s response rather nicely illustrate Brandon’s claims. (Couldn’t resist.) Brandon grants the point that utilities cannot be compared with complete certainty, but then points out that

in some cases, particularly those in which proponents of redistribution are most keenly interested, you can guess with near certainty which of two people will value a particular resource more. In virtually all cases, taking $1,000 from a billionaire and giving it to a starving beggar will help the beggar more than it hurts the billionaire. No, it’s not true 100% of the time, but you don’t need complete certainty; if you’re right 95% of the time, that’s good enough for government work.

This strikes me as being exactly right. Brian (and others) object that Brandon and Jonathan and I all mistakenly see utility as being somehow objective when in reality, utility is not at all objective. Brian quotes approvingly an analogy from Rothbard, who argues that utility is purely a subjective and not an objective experience at all (and, of course, since it’s not objective, it can’t be compared). Here’s Rothbard (via Brian):

A favorite rebuttal is that subjective states have been measured; thus, the old, unscientific subjective feeling of heat has given way to the objective science of thermometry. But this rebuttal is erroneous; thermometry does not measure the intensive subjective feelings themselves. It assumes an approximate correlation between the intensive property and an objective extensive event—such as the physical expansion of gas or mercury. And thermometry can certainly lay no claim to precise measurement of subjective states: we all know that some people, for various reasons, feel warmer or colder at different times even if the external temperature remains the same. Certainly no correlation whatever can be found for demonstrated preference scales in relation to physical lengths. For preferences have no direct physical basis, as do feelings of heat. (Brian’s emphasis)

I’m sure that at least a few of you can anticipate my response here: this is a great argument if you’re a spooky Cartesian dualist who thinks that minds (and hence qualia) cannot be reduced to an objective, purely physical phenomenon. I can’t think of any good reasons for being a spooky Cartesian dualist, though, nor can I see any reasons whatsoever for privileging subjective mental phenomena over objective brain states. It’s bad neuroscience. It’s also a bit surprising that Catallarchists, who so pride themselves on their scientific rationalism when it comes to applying economics to politics, would then turn around and ground their economics on the mumbo-jumbo of psychology rather than the hard facts of brain chemistry. The simple fact is that what I express (very roughly and imprecisely) as a subjective preference is itself an approximation of the–very empirical–brain state that I’m currently experiencing. Those are objective and can be measured. That you express a stronger preference for A just means that you have more of whatever brain state leads to preferences-for-A than I have. Unless you want to posit that your brain and mine are radically different in the way that they operate, then we can compare the two things. What you report subjectively (and hence fuzzily and imprecisely) would seem to have exactly zero relevance to the way the world actually is.

So yes, I do think that utility can be compared. In terms of utils, even. The problem is not an in principle problem. Rather, the problem is an in practice one. We don’t yet know how to do this with 100% certainty. That doesn’t mean that we can’t do it at all. Our fuzzy intuitions that, as Brandon says, get us to 95% or so are generally pretty good. In fact, it’d be just plain silly to deny that we can and do accurately gauge how much happiness we’ll get from some amount of money (or time) versus how much happiness someone else will get from the same amount. That’s why I buy people presents. It’s why I give to charity. It’s why I give up my afternoon to talk to my friend when she’s having a bad day. (Though one of the commenters, Constant, simply denies that we do even this much. He argues the psychological egoist line–that I do those things because they work out well on my own utility function. But arguments for psychological egoism are so laughably bad that it hardly seems worth the time to rebut them. I’d suggest an introduction to moral philosophy course. We’ll do it in about 20 min.)

One other point…sort of unrelated, but at least on the general topic. Brian accuses me at one point of arguing in bad faith, pointing out that even if my claims were true, the fact that interpersonal comparisons of utility are not 100% accurate leaves me with the following dilemma:

The bad faith argumentation comes when redistributionists do start to try and defend their policy on utility. I say this because of the following: if it were such that one could show that the billionaire’s utility loss is greater than the utility gain of the starving man, the redistributionist would have to make two choices- either the redistributionist agrees to let the starving man starve, and therefore reveals that they don’t actually care about the starving/less well off at all (and thus their program is sold to the public on a lie), or else they say “take it from him anyway”, in which case the whole exercise in ‘utility comparison’ is moot and a sham. All the hand-waving in the world about how ‘this will never happen’ does not change the fundamental problem posed by the extreme case. I suspect that Joe would not let the starving man starve, even in the presence of a ‘utility monster’, and thus I think his arguments in favor of the IUC are in bad faith, since they are not the actual justification for the taking. (no offense)

I’m not offended here. That doesn’t make the argument any less bad, though. No offense. 😉 The problem here is that Brian has posited a false dilemma. These aren’t the only two options available to a utilitarian who defends redistribution. Admittedly, if I were an act-utilitarian I’d have precisely this problem. But I’m not an act-utilitarian and I wouldn’t defend redistribution on act-utilitarian grounds. Indeed, a policy of redistribution is just that–a policy. That makes it a rule-utilitarian principle, almost by definition. Here’s a definition of rule-utilitarianism (one of the better ones, IMHO) from Brad Hooker (those of you in my Moral Theory course this fall will come to know this definition well, as we’ll be reading Brad’s book through):

An act is wrong if and only if it is forbidden by a code of rules whose internalization by the overwhelming majority of everyone everywhere in each generation has maximum expected value in terms of well-being.[1]

Now, deciding whether or not we want a policy of redistribution on rule-utilitarian grounds will require considering a lot of possible consequences. But to the extent that we might decide on a policy of redistribution, we will do so not because we think that every single case of redistribution will be utility-maximizing. Rather, we’ll do so because most cases are utility-maximizing. And we’ll redistribute even in cases where that isn’t true because having the policy in place–having that particular set of rules–is itself optimific. Either way, the alternatives that Brian points to do not exhaust the set of possible utilitarian responses. I need neither give up on the claim that I’m interested in helping the poor nor argue in bad faith.

[1] Brad Hooker. Ideal Code, Real World (Oxford: Clarendon Press, 2000), p. 32.