Responding to Matthew (“Bentham’s Bulldog”) on Utilitarianism

Ben Burgis
Jul 24, 2022

I recently did a debate about utilitarianism and organ harvesting with “Bentham’s Bulldog” (hereafter referred to just as “Matthew”). You can read his opening statement, which he graciously provided online, here. He made a very long list of arguments and at a couple of points in the debate he listed off ones I hadn’t responded to — it would have been impossible in that format to respond to all of them, given that many of these arguments can be stated very briefly but considering their merits in any kind of depth takes a lot longer — but I wanted to consider them all for the benefit of anyone who watched the debate and was curious about what I thought about some of the ones I never got to, and also because I wanted to get clear in my own head about some of this. If you decide to read on, you should probably watch the debate first since I’m not sure how much sense this would make out of context. I feel bad about not explaining the background of some of these disagreements in what follows in the way that I normally would — when I write about philosophy these days I usually do it in a way that’s intended to be accessible to a broad audience of people who don’t necessarily have much background in the field, and I know anyone following my stuff is going to come in with the expectation that I’ll write that way — but, well, this got pretty absurdly long even without pausing to give any background or unpack key concepts before using them, so I’m just going to have to leave it the way it is. Sorry about that!

OK, let’s do it.

Matthew starts with some general arguments against the concept of moral rights.

Here’s the first one:

1 Everything that we think of as a right is reducible to utility considerations. For example, we think people have the right to life, which obviously makes people’s lives better. We think people have the right not to let other people enter their house, but we don’t think they have the right not to let other people look at their house. The only difference between shooting bullets at people, and shooting soundwaves (ie making noise) is one causes a lot of harm, and the other one does not. Additionally, if things that we currently don’t think of as rights began to maximize hedonic value to be enshrined as rights, we would think that they should be recognized as rights. For example, we don’t think it’s a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was a rights violation to look at people.

It’s generally true about moral theories that partisans of one moral theory will try to show how their theory can capture what’s plausible about another. How successful any given attempt is, of course, has to be evaluated case-by-case. So for instance I claim that much of what’s plausible about utilitarianism can be (better) explained by Rawls’s theory of justice. (Consequences are given their due, but we’re still treating individuals as the unit of moral evaluation in a way we’re just not when we throw all harms and all benefits onto the scales as if they were all being experienced by one great big hive mind.) Matthew, meanwhile, claims that the appearance of (non-reducible) moral rights can be better explained by the moral importance of good and bad consequences.

A straightforward moral rights intuition is that we have a right not to be shot that’s much more serious than our right not to have certain soundwaves impact our ears. It should be noted that everyone, on reflection, actually does believe that we have at least some rights against people producing soundwaves that will enter our ears without our consent — hence noise laws. Even if we limit ourselves to talking, if the talking in question is an ex-boyfriend who won’t stop following around his ex-girlfriend telling her about how much he loves her and how he won’t be able to live without her, how he might commit suicide if she doesn’t come back, etc., that’s very much something we all, on reflection, actually do think the ex-girlfriend has a right against. But Matthew’s general point stands. We all consider our rights against being shot much weightier than our rights against hearing noises we don’t want to hear. For one thing, we tend to think — not always but in a very broad range of normal situations — that the onus is on the person who objects to some noise to say “hey could you turn that down?” or “please leave me alone, I’m not interested in talking” while we don’t think the onus is usually on the person who’s dodging gunshots to say “please stop shooting at me.” And in the noise case there’s a broad range of situations where we think the person who doesn’t want to be exposed to some noise is being unreasonable and will just have to suck it up, while the range of cases where we’d say something parallel about bullets is, at the very least, much narrower.

So — what’s the difference?

Matthew thinks the only difference is that the consequences of being shot are worse than the consequences of someone talking to you. He further thinks that if it’s the only difference, we have no reason to believe in (non-reducible) moral rights. Both of these inferences are, I think, far too quick, and my contention is that neither really holds up to scrutiny.

The last sentence of my first book (the one on logic) is, “Slow the hell down.” So I’m going to take my own advice here and look at the two components of Matthew’s bullet/soundwave argument one at a time, slowly.

He’s certainly right that degree of harm is one morally relevant difference between soundwaves and bullets. Note that, as we’ll discuss in great depth in a moment, this is morally relevant on both a utilitarian framework and standard non-utilitarian assumptions about how rights work. We’ll get to that. But one thing at a time.

Now, depending on what kind of noise we’re talking about, the context in which you’re hearing it, etc., noises can cause all sorts of harms — irritation, certainly, but also lost sleep, headaches, or even hearing loss. But the effects of bullets entering your body are typically way worse! Fair enough. But is this the only difference between firing soundwaves and bullets without prior consent?

It’s really not. For example, one difference that’s relevant from a rights perspective is that a great many normal cases of Person A talking to Person B when Person B would rather they didn’t are cases where Person A holds a reasonable belief that there’s at least a non-negligible chance that Person B will welcome being talked to. (In fact, I suspect that the great majority of cases are like this.) Cases where Person A shoots bullets into Person B while holding the reasonable but mistaken belief that Person B would welcome being shot are…very rare.

Another relevant difference is that it’s often difficult or even impossible to secure permission to talk to someone without, well, talking to them. Shooting people isn’t like that. You don’t have to shoot a couple of bullets at someone first to see if they like it. You can just ask, “Would you by any chance be amenable to me shooting you?” and then you’re talking, not shooting.

A third relevant difference, especially if we’re talking about soundwaves in general and not just talking, is that we often feel that there are (at best) competing rights claims at play in soundwave situations in a way that typically isn’t true when people shoot each other. If John stays out until dawn drinking whiskey on Friday night and a few hours after he goes to sleep he’s woken up by the noise of his neighbor Jerry mowing his (Jerry’s) lawn, we tend to think that however little John might like it, he’ll just have to get over it if there’s nothing he can do on his end to block out the noise — because we think Jerry has a right to mow his own lawn. And notice that this seems correct even though the bad consequences for Jerry from his lawn not being mowed that day might well be far less than the bad consequences for John from being woken up so soon! For example, John might experience a pounding headache for hours and Jerry might simply be vaguely displeased about his grass not being completely even.

Let’s move on to the second half of Matthew’s claim about the bullet/soundwave distinction and pretend for the sake of argument that the only relevant difference in typical cases between non-consensually acting to produce soundwaves impacting someone’s ears and bullets ripping through their chest was the degree of harm. Even if this were true, it would tell us exactly nothing by itself about whether rights are reducible to utility considerations, given standard non-utilitarian assumptions about how rights work. Rights are very often rights against having harms inflicted on you by others. Remember, Matthew said:

Additionally, if things that we currently don’t think of as rights began to maximize hedonic value to be enshrined as rights, we would think that they should be recognized as rights. For example, we don’t think it’s a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was a rights violation to look at people.

The first sentence is just wrong. There are plenty of things that might maximize hedonic value that no one would normally think should be enshrined as rights. (Enshrining the right of people otherwise in danger of dying of kidney failure to a spare kidney would plausibly maximize hedonic value.) The second, though, is under-described but plausibly correct — because we think having “horrific” suffering inflicted on us is exactly the sort of thing against which we have a right.

We’ll return to that point in a moment, but first note that we have rights against some kinds of harms but not against others. You have a right against being killed, for example, but you don’t have a right against having your heart broken. And degree of harm is often very relevant to which things you can plausibly be said to have rights against.

Even when we’re specifically talking about bodily autonomy, that comes in degrees. It doesn’t seem crazy, for instance, to say that laws against abortion are a far more profound violation of bodily autonomy (and hence far less likely to be justifiable by weighing competing values) than vaccine mandates. It might be reasonable to (a) refuse to let a mental patient leave the hospital because you judge them to be a threat to themselves and/or others but (b) have moral reservations (even if we’re assuming a time and a place where it’s entirely up to the doctor) about giving that same patient electroshock therapy against their will even if it’s absolutely true that they would benefit from it.

The difference between a pure utilitarian framework where you assume all we’re doing is weighing harms against benefits and a rights framework in which we (non-reducibly) have a right against being harmed in certain ways is nicely demonstrated by thinking about the debate started by Peter Singer about whether it’s wrong not to donate excess income to famine relief charities. Whatever position you take on that issue, we’re all going to agree — I suppose some particularly ferocious bullet-biter might disagree? — that it would definitely be wrong to fly to a famine-stricken country and shoot some people yourself. Even if we could be sure that the people you shot would have starved to death — even if, as is plausible if you’re a good enough shot, they would have suffered more if they’d starved instead of being shot — you still can’t do that. In fact, I suspect that most people who Singer has convinced that it’s wrong not to donate excess income to famine relief charities would still think that an expert marksman (one good enough at delivering headshots that kill people quickly) flying to a famine-stricken country to shoot some famine victims would in fact be much, much, much worse than just not donating.

Another of Matthew’s examples is looking at a house vs. entering the house. We have a right against people entering our homes without our permission, but not against them looking at our homes. True! But why? Matthew thinks the difference is about harm but that doesn’t really seem to capture our intuitions about these cases — we don’t typically think people have a right against having their homes entered only when they’ll experience some sort of harm as a result. If Jim enjoys watching Jane sleep, for example, and he knows she’s a very heavy sleeper who won’t hear him slip in through her window and pull up a chair by her bed to watch — and he leaves long before she wakes up — this is surely the kind of thing Jane has a very strong right against. Part of the difference between that and looking at her house is about property rights (the kind even socialists believe in, the right to personal as opposed to productive property!) but there’s part of it that’s not about that and we can draw out that distinction nicely by imagining that he’s watching her sleep through high-powered binoculars from just off her property. Jane may have a legal right against this, and she certainly has a moral right against it, because it’s an invasion of privacy — even if she experiences no harm whatsoever as a result.

Before moving on to (2) in Matthew’s list of general arguments against rights, one quick note about methodology that the Jane/Jim case brings out nicely. If two moral views both have the same result in some instance — for example, there are many cases in which we normally think people have some right where that can be explained either in utilitarian terms or in terms of non-reducible rights — a useful way of deciding between them is to consider cases (which might have to be hypothetical ones) where the frameworks would diverge. In act-utilitarian terms, it’s a little tricky to explain what’s wrong with Jim’s actions. There are moves the act-utilitarian could make here, but all of them take some work. In terms of bog-standard rights assumptions, though, the wrongness is straightforward.

OK, here’s (2):

2 If we accept that rights are ethically significant then there’s a number of rights violated that could outweigh any amount of suffering. For example, suppose that there are 100 trillion aliens who will experience horrific torture, that gets slightly less unpleasant for every leg of a human that they grab, without the humans knowledge or consent, such that if they grab the leg of 100 million humans the aliens will experience no torture. If rights are significant, then the aliens grabbing the legs of the humans, in ways that harm no one, would be morally bad. The amount of rights violations would outweigh and not only be bad, but they would be the worst thing in the world. However, it doesn’t seem plausible that the aliens should have to experience being burned alive, when no humans even find out about what’s happening, much less are harmed. If rights matter, a world with enough rights violations, where everyone is happy all the time could be worse than a world where everyone is horrifically tortured all of the time but where there are no rights violations.

This one can be dispensed with much more easily. The conclusion just straightforwardly doesn’t follow from the premise. To see that, let’s strip the example down to a simpler and easier to follow version that (I think, see below) preserves the key point.

As Matthew himself reasonably pointed out to me in a different discussion, our intuitive grasp on situations gets hazier when we get up to truly absurdly large numbers, so let’s at least reduce both sides of the equation. 100 trillion is a million times more than 100 million. One human leg being non-consensually but harmlessly grabbed by an alien will mean a million aliens won’t experience the sensation of being burned alive. Matthew thinks the alien should grab away. I agree! In fact, it’s not clear that the human’s rights would be violated at all, considering that any remotely psychologically normal (or really even psychologically imaginable) human would retroactively consent to having their leg grabbed for unfathomably more trivial harm-prevention reasons. But even if we do assume that the one human is having his rights violated, that assumption just gets you to “any rights we might have against certain extremely trivial violations of personal space are non-absolute,” not “there are no non-reducible moral rights.”

To see why not, think about a more familiar and less exciting example — pushing a large man off a footbridge to save five people from a trolley. Here, the harm is of the same type (death) and five times as much of it will happen if you don’t push him, but most people’s intuition about this case is that it would be wrong to push anyway. That strongly suggests that most of us think there are indeed moral rights that can’t be explained away as heuristics for utility calculations.

Contrast that to a trolley case structurally more like Matthew’s aliens-in-agony scenario, although at a vastly smaller scale. As always, five people are on a trolley track. As in a familiar variant, there’s a lever that can be pulled to divert the train onto a secondary track. But in this version the second track is empty, so you aren’t killing anyone by pulling the lever. The complication is that there happens to be someone standing in front of you with his hand idly resting on the lever. His eyes are closed, he’s listening to loud music on his AirPods, and he has no idea what’s going on. By the time you got his attention, the five people would be dead. So you grab his hand and yank it.

If we were just considering this last example, you could end up drawing utilitarian conclusions, but the example just before nicely demonstrates why that would be a mistake.

A final thought about Matthew’s point 2 before moving on to 3 — rereading some of his formulations quoted above (particularly the one about the amount of rights violations involved in his original version of the example allegedly being something someone who believes in non-reducible rights would have to regard as the worst thing in the world), maybe my simplification of the example (100 trillion aliens spared the sensation of burning alive vs. 100 million humans having their legs grabbed, scaled down to a million aliens and one human) missed something important in Matthew’s example. Maybe his idea goes something like this:

“Sure, grabbing one leg to save a million aliens from unspeakable torment might make sense given standard non-utilitarian assumptions about rights. But remember, in each individual instance of leg-grabbing in the original example, the effect of that individual act will be to reduce the aliens’ collective suffering by one one-hundred-millionth — the aliens would barely notice — so when we consider each act individually, the benefit is too trivial to justify the rights violation.”

If so, I’d say two things. First, just as Matthew is correct to point out that intuitions can be confused when we’re talking about very large numbers, it’s similarly hard to gauge things when we’re dealing with very small fractions. In this case, I’m not sure there even is such a thing as a one one-hundred-millionth reduction of the sensation of being burned alive. I suspect that sensations don’t work like that. Perhaps, in some way that’s totally inaccessible to human minds, it does work like that for aliens. At any rate, I don’t really know what “they’ll experience one one-hundred-millionth less suffering than the sensation of being burned alive” means, and frankly neither do you, so asking me to have a moral intuition one way or the other about whether it’s significant enough to justify grabbing someone’s leg without their knowledge or consent is deeply unlikely to shed much light on my overall network of moral intuitions. Second, even if each individual leg-grabbing couldn’t be morally justified when we weigh rights against consequences for it in isolation, it just wouldn’t follow that all hundred million leg-grabbings were unjustified when considered in tandem. “This will be the impact of doing this a hundred million times, including this one” is morally relevant information.

3 A reductionist account is not especially counterintuitive and does not rob our understanding of or appreciation for rights. It can be analogized to the principle of innocence until proven guilty. The principle of innocent until proven guilty is not literally true. A person’s innocence until demonstration of guilt is a useful legal heuristic, yet a serial killer is guilty, even if their guilt has not been demonstrated.

It’s not counterintuitive until we start to think about the many examples (like the first of the two trolley cases above) where it has wildly counterintuitive consequences! The Innocent Until Proven Guilty analogy, I think, starts to look less helpful the more we poke at it. One thing IUPG is absolutely not is a heuristic or anything like one. It’s not a useful rule of thumb — it’s an unbending legal rule: someone who hasn’t been proven guilty (and who may or may not be actually innocent) has the legal status of an innocent person.

While we’re talking about IUPG, by the way, it’s worth pausing to ask whether pure utilitarianism can make sense of why it should be the legal standard. Think about the classic justification for it — Blackstone’s Ratio (it’s better for ten guilty persons to go free than one innocent person to be imprisoned). That makes perfect sense if we think there’s something like a categorical moral prohibition on the state punishing the innocent that’s so important it can outweigh the benefits of saving the victims of those ten guilty people. But it’s at the very least not obvious that the utility calculus will work out that way.
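To put that last worry in rough expected-utility terms (this is just my gloss, and the symbols are stand-ins that appear nowhere in Matthew’s argument): write $h_i$ for the harm of the state punishing one innocent person and $h_g$ for the expected harm done by one guilty person who goes free. A pure utility calculus endorses Blackstone’s Ratio only if

$$h_i > 10\,h_g$$

and nothing in utilitarianism guarantees that this inequality comes out true, whereas the categorical-prohibition reading doesn’t have to win that empirical bet.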

4 We generally think that it matters more to not violate rights than it does to prevent other rights violations, so one shouldn’t kill one innocent person to prevent two murders. If that’s the case, then if a malicious doctor poisons someone’s food, and then realizes the error of their ways, the doctor should try to prevent the person from eating the food, and having their rights violated, even if it’s at the expense of other people being poisoned in ways uncaused by the doctor. If the doctor has the ability to prevent one person from eating the food poisoned by them, or prevent five other people from consuming food poisoned by others, they should prevent the one person from eating the food poisoned by them, on this view. This seems deeply implausible. Similarly, this view entails that it’s more important for a person to eliminate one landmine that will kill a child set down by themself, rather than eliminating five landmines set down by other people — another unintuitive view.

No. None of this actually follows from belief in rights per se, or even from the view that it’s more important not to violate rights than to prevent rights violations (which itself is a substantive extra commitment on top of belief in rights). Here’s the trick: The attempt at drawing out a counterintuitive consequence relies on the rights-believer seeing “poisoning food and then not stopping it from being eaten” (or “setting a landmine and not eliminating it”) as a single action, but the intuition itself relies on thinking of them as two separate actions, so that the poisoning/landmine-setting is in the background of the decision, and now we’re thinking about a new decision about which poison/landmine to save who from and it seems arbitrary to save the victims of your own past self as opposed to someone else’s victims. But here’s the thing: Whichever you think is the right way to cut up what counts as the same action or a new one, you really do have to pick. If you consistently think of these as two separate actions, the rights-believer has no reason to believe the counterintuitive thing Matthew attributes to them. On this view, they’re not choosing between killing and letting die. They’ve committed attempted murder in the past but now they’re choosing who to let die and none of the options would constitute killing. On the other hand, if we somehow manage to truly feel this in our bones as one action (which I don’t know how to do, btw — it seems like two to me), I’m not so sure we’d have the intuition Matthew wants us to have. To see why not, think about a nearby question. Who would you judge more positively — someone who goes to a war zone, intentionally kills one child with a landmine (while simultaneously deciding to save four others from other people’s landmines), or someone who never travels to the war zone in the first place, spending the war engaged in normal peacetime activities, and thus neither commits nor foils a single war crime? “OK, but I saved more children than I killed” would not, I think, get you much moral approval from any ordinary human being.

5 We have lots of scientific evidence that judgments favoring rights are caused by emotion, while careful reasoning makes people more utilitarian. Paxton et al 2014 show that more careful reflection leads to being more utilitarian.

People with damaged vmpc’s (a brain region responsible for generating emotions) were more utilitarian (Koenigs et al 2007), proving that emotion is responsible for non-utilitarian judgements. The largest study on the topic by (Patil et al 2021) finds that better and more careful reasoning results in more utilitarian judgements across a wide range of studies.

The studies, I’m sure, are accurately reported here, but the inference from them is as wrong as wrong could be. I won’t go into this in too much depth here because this was a major theme of my first book (Give Them An Argument) but basically:

All moral judgments without exception are rooted in moral feelings. Moral reasoning, like any other kind of reasoning, is always reasoning from some premises, which can be supplied by factual information, moral intuition (i.e. emotional feelings of approval or disapproval) or some combination of the two, but moral intuition is always in the mix any time you’re validly deriving moral conclusions. There’s just no other place for your most basic premises to come from, and there couldn’t be. I don’t doubt that people whose initial emotional reactions (to thinking about good and bad consequences) lead them to endorse utilitarian moral principles, and who henceforth reason in very emotionless ways, end up sticking to utilitarianism more often than people who stay open to ordinary human moral intuitions about things like organ harvesting examples. For precisely similar reasons, I’d be pretty shocked if people with damaged VMPCs weren’t far more likely to be deontic libertarians than people with regular emotional reactions. (No clue if anyone’s done a study on that, but if you’re a researcher in relevant areas you can have the idea for free!)

6 Rights run into a problem based on the aims of a benevolent third party observer. Presumably a third party observer should hope that you do what is right. However, a third party observer, if given the choice between one person killing one other to prevent 5 indiscriminate murders and 5 indiscriminate murders, should obviously choose the world in which the one person does the murder to prevent 5.

There’s absolutely nothing obvious about that! Is a Benevolent Third-Party Observer benevolent because they want everyone to do the right thing, or benevolent because they want the best outcome? Unless you question-beggingly (in the context of an argument against rights) assume that the right thing is whatever leads to the best outcomes, those goals will be in tension, so if the BT-PO holds both we need to find out what principle they’re using to weigh the two goals or resolve conflicts before we can even begin to have the slightest idea what a BT-PO might say about a case where rights violations lead to good consequences.

Matthew continues the point:

An indiscriminate murder is at least as bad as a murder done to try to prevent 5 murders. 5 indiscriminate murders are worse than one indiscriminate murder, therefore, by the transitive property, a world with one murder to prevent 5 should be judged to be better than 5 indiscriminate murders. If world A should be preferred by a benevolent impartial observer to world B, then it is right to bring about the state of the world. All of the moral objections to B would count against world A being better. If despite those objections world A is preferred, then it is better to bring about world A. Therefore, one should murder one person to prevent 5 murders. This seems to contradict the notion of rights.

To put what I said earlier slightly differently:

Unless you beg the question against the rights-believer by assuming these can’t come apart, you have to pick whether the BT-PO wants whatever makes the world better or whatever’s morally preferable (or perhaps goes back and forth between preferring these depending on some further consideration?). If the BT-PO’s consistent principle is to prefer whatever makes the world better, then bringing them up has zero possible argumentative weight against belief in a non-consequentialist notion of rights — that there can be such conflicts and that rights should at least sometimes win is what anyone who says there are non-consequentialist rights is saying. If the BT-PO’s consistent principle is to prefer that everyone does the right thing, on the other hand, then it’s not clear what the source of counter-intuitiveness for the rights-believer is supposed to be here. And that’s still true if the BT-PO’s principle is to apply some further consideration to navigate conflicts between rights and good consequences.

7 We can imagine a case with a very large series of rings of moral people. The innermost circle has one person, the second innermost has five, third innermost has twenty-five, etc. Each person corresponds to 5 people in the outer circle. There are a total of 100 circles. Each person is given two options.

1 Kill one person

2 Give the five people in the circle outside of you corresponding to you the same options you were just given.

The people in the hundredth circle will be only given the first option if the buck doesn’t stop before reaching them.

The deontologist would have two options. First, they could stipulate that a moral person would choose option 2. However, if this is the case then a cluster of perfectly moral people would bring about 5⁹⁹ murders, when the alternative actions could have resulted in only one murder, because they’d keep passing the buck until the 100th circle. This seems like an extreme implication.

Secondly, they could stipulate that you should kill one person. However, the deontologist holds that you should not kill one person to prevent five people from being murdered. If this is true, then you certainly shouldn’t kill one person to give five perfectly moral people two options, one of which is killing one person. Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse. If you shouldn’t kill one person to prevent five murders you certainly shouldn’t kill one person to prevent five things that are judged to be at most as bad as murders by a perfectly moral being, who always chooses correctly.
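Before responding, a quick gloss on the arithmetic of the rings (mine, not Matthew’s): if each ring is five times as populous as the one inside it, ring k has 5^(k−1) people, so the hundredth ring has 5⁹⁹ members, each of whom would be forced to kill one person if everyone inside kept passing the buck. A minimal sketch of the count, just to make the numbers concrete:

```python
# Back-of-the-envelope check of the ring arithmetic in Matthew's example
# (my gloss on his setup, not part of his argument).
# Ring k, counting from 1, has 5**(k - 1) people.

NUM_RINGS = 100

ring_sizes = [5 ** (k - 1) for k in range(1, NUM_RINGS + 1)]

assert ring_sizes[0] == 1    # one person in the innermost ring
assert ring_sizes[1] == 5    # five people in the second ring
assert ring_sizes[2] == 25   # twenty-five in the third

# If everyone passes the buck, each member of the outermost ring is
# forced to kill one person: 5**99 killings in total.
print(ring_sizes[-1] == 5 ** 99)  # True
```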

While my instinct is to agree that whatever you can say about killing vs. letting die you can say about killing vs. not preventing killing, the first thing to note about this is that the “5⁹⁹ murders” once you get to the outermost circle aren’t actually murders at all, since by stipulation they’re involuntary. So (at least if we’re considering the morality of everyone in the first 99 circles refusing to murder) this reduces to a classic but extreme killing vs. letting die dilemma — it’s no different from stipulating that, say, the entire human race other than you and the large man on the bridge has been shrunken down to microscopic size by a supervillain who then put a container holding all however-many-billion people on the trolley track. Anti-utilitarian intuitions generally crumble in the face of sufficiently awe-inspiring numbers, and that’s what Matthew is relying on here. There’s an interesting question here about whether to take that as an instance of the general problem of humans having a hard time fitting their heads around scenarios involving sufficiently large numbers or whether to take it as a straightforward intuition in favor of a sort of “moral state of exception” whereby an imperative to prevent genocide-level amounts of death overrides the moral principles that would apply in other cases. (Which of these two is correct? Here’s the good news: You don’t really need to decide because nothing remotely like this will ever come up and both answers are compatible with anti-utilitarian intuitions about smaller-scale cases.) But as with the leg-grabbing aliens above, the apparent difference between this and a simple dilemma between killing one innocent and letting 5⁹⁹ innocents die is that, considered in isolation, standard anti-utilitarian moral intuitions would seem to recommend individual decisions that, in aggregate, would amount to permitting the deaths of 5⁹⁹ people. But (as Larry Temkin emphasizes in his response to “money pump” arguments for transitivity) it’s irrational not to reason about a series of decisions with aggregate effects…in aggregate. A reasonable principle is that the person in the first ring should do whatever all of the saints in all the rings would agree on if they had a chance to talk it all through together to decide on a collective course of action. If we assume that the “moral state of exception” view is correct, they would presumably all want the person in the first ring to go ahead and kill one person rather than pass the buck. (Just for fun, by the way, since that “everyone” would include the victim, in this scenario it would be more like assisted suicide than murder.) If it’s not correct, then I suppose they would all abstain and it would be the fault of whatever demon set this all up rather than any of his victims. As I mentioned in my first conversation with Matthew, I’m also very open to the possibility that this could just be a moral tragedy with no right answer — as an extremely convoluted scenario designed precisely to make moral principles that seem obviously correct in simpler cases difficult to apply, if anything’s an unanswerable moral tragedy, this strikes me as a good candidate on its face!
But no matter which of these three answers you go with (kill one person in the name of a moral state of exception, refuse to play the demon’s game, or just roll with “both answers are indefensibly wrong, it’s an unanswerable moral tragedy”), I have a hard time seeing how any of those three roads is supposed to lead me to abandoning normal rights claims. At absolute worst, normally applicable rights are overridden in this scenario. Even if that’s the case, that gives me no particular reason to think they’re overridden in, say, ordinary trolley or organ harvesting cases.

Oh, and it’s worth saying a brief word about this:

Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse.

At this point I’ve had multiple conversations with Matthew where this has come up and I still have no idea why he thinks this is true, never mind obviously true. It vaguely sounds like the sort of thing that could turn out to be true, but the same can be said of plenty of (mutually inconsistent) moral principles. When you’re thinking about a principle this abstract, it’s easy to nod along, but the right methodology is to test it out by applying it to cases — like this one!

8 Let’s start assuming one holds the following view

Deontological Bridge Principle: This view states that you shouldn’t push one person off a bridge to stop a trolley from killing five people.

This is obviously not morally different from

Deontological Switch Principle: You shouldn’t push a person off a bridge to cause them to fall on a button which would lift the five people to safety, but they would not be able to stop the trolley.

In both cases you’re pushing a person off a bridge to save five. Whether their body stops the train or pushes a button to save other people is not morally relevant.

Suppose additionally that one is in the Switch scenario. They’re deciding whether to make the decision and a genie appears to them and gives them the following choice. He’ll push the person off the bridge onto the button, but then freeze the passage of time in the external world so that the decision maker can have ten minutes to think about it. At the end of the ten minutes, they can either lift the one person who was originally on the bridge back up or they can let the five people be lifted up.

It seems reasonable to accept the Genie’s offer. If, at the end of ten minutes, they decide that they shouldn’t push the person, then they can just lift the person back up such that nothing actually changes in the external world. However, if they decide not to then they’ve just killed one to save five. This action is functionally identical to pushing the person in switch. Thus, accepting the genie’s offer is functionally identical to just giving them more time to deliberate.

It’s thus reasonable to suppose that they ought to accept the genie’s offer. However, at the end of the ten minutes they have two options. They can either lift up one person who they pushed before to prevent that person from being run over, or they do nothing and save five people. Obviously they should do nothing and save five people. But this is identical to the switch case, which is morally the same as bridge.

It looks to me like, once again, Matthew is trying to have it both ways here. Either the genie’s offer just delays the decision (which is what we need to assume for that breezy “it’s reasonable to accept the genie’s offer” to make sense) or accepting it is a morally significant decision in itself. This in turn reduces to the same issue noted above in the poison and landmine cases — if you do something and then deliberate about whether to reverse the effect, does “doing it and then deciding not to reverse it” count as one big action or does it separate into two actions? The “reasonable to accept the genie’s offer” claim makes sense if (and only if) you accept the “one big action” analysis, but the “obviously they should do nothing” claim only makes sense given the “two distinct actions” view. If it’s two distinct actions, accepting the genie’s offer was wrong (in the way that setting a landmine would be wrong even if you might later change your mind and save the child from it). If it’s one big action, then Matthew’s “obviously” claim doesn’t get off the ground.

We can consider a parallel case with the trolley problem. Suppose one is in the trolley problem and a genie offers them the option for them to flip the switch and then have ten minutes to deliberate on whether or not to flip it back. It seems obvious they should take the genie’s offer.

Again: Only obvious given the “one big action” view.

Well at the end of ten minutes they’re in a situation where they can flip the switch back, in which case the train will kill five people instead of one person, given that it’s already primed to hit one person. It seems obvious in this case that they shouldn’t flip the switch back.

Thus, deontology has to hold that taking an action and then reversing that action such that nothing in the external world is different from if they hadn’t taken and then reversed the action, is seriously morally wrong.

Again: This claim about what it’s “obvious” that the rights-believer has to endorse is only obvious given the “two distinct actions” view.

If flipping the switch is wrong, then it seems that flipping the switch to delay the decision ten minutes, but then not reversing the decision, is wrong. However, flipping the switch to delay the decision ten minutes and then not reversing the decision is not wrong. Therefore, flipping the switch is not wrong.

Maybe you hold that there’s some normative significance to flipping the switch and then flipping it back, making it so that you should refuse the genie’s offer. This runs into issues of its own. If it’s seriously morally wrong to flip the switch and then to flip it back, then flipping it an arbitrarily large number of times would be arbitrarily wrong. Thus, an indecisive person who froze time and then flipped the switch back and forth googolplex times, would have committed the single worst act in history by quite a wide margin. This seems deeply implausible.

This part relies on an assumption about how wrongness aggregates between actions that, at least in my experience, most non-utilitarian moral philosophers will emphatically reject. In fact, my impression at least is that the intuition that wrongness doesn’t aggregate in this way plays a key role in why so many of the people who’ve thought most about utilitarianism reject it.

Now, it could be that the non-utilitarian moral philosophers are wrong to reject aggregation. But even if so, once utilitarianism has been rejected and rights have been affirmed, it’s just a further question whether the wrongness of (initially attempted then reversed) rights violations can accumulate in this way.

Either way, deontology seems committed to the bizarre principle that taking an action and then undoing it can be very bad. This is quite unintuitive. If you undo an action, such that the action had no effect on anything because it was cancelled out, that can’t be very morally wrong. Much like writing can’t be bad if one hits the undo button and replaces it with good writing, it seems like actions that are annulled can’t be morally bad.

It’s worth just briefly registering that this is a pretty eccentric judgment. To adapt an old Judith Jarvis Thomson example, if I put poison in my wife’s coffee then felt an attack of remorse and dumped it out and replaced it with unpoisoned coffee before she drank it, my guess is that very few humans would disagree that my initial action was very wrong. Deep and abiding guilt would be morally appropriate despite my change of heart.

To put a little bow on this part, it’s worth pointing out that we could adapt this case very slightly and make it a straightforward moral luck example. Call the version of the case where I dump out the coffee and stealthily replace it with unpoisoned coffee “Coffee (Reversed)”. Now consider a second version — “Coffee (Unreversed)” — where I don’t have the attack of remorse in time because I’m distracted by the UPS delivery guy ringing the doorbell, and my wife is thus killed.

Intuitively, these cases are messy, but part of what makes the moral luck problem such a problem is that at least one of the significant intuitions at play in such cases is that the difference between Coffee (Reversed) and Coffee (Unreversed) isn’t the kind of difference that should make any difference to your moral evaluation of me.

It also runs afoul of another super intuitive principle, according to which if an act is bad, it’s good to undo that act. On deontological accounts, it can be bad to flip the switch, but also bad to unflip the switch. This is extremely counterintuitive.

If we read the “super intuitive principle” as “if an act is bad, all else being equal it’s good to undo such an act,” then I can understand why Matthew finds it so intuitive. If we read it as “if an act is bad, then it’s always good to undo it” or, to put a finer point on it, “if an act is bad, then the morally best decision on balance is always to undo it,” I’m a whole lot less sure. In fact, on that last reading, Matthew himself doesn’t agree with it, since he thinks that in the landmine and poison cases the morally best decision is to save the more numerous victims of other malefactors rather than to undo your own bad act.

9 (Huemer, 2009) gives another paradox for deontology which starts by laying out two principles (p. 2)

“Whether some behavior is morally permissible cannot depend upon whether that behavior constitutes a single action or more than one action.”

This is intuitive — how we classify the division between actions shouldn’t affect their moral significance.

We’ve already seen several times above that this principle is wrong, and we could just leave Huemer there, but there’s another point of interest coming up, so let’s take a look at the remainder of 9:

Second (p.3) “If it is wrong to do A, and it is wrong to do B given that one does A, then it is wrong to do both A and B.”

Now Huemer considers a case in which two people are being tortured, prisoner A and B. Mary can reduce the torture of A by increasing the torture of B, but only half as much. She can do the same thing for B. If she does both, this clearly would be good — everyone would be better off. However, on the deontologist account, both acts are wrong. Torturing one to prevent greater torture for another is morally wrong.

If it’s wrong to cause one unit of harm to prevent 2 units of harm to another, then an action which does this for two people, making everyone better off, would be morally wrong. However, this clearly wouldn’t be morally wrong.

The rights-believer — I keep translating Matthew’s references to “the deontologist” this way because these are all supposed to be general arguments against rights, and because you don’t have to be a pure deontologist to believe that considerations about rights are morally important — is only committed to all of this given the assumption we’ve already considered and rejected in the discussions of the leg-grabbing aliens and the circles of saints above: the assumption that there can’t be cases where individual actions are unjustifiable by some set of moral principles when considered separately but justifiable when considered together. “The overall effect will be to reduce everyone’s harm” is morally relevant information, and Temkin’s point about aggregate reasoning is a good one.

10 Suppose one was making a decision of whether to press a button. Pressing the button would have a 50% chance of saving someone, a 50% chance of killing someone, and would certainly give them five dollars. Most moral systems, including deontology in particular, would hold that one should not press the button.

However, (Mogenson and Macaskill, 2021) argue that this situation is analogous to nearly everything that happens in one’s daily life. Every time a person gets in a car they affect the distribution of future people, by changing very slightly the time in which lots of other people have sex.

Given that any act which changes the traffic by even a few milliseconds will affect which of the sperm out of any ejaculation will fertilize an egg, each time you drive a car you causally change the future people that will exist. Your actions are thus causally responsible for every action that will be taken by the new people you cause to exist, and no doubt some will violate rights in significant ways and others will have their rights violated in ways caused by you. Mogenson and Macaskill argue that consequentialism is the only way to account for why it’s not wrong take most mundane, banal actions, which change the distribution of future people, thus violating (and preventing) vast numbers of rights violations over the course of your life.

This is a fun one, but this seems like precisely the opposite of the right conclusion. This case, if we think about it a little harder, actually cuts pretty hard against utilitarianism (and consequentialism in general).

To see why, start by noticing that from a rights-based perspective — especially straight-up deontology! — pressing a button that will itself either save or kill someone (and give you $5 either way) is absolutely nothing like engaging in an ordinary action that might indirectly and unintentionally lead (along with many other factors) to someone coming into existence who will either kill someone or save somebody from being killed. The whole point of deontology is to put the moral focus on the character of actions rather than their consequences. If deontologists are right, there’s a moral galaxy of difference between acting to violate someone’s rights and acting in a way that leads to someone else violating someone’s rights (or, if we’re going to be precisely accurate about the case here, since what we’re talking about is bringing someone into existence who will then decide to violate someone else’s rights, “leads to someone else violating someone’s rights” should really be “is a necessary but not sufficient condition for someone else violating someone’s rights”). If, on the other hand, consequences are all that matter, it’s much harder to see a morally significant difference between unintentionally setting in motion a chain of events that ends with someone else making a decision to do X and just doing X! The button, in other words, is a good analogy for getting into your car if utilitarianism is right, but not if deontology is right.

Also note that if there’s a 50% chance that any given button-pushing will save someone and a 50% chance that it will kill someone, over the “sufficiently long run” appealed to by statisticians, it’ll all balance out and thus be utilitarian-ishly neutral — but there’s absolutely no guarantee that killings and savings in any given lifetime of metaphorical button-pushing will balance out! You might well happen to “kill” far more people than you “save.” If we assume that the character of your action is beside the point because consequences are all that matter, your spending a lifetime taking car trips and contributing to traffic patterns, etc., might well add up to some serious heavy-duty moral wrongness.
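To make that last point concrete, here’s a minimal simulation sketch (my own illustration; the 50/50 odds are just borrowed from Matthew’s button case, and nothing here comes from his or Mogenson and Macaskill’s argument). The share of “saves” hovers near one half, but there’s no guarantee the raw totals cancel out over any particular finite lifetime of metaphorical button presses:

```python
import random

# Each metaphorical button press saves one person with probability 0.5
# and kills one person with probability 0.5 (the odds from Matthew's
# button example, used purely for illustration).

def net_lives(presses: int, seed: int) -> int:
    rng = random.Random(seed)
    saved = sum(1 for _ in range(presses) if rng.random() < 0.5)
    killed = presses - saved
    return saved - killed  # positive = net saves, negative = net kills

for seed in range(5):
    print(f"lifetime {seed}: net {net_lives(100_000, seed):+d} lives")

# Typical runs land tens or hundreds of lives away from zero even though
# the save *rate* is very close to 1/2: the totals don't balance out in
# any given finite lifetime.
```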

11 The pareto principle, which says that if something is good for some and bad for no one then it is good, is widely accepted.

It’s widely accepted by economists as a criterion for when one outcome counts as an improvement over another. It’s not widely accepted among non-utilitarian moral philosophers as a standard for what constitutes a morally good action, for obvious reasons — it assumes (once read as a principle about how to morally evaluate actions) that only the consequences of actions are morally relevant!

It’s hard to deny that something which makes people better off and harms literally no one is morally good. However, from the Pareto principle, we can derive that organ harvesting is morally the same as the trolley problem.

This is a pet peeve and a little bit off-topic, but it drives me crazy. The trolley “problem” as originally formulated by Thomson (who coined the phrase “the trolley problem”) was precisely that, if we’re just looking at outcomes, pushing the large man (or, and this was actually Thomson’s preferred example for dramatizing the problem, harvesting a healthy patient’s organs to save five people who need transplants) is indistinguishable from pulling the lever…and yet the vast majority of people who share the intuition that lever-pulling is morally legitimate don’t have a parallel intuition about those other cases. The “problem” was supposed to be how to reconcile those two seemingly incompatible intuitive reactions. Anyway, let’s keep going.

Suppose one is in a scenario that’s a mix of the trolley problem and the organ harvesting case. There’s a train that will hit five people. You can flip the switch to redirect the train to kill one person. However, you can also kill the person and harvest their organs, which would cause the 5 people to be able to move out of the way. Those two actions seem equal, if we accept the Pareto principle. Both of them result in all six of the people being equally well off. If the organ harvesting action created any extra utility for anyone, it would be a Pareto improvement over the trolley situation.

This nicely demonstrates exactly why “the situation created if you do X is a Pareto improvement over the situation created if you do Y” doesn’t entail “doing X is no worse morally than doing Y” without hardcore consequentialist assumptions about how morality works. While it should be noted that many non-utilitarian philosophers bite the bullet on the first version of the trolley case and conclude (on the basis of their far stronger intuitive reaction to Thomson’s other cases) that pulling the lever in the first case is wrong, there are ways of consistently avoiding giving up either of the initial intuitions. (Whether any of these ways are fully convincing is, of course, super-duper controversial.) For example, one of the solutions to the Trolley Problem that Thomson herself briefly floats in one of her several papers about it is a Kantian one — that sending a train to a track where it unfortunately will kill the person there doesn’t involve reducing them to the status of a mere means to your end in the way that actually using their body weight to block the trolley (or to land on the button in Matthew’s Switch case) or harvesting their organs does. To see the distinction drawn in this solution (which is roughly Doctrine of Double Effect-ish), notice that if you turned out to be wrong in your assumption that the workman on the second track wouldn’t be able to get out of the way and he did in fact manage to scamper off the track before the trolley would have squashed him, that wouldn’t mess up your plan for saving the five — whereas if the large man survived the fall and rolled off the track, that would mess up your plan, because your plan involved using him as a mere means rather than setting something in motion which would have the foreseen but not intended side effect of killing him.

Now, maybe you find that convincing and maybe you don’t. But it doesn’t seem obviously wrong to me — and if it’s at all plausible, the fact that murdering the workman on the second track and harvesting his organs would be a Pareto improvement over diverting the train to the second track (thus causing his death) wouldn’t be sufficient to settle the question of whether the organ harvesting was wrong in a way that diverting the train wasn’t.

Here’s the conclusion of 11:

Premise 1 One should flip the switch in the trolley problem

Premise 2 Organ harvesting, in the scenario described above, plus giving a random child a candy bar is a pareto improvement over flipping the switch in the trolley problem

Premise 3 If action X is a pareto improvement over an action that should be taken, then action X should be taken

Therefore, organ harvesting plus giving a random child a candy bar is a action that should be taken

This is a very noisy version of what could be put in one sentence:

“If consequences are all that matter, saving the five through organ harvesting is no worse than saving them through pulling the lever, and doing the former plus doing things that cause other good consequences is better.”

But here’s the thing — that has no argumentative force whatsoever against deontologists and other non-utilitarians, since critics of utilitarianism are generally split between (a) people who think even pulling the lever is wrong, and (b) people who think pulling the lever might be defensible but Thomson’s other examples that are equivalent to pulling the lever in terms of consequences are still definitely wrong. It’s hard to see how a partisan of either position would or should be moved by this argument (which remember was in a list of arguments against any sort of belief in rights understood as real rights and not heuristics for utility calculations).

Finally, Matthew’s opening statement ends with a few more specific responses to the organ harvesting counterexample to utilitarianism.

First, there’s a way to explain our organ harvesting judgments away sociologically. Rightly as a society we have a strong aversion to killing. However, our aversion to death generally is far weaker. If it were as strong we would be rendered impotent, because people die constantly of natural causes.

This is the sort of point that might bother a hardcore moral realist who believes that some of our moral intuitions are somehow caused by an externally existing moral reality, while others are caused by other things and should thus be disregarded. But I just find that view of meta-ethics deeply implausible — I won’t run through all this here, but I’ll just say that above and beyond the usual ontological simplicity concerns about the idea of a separate moral realm external to our moral intuitions, I have epistemic and semantic concerns about this picture. How exactly are our intuitions making contact with this realm? What plausible semantic story could we tell about how our moral terms came to refer to elements of this underlying moral reality?

The sort of view I’m attracted to instead says, basically, that the project of moral reasoning is precisely to hammer our moral intuitions (or as many of them as possible) into a coherent picture so we can act on them. Where our moral intuitions come from is an interesting question but not really a morally relevant one. What we’re trying to figure out is which goals we care about, not the empirical backstory of how we came to care about them.

Second, we have good reason to say no to the question of whether doctors should kill one to save five. A society in which doctors violate the Hippocratic oath and kill one person to save five regularly would be a far worse world. People would be terrified to go into doctor’s offices for fear of being murdered. While this thought experiment generally proposes that the doctor will certainly not be caught and the killing will occur only once, the revulsion to very similar and more easily imagined cases explains our revulsion to killing one to save 5.

Not sure what the Hippocratic oath is supposed to have to do with anything — presumably, in a world with routine organ-harvesting, doctors would just take a different oath in the first place! But the point about going to the doctor’s office for checkups is a good one. To test whether that’s really the source of our revulsion, we should consider other kinds of organ-harvesting scenarios. For example, we could just make everyone register for random selection for organ harvesting the way we make boys reaching adulthood register for the Selective Service. There would be an orderly process to randomly pick winners, and the only doctors who had anything to do with it would be doctors employed by the state for this purpose — they would have nothing to do with the GPs you saw when you went in for a checkup, so we wouldn’t have the bad consequences of preventable diseases not being prevented. We’d still have a fair amount of fear, of course, but (especially if this system actually made organ failure vanishingly rare) I don’t know that it’s obvious a priori that, in the utility calculus, the level of fear generated would outweigh the good consequences of wiping out death from organ failure.

A further point on this:

The claim that our reaction to extremely distant hypothetical scenarios where organ harvesting was routine and widely known about somehow explains our reaction to far more grounded hypothetical scenarios where it was a one-off done in secret is…odd. What’s the epistemic story here? What’s the reason for believing that when people think they’re having an immediate intuitive reaction to the latter, they’re really subconsciously running through the far more fanciful hypothetical, which they’ve somehow mixed up with it, and thus forming a confused judgment about the more grounded case? I guess I just don’t buy this move at all.

Third, we can imagine several modifications of the case that make the conclusion less counterintuitive.

First, imagine that the six people in the hospital were family members, who you cared about equally. Surely we would intuitively want the doctor to bring about the death of one to save five. The only reason why we have the opposite intuition in the case where family is not involved is because our revulsion to killing can override other considerations when we feel no connection to the anonymous, faceless strangers whose deaths are caused by the doctor’s adherence to the principle that they oughtn’t murder people.

This one really floored me in the debate. I guess I could be wrong but my assumption would be that no more than one or two out of any one hundred million human beings — not one or two million out of a hundred million, but literally one or two — would be more friendly to murdering a member of their own family to carve them up for their organs than doing the same to a complete stranger.

A second objection to this counterexample comes from Savulescu (2013), who designs a scenario to avoid unreliable intuitions. In this scenario there’s a pandemic that affects every single person and makes people become unconscious. One in six people who become unconscious will wake up — the other 5/6ths won’t wake up. However, if the one sixth of people have their blood extracted and distributed, thus killing them, then the five will wake up and live a normal life. It seems in this case that it’s obviously worth extracting the blood to save 5/6ths of those affected, rather than only 1/6ths of those affected.

Similarly, if we imagine that 90% of the world needed organs, and we could harvest one person’s organs to save 9 others, it seems clear it would be better to wipe out 10% of people, rather than 90%.

This is just “moral state of emergency” stuff. All the comments about those intuitions made above apply here.

A fourth objection is that, upon reflection, it becomes clear that the action of the doctor wouldn’t be wrong. After all, in this case, there are four more lives saved by the organ harvesting. It seems quite clear that the lives of four people are fundamentally more important than the doctor not sullying themself.

That’s not a further objection. That’s just banging the table and insisting that the only moral principles that are relevant are consequentialist ones — which is, of course, precisely the issue in dispute. Also worth pausing here to note the relevant higher-order evidence. As far as I know, utilitarianism is a distinct minority position among professional philosophers who have ethics as their primary academic specialization (i.e. the people who are most likely to have done extensive reflection on this!).

Fifth, we would expect the correct view to diverge from our intuitions in a wide range of cases: the persistence of moral disagreement and the fact that throughout history we’ve gotten lots of things morally wrong show that the correct view would sometimes diverge from our moral intuitions. Thus, finding some case where they diverge from our intuitions is precisely zero evidence against utilitarianism, because we’d expect the correct view to be counterintuitive sometimes. However, when it’s counterintuitive, we’d expect careful reflection to make our intuitions become more in line with the correct moral view, which is the case, as I’ve argued here.

The comments on meta-ethics above are relevant here. I’ll just add three things. First, moral judgments and moral intuitions aren’t the same thing. An intuition is an immediate non-inferential judgment. Other kinds of judgments are indirectly and partially based on moral intuitions as well as morally relevant factual information and so on. One big problem with appealing to people having made moral judgments in the past that seem obviously crazy to us now as evidence that moral intuitions can steer us wrong is that we have way more access to what moral judgments people made in the past than to how much those judgments were informed by immediate intuitions that differed from ours (like, they would have had different feelings of immediate approval and disapproval about particular cases) and how much they were informed by, for example, wacky factual assumptions (e.g. “God approves of slavery and He knows what’s right more than I do” or “women are intellectually inferior to men and allowing them to determine their own destiny would likely lead to disaster”).

Second, the persistence of moral disagreement could just be evidence that not everyone has identical deep moral intuitions, or it could be evidence that some people are better than others at bringing their moral intuitions into reflective equilibrium, or (most likely!) some of each, without being evidence that some (but not other) intuitions are failing to make contact with the underlying moral reality.

Third, even if there is an underlying moral reality, moral intuitions are (however this works!) presumably our only means of investigating it. If you believe that, I don’t see how you can possibly say that the counterintuitive consequences of utilitarianism are “zero” evidence against utilitarianism. They’re some evidence. They could perhaps (“on reflection”) be outweighed by other intuitions. Whether that’s the case is…well…what the last ten thousand words have been about!

Sixth, if we use the veil of ignorance, and imagine ourselves not knowing which of the six people we were, we’d prefer saving five at the cost of one, because it would give us a 5/6ths, rather than a 1/6ths, chance of survival.

If this is correct, it shows that to achieve the sort of neutrality the veil of ignorance is supposed to give us, agents in the original position had better be ignorant of how likely they are to be the victim or beneficiary of any contemplated harm. Notice that without that layer of ignorance, the standard descriptions of the thought experiment aren’t actually true. “You don’t know whether you’ll be male or female, black or white, born into a rich family or a poor family, if there’s slavery you won’t know whether you’re a slave or a master,” etc. Some of these may be true, but some of them won’t be. Say you’re considering enslaving 1% of the population to serve the needs of the other 99%. If you’re behind the veil of ignorance but you know those proportions, and you form the belief that you won’t be a slave — and you’re right — does that not count as knowledge? You had accurate information from which you formed an inference that you could be 99% sure was correct! On at least a bunch of boringly normal analyses of what makes true belief knowledge, the person who (correctly) concludes from behind the veil of ignorance that they won’t be a slave, and thus endorses slavery out of self-interest, does know they won’t be a slave. That very much defeats the point of the device.

A final thought before leaving the veil of ignorance:

What if we came up with some impossibly contrived scenario whereby harvesting one little kid’s organs (instead of giving him a candy bar) would somehow save the lives of one hundred billion trillion people? As I’ve already indicated, I’m not entirely sure what I make of “moral state of exception” intuitions, but if you do take that idea seriously, here’s a way of cashing it out:

Rawlsianism is a theory of justice — although one that wisely separates justice from interpersonal morality, confining itself to the question of what just basic institutions would look like rather than going into the very different moral sphere of how a person should live their individual life. Plausibly, though, a virtuous person confronted with the choice between upholding and undermining the rules of a just social order should almost always uphold them. Perhaps, though, in really unfathomably extreme scenarios a virtuous person would prioritize utility over justice. Again: I’m not entirely sure that’s right, and the good news is that no one will ever have any particular reason to have to figure it out, since (unlike more grounded cases of conflicts between justice and utility) it’s just never ever going to come up.

…and that’s a wrap! I’m obviously deeply unpersuaded that any of these arguments actually give anyone much reason to reconsider the deep moral horror nearly everyone has when thinking about this consequence of utilitarianism, but there’s certainly enough here to keep it interesting.

Ben Burgis is a philosophy instructor at Georgia State University Perimeter College and the host of the Give Them An Argument podcast and YouTube channel.