Any Realistic Application of Utility is Intractable
There's a sort of periodic trend where every year or so I end up reading articles about utilitarianism from skeptics. I'm not a utilitarian -- my moral philosophy would be hard to characterize beyond "try to be humble and reduce suffering" (let's go with "quasi-Buddhist", or maybe most aligned with Rawls; don't get me started on how this interacts with a lack of belief in free will) -- but naturally, as a programmer, there's a certain appeal to the general idea of a computable ethical system (indeed, I suspect that's why there's such overlap between nerds and utilitarianism). The idea that we could start from a simple set of moral axioms and from them construct a moral calculus is appealing, even though reality rarely provides simple choices. Such a calculus need not be utilitarian (Buddhism certainly aligns more with "reducing suffering", e.g. negative utilitarianism, though it has less of a hard-nosed vibe -- Middle Way and all that; you can imagine all sorts of additions and constraints to align the algebra with your intuitions, and indeed my main beef with hard-core utilitarianism is the unwillingness to add a few more axioms to the framework to better align the theory), but utilitarianism is by far the most common system I see referred to.
All that said, there's a recurrent straw-man theme I keep seeing in non-academic arguments against utility theory. The latest I read was from Sam Kriss, in his post Against Truth. In it, Sam outlines a scenario where two innocent people are pitted against each other in gladiatorial combat, the argument being that while in isolation we'd consider this bad, if enough people liked watching it, utility theory would say that not only should we condone it, we'd be compelled to further it. While I enjoy Sam's writing most of the time, and I appreciate his sense of humor in this one, his standard utility-monster picture rings hollow.
Don't get me wrong: there are people who buy into this hard-line idea of utility, and embrace things like the repugnant conclusion as valid outcomes rather than as faults in the underlying theory. Nonetheless, I always feel compelled to defend utilitarianism against these arguments, because inevitably they portray a shallow "first-order" computation of utility.
But ironically, in trying to make a stronger case for utility, I always end up destroying the very "computability" that makes it such an attractive system to begin with. Let me show you what I mean.
The standard place to look for a richer picture is indirect effects (I'm not claiming this indirect analysis is novel, just that most critiques tend to ignore it). Just as we can't characterize a nuclear reaction by a single atomic fission, we can't characterize an ethical choice by its direct outcomes alone. So in our gladiatorial example, we'd consider all of the downstream outcomes, presumably weighted by some combination of likelihood and frequency:
- What happens to the members of the audience beyond their initial joy?
- Do they probabilistically become depraved murderers?
- Or maybe just assholes, in which case, what's the impact on the lives around them?
- Do they become addicted to the "entertainment", requiring more and more events?
We can imagine assigning values and probabilities to each of these conditions, maybe breaking out MCMC to compute some stationary values. But of course we're not limited to just our audience and what they do. We'd also want to consider:
- What about what they might have done, had they not turned into gladiator addicts (Tiktok comes to mind)?
- What happens to their children? Or the children they would have had, had they not been exposed?
- And their grandchildren, great-grandchildren, etc. etc.
- etc.
Intuitively, as we consider more of these indirect effects, we'd start to reweight our decisions. Maybe our combat gives us a quick boost of joy, but the longer-term effect is deeply negative, and we'd be obliged to intervene to stop it. Of course, it could go the other direction (though history would seem to suggest otherwise: we're not littered with examples of death-by-combat civilizations that also happen to be joyful utopias). There's a sense that if we did a better job of calculating our overall utility, we'd get a less "absurd" answer, or at least a more nuanced one. Maybe we'd introduce enough ambiguity into our reasoning to give us reason to pause.
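To make that concrete, here's a minimal sketch (Python, with entirely made-up numbers) of what "weighting by likelihood" might look like for just the first layer of indirect effects. The point isn't the particular weights -- they're invented -- it's that the sign of the answer can flip once indirect effects are priced in.

```python
# Toy sketch only: every number below is invented, purely to show the shape
# of the calculation -- not to argue for any particular weighting.

# First-order effect: the audience's enjoyment of the spectacle.
direct_utility = 50.0

# Hypothetical first-layer indirect effects, as (probability, utility) pairs.
indirect_effects = [
    (0.05, -500.0),  # some fraction become violent themselves
    (0.30, -100.0),  # many become crueller to the people around them
    (0.40,  -80.0),  # addiction to the spectacle, demanding ever more events
    (0.20,   10.0),  # some are harmlessly entertained and nothing changes
]

expected_indirect = sum(p * u for p, u in indirect_effects)

print(f"first-order only:      {direct_utility:+.1f}")                       # +50.0
print(f"with indirect effects: {direct_utility + expected_indirect:+.1f}")   # -35.0
```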
The problem is that in adding these layers of richness to our calculation, we've also made it intractable, if not incalculable. It's easy to imagine summing up the value from millions of homo economicus, but as we introduce more and more levels of indirection, it gets exponentially harder. And it's not just the calculation: merely assigning a probability, much less a moral value, to something five generations down the line becomes nigh-impossible (except maybe recursively, but unlike games, ethics doesn't really have a useful "terminal" condition, nor is the end state particularly interesting from an ethical standpoint, so this just exacerbates the problem).
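A back-of-the-envelope for the blowup (the branching factor is, again, invented): if every outcome fans out into even a handful of downstream possibilities, the number of distinct futures we'd need to assign a probability and a moral value to grows exponentially with how far ahead we look.

```python
# Toy arithmetic: assume each outcome branches into 5 downstream possibilities
# (a wildly optimistic underestimate for anything involving real people).
branching = 5
for depth in range(1, 11):
    print(f"depth {depth:2d}: {branching ** depth:>12,} futures to value")
# By depth 10 we're at ~9.8 million futures -- and "five generations down the
# line" involves far more than ten branch points, for far more than one person.
```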
Each additional layer of indirection yields more situations to assign value to, each with deeper and deeper ambiguity. If we "short-circuit" our calculation early, we might miss a subtle long-term effect. Consider a chess engine that only evaluated the current position but didn't do any search: it would be a crappy chess engine! Ceteris paribus, a moral value system that only considers the direct decision, without considering downstream effects, is a crappy system of morality. The various doctor/fat-man variants of the trolley problem become more navigable if you imagine having to explain your decision to cut someone up to their family.
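The chess analogy can be made literal with one more toy sketch: the same (invented) outcome tree evaluated at different search horizons. A depth-1 "evaluation only" pass and a deeper search can recommend opposite actions; nothing here is a real model, it just shows what "short-circuiting" costs.

```python
# Invented outcome tree: each node has an immediate utility and possible
# downstream consequences. Numbers are illustrative only.
tree = {
    "hold the games": {
        "value": 50,                      # immediate enjoyment
        "children": [
            {"value": -5,                 # mild coarsening of the audience
             "children": [{"value": -200, "children": []}]},  # long-term rot
        ],
    },
    "cancel the games": {"value": -10, "children": []},  # immediate disappointment
}

def total_value(node, horizon):
    """Sum utilities down the tree, cutting off the search at `horizon`."""
    if horizon == 0:
        return 0
    return node["value"] + sum(total_value(c, horizon - 1) for c in node["children"])

for horizon in (1, 3):
    best = max(tree, key=lambda action: total_value(tree[action], horizon))
    print(f"search horizon {horizon}: choose '{best}'")
# horizon 1 chooses "hold the games"; horizon 3 chooses "cancel the games".
```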
So here's my assertion: if we could completely and accurately consider all of these indirect effects, a utilitarian calculus would result in moral prescriptions that more closely align with our intuitions. The irony is that as we consider higher-order effects, our system becomes more consistent with those intuitions but simultaneously less calculable. This is sort of obvious, so naturally people have written papers to this effect, e.g. the somewhat pedantic "On the computational complexity of ethics: moral tractability for minds and machines".
What's the takeaway? How about: morality is complex, and we should endeavour to have humility when thinking about our moral foundations (okay, but also that there are way more options than just "utility" even if you want to play with moral algebra: what if you included a term for freedom? What if you removed the assumption that individuals are identical? What if you came up with a fancy system for balancing suffering? Etc., etc.). Any moral system that allows you to easily assign values to your actions is probably lying to you, whether it's clothed in the garb of utilitarianism or some prescriptive ethics. All philosophical endeavour is littered with caveats and ambiguity. If our system produces moral outcomes that seem stupid or immoral, let's have the humility to accept that there's just as likely a flaw in our foundation or calculation as in our intuition. But let's also be fair to the rationalists: whatever faults might creep into their reasoning, they at least try to think about ethics. Sure, it's annoying when people assume they have a superior system, but it's frustrating to see critiques that don't take into account that most people don't try to think about ethics at all. That seems much worse.