autonomy utilitarianism

Utilitarianism, briefly, is an ethical framework under which one seeks to increase some measure of well-being, called utility, for all morally relevant beings. It is typically framed as a form of consequentialism, where the moral value of an act is derived from the state of the world resulting from that act.

One major faction of Utilitarianism considers utility to be how well the world satisfies each being’s preferences about it, and is hence known as Preference Utilitarianism. Hedonic Utilitarianism, on the other hand, is concerned with how much pleasure, usually broadly defined, each being experiences. Negative Utilitarianism instead seeks to minimize some harm, typically suffering. Each of these carries implicit constraints on what could possibly count as a morally relevant being; one cannot minimize the suffering of an entity that isn’t capable of suffering, after all. Though that raises the question: what is suffering, exactly? And what does it mean to have a preference?

And what about beings that can’t suffer? Or can’t feel pleasure or happiness? Or can’t really be said to have preferences in a way that’s recognizable to humans? Autonomy Utilitarianism tries to answer these questions: rather than focusing on the internal emotions or values of beings, it promotes their ability to interact with the world around them.

This isn’t to say that the internal state of a being is irrelevant. For one thing, interaction implies change, so the extent to which something’s internal state is fixed limits the extent to which it can interact with the world. This creates a many-branched continuum: rocks, whose internal state is almost static and whose means of acting on the world are very limited, have very little autonomy, while humans, whose internal state updates to reflect the outside world and who have many options for acting on it, have significantly more.

This also provides a possible answer to the question of what is morally relevant: just about everything, including fairly abstract things like evolution, corporations, and ecosystems. However, different things have different ranges of possible autonomy, and individuals acting to promote autonomy in the world have limited ability to enhance the autonomy of others. There is only so much I can do to a rock to give it more autonomy, and only so much that could theoretically be done to it before it stops being a rock. At the same time, nearly everything can be rendered almost as lacking in autonomy as a rock, through the process known in living organisms as death.

In practice

How can I enhance something’s ability to interact with the world?

  • A chunk of quartz: Pretty much nothing you are actually capable of doing.
  • A tree: Something, maybe?
  • Your pet: Quite a lot, really.
  • Another human: So, so much.

Humans, along with most animals, are significantly limited by the invisible constraints of their personal history in addition to the more mundane constraints of bodily capability. In long-term personal interactions we have the opportunity to learn what most supports the other person’s autonomy and, to the extent that we don’t neglect our own, tailor our interactions to supply it. From the perspective of building systems, communities, and a broader autonomy-supporting culture, however, we cannot cater to unknown individuals and must use more general strategies.

Provide tools and resources, not solutions.

One strategy is to provide tools or resources that enable people to build their own solutions to their own problems.

Decentralize critical resources.

A more restricted strategy, but very important where it applies, is the decentralization of critical resources. Control over a resource is a fundamental form of power, and it is very easy for humans to use that power as leverage without thinking.

Proliferate knowledge.

Knowledge is, in many ways, the critical resource, and there are countless historical examples of attempts to control its spread.

Accommodate multiple approaches and methods to any particular task.

Wheelchair ramps and stairs for a building, phone and email support for a company, broom and vacuum for a floor. There is no one true way to do (pretty much) anything, and different people will, for reasons of individual history, prefer one approach over another, so offer options.

Design in exit strategies.

If leaving a system, community, or tool is survivable, staying in it remains a choice; if leaving is ruinous, the system holds its participants by leverage rather than by preference.

In theory

There are various theoretical conundrums that arise in all Utilitarian frameworks, and while autonomy helps with some of them, it doesn’t handle others.

A rock, as we established above, doesn’t have much autonomy. Mallory takes it, crushes it up, and sells it to Alice, who puts it on a path to make it less muddy, thus increasing Alice’s ability to walk the path. In most cases this is a fairly innocuous operation with a pretty clear net increase in autonomy.

Suppose, though, that Alice doesn’t have as much autonomy as J Random Superintelligence could gain by using Alice’s atoms for some purpose unfathomable to us mere humans. So J buys Alice’s atoms from Mallory and does whatever inscrutable Superintelligence thing with them. Perfectly normal net increase in autonomy, right?

There are a couple of responses to that, but I don’t feel entirely comfortable with any of them:

  • There are presumably atoms around that aren’t bound up in as much autonomy as Alice’s, and J should use those instead. This works fine until all the other atoms are used up, but it also establishes a burden of seeking alternatives for arbitrarily small local dips in autonomy, which may in fact decrease overall autonomy due to lack of information.
  • We can switch to a negative utilitarianism, where any local decrease in autonomy is to be avoided at all costs, but that becomes inconsistent pretty quickly.
  • We can stop measuring autonomy as a continuum and instead treat it as a thing that some collections of atoms have and others don’t, scale each collection’s autonomy by its theoretical maximum, and maximize that. But this loses several nice properties and gets complicated quickly in other scenarios involving trillions of minimally morally relevant beings.
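To make that last option concrete, here is a back-of-the-envelope sketch; the notation (a_i, a_i^max, U) is mine and nothing about it is settled. Let a_i be the current autonomy of collection i and a_i^max the greatest autonomy that collection could have while still being what it is. One natural reading of “scale by its theoretical maximum” is then to maximize the normalized sum

    U = \sum_i \frac{a_i}{a_i^{\max}}

so that fully realizing a rock’s tiny range counts the same as fully realizing a human’s vast one. Which is exactly where the trillions-of-beings problem comes from: a swarm of minimally relevant collections, each near its own maximum, can swamp everything else, and we still have to decide which collections of atoms count as an i in the first place.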