
autonomy utilitarianism

In my musings about preference utilitarianism and expanding what entities we care about, I wondered what the most inclusive definition of a being with moral worth would look like.

An entity engaged in a feedback loop with the rest of the world.

This also handily implies that the preferences of this entity are the eventual results of the feedback loop, if it were free to follow whatever course it may. This is akin to the idea of a revealed preference, but it’s not quite the same. Specifically, it cuts away any requirement for consciousness or whatnot.

Except that I’m not quite able to see something like evolution as having moral worth. Inside that idea of preferences is this “if it were free to follow whatever course it may” bit, and that’s what led me toward the idea of degree of autonomy, rather than preference satisfaction, as the measure of utility.

At the very least, autonomy utilitarianism provides a close approximation to the preference variety in the practical situation of having varying but overall weak evidence about the vast majority of beings’ preferences. A superintelligence could do significantly better than we can at determining such preferences, but I don’t think I want most of my preferences fulfilled for me, and, even with enormous computational ability, supporting people’s autonomy might (maybe?) be more effective for general preference satisfaction than direct intervention.

From a yet more practical standpoint,