Flavors of Utilitarianism

September 2nd, 2011
morality, utilitarianism
In utilitarianism you do whatever you think does the best job of making everyone happy [1]. There are many questions on which all utilitarians agree: Is it good to kick strangers for no reason? (No.) Should I help people who are suffering? (Yes.) Are there theoretical cases where major suffering for one person is better than minor suffering for a huge enough number of people? (Yes.) But what about: should we try to avert threats to the existence of humanity as a whole? Should we try to establish colonies? Should we even limit our population?

I see these as depending on two details of utilitarianism that people disagree on. The first is whether we're trying to maximize total or average happiness [2]. Consider a large population with medium average happiness against a small population with high average happiness, where the large population is large enough that even though individual happiness isn't that high, its total happiness is still higher than the small population's. Which would be the better population for humanity to be? Someone maximizing total happiness would choose the large one, while someone maximizing average happiness would choose the small one.
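
As a toy illustration of the two decision rules, here's a minimal sketch; the population sizes and happiness scores are invented:

```python
# Two hypothetical populations; all numbers are made up.
large = {"population": 10_000_000, "avg_happiness": 5}
small = {"population": 1_000, "avg_happiness": 9}

def total(pop):
    return pop["population"] * pop["avg_happiness"]

def average(pop):
    return pop["avg_happiness"]

# A total utilitarian prefers the larger product (50,000,000 > 9,000):
assert max([large, small], key=total) is large
# An average utilitarian prefers higher per-person happiness (9 > 5):
assert max([large, small], key=average) is small
```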

I believe total happiness is what we should be maximizing. Average happiness leads to unreasonable claims, like that it's better to have one really happy person than a billion people who are almost as happy. Some people claim that total happiness leads to the "repugnant conclusion": that we should keep increasing the population until there are huge numbers of people with very low but still positive happiness. I would argue that this isn't really that bad. When we say 'positive happiness' we mean happiness net of both suffering and joy, so someone with low positive happiness would still believe that, on balance, their life was worth living and be glad they got the chance to live it.

The two views also give different advice on population. Average utilitarianism says we should limit our population so as to have more resources per person and greater average happiness. Total utilitarianism says we should seek the population size that leads to the greatest total happiness: if we think we are at the point where additional children decrease global happiness by increasing competition for scarce resources, then we should work to limit overpopulation, but not otherwise.
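
To make the total-utilitarian population rule concrete, here's a minimal sketch assuming a made-up relationship where per-person happiness tracks the log of per-person resources:

```python
import math

RESOURCES = 1000.0  # fixed pool to divide; invented number

def avg_happiness(population):
    # Hypothetical: happiness is the log of per-person resources,
    # turning negative once each person gets less than one unit.
    return math.log(RESOURCES / population)

def total_happiness(population):
    return population * avg_happiness(population)

# Average happiness is maximized by the smallest possible population;
# total happiness peaks somewhere in between.
best = max(range(1, 2001), key=total_happiness)
print(best)  # 368 (~ RESOURCES / e): beyond this point extra people
             # lower the total, and only then would a total
             # utilitarian favor limiting population.
```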

The other question is whether most people are happy or unhappy: is total happiness positive or negative? If we believe total happiness is positive and likely to remain so, then it would make sense to fund asteroid tracking research [3] to try to prevent an asteroid collision that would kill everyone and remove the future potential for happiness. If we believe it's negative and expect it to stay that way, then we should spend our money on making sad people happier instead of trying to prevent human extinction. [4]
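
One way to see how the sign drives the decision is a back-of-the-envelope expected-value calculation; every number here is invented:

```python
# Invented inputs: a small extinction risk, the fraction our funding
# averts, and the expected total happiness of the future.
p_extinction = 1e-4
fraction_averted = 0.5
future_happiness = 1e12  # the sign of this is the whole question

value_of_prevention = p_extinction * fraction_averted * future_happiness
print(value_of_prevention)  # positive: prevention beats doing nothing

# Flip the sign of future_happiness and the value of prevention goes
# negative too, so the same money would do more good spent directly
# on making sad people happier.
```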

[1] "maximizes utility over all people"

[2] Wikipedia: average and total utilitarianism

[3] Assuming this is the most cost-effective existential risk to be trying to prevent.

[4] At the extreme, someone who believed total happiness was unavoidably negative should work to quickly and painlessly kill everyone; perhaps researching bioweapons would make sense. The main way this could fail, though, is by killing a lot of people but not everyone, which would dramatically increase suffering. Also, I disagree both (a) that happiness is net negative now and (b) that we should expect it to be in the future, so I think this would be a really bad idea.
