Pluralism in Effective Altruism
August 16th, 2015
ea |
Effective altruism is a question: how can I do the most good with the resources available to me? For many of us, when we think this through we end up deciding that developing-world public health interventions are the most important thing to work on. For others it's global poverty. Or animal suffering, climate change, pandemics, or any of a wide range of other causes. And some people, after carefully considering the research on various approaches and looking for the ones that—by their understanding of how people make progress in the world—are most likely to fulfill their values, conclude that the most pressing cause is reducing the risk from artificial intelligence.
Now, I think AI-risk advocates are wrong, but not in a "clearly and obviously, with no room for discussion" sort of way. People's different views here rest on things like whether bringing additional people into existence matters morally, how well people can make progress in the absence of good feedback loops, and the route scientific progress in AI seems likely to take. The people in EA who think we should prioritize AI risk have thoughtfully and sincerely concluded that it's the cause where their efforts will help people the most.
Even if you think only a few of the causes that people end up working on are useful, though, EA is still really beneficial. As people get into EA they tend to become much more interested in helping others. Many go from donating nothing to taking the Giving What We Can pledge to donate 10% of their income. Others devote their careers to where they think they can help the most. People end up finding that exploring altruism is interesting and satisfying, and they make it a much bigger part of their lives. Sometimes people get obnoxious and dismiss other people's causes, but mostly people become much more serious about making a difference.
So what's the counterfactual for people who get into EA and give to AI-risk? What would they be doing if they'd never heard of EA? They'd probably be like most people, and spend their money on themselves. Their AI-risk donations are mostly from increased giving, not taking money from other causes. Yes, some people switch their donations from Oxfam to MIRI, but these people are far outweighed by people who get into the EA movement and dramatically increase their donations to global poverty charities and other clearly good things.