
Tuesday, 8 December 2015

If You Really Cared About Effective Altruism You'd Give All Your Money To Feminist Frequency

I'm interested in the effective altruism movement. I think it is on the whole a good thing. Even if you disagree with the hardcore universalist utilitarian principles upon which mainstream EA is based, the general idea that we should evaluate charities based upon how much good they do seems to me to be solid. All charity may be good, but some charities are better than your local donkey sanctuary.

I don't personally give to effective altruist charities, or any charities at all, excluding the occasional pound for a remembrance poppy. As a long-term and current student, I am not exactly flush with cash. I think those that pledge for the future when they're not able to actually give are making a very empty gesture; see the number of students currently signed up here. In the future, however, I will consider it.

My interest in EA, however, is mainly in its structure and the space it occupies in the social marketplace. The current movement is heavily influenced by people who are, for want of a better word, total shitlords. The obsession with data, auditing your own beliefs, and treating human life and happiness as a simple variable in your spreadsheet to be maximised is not one held by someone with a below-average autism spectrum quotient. They enjoy stories about failed well-meaning projects, such as roundabout water pumps, and are determined to do better than the dumb do-gooders who currently direct the world's aid money. They also revel in bullet-biting on tough moral questions, in ways that make many people feel very uncomfortable. See, for example, EA demi-god Peter Singer's comments on infanticide and abortion. These people are my ingroup.

Many of these same people are also interested in other questions that are simply far beyond the current Overton window of public debate. The fringes of the rationalist blogosphere have long spent time discussing questions about AI risk, and many prominent EAs strongly endorse donating money to organisations devoted to assessing such risks. While fascinating, these questions are probably not the most important thing we need to focus on right now: they have few immediate benefits for the world and are always going to have little mainstream appeal.

Opposing these slightly odd types are more traditional liberals. These people are equally concerned with improving general global welfare, but hold slightly more conventional moral views, and are uncomfortable with the more esoteric questions their opposite numbers enjoy. The Less Wrongers see EA as an all-encompassing new moral paradigm, with all the somewhat cultish behaviour that implies. The liberals see EA as a method of refining their moral convictions and implementing them in a more efficient way.

I'm unsure if these two groups can continue to work together much longer; the movement is starting to reach the critical stage at which it is large enough to potentially split. The fault line here looks very similar to the one that killed the broad atheist/skeptic coalition about five years ago with Atheism Plus, and ultimately led to the eternal blood war of Gamergate. Nerds and herbivores are just similar enough to be natural enemies.

The break is a little more interesting than simple kulturkampf though, because of the incentive gradients involved. Let's say I'm an important figure in the effective altruism movement, perhaps working for Giving What We Can or a similar organisation. My noble goal is quite simply to improve human welfare through charitable giving. In order to do this, I can attempt to refine the effectiveness of existing charity, and I can try to recruit as many people as possible to donate as much as possible in this way.

But what if goals conflict? What if, by focusing rigorously on the most effective things, I start confronting questions that will piss off one side of the see-saw? Talk too much about AI risk and I'll alienate the normos; talk too little about it and I'll bore the spergs. This conflict seems to me to be unresolvable. Eventually I'm going to have to pick a side.

From a recruitment perspective, the growth area seems to me to be on the liberal side. There are simply far more of them. If we can convince every smart university graduate who had an Obama poster on their wall in '08 and tweets in solidarity with #blacklivesmatter and Caitlyn Jenner to give 10% of their income to charity, we could do a lot of good. On the other hand, there simply aren't that many movement rationalists out there. If we can convince the entire user base of Less Wrong to give 10% of their income to charity, we can do far less good. Our nerds, therefore, need to fall by the wayside: their obsession with AI risk looks nutty, and it puts off recruitment at the other end. The most effectively altruistic strategy for effective altruism is to focus on recruiting young liberals to the cause.

The incentives here, however, become somewhat strange. What if rigorously focusing on schistosomiasis, the disease targeted by two of the top four charities recommended by evaluators GiveWell, isn't optimal? Even if every marginal dollar spent on the Schistosomiasis Control Initiative (SCI) does the most good, if it harms my recruitment through being a bit gross and generally non-sexy, it has to go. Giving money to alright, but not fantastic, charities may be a good thing if it encourages more people to sign up. A million people giving 10% of their income to an 8/10 charity is much, much better than a thousand people giving to a 10/10 one in terms of global human welfare.
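To put rough numbers on that last claim, here's a back-of-the-envelope sketch in Python. Every figure in it (the average income, the donor counts, and the assumption that the 8/10 and 10/10 ratings translate linearly into welfare) is illustrative, not real data:

```python
# Back-of-the-envelope comparison of the two recruitment scenarios.
# All figures below are illustrative assumptions, not real data.

average_income = 30_000  # assumed average donor income, pounds/year
pledge_rate = 0.10       # the standard 10% pledge

# Scenario A: a mass movement giving to a merely good (8/10) charity.
donors_a, effectiveness_a = 1_000_000, 0.8

# Scenario B: a tiny movement giving to the very best (10/10) charity.
donors_b, effectiveness_b = 1_000, 1.0

welfare_a = donors_a * average_income * pledge_rate * effectiveness_a
welfare_b = donors_b * average_income * pledge_rate * effectiveness_b

print(f"Scenario A: {welfare_a:,.0f} effectiveness-weighted pounds per year")
print(f"Scenario B: {welfare_b:,.0f} effectiveness-weighted pounds per year")
print(f"Scenario A does {welfare_a / welfare_b:.0f}x as much good")
```

On these made-up numbers the mass movement does 800 times as much good, and the gap only closes if the best charity is a thousand times more effective than the merely good one, which is the whole point of the recruitment argument.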

To recruit even more people, we need to focus on building the reputation of effective altruism. The most effectively altruistic thing to do is to publicly abandon the founding principles of the movement. Lots of liberals seem to care about sexism in video games. This is an incredibly unimportant issue compared to global poverty, but if GWWC chuck a tenner a month at Feminist Frequency, and this generates enough good publicity to give us another hundred liberal 10% pledgers, maybe it's a good thing? Throwing off the connotations of brogrammers and nerds may be a really important component of mass growth of the cause. Maybe we should actually pledge money to donkey sanctuaries if it helps us get a few more donkey lovers on board, provided there's always a marginal increase in the schistosomiasis money?

This, of course, assumes that the people behind effective altruism are truly committed to its founding principles. My proposal may well be the best way of helping the most people, and if so they should seriously consider it, but it has other drawbacks. The biggest of these is that maybe there is more to the motivations behind EA than meets the eye, a point made by Sam Bowman when I ran this past him last night. Effective altruism sends out several social signals: it is a costly and therefore credible signal of virtue and wealth, and it is more generally associated with high IQ and status. The level of donation produces the former; the process and targets are associated with the latter. Distinguishing themselves from regular slapdash philanthropists may therefore be important for that latter function. EAs want to be better than the average charity donor, in terms of money, sure, but just as importantly in terms of information. Effective altruism may be a positional good; if it were just giving money away, many might lose interest. Giving money to Anita Sarkeesian is silly, I wouldn't want to be associated with it, and I doubt most EAs would either. It shows you're a low-status idiothole. What's more, the more average people you recruit into the movement, the noisier the signal is. Once everyone is an EA, it's no longer cool.

So we're left with a paradox. If EAs really cared about being effectively altruistic, they would do all they could to improve recruitment, which may include sending out the right signals by throwing money at Feminist Frequency. But if what they really care about is signalling that they are better than that, they won't.

1 comment:

  1. It's a topic of discussion already: should EA be trying to grow fast, or have slower growth that's more focused (if growth is happening at all)? I'm not sure I've seen a definitive answer, and even if there were one, there probably wouldn't be too much that people who work at EA orgs could do about it.

    Also, GWWC doesn't specify which charities you have to give to, just ones that you think could be effective, which means poverty, AI, animal rights, and meta areas can all be funded if people care about them.
