The Effective Altruism problem


The past decade has seen the rise of a new philosophy designed to make philanthropy more efficient. This week, we explore whether Effective Altruism is in fact a useful mindset for the sector, and how it might negatively impact the charity sphere.

Ben Roberts

A question posed by the utilitarian philosopher Peter Singer in his 1972 essay “Famine, Affluence, and Morality” asks: if you would be willing to ruin your clothes to save a child from drowning, why would you not be willing to sell your clothes to save a child from poverty? This conundrum has us inspect our obligations to the less fortunate, and it is from this that a project emerged in 2011 to provide a dispassionate and holistic answer: how can we do the most good with what we have? That question is at the core of Effective Altruism (EA).

EA asks what the greatest value to humanity is that can be achieved with a set amount of money and working hours. As a research field, it attempts the lofty goal of ranking the world’s most pressing problems; as a practical community, it ostensibly acts upon those findings.

It’s not hard to see why the movement has taken off. Recent years have seen a cultural buzz bloom around ethical theory, as in the popular show The Good Place, where ethics is explained in a world with no moral relativism (which perhaps appeals to EA fans). There is also a lot of goodwill at the heart of EA: the movement boasts that after mosquito nets were identified as one of the most cost-effective means of saving lives, over 200 million nets were distributed through EA-inspired projects. But as the community continues to grow, the charity sector should remember to take the more dogmatic elements of EA with a pinch of salt.

Organisations such as Giving What We Can and GiveWell have sprung up in this field. They seek to democratise the inner workings of the sector by benchmarking which charities give donors the most bang for their buck in terms of tangible results. This is a noble undertaking, but not without flaws. For instance, these organisations aim to audit charities’ impact in absolute terms, often citing results measured in quality-adjusted life years (QALYs). This concrete measurement ignores the successes of many charities: quality-of-life improvements like comfort, dignity, and socialisation. By valuing only certain outputs, the movement will inevitably end up funnelling resources away from charities that, while effective, do not feature in the EA philosophy.

Though perhaps an interesting thought experiment, the very notion of determining a categorical value for resource distribution across the charity sector is simply impractical. Given the movement’s propensity to identify definite winners and losers on efficiency, EA followers would likely prefer to see full support for their charities du jour at the expense of niche organisations relied on by smaller communities.

Perhaps the most damning element of the EA subculture is longtermism, a shift in philosophy away from causes such as the eradication of global poverty and towards costly research projects designed to eliminate future threats, such as those from AI. Even granting the potential importance of these hypothetical problems, huge sums are being poured into research on how to combat such vague threats. The clear problem here is that counteracting a rogue AI is a very interesting problem to tackle, and it has therefore taken precedence over the more mundane but very real issues affecting people every day. For certain members of this culture, the concept of doing good has apparently dropped out of reality and now exists only in fanciful ‘what-if’ scenarios.

As highlighted by Vox’s Dylan Matthews, EA is “very white, very male, and dominated by tech industry workers”, which perhaps explains why a digital threat seems so worrying to this group. But this observation also points to what I believe to be the most myopic part of longtermism.

Even if we assume that the greatest threat to humanity will come from an AI, I’m reminded of a remark from the evolutionary biologist Stephen Jay Gould, who was “less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.” Today’s EA-inspired tech geniuses might choose not to steer resources towards impoverished communities, but it takes a special kind of pride not to recognise that charity isn’t a bottomless pit. Charity is an investment in people, that they might thrive now and in future generations. And considering just how many great thinkers a world unburdened by poverty might produce within a few generations, I would suddenly feel a lot safer about upcoming AI threats.

Whether you subscribe to EA or not, what is needed instead of fantastical navel-gazing is a humanistic approach. Attention needs to be paid across the board, to charities with varying goals and approaches. We cannot assume ourselves to be moral arbiters, and must instead opt to leave nobody behind as we move towards the future.

Submitted by Kevin Curley CBE on 19 Aug 2022


I'm new to 'effective altruism' but it seems to me to ignore the importance of political action. Distributing 200 million mosquito nets will undoubtedly save some lives. But we also need to act politically and persuade rich governments to invest in finding a vaccine to prevent malaria. I give money to a food bank where I live in the UK because I want children to eat. But I also act politically in support of a higher national minimum wage and better universal credit. We must do both.

This is absolutely right: the long-term solution to many of the sector’s biggest problems lies in political action and legislative change. It’s been argued that the EA movement has largely ignored this from the beginning, with Peter Singer himself having dismissed political action because of how stubbornly slow and difficult this type of change is to enact. It’s another case of ‘flashy’ solutions being far more attractive than humanistic ones.
