
Effective Altruism as Global Catastrophe Mitigation

Cross-posted to LessWrong and the Global Risk Research Network
 
Summary: In this post I posit that all activities of effective altruism to date, both its original three causes and the ones that have emerged since its inception, fit an evidence-based pattern-recognition system. I trace the basis of this framework through the history of effective altruism and its precursor movements: a focus on beneficiary populations whose well-being is as important as anyone else's, but whose problems have, for a variety of reasons, gone neglected or seemed intractable, until the turn of the 21st century, when emerging technologies promised the potential to turn the tide. On this basis, EA and adjacent movements have become a globally coordinated network applying emerging technologies to mitigate extant or potential global catastrophes fitting this framework.
 
I. Introduction
 
In May 2016, Ozy Brennan wrote a post on their blog, 'Missing Cause Areas', about how the priority focus areas of effective altruism (EA) thus far appear historically contingent. From their post:
Global poverty seems to be the least historically contingent cause area: if Peter Singer’s Famine, Affluence, and Morality didn’t get people to help people in the developing world, Peter Unger or any of the other people working with similar ideas would have. The popularity of animal rights, however, is clearly connected to the fact that Peter Singer, a prominent early effective altruist, wrote Animal Liberation, one of the foundational books of the animal rights movement. He carried over a significant amount of his fanbase from animal rights into effective altruism. 
 
As a cause area, existential risk reduction seems to be solely a product of Eliezer Yudkowsky, a tireless promoter of effective altruism who researches the risks of artificial general intelligence. In my experience, interest in other kinds of existential risk among effective altruists appears to be primarily a product of people who accept Eliezer’s arguments about the importance of the far future and existential risk, but who are skeptical about the importance of the specific issue of artificial general intelligence.
 
Existential risk reduction, global poverty, and animal rights all seem to me to be important issues. But “global poverty, plus the pet issues of people who got a lot of people into EA” doesn’t seem to me to be a cause-area-finding mechanism that eliminates blind spots. I ask myself: what are we missing?
 
The most obvious thing we’re missing, of course, is politics. I can hear all of my readers groaning now, because “effective altruism doesn’t pay enough attention to politics” is the single most tired criticism of effective altruism in the entire world. I do think, however, it is trotted out ad nauseam because it has a point. There is a significant gap in effective altruism for structural change in between “buy bed nets” and “literally build God”. And while development in Africa is a fiendishly difficult topic, so are wild-animal suffering and preventing existential risk, and effective altruists seem to have mostly approached the latter with an attitude of “challenge accepted”.
 
(It’s possible, however, that development is not sufficiently neglected for effective altruists to improve the situation much? I don’t know enough to have an opinion on the issue.)
 
However, the most interesting question of blind spots, to me, is not that. The three primary cause areas of effective altruism are all advocating for particular groups of beings who are commonly overlooked: the global poor, animals, and people who don’t yet exist. The question arises: what beings are effective altruists overlooking?

Indeed, effective altruists joke that the four focus areas of EA come from one blog post Luke Muehlhauser wrote as a description of the nascent EA movement, which we all took as a prescription for what EA was supposed to be about. However, EA's 3 major focus areas aren't that historically contingent. In 'Why do effective altruists support the causes we do?', Michelle Hutchinson of the Centre for Effective Altruism (CEA) explains how, while EA's 3 major focus areas began as historical precedent, they fit into a framework that seemingly exhausts the space of possible beneficiaries for effective interventions. From Michelle's post:
 
I think the reason is that our current delineation of causes cuts along beneficiary lines: present humans, non-human animals, and future conscious beings. Some of the most significant insights of effective altruism in terms of finding more effective ways to help others have come from highlighting different beneficiary groups. Since the groups above seem to exhaust the space of beneficiaries (if what we care about is well-being), we can’t expect to get more effectiveness improvements in this way. In future, such improvements will have to come from finding new interventions, or intervention types. These are harder to find, and likely to lead to fewer orders of magnitude improvement. This post is on how the current ‘causes in EA’ seem to arise from distinguishing beneficiary groups. In a follow up I’ll discuss what the implications of that might be in terms of our likelihood of finding more effective causes.
 
‘Cause’ is a very fuzzy term. If you think of the different things that we tend to talk about as causes, they actually seem to fall in different categories. Take the causes ‘alleviating global poverty’ and ‘structural change’. These are sometimes described as alternatives to each other, yet structural change seems more naturally a way to achieve the alleviation of poverty. This is likewise true of the cause ‘meta’. This fuzziness increases all the more what could fall into the category of ‘causes we could be supporting’, and makes it all the more surprising that there would be three singled out.
 
Distinguishing groups of beneficiaries
 
The starting point of effective altruism is increasing well-being: not just of those close to and similar to us, but all over the world and into the future. EA activities therefore fall into three groups: helping people currently existing, helping non-human animals, and helping future conscious beings. This maps out the whole space of possible effective altruism activities. There are altruistic activities which fall outside this grouping – for example, working to improve biodiversity for its own sake. But these don’t improve anyone’s well-being, and so fall outside the scope of effective altruism.
 
There are other ways to group beneficiaries than the 3 categories above. You might distinguish simply between existing sentient beings and future sentient beings. Or you might draw finer distinctions, such as between non-human animals whose suffering is caused by humans and wild animals.
 
There are several reasons for using the three-fold distinction amongst beneficiaries:
 
The groups are systematically different in how much information we have about their cost-effectiveness: We have first-hand experience of how good it is to help other humans, while it’s more difficult to know how to compare helping humans to helping other animals. Interventions that help others in the present can typically be tested for how well they work, unlike interventions aimed at the future.
The kinds of things which will help the various groups will typically be somewhat more similar to each other than to those which help others of the groups.
These distinctions are typically drawn quite strongly in people’s minds, and they represent different ways in which humanity’s moral circle could do with being widened - to people spatially far away, to people temporally far away, and to species other than our own.
 
However, this leaves a fourth now-conventional EA focus area: "meta" or metacharity. 'Meta' is a category apparently including everything from cause-neutral or cause-specific charity evaluation to EA movement growth. The object-level focus areas of EA are intensionally defined: they're about the global poor of the present generation; future generations over long timescales; and farm animals, respectively. Whatever an organization associated with EA actually does, as long as its mission is focused on one of EA's 3 key focus areas, everyone understands how it fits into EA. Metacharity as a focus area defies this paradigm because of the diversity of metacharities' activities within EA, and because they're not united by a common and concrete theme.
 
They're united by a common abstract theme, though. If we adopt an extensional definition of metacharity, pointing at what metacharities actually do, we can make more sense of them. The role of metacharities in effective altruism is to play a support or leadership role for various activities throughout the movement. Metacharities can be either cause-neutral or cause-specific, although in practice they tend to select causes along specific lines. That's abstract, but it's a modular add-on to the typical model of EA as having 3 focus areas. Many EA organizations can be described as playing a support or leadership role for the EA movement as a whole, or for one of its constituent focus areas, thus fitting this definition.

For years, organizations like the Machine Intelligence Research Institute, the Future of Humanity Institute, and GiveWell's and Animal Charity Evaluators' (ACE) top-recommended and standout charities have aptly been described, if not as EA organizations themselves, as EA-"adjacent". If EA(-adjacent) organizations aren't focused on one of EA's 3 object-level focus areas, they're focused on the EA community itself. Cause-neutral leadership or support for the whole EA community, distinct from any one of its 3 object-level focus areas, has emerged as a genuine category in EA as the relevant organizations have matured and professionalized. This is reflected in the switch from the abstract language of 'metacharity' to the more precise 'EA movement-/community-building', and the 'EA Community Fund'. Combining this view of EA community and meta-level organizations with Michelle's model of beneficiary groups gives us a representation of EA explaining how its primary foci up until now aren't merely outputs of historical contingencies. However, this framework alone conveniently justifies EA's focuses to date without addressing why EA hasn't expanded beyond the original three causes it came front-loaded with because of historical forces.
 
Of course, another framework for cause selection in EA is the well-known three-factor Importance, Neglectedness, Tractability (INT) framework. However, plenty of limitations of this framework have been pointed out within EA. Plus, new focuses of EA, like reducing wild animal suffering (RWAS) and reducing risks of astronomical suffering (s-risks), are increasingly developing. While the 2017 EA survey results show focus areas aside from the 3 major ones don't receive proportionally more donations or attention now than in the past, minority focus areas in EA continue to develop as EA itself grows. In learning about the problem of identifying better evaluation and prioritization frameworks in EA, I'm surprised effective altruists haven't noticed that Nick Bostrom of the Future of Humanity Institute (FHI) provided, ten years ago, a framework that predicted everything EA has focused on since.

 


II. An Operational Definition of Global Catastrophe
 
In the Introduction to Global Catastrophic Risks, Nick Bostrom provides a taxonomy for organizing global catastrophic risks.
Let us look more closely at what would, and would not, count as a global catastrophic risk. Recall that the damage caused must be serious, and the scale global. Given this, a catastrophe that caused 10,000 fatalities or 10 billion dollars worth of economic damage (e.g., a major earthquake) would not qualify as a global catastrophe. A catastrophe that caused 10 million fatalities or 10 trillion dollars worth of economic loss (e.g., an influenza pandemic) would count as a global catastrophe, even if some region of the world escaped unscathed. As for disasters falling between these points, the definition is vague. The stipulation of a precise cut-off does not appear needful at this stage.
 
Global catastrophes have occurred many times in history, even if we only count the disasters causing more than 10 million deaths. A very partial list of examples might include the An Shi Rebellion (756-763), the Taiping Rebellion (1851-1864), and the famine of the Great Leap Forward in China, the Black Death in Europe, the Spanish flu pandemic, the two world wars, the Nazi genocides, the famines of British India, Stalinist totalitarianism, the decimation of the native American population through smallpox and other diseases following the arrival of European colonizers, probably the Mongol conquests, perhaps the Belgian Congo--innumerable others could be added to the list depending on how various misfortunes and chronic conditions are individuated and classified.
 
We can roughly characterize the severity of risk by three variables: its scope (how many people - and other morally relevant beings - would be affected), its intensity (how badly these would be affected), and its probability (how likely the disaster is to occur, according to our best judgement, given currently available evidence). Using the first two of these variables, we can construct a qualitative diagram of different types of risk (Fig 1.1). (The probability dimension could be displayed along a z-axis were this diagram three-dimensional).
 
The scope of a risk can be personal (affecting only one person), local, global (affecting a large part of the human population), or trans-generational (affecting not only the current world population but all generations that could come to exist in the future). The intensity of the risk can be classified as imperceptible (barely noticeable), endurable (causing significant harm but not destroying quality of life completely), or terminal (causing death or permanently and drastically reducing quality of life). In this taxonomy, global catastrophic risks occupy the four risk classes in the high-severity upper-right corner of the figure: a global catastrophic risk is of either global or trans-generational scope, and of either endurable or terminal intensity. In principle, as suggested in the figure, the axes can be extended to encompass conceptually possible risks that are even more extreme. In particular, trans-generational risks can contain a subclass of risks so destructive that their realization would not only affect or pre-empt future human generations, but would also destroy the potential of our future light cone of the universe to produce intelligent or self-aware beings (labelled ‘Cosmic’). On the other hand, according to many theories of value there can be states of being that are even worse than non-existence or death (e.g., permanent and extreme forms of slavery or mind control), so it could, in principle, be possible to extend the x-axis to the right as well (see Fig. 1.1 labelled ‘Hellish’).
[Fig. 1.1: Qualitative categories of risk, plotted by scope and intensity]
A subset of global catastrophic risks is existential risks. An existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or to reduce its quality of life (compared to what would otherwise have been possible) permanently and drastically. Existential risks share a number of features that mark them out as deserving of special consideration. For example, since it is not possible to recover from existential risks, we cannot allow even one existential disaster to happen; there would be no opportunity to learn from experience. Our approach to managing such risks must be proactive. How much worse an existential catastrophe would be than a non-existential global catastrophe depends very sensitively on controversial issues in value theory, in particular how much weight to give to the lives of possible future persons. Furthermore, assessing existential risks raises the distinctive methodological problems having to do with observation selection effects and the need to avoid anthropic bias.
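Bostrom's two-axis taxonomy amounts to a simple decision rule. Here is a minimal sketch of it in Python; the enum values track the categories above, but the names and functions are my own illustration, not anything from the book:

```python
from enum import IntEnum

class Scope(IntEnum):
    """Ordered scope categories from Bostrom's taxonomy."""
    PERSONAL = 0
    LOCAL = 1
    GLOBAL = 2
    TRANS_GENERATIONAL = 3

class Intensity(IntEnum):
    """Ordered intensity categories from Bostrom's taxonomy."""
    IMPERCEPTIBLE = 0
    ENDURABLE = 1
    TERMINAL = 2

def is_global_catastrophic(scope: Scope, intensity: Intensity) -> bool:
    # GCRs occupy the four upper-right cells of Fig. 1.1:
    # (global or trans-generational) x (endurable or terminal).
    return scope >= Scope.GLOBAL and intensity >= Intensity.ENDURABLE

def is_existential(scope: Scope, intensity: Intensity) -> bool:
    # Existential risks are the limiting case: trans-generational
    # scope combined with terminal intensity.
    return scope == Scope.TRANS_GENERATIONAL and intensity == Intensity.TERMINAL

# An influenza pandemic: global scope, endurable intensity.
print(is_global_catastrophic(Scope.GLOBAL, Intensity.ENDURABLE))  # True
print(is_existential(Scope.GLOBAL, Intensity.ENDURABLE))          # False
```

The probability axis is deliberately left out, just as it is left off the two-dimensional figure.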
Nick also elaborates on how catastrophes which aren't merely at risk of happening, but are already ongoing, fit into this framework as global catastrophes.
One major current global catastrophic risk is infectious pandemic disease. As noted earlier, infectious disease causes approximately 15 million deaths per year, of which 75% occur in Southeast Asia and Sub-Saharan Africa. These dismal statistics pose a challenge to the classification of pandemic disease as a global catastrophic risk. One could argue that infectious disease is not so much a risk as an ongoing global catastrophe. Even on a more fine-grained individuation of the hazard, based on specific infectious agents, at least some of the currently occurring pandemics (such as HIV/AIDS, which causes nearly 3 million deaths annually) would presumably qualify as global catastrophes. By similar reckoning, one could argue that cardiovascular disease (responsible for approximately 30% of world mortality, or 18 million deaths per year) and cancer (8 million deaths) are also ongoing global catastrophes. It would be perverse if the study of possible catastrophes that could occur were to drain attention away from actual catastrophes that are occurring. It is also appropriate, at this juncture, to reflect for a moment on the biggest cause of death and disability of all, namely ageing, which accounts for perhaps two-thirds of the 57 million deaths that occur each year, along with an enormous loss of health and human capital. If ageing were not certain but merely probable, it would immediately shoot to the top of any list of global catastrophic risks. Yet the fact that ageing is not just a possible cause of future death, but a certain cause of present death, should not trick us into trivializing the matter.
To the extent that we have a realistic prospect of mitigating the problem - for example, by disseminating information about healthier lifestyles or by investing more heavily in biogerontological research - we may be able to save a much larger expected number of lives (or quality-adjusted life-years) by making partial progress on this problem than by completely eliminating some of the global catastrophic risks discussed in this volume.
The "hellish" scenario Nick describes can be found in s-risks. Whether it's neglected tropical diseases, biosecurity, or anti-ageing, a focus on mitigating ongoing global catastrophes as Nick describes can already be found in EA. Within this framework, the global burden of mental illness and climate change are ongoing global catastrophes as well. If we combine this operational definition of a global catastrophe, used to rule in or out the most important problems in the world to focus upon, with Michelle's framework of the three groups of beneficiaries, negative impacts on the well-being of non-human animals count as global catastrophes too. This includes farm animal welfare in EA, and wild animal welfare, which is only now beginning to receive more attention in the movement. And if we go back 20 years, we find someone who identified an ongoing global catastrophe, other things being equal, negatively impacting the well-being of all non-human Earth-originating life (in addition to humans), trans-generational in scope and terminal in intensity: David Pearce. The problem he identified was suffering: negative reinforcement learning produced by natural selection, which in the future will be an appendix to life, no longer necessary to the continued existence of its sentient passengers. His proposed solution: the Hedonistic Imperative. It is one of the things David is best known for, alongside co-founding the World Transhumanist Association with Nick Bostrom in 1998.

 


III. A Common Thread
 
What this all builds up to is a model of something transhumanism, rationality and effective altruism have all had in common going back 20 years, something they've all been reaching for. If you've been keeping score, it's on its way to accounting for everything that's happened in EA so far. And remember, the kernel of it from Nick Bostrom turns out to be predictive, not hindsight, because it dates from ten years ago.
 
Before I started thinking about it, had you asked me what the common thread between the 3 movements of transhumanism, rationality and EA was, I would've told you: emerging technologies. At first glance, though, it would seem emerging tech, as a focus in EA, is mostly limited to the far-future side of things. But things are changing in EA. Last year at EA Global, William MacAskill said the key to EA was for weird nerds to keep saying true things. GiveDirectly is running basic income guarantee experiments in sub-Saharan Africa before most governments even in the developed world have tried such a thing. Startups are going one better by facilitating easier remittances from diaspora communities back to Africa. Effective altruists would love it if, instead of having to put a band-aid on the problem of malaria by donating to the Against Malaria Foundation, we could find the right way to eliminate the disease for good using gene drives. Effective animal advocates are increasingly focusing on the role clean/cultured meat can play in bringing an end to farming live animals. The idea that anyone is left in EA who is unwilling to move the movement forward toward solving the problems it has set out to solve, because doing so would risk putting someone off, is false. Effective altruism is exciting, ambitious and bold.
 
The common thread is a model. Simply put, the focus areas of EA are based on identifying global catastrophes negatively impacting the well-being of moral patients, however defined, that are global or trans-generational in scope, and terminal in their intensity. These problems are most often addressed by applying emerging technologies that make solving them possible like never before in history. That this model is broad and robust is evidenced by the fact that it was also a driving force in precursor movements to EA, such as transhumanism and rationality.
 
It's important for effective altruism as a community to have an accurate map of the territory it's creating, because if we don't, it leaves us open to exploitation. In September 2016, Ian David Moss of Createquity posted on the Effective Altruism Forum arguing that 'All causes are EA causes'. In his post, Ian made the following argument:

Finally, embracing domain-specific effective altruism diversifies the portfolio of potential impact for effective altruism. Even within the EA movement currently, there are disagreements about the highest-potential causes to champion. Indeed, one could argue that domain-specific effective altruist organizations already exist. Take, for example, Animal Charity Evaluators (ACE) or the Machine Intelligence Research Institute (MIRI), both of which are considered effective altruist organizations by the Centre for Effective Altruism. Animal welfare and the development of “friendly” artificial intelligence are both considered causes of interest for the EA movement. But how should they be evaluated against each other? And more to the point, if it were conclusively determined that friendly AI was the optimal cause to focus on, would ACE and other animal welfare EA charities shut down to avoid diverting attention and resources away from friendly AI? Or vice versa?
 
The reality, as most EAs will admit, is that virtually all estimates of the expected impact of various interventions are rife with uncertainty. Small adjustments to core assumptions or the emergence of new information can change those calculations dramatically. Even a risk-friendly investor would be considered insane to bank her entire asset base with a single company or industry, and if anything, the information available in the social realm is far less plentiful and precise than is the case in business. Particularly as the EA movement seeks to grow in influence, the idea of risk mitigation is going to become increasingly applicable.
 
[...]
 
Effective altruism is a truly transformative idea that has the potential to improve billions of lives – but the movement’s rhetoric and ideology is currently limiting that potential in very significant ways. The few, wonderful people who are prepared to embrace any cause in the name of global empathy should be treasured and cultivated. But solely relying on them to change the world is very likely a losing strategy. If effective altruists can come up with ways to additionally engage those who want to maximize their impact but are not prepared to abandon causes and geographies they care about deeply, that could be the difference between EA ending up as a footnote to history or the world-changing social force it seeks to be.
 
To be clear, I'm not saying Ian is doing anything to exploit EA. I agree that more effectiveness should be brought into the world of art philanthropy, and when that's done, those art philanthropists should feel free to call themselves effective altruists. Indeed, I engaged with Ian's post at length. But he was making broader claims about EA, and challenged effective altruists to accept the external criticism as valid: that all causes are EA causes. While all causes may have the potential to be EA causes, to point at anti-malarial insecticide-treated bednets, corporate campaigns against battery cages for egg-laying hens, and AI alignment research, and say "these are entirely historically contingent cause selections! You have no right to deny me the franchise of EA, because EA is nothing but a coalition of convenience and some hand-wavy optimization processes!" is something that should give effective altruists pause.
 
But if there is no common thread or common ground, it's easier for entryists who want to peel off left-leaning or right-leaning activist-minded effective altruists into their partisan movements for "systemic change", as if EA isn't already about changing systems. A conflict which could threaten to divide EA might succeed in doing so if there is no sense of something uniting the disparate causes of effective altruism. The task of ruling in or out what causes may or may not be effective as EA grows and expands in scope is less daunting if we know where EA is really coming from to begin with. We can create as common knowledge in EA a lodestone to bring us back to centre, and remind us ours is a fable of science and not politics.
 
Effective altruism is a question, not an ideology. Norms, values, and a commitment to ask the question 'how can I do the most good?' are fine things, but they're not foolproof, and they're not precise. If something walked like an effective focus area and talked like an effective focus area, we could be more confident that it was effective. Some questions to ask for figuring out what that looks like are:
  • What group of beneficiaries do the moral patients we're trying to help belong to?
  • Is the negative impact on their well-being from the problem we've identified global and trans-generational in scope?
  • Is the negative impact on well-being of this problem terminal in intensity?
  • Do the probabilities of the variables (e.g., scope, intensity, moral weight, criterion for "well-being" or "moral patienthood", etc.) we assign in our model of the problem make it competitive with existing causes in effective altruism?
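The four questions above can be read as a rough sequential filter. Below is an illustrative sketch only; the data fields, the benchmark comparison, and all names are hypothetical, not an actual EA evaluation tool:

```python
from dataclasses import dataclass

@dataclass
class CandidateCause:
    beneficiary_group: str     # e.g. "current humans", "wild animals" (Q1)
    scope_is_global: bool      # global and/or trans-generational? (Q2)
    intensity_is_terminal: bool  # terminal in intensity? (Q3)
    p_model_correct: float     # credence that our model of the problem holds up
    expected_value: float      # e.g. well-being gained per dollar, under the model

def passes_screen(cause: CandidateCause, benchmark_ev: float) -> bool:
    """Apply the four screening questions in order; a candidate cause
    must clear all of them to be competitive with existing EA causes."""
    if not cause.beneficiary_group:
        return False  # Q1: no identifiable group of moral patients
    if not cause.scope_is_global:
        return False  # Q2: scope is not global/trans-generational
    if not cause.intensity_is_terminal:
        return False  # Q3: intensity falls short of terminal
    # Q4: probability-weighted value must compete with existing causes.
    return cause.p_model_correct * cause.expected_value >= benchmark_ev

# Hypothetical example: a cause with a 10% credible model but high upside.
candidate = CandidateCause("wild animals", True, True, 0.1, 100.0)
print(passes_screen(candidate, benchmark_ev=5.0))  # True (0.1 * 100 = 10 >= 5)
```

The point of the sketch is only the ordering: the beneficiary, scope, and intensity questions are categorical gates, while the final question is a quantitative comparison that can be argued over with competing probability assignments.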
Effective altruists can use competing reference classes, cost-benefit analyses and expected value estimates to hammer out whose probability assignments to what variables in what models were the most well-calibrated all along, to determine how to do the most good. But all effective altruists can share the assumption that the most important problems to solve are mitigating global, trans-generational and terminal threats to the well-being of a population of beneficiaries, be that population human or non-human; living in the present or (possibly) the future; and wherever they may be, if our grasp to help them matches the reach of our urge to do so.

 


IV. Conclusion
 
Michelle wrote a follow-up to her post about why effective altruists support the causes we do: 'Finding more effective causes'. Does the broader framework I've posited, which encompasses and explains everything EA has focused upon thus far, find more effective causes than Michelle expected? From Michelle's post:
Here are two toy models for how effective altruism might have thus far looked for the most effective causes.
 
Model 1:
 
From all the ways of helping others we can find, we hone in on those which seem most effective. We then investigate them in more detail to work out which seem most promising to work on. Some of the interventions effective altruism focuses on are quite surprising, so we shouldn’t think of this as looking at a series of interventions that are already described: we’re precisely looking for ones that haven’t been. On a model like this, the effectiveness of the interventions we’ll find in the future is quite a mystery, and it seems likely that for some time to come most of our efforts should go into trying to find the new, more effective causes.

Model 2:
 
We systematically expand our circle of caring to groups people tend to neglect, and doing so highlights novel ways to help others. People tend to care about those close by and similar to them. Over history, the circle of those we care about has gradually widened - for example in coming to understand racism as an evil. You might think that what effective altruism has tried to do is continue this progression - persuading people that they should help not just those in their country, but also on the other side of the world; not just those of their own species but all sentient creatures; not just people currently existing but any people whose lives we can affect. Then in each of those cases it tried to find the most effective way to help that group.
 
It seems unlikely that either of these models accurately represent how effective altruism has come to focus on the causes it does. But some of the biggest insights of effective altruism do seem to have come from expanding our circle of caring. The importance of preventing events which could severely affect those in the future for the worse, after all, follows naturally after the realisation that the wellbeing of those in the future matters as much as that of current people.

Model 1 is indeed inaccurate in the wake of the new framework. From within this framework, what bridges what effective altruists have done, what they're starting to do, and what we can expect they'll do in the future is no longer a mystery. Model 2 is still likelier to fit. A fourth group of beneficiaries that Michelle's original three missed is future generations of non-human populations. The Hedonistic Imperative and s-risk reduction both target these populations, unlike the other focus areas of EA, and aim at suffering trans-generational in scope and terminal in intensity.
1. New beneficiary groups
 
Finding subgroups of these three beneficiary groups, or finding groups that cut across these groups, may highlight new effective ways of helping others. (H/t Daniel Dewey for this point). ‘People currently in extreme poverty’ is an example of a sub-group (of ‘current people’) while ‘people prone to depression’ is an example of a group that cuts across current and future people. Identifying such a group might be useful because it is neglected compared to others. Eg typically animal welfare activists do not work on the suffering of wild animals, so identifying that sub-group as worth helping was novel. Or identifying such a group might be useful because there is some particular way to help that group, which is highlighted by identifying the group. Eg specifically considering the category ‘animals in factory farms’.
 
In some cases, identifying these groups highlights more effective interventions within a cause, such as concentrating on ending factory farming within the cause of animal rights. In other cases, it may suggest a new cause as being effective, as might be the case with trying to find a cure to depression (I’m not clear here whether the cause would be ‘medical research’, ‘improving mental health across the world’ or what).
 
2. New methods
 
Alternatively, we might be able to find new methods to help our three main beneficiary groups. That might lead to new causes for us to focus on. For example - perhaps it is possible to breed animals with a higher happiness set point, and we should be trying to forward that research rather than only working to alleviate suffering among animals. In other cases it might suggest more effective interventions within causes we already focus on. Eg developing a cheaper way to purify water, or a more effective way to raise money to fight poverty.
 
While the identification of new beneficiary groups has gone further than Michelle posited, whether it's through mental health, anti-aging, RWAS, or biosecurity, the expansion of EA beyond its original three focus areas is indeed predicated more on finding new methods to help those previously identified groups of beneficiary populations than on newly identified groups.
 
The fact that expanding the circle of caring yielded such gains in effectiveness, and that it probably won’t yield more, makes it somewhat unlikely that we will be able to find another cause which eclipses the ones we currently focus on. On the other hand, finding new groupings of beneficiaries and new methods for helping beneficiary groups both seem promising ways to find more effective causes.
 
Our chance of finding a far more effective intervention depends in part on what overall class of beneficiaries we’re looking at and how neglected they tend to be. E.g. Animals tend to get less attention than humans – that plausibly explains why the suffering of wild animals has not previously been thought to be of moral importance. Any group that includes currently existing people in rich countries, on the other hand, is comparatively likely to have had quite a bit of work put into it.
 
Where there hasn’t been, that will often be for reasons which make it rather intractable. E.g. there may be a strong lobby against an intervention, or it may require an untenable level of cooperation among diverse bodies. There seem to be some cases for which this doesn’t hold though: you might think that human enhancements (as opposed to treatments of health problems), e.g. trying to slow ageing, have been neglected for the most part simply due to a perception that we don’t need them.
 
Relatedly, although effective altruism may have been useful in the past in highlighting particularly effective interventions within crowded areas, that does not mean it will in future. Past work by effective altruism in such areas has built on decades of research - whether by bodies like the WHO and the World Bank or by academia. Much of the value-add has been looking at the big picture of the interventions they’ve researched, and picking out the most effective ones. There might be interventions they entirely overlooked, but it’s more likely that improvements will come simply from some interventions being a bit better than they looked. This means the initial gains will have been far faster than subsequent ones.
 
With neglected areas like the far future, the story seems different. Because there has been little research done by others it’s more plausible that there are interventions we’ve never yet considered. I don’t know how to think about their likely value compared to those that we have. E.g. risks from AI seem quite major, potentially somewhat close in the future (~100 years?) and potentially tractable (e.g. by talking to the people doing the research), so it seems a high bar to surpass. But that’s not to say that we can’t: particularly if we could (say) find some way to improve society such that it was more robust to all possible disasters.
Within a framework of applying emerging technologies to mitigate extant or potential global catastrophes, with a willingness to go beyond building on decades of research to advance and develop new science to solve those problems, I believe EA will be as useful in the future as it has been in the past. The story is the same as it is for the far future: not enough research has been done in any area of EA, because there are interventions effective altruists will consider, to help those most in need, that nobody else will.
 
EA is crucial to the future because Michelle's examples aren't merely plausible. They're real. Effective altruism is the only movement considering the moral importance of wild animals; both treating health problems and working on human enhancements to slow ageing; bringing more attention to, and expanding the moral circle to include, those populations of even humans still living today whose well-being has been disproportionately neglected for generations, condemning them to death or worse fates of suffering; and rendering the solutions to these problems more tractable by lobbying for those interventions and coordinating the diverse bodies that would otherwise be against them, on top of everything else.
 
To help groups of beneficiary populations whose well-being is threatened at an inter-generational scope and to a terminal degree is the pole holding up the broad tent of effective altruism. Now may it never fall down. Two years ago, I wrote this Facebook post:
I didn't become part of effective altruism for what it ever was in the present, but what it could become. I wasn't raised in a tradition of charity. I was raised in the traditions of how ingenuity can be uplifting for the human condition, and how clear thinking can lead the conscience and the search for justice, not the other way around. My family values were those of aspiring rationality. Apparently unlike most, I wasn't first drawn to rationality and skepticism because I found a community which for the first time in my life could fill an unnamed yearning. I joined because it simply felt like home. The culture of individualistic intellect and using enlightenment to provide that opportunity to anyone and everyone is my native tribe.
 
That effective altruism hasn't been like this has led to what I think are rightful reservations about it. Pious virtue signaling dressed up as anything turns off worthy iconoclasts. If you couldn't tell, that's why I'm indifferent to expressing it myself. In my more shortsighted and dreary moments, I neglect to cultivate the virtue I feel is important in my personal life because I feel it's pointless to try for myself if even what still seems the most promising movement on Earth loses itself to flash at the cost of substance.
 
This is changing. Effective altruism will always contain donation as an aspect, but the throwing of money at problems which, rightly or wrongly, rings hollow for so many, will soon cease to be its strongest pillar. This year at Effective Altruism Global things were different. The whole community is different. We are going from purchasing bednets to destroying the world's deadliest monsters with gene drives. We are going from small cash transfers to entrepreneurs empowering people to pull themselves out of poverty, and running the biggest RCTs on basic income in history. We're backing all the biotech startups engineering cultured substitutes for every thinkable animal-agriculture product. We are wholesale embracing the teachings of scholars who have over thirty years discovered and designed the exciting science of predicting the future. We're not only looking at how to transform societies with robustly evidence-based policies, but learning from experienced professionals on how to build the intellectual supply chain to get it done, from building coalitions to influencing policymakers to running more experiments.
 
Effective altruism is finally becoming what I signed up for. Let's keep it up. They say excited altruism is effective altruism. Let's get excited for effectiveness. Let's go beyond that. Let's become overwhelmingly effective. Maniacally effective. Titanically and bombastically and ballistically effective. Let's get nuts.

When the ideologues who told us we didn't do enough systemic change start telling us science is too scary to use for changing systems, we will know we're winning. Humanity will abandon the classes of dogma which prey on their fears and promise them false hopes for cheap power when they are shown it is public knowledge and technology which improves their well-being more than stirring but empty sentiments ever can. As the world becomes less tragic in all aspects of life, bad ideas won't have minds to prey on. Let's starve them. Let's not just raise the waterline of sanity. Let's drown the world in it.
I am glad there is, after all, an evidence-based framework for everything effective altruism has been and everything I've hoped it can become.

Comments (12)

Comment author: [deleted] 11 June 2018 06:26:58PM 4 points

Evan, just a data point: I don't understand a lot of what you're saying in most of your posts/comments, and I can only think of one person I find more difficult to understand out of everyone I've come across in the EA community who I've really wanted to understand. (By which I mean "I find the way you speak confusing and I often don't know what you mean", not "Boi, you crazy".)

Comment author: Evan_Gaensbauer 12 June 2018 12:03:11AM *  1 point

Thanks. Are you referring to my posts and comments on social media? That's more transient, so I make less of an effort on social media to be legible to everyone. Do you have examples of the posts or comments of mine you mean? I don't get tons of feedback on this. Of course people tell me I'm often confusing. But the feedback isn't actionable. I can decode any posts you send me. For example, here is a post of mine where I haven't gotten any negative feedback on the content or writing style. This post was like a cross between a personal essay and dense cause prioritization discussion, so it's something I wouldn't usually post to the EA Forum. It's gotten some downvotes, but clearly more upvotes than downvotes, so somebody is finding it useful. Again, if I get some downvotes it's ultimately feedback on what does or doesn't work on the EA Forum. This is the kind of clearer feedback specifying something.

Comment author: [deleted] 12 June 2018 10:49:24AM 2 points

Also the dank memes stuff...at the meta level of treating it like valuable, serious stuff... This is a separate thing as it's a case of me thinking, "Surely they're still joking...but it really sounds like they're not," but it's another reason for me to give up on trying to understand you because it's too much effort.

Comment author: [deleted] 12 June 2018 10:39:10AM *  2 points

I don't want to spend too long on this, so to take the most available example (i.e. treat this more as representative than an extreme example): Your summary at the top of this post.

  • General point: I get it now but I had to re-read a few times.
  • I think the old "you're using long words" is a part of this, which is common in EA and non-colloquial terms are often worth the price of reduced accessibility, but you seem to do this more than most (e.g. "posit how" could be "suggest that"/"explore how", "heretofore" could be "thus far", "delineate" could be "identify"/"trace" etc....it's not that I don't recognise these words, they're just less familiar and so make reading more effort).
  • Perhaps long sentences with frequent subordinate clauses - and I note the irony of my using that term - and, indeed, the irony of adding a couple here - add to the density.
  • More paragraphs, subheadings, italics, proofing etc. might help a bit.

I also have the general sense that you use too many words - your comments and posts are usually long but don't seem to be saying enough to justify the length. I am reminded of Orwell:

It is easier — even quicker, once you have the habit — to say "In my opinion it is not an unjustifiable assumption that" than to say "I think".

And yes - mostly on social media. But starting to read this post prompted the comment (I feel like you have useful stuff to say so was surprised to not see many upvotes and wondered if it's because others find you hard to follow too).

Comment author: Evan_Gaensbauer 17 June 2018 06:17:13PM 1 point

One heuristic I use for writing is to try Writing Like I Talk from Paul Graham. Of course, I already tend to speak differently than most people. I find keeping my head in books changes how I think internally, and thus how I speak. It comes full circle when I write like I talk, which is different than most people talk or write. The perfect is the enemy of the good, and there are trade-offs in time taken to write. Another is to know your audience. The post in question was meant to be read by suffering reducers and those familiar with the work of the Foundational Research Institute, from whom I've already received good feedback, so I relatively achieved my goal with my writing. Also, those posts are rougher on my personal blog, but I would edit them before I put them up on the EA Forum.

As long as it takes to read my stuff, I use a lot of words because it provides full context. For example, I'd hope someone familiar with academic jargon but relatively new to EA might come to fully understand the case of potential s-risks from terraforming, having come in knowing little to nothing about the subject. I'm aware I often use too many words, but when the time comes to make posts more accessible, I can and will do so. I appreciate this feedback though. Please feel free to provide feedback anytime. I update on it quite quickly, even from a single person. I wish more people felt comfortable doing so.

I wrote this post up because it will tie into a series of blog posts I'll be rolling out. When it's done, in context, I hope this post will make more sense. I'm going to be working with various EA organizations to bring remote volunteering opportunities to local EA groups to do direct work. I'm going to consult with Rethink Charity's research team to tighten up a model I have for coordinating teams together numbering in potentially hundreds of individuals. Soon time too may be a unit of caring.

Comment author: [deleted] 04 July 2018 05:00:44PM 0 points

The perfect is the enemy of the good, and there are trade-offs in time taken to write. Another is to know your audience.

Of course. My comment was an offer of a data point, not a judgment that you're prioritising badly.

I'm going to be working with various EA organizations to bring remote volunteering opportunities to local EA groups to do direct work.

Have you chatted to David Furlong of Deedmob?

Comment author: Evan_Gaensbauer 12 June 2018 12:51:30AM 0 points

I also have some posts I've taken more time to edit for clarity on my personal blog about effective altruism.

Comment author: [deleted] 12 June 2018 10:42:08AM 1 point

Thanks. Data point: the summary at the top of "Crucial Considerations for Terraforming as an S-Risk" seems like a normal level of hard-to-read-ness for EA.

Comment author: Evan_Gaensbauer 13 June 2018 06:47:36PM 0 points

The summary wasn't supposed to be easier to read. It was a condensed version so those who are already familiar with the concept would be aware of where the post was going. It was primarily intended for those effective altruists who are already (quite) familiar with risks of astronomical suffering; and the research of Brian Tomasik and the Foundational Research Institute.

Comment author: [deleted] 04 July 2018 05:04:21PM *  1 point

The summary wasn't supposed to be easier to be read.

I was saying that it was easier to read ;-) You seemed to be asking for feedback on posts that you've "taken more time to edit for clarity".

Although I should say that I am "already (quite) familiar with risks of astronomical suffering; and the research of Brian Tomasik and the Foundational Research Institute".

Comment author: Evan_Gaensbauer 04 July 2018 10:47:19PM 0 points

Oh, I see, the summary was the standard level of difficulty. That makes sense. One thing is when responding to Brian and others for whom the info in my post isn't new, I might be assuming some background context I left out because I assumed it was common knowledge. My post as is might be too condensed. To unpack an essay already so long seems daunting :/

Comment author: [deleted] 11 June 2018 06:16:31PM 2 points

There is a significant gap in effective altruism for structural change in between “buy bed nets” and “literally build God”.

[Laughing crying face]

[Not because I'm crying with laughter, but because I'm laughing and crying at the same time]