
An issue many EAs share, but which I think few have formalized in writing, is a concern about cause representation within the EA movement.

The basic idea behind representation is that materials created to be part of the public-facing view of EA should be in line with what a suitably broad selection of EAs actually think. To take an obvious example, if someone who was very pro-Fair Trade started saying EA was all about Fair Trade, this would not really be representative of what most EAs think (even if this person was convinced that Fair Trade was the best cause within the EA framework). Naturally, in a movement as large as EA there remains a diversity of viewpoints, but I nonetheless think it's fairly easy for experienced EAs to have a sense of what is a common EA view and what is not (although people can definitely be affected by peer-group pressure and city selection in creating a bubble). There have been a lot of implicit and explicit conflicts around this issue, and there are different possible ways of dealing with it.

First, some examples of clear problems with representativeness. 

  • The EA Handbook
  • EA Global
  • Funding gaps
  • EA chapter building

The EA Handbook

The EA Handbook was one of the most public examples of the problem at hand: you can look at the Facebook comments and EA Forum comments to get a sense of what people have said. If I had to summarize the concerns people raised, most of them had to do with representativeness. As one commenter put it:

“I don't feel like this handbook represents EA as I understand it. By page count, AI is 45.7% of the entire causes sections. And as Catherine Low pointed out, in both the animal and the global poverty articles (which I didn't count toward the page count), more than half the article was dedicated to why we might not choose this cause area, with much of that space also focused on far-future of humanity. I'd find it hard for anyone to read this and not take away that the community consensus is that AI risk is clearly the most important thing to focus on.” 

I personally think people would not have reacted so strongly to the Handbook if it had not seemed to be part of a bigger trend, one I hope to crystallize in this blog post.

EA Global

EA Global is fairly public and fairly easy to measure numerically. If you break down all the talks given in 2018 by cause area, they end up looking like 3 hours' worth of global poverty talks, 4.5 hours of animal welfare talks, and 11.5 hours of x-risk talks. This is not counting the meta talks that could have been about any cause area but were often effectively x-risk-related (e.g. 80,000 Hours' career advice), and it counts the "far future animal welfare" talks as normal animal welfare talks. You can also split it up as near-future cause areas, with 5.5 hours dedicated to them, vs far-future cause areas, with 13.5 hours spent on those.

This pattern has held for the last few EAGs, and it's getting more noticeable over time. Part of the reason I only go to every second EAG, and why many of the people I would describe as leaders in EA poverty do not go at all, is the lack of representation, and thus the lack of draw for EAs who want to talk about other causes. This is a self-perpetuating problem as well: if fewer EAs from a given cause area attend, the events become less and less friendly towards EAs from that cause area. After a couple of years, you could even run a survey and say "well, the average EAG attendee thinks X cause is of the highest impact", but that would only be true because everyone with different views had dropped out over time due to frustration and a feeling of disconnection. This is another issue I have talked about with a lot of involved EAs, and it is part of the reason there is interest in a different EA-related conference.

Funding Gaps

Details on funding gaps can be found here. In general, however, claiming that "the EA movement is not largely funding constrained" is another example of the broader trend of presenting the particular things that are representative of particular groups of EAs as representative of the movement as a whole.

Saying that "the far future is funding-filled, and thus, if you care about it, you should not favour earning-to-give as much" is more honest and accurate than claims along the lines of "the whole EA movement is funding-filled".

EA Chapter Building

The final example is harder to quantify, but it's also one I have heard about from quite a few different sources. EA chapter building is currently fairly tightly controlled and focused heavily on the creation of far-future- and AI-focused EAs. Again, if an organization is upfront about this, that is one thing, but I suspect the average EA (unless they have had direct experience trying to run a chapter) would guess that groups generally discuss all cause areas and get supported similarly, regardless of focus.

While these are not the only examples, I feel they are sadly sufficient to point to a broader trend.

I would also like to include some areas where I feel this has not happened. Some good examples:

  • EA Forum
  • EA Facebook jobs
  • EA Wikipedia
  • Doing Good Better

The EA Forum is surprisingly diverse, and the current karma system does not seem to consistently favour one cause area or EA organization over another. As stated in this post, it's true that frequent forum users tend to have a diversity of views. This could change in the future, given the upcoming changes, but currently I see this medium as one of the less controlled systems within EA.

The EA Facebook jobs group has helped a lot of people (including many of the staff currently working at EA organisations) find jobs at a wide range of EA-related organizations. If you take a sampling of the job ads, they tend to be diverse and more representative of the different cause areas.

The EA Wikipedia page currently covers all three major causes, along with concepts that most EAs would broadly agree are core to the movement and representative of those within it.

Doing Good Better, much like the Wikipedia page, does not take an aggressively single-cause focus. Instead, it covers classic EA content and issues that almost all EAs would agree with.

How do we know what is representative? 

Representativeness is defined as being "typical of a class, group, or body of opinion". So the representativeness of the EA movement would be expressed via what is typical of many EAs within the movement. This would ideally be determined via a random sample that covers a large percentage of the EA movement, for example through the EA Survey or by gathering the perspectives of everyone who has signed up to the EA Forum. Both of these would cover a very large percentage of the EA movement relative to more informal measures.

What is representative of EA leaders?

One response against simply taking a representative sample is that some EAs may be better informed than others. To take a more objective criterion, perhaps the average EA who has been involved in the movement for 5 years or more is better informed than the average EA who has been involved for 5 days. I think there are ways to get at this from more aggregate data (for example, duration of involvement or percentage of income donated might both correlate with being a more involved EA). One could even run a survey that makes sure to sample every organization that over 50% of the broader EA community thinks of as an "EA organization".

While this post does not aim to determine the "perfect" way to sample EAs or EA leaders, it does aim to point in the right direction given the numerous issues with sampling EAs. Clearly, a survey of only the EA leaders within my city (or any other specific location) would be critically biased, as would one with a disproportionate focus on a particular organization. Another unrepresentative sample might be drawn from among "EAG leaders", as these leaders are chosen by a single organization and generally hold that organization's cause as salient. This issue is worth another post altogether.

Possible solutions

Have a low but consistent bar for representativeness, allowing multiple groups to put forward competing presentations of EA. For example, anyone can make an EA handbook that’s heavily focused on a single cause area and call it an EA handbook.

Pros - This solution is fairly easy to implement, and allows a wide variety of ideas to co-exist and flourish. Materials will naturally become more popular if they represent EAs better, since they will be shared more widely throughout the movement.

Cons - Leaves the movement pretty vulnerable to co-option and misrepresentation (e.g. an EA Fair Trade handbook), which could harm movement building and newer people's views of EA.

Have a high and consistent bar for representativeness. For example, if something is branded in a way that suggests it is representative of EA, it should give at least 20% of its space to each major cause area (x-risk, animal rights, poverty) and should not clearly pitch or favour a single organization or approach. Alternatively, some kind of more formal system, based on objective measures from the community, could be instituted.

Pros - Does not make EA easy to co-opt, and makes sure that the most visible EA content gives appropriate representation to different ideas.

Cons - Ratios and exact numbers would be hard to calculate and agree on. They would also change over time (e.g. if a new cause got added).

Community-building organizations could strive for cause indifference. Currently, EA is built via a few different movement-building organizations, and a case could be made that organizations focused specifically on movement building should strive to be representative or cause-indifferent. One way they could do this is through cross-organization consultation before hosting events or publishing materials meant to represent the movement as a whole.

Pros - Reduces the odds of duplicating movement outreach work (e.g. separate AI-focused and poverty-focused EA chapters). Increases the odds that, in the long term, the EA movement will stay cause-diverse, leading to higher odds of finding Cause X: a cause better than any currently existing cause area that we simply haven't discovered yet.

Cons - Many of the most established EA organisations have a cause focus of some sort. This would be hard to enforce, but could nonetheless be an ideal worth striving towards.

Comments

Joey, thanks for your post! I work for CEA and am the Curator of EA Global. I manage content for the event, so I'm responding to that part of the post.

When deciding which speakers to solicit, I try to consider things like cause area representation, presenter diversity, and the development of community norms, among other things. It is really hard to get this right, and I know that I’ve fallen short of where I’d like to be on all of these.

I do think we’ve managed to improve on the representativeness dimension over the past couple of years. I know there’s room for reasonable disagreement about how to categorize talks, and I think you and I must be looking at the talk categories differently because I’m coming up with a very different distribution than you mention. For talks at EA Global 2018, I count 21% animals, 18% meta/rationality, 25% AI/x-risk/GCRs, 14% global health and development, 7% government/policy and 14% other topics. Across the four events in 2017 and 2018, my breakdown shows 15% animals, 20% AI/x-risk/GCRs, 14% health and development, 11% government/policy, 23% meta/rationality, and 19% other. Here is a link to a categorization of all of the talks from 2017 and 2018 by cause area so that you can see how I’m thinking about the talk distribution (in the interest of time, and since you mentioned talks, I haven’t included meetups, office hours, workshops, or whiteboard sessions). I haven’t done the breakdown for 2015 and 2016, but I think we are representing the community’s interests better in recent years than we did in the past.

This year I’ve commissioned recommendations from EAs with subject matter expertise in the different cause areas to try to improve further. We also welcome speaker and content suggestions from the community. Please submit ideas for EA Global London here.

On the topic of Effective Altruism Global, I'm not just concerned about the lower representation of non-x-risk cause areas, but also about the speaker selection within those cause areas. In 2016, for example, the main animal welfare speaker was a parrot intelligence researcher who seemed, I'm sorry to say, uninformed about animal welfare, even that of birds. I think the animal welfare speakers over the years have been selected more for looking cool to the organizers (who didn't know much about animal welfare) and/or for increasing speaker demographic diversity (not that this is a bad thing, but it's unhelpful to pursue diversity in only one cause area), rather than for being the leading experts on EA and animal welfare.

I agree that our selection process for animal-focused speakers in 2015 and 2016 left a lot to be desired. In 2017 we began working with advisors from specific fields to be sure we’re reaching out to speakers with expertise on the topics that conference attendees most want to hear about. This year we’ve expanded to a larger advisory board with the hope that we can continue to improve the EA Global content.

Thank you for the explanation. I still believe the 2017 and 2018 animal welfare and global poverty line-ups left a lot to be desired, but those years might have been better than 2016 at least in the choice of keynote speaker.

Maybe there could be more transparency regarding the advisory board, because without knowing those details, I don't know how to evaluate the situation. Given CEA's history, I do worry that the advisory board may favor people with close ties to CEA rather than providing meaningful representation from those fields. But I can't be confident in that without knowing the details.

I think what conference attendees most want to hear about matters, but it's also worth considering what potential attendees would want to hear about. Personally, I would prefer more diversity within the cause area, looking at various challenges to conventional EAA whilst focussing more on philosophy and demandingness. I think in this way people could become somewhat more familiar with the broader cause area, rather than, as I see it, the current tendency to focus on a fairly narrow group of organisations and individuals.

Would it be possible to say who is on the advisory board?

Hi Kevin,

Thanks for your comment. To improve the breadth of EAA topics covered at EA Global, I started working with Tyler John as my first advisor in 2017. This year we have an advisory board consisting of ~25 people outside of CEA with expertise in AI, animals, biosecurity, global health & development, horizon scanning (topics that push the frontiers of EA), and meta EA, as well as a “wild card” section for additional suggestions. I’d need to check with the rest of the advisors before sharing their names.

Hi Amy, is there any progress in terms of sharing who is on the advisory boards? Or, if people don't want to be named, that would be useful information too.

Maybe this is off topic, but can any near future EAs recommend something I can read to understand why they think the near future should be prioritized?

As someone focused on the far future, I'm glad to have near future EAs: I don't expect the general public to appreciate the value of the far future any time soon, and I like how the near future work makes us look good as a movement. In line with the idea of moral trade, I wish there was something that the far future EAs could do for the near future EAs in return, so that we would all gain through cooperation.

Here are ten reasons you might choose to work on near-term causes. The first five are reasons you might think near term work is more important, while the latter five are why you might work on near term causes even if you think long term future work is more important.

  1. You might think the future is likely to be net negative. Click here for why one person initially thought this and here for why another person would be reluctant to support existential risk work (it makes space colonization more likely, which could increase future suffering).

  2. Your view of population ethics might cause you to think existential risks are relatively unimportant. Of course, if your view were merely a standard person-affecting view, it would be subject to the response that work on existential risk is high value even if only the present generation is considered. However, you might go further and adopt an Epicurean view under which it is not bad for a person to die a premature death (meaning that death is only bad to the extent it inflicts suffering on oneself or others).

  3. You might have a methodological objection to applying expected value to cases where the probability is small. While the author attributes this view to Holden Karnofsky, Karnofsky now puts much more weight on the view that improving the long term future is valuable.

  4. You might think it's hard to predict how the future will unfold and what impact our actions will have. (Note that the post is from five years ago and may no longer reflect the views of the author.)

  5. You might think that AI is unlikely to be a concern for at least 50 years (perhaps based on your conversations with people in the field). Given that ongoing suffering can only be alleviated in the present, you might think it's better to focus on that for now.

  6. You might think that when there is an opportunity to have an unusually large impact in the present, you should take it even if the impact is smaller than the expected impact of spending that money on long term future causes.

  7. You might think that the shorter feedback loops of near term causes allow us to learn lessons that may help with the long term future. For example, Animal Charity Evaluators may help us get a better sense of how to estimate cost-effectiveness with relatively weak empirical evidence, Wild Animal Suffering Research may help us learn how to build a new academic field, and the Good Food Institute may help us gain valuable experience influencing major economic and political actors.

  8. You might feel like you are a bad fit for long term future causes because they require more technical expertise (making it hard to contribute directly) and are less funding constrained (making it hard to contribute financially).

  9. You might feel a spiritual need to work on near term causes. Relatedly, you might feel like you're more likely to do direct work long term if you can feel motivated by videos of animal suffering (similar to how you might donate a smaller portion of your income because you think it's more likely to result in you giving long term).

  10. As you noted, you might think there are public image or recruitment benefits to near term work.

Note: I do not necessarily agree with any of the above.

I think there is an 11th reason why someone may want to work on near-term causes: while we may be replaceable by the next generations when it comes to working on the long-term future, we are irreplaceable when it comes to helping people and sentient beings who are alive today. In other words: influencing what may happen 100 years from now can be done by us, our children, our grandchildren, and so on; however, only we can help, say, the 700 million people living in extreme poverty today.

I have not come across the counter-arguments to this one: has it been discussed in previous posts or related material? Or maybe it is a basic question in moral philosophy 101 and I am just not knowledgeable enough :-)

The argument is that some things in the relatively near term have lasting effects that cannot be reversed by later generations. For example, if humanity goes extinct as a result of war with weapons of mass destruction this century, before it can become more robust (e.g. by being present on multiple planets, creating lasting peace, etc.), then there won't be any future generations to act in our stead (it would take at least many millions of years for another species to follow in our footsteps, if that happens at all before the end of the Earth's habitability).

Likewise, if our civilization were replaced this century by unsafe AI with stable, less morally valuable ends, then future generations over millions of years would be controlled by AIs pursuing those same ends.

This period appears exceptional over the course of all history so far in that we might be able to destroy or permanently worsen the prospects of civilizations as a result of new technologies, but before we have reached a stable technological equilibrium or dispersed through space.

Thanks, Carl. I fully agree: if we are convinced it is essential that we act now to counter existential risks, we must definitely do that.

My question is more theoretical (feel free to not continue the exchange if you find this less interesting). Imagine we lived in a world just like ours but where the development of AI, global pandemics, etc. are just not possible: for whatever reason, those huge risks are just not there. An argument in favour of weighting the long-term future heavily could still be valid (there could be many more people alive in the future and therefore a great potential for either flourishing or suffering). But how should we weight that against the responsibility to help people alive today, since we are the only ones who can do it (future generations will not be able to replace us in that role)?

Imagine we lived in a world just like ours but where the development of AI, global pandemics, etc. are just not possible: for whatever reason, those huge risks are just not there

If that were the only change, our century would still look special with regard to the possibility of lasting changes short of extinction, e.g. as discussed in this post by Nick Beckstead. There is also the astronomical waste argument: delaying interstellar colonization by 1 year means losing out on all the galaxies reachable (before separation by the expansion of the universe) by colonization begun in year n-1 but not in year n. The population of our century is vanishingly small compared to future centuries, so the ability of people today to affect the colonized volume is accordingly vastly greater on a per capita basis, and the loss of reachable galaxies to delayed colonization is irreplaceable as such.

So we would still be in a very special and irreplaceable position, but less so.

For our low-population generation to really not be in a special position, especially per capita, it would have to be the case that none of our actions have effects on much more populous futures as a whole. That would be very strange, but if it were true then there wouldn't be any large expected impacts of actions on the welfare of future people.

But how should we weight that against the responsibility to help people alive today, since we are the only ones who can do it (future generations will not be able to replace us in that role)?

I'm not sure I understand the scenario. This sounds like a case where an action to do X makes no difference because future people will do X (and are more numerous and richer). In terms of Singer's drowning child analogy, that would be like a case where many people are trying to save the child and extras don't make the child more likely to be saved, i.e. extra attempts at helping have no counterfactual impact. In that case there's no point in helping (although it may be worth trying if there is enough of a chance that extra help will turn out to be necessary after all).

So we could consider a case where there are many children in the pond, say 20, and other people are gathered around the pond and will save 10 without your help, but 12 with your help. There are also bystanders who won't help regardless. However, there is also a child on land who needs CPR, and you are the only one who knows how to provide it. If you provide the CPR instead of pulling children from the pond, then 10+1=11 children will be saved instead of 12. I think in that case you should save the two children from drowning instead of the one child with CPR, even though your ability to help with CPR is more unique, since it is less effective.

Likewise, it seems to me that if we have special reason to help current people at the expense of much greater losses to future generations, it would be because of flow-through effects, or some kind of partiality (like favoring family over strangers), or some other reason to think the result is good (at least by our lights), rather than just that future generations cannot act now (by the same token, billions of people could but don't intervene to save those dying of malaria or suffering in factory farms today).

Nice comment; I'd also like to see a top-level post.

One quibble: Several of your points risk conflating "far-future" with "existential risk reduction" and/or "AI". But there is far-future work that is non-x-risk focused (e.g. Sentience Institute and Foundational Research Institute) and non-AI-focused (e.g. Sentience Institute) which might appeal to someone who shares some of the concerns you listed.

For 5, the survey (https://arxiv.org/pdf/1705.08807.pdf) sort of ends all discussion about AI timelines. Not that it's necessarily right, just that no one is in a position to second-guess it.

For another relevant reason to think less about the future, take a look at this. https://web.stanford.edu/~chadj/IdeaPF.pdf

For 7, we can learn quite a bit from working on long term causes, and WASR is an example of that: it's stuff that won't be implemented any time soon, but we can gain feedback from the baby steps. The same thing has applied to some AI work.

Also, it seems to me that the kind of expertise here is highly domain-specific, and the lessons learned in one domain probably won't help elsewhere. I suppose that short term causes let you perform more trials after observing initial results, at least.

For 8, nontechnical people can work on political issues with long-term implications.

Lists of 10 are always fishy because the author is usually either stretching them out with poor reasons to make it to 10, or leaving out good reasons to keep it at 10. Try not to get attached to the number :)

I do agree WASR seems pretty tractable and the near-term learning value is pretty high even if we don't have a good idea of the long-term feasibility yet. I think it's promising, but I could also see it being ruled out as impactful, and I feel like we could have a good answer in a few years.

I don't have a good sense yet on whether something like AI research has a similar feel. If it did, I'd feel more excited about it.

For 5, the survey (https://arxiv.org/pdf/1705.08807.pdf) sort of ends all discussion about AI timelines. Not that it's necessarily right, just that no one is in a position to second-guess it.

I don't follow what you mean by "ends all discussion."

Even if AI development researchers had a consensus opinion about AI timelines (which they don't), one could still disagree with the consensus opinion.

I suspect AI dev researcher timeline estimates vary a lot depending on whether the survey is conducted during an AI boom or AI winter.

Well, you might disagree, but you'd have to consider yourself likely to be a better predictor than most AI experts.

The lack of consensus doesn't really change the point because we are looking at a probability distribution either way.

Booms and winters are well known among researchers; they are aware of how these affect the field, so I think it's not so easy to conclude that they're being biased.

I think it's important to hold "AI development research" and "AI timeline prediction-making" as two separate skillsets. Expertise in one doesn't necessarily imply expertise in the other (though there's probably some overlap).

Any good model of the quality of AI dev researcher timeline opinions needs to be able to explain why AI safety was considered a joke by the field for years, and only started to be taken seriously by (some) AI dev researchers after committed advocacy from outsiders.

I think it's important to hold "AI development research" and "AI timeline prediction-making" as two separate skillsets. Expertise in one doesn't necessarily imply expertise in the other (though there's probably some overlap).

OK, that's true. The problem is, it's hard to tell if you are better at predicting timelines.

Any good model of the quality of AI dev researcher timeline opinions needs to be able to explain why AI safety was considered a joke by the field for years, and only started to be taken seriously by (some) AI dev researchers after committed advocacy from outsiders.

I think that's a third issue, not a matter of timeline opinions either.

I think that's a third issue, not a matter of timeline opinions either.

Seems relevant in that if you surveyed timeline opinions of AI dev researchers 20 years ago, you'd probably get responses ranging from "200 years out" to "AGI? That's apocalyptic hogwash. Now, if you'd excuse me..."

I don't know which premise here is more greatly at odds with the real beliefs of AI researchers - that they didn't worry about AI safety because they didn't think that AGI would be built, or that there has ever been a time when they thought it would take >200 years to do it.

This is great – consider making it a standalone post?

I'll consider expanding it and converting it into its own post. Out of curiosity, to what extent does the Everyday Utilitarian article still reflect your views on the subject?

It's a helpful list and I think these considerations deserve to be more well known.

If you were going to expand further, it might be useful to add in more about the counterarguments to these points. As you note in a few cases, the original proponents of some of these points now work on long-term focused issues.

I also agree with the comment above that it's important to distinguish between what we call "the long-term value thesis" and the idea that reducing extinction risks is the key priority. You can believe in the long-term value thesis but think there's better ways to help the future than reducing extinction risks, and you can reject the long-term value thesis but still think extinction risk is a top priority.

I also agree with the comment above that it's important to distinguish between what we call "the long-term value thesis" and the idea that reducing extinction risks is the key priority. You can believe in the long-term value thesis but think there's better ways to help the future than reducing extinction risks, and you can reject the long-term value thesis but still think extinction risk is a top priority.

Agreed. Calling x-risk reduction a non-near-term-future cause strikes me as bad terminology.

I plan on posting the standalone post later today. This is one of the issues that I will do a better job addressing (as well as stating when an argument applies only to a subset of long term future/existential risk causes).

As a further illustration of the difference with your first point, the idea that the future might be net negative is only a reason against reducing extinction risk, but it might be more reason to focus on improving the long-term future in general. This is what the s-risk people often think.

Agreed. As someone who prioritises s-risk reduction, I find it odd that long-termism is sometimes considered equivalent to x-risk reduction. It is legitimate if people think that x-risk reduction is the best way to improve the long-term, but it should be made clear that this is based on additional beliefs about ethics (rejecting suffering-focused views and not being very concerned about value drift), about how likely x-risks in this century are, and about how tractable it is to reduce them, relative to other ways of improving the long-term. I for one think that none of these points is obvious.

So I feel that there is a representativeness problem between x-risk reduction and other ways of improving the long-term future (not necessarily only s-risk reduction), in addition to an underrepresentation of near-term causes.

I'm aware of this and also planning on addressing it. One of the reasons that people associate the long term future with x-risk reduction is that the major EA organizations that have embraced the long term future thesis (80,000 Hours, Open Phil etc.) all consider biosecurity to be important. If your primary focus is on s-risks, you would not put much effort into biorisk reduction. (See here and here.)

I agree the long-term value thesis and the aim of reducing extinction risk often go together, but I think it would be better if we separated them conceptually.

At 80k we're also concerned that there might be better ways to help the future, which is one reason why we highly prioritise global priorities research.

Why do you think Epicureanism implies a focus on the near term and not a focus on improving the quality of life in the long-term future?

I actually began to wonder this myself after posting. Specifically, it seems like an Epicurean could think s-risks are the most important cause. Hopefully Michael Plant will be able to answer your question. (Maybe EA Forum 2.0 should include a tagging feature.)

I'm not sure I see which direction you're coming from. If you're a symmetric person-affector (i.e. you reject the procreative asymmetry, the view that we're neutral about creating happy lives but against creating unhappy lives), then you don't think there's value in creating future life, good or bad. So neither x-risks nor s-risks are a concern.

Maybe you're thinking "don't those with person-affecting views care about those who are going to exist anyway?" The answer is yes if you're a necessitarian (no if you're a presentist), but given that what we do changes who comes into existence, necessitarianism (which holds that you value the wellbeing of those who exist regardless) collapses, in practice, into presentism (which holds that you value the wellbeing of those who exist right now).

Vollmer, the view that would care about the quality of the long-term future, but not whether it happens, seems to be averagism.

A. Does that mean that, under a symmetric person-affecting Epicurean view, it's not bad if a person brings into existence someone who's highly likely to have a life filled with extreme suffering? Do you find this plausible?

B. Does that also mean that, under a symmetric person-affecting Epicurean view, there's no benefit from allowing a person who is currently enduring extreme suffering to terminate their life? Do you find this plausible?

C. Let's say a person holds the following views:

  1. It is good to increase the well-being of currently existing people and to decrease the suffering of currently existing people.

  2. It is good to increase the well-being of future people who will necessarily exist and to decrease the suffering of future people who will necessarily exist. (I'm using necessarily exist in a broad sense that sets aside the non-identity problem.)

  3. It's neither good nor bad to cause a person with a net positive life to come into existence or to cause a currently existing person who would live net positively for the rest of their life to stay alive.

  4. It's bad to cause a person who would live a net negative life to come into existence and to cause a currently existing person who would live net negatively for the rest of their life to stay alive.

Does this qualify as an Epicurean view? If not, is there a name for such a view?

Very late reply, but the first parts of 3 and 4 together are the procreation asymmetry, which leads to antinatalism (it's better for individuals not to be born, ignoring other effects) unless you accept sufficiently low risks of bad lives; adding the second parts of each leads to promortalism (it's better for people to die painlessly as soon as possible, ignoring other effects), again unless you accept sufficiently low risks of bad lives.

If you accept 2 (in the broad sense that sets aside the non-identity problem) and believe adding people with good lives is never good in itself, that leads to antinatalism for any life that isn't maximally good, regardless of uncertainty, unless you reject the independence of irrelevant alternatives (or transitivity), as in this paper.

The argument from the independence of irrelevant alternatives is this: for any hypothetical individual A without maximal welfare, there is some hypothetical individual B (the same individual or a different one) with higher welfare, so by 2 it is strictly better for B to come to exist than for A. But since, by 3, it is never good in itself to bring a new individual into existence, B coming to exist is at best neutral; and since A is strictly worse than B, then (by transitivity and the independence of irrelevant alternatives) A coming to exist is bad in itself, which contradicts the "nor bad" in "It's neither good nor bad to cause a person with a net positive life to come into existence".
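In schematic form (my own notation, just to make the chain explicit): write $O_A$, $O_B$, and $O_\emptyset$ for the outcomes in which only A is created, only B is created, or neither is. Then, roughly:

$$O_B \succ O_A \ \text{(by 2, since B has higher welfare)}, \qquad O_\emptyset \succeq O_B \ \text{(by 3)}, \qquad \text{so } O_\emptyset \succ O_A \ \text{(transitivity)},$$

i.e. creating A is bad in itself, which is the contradiction with the "nor bad" in 3; the independence of irrelevant alternatives is what lets these pairwise rankings be held fixed across different choice sets.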

The same argument can be made about existing people with respect to death using 1 and the second part of 3 (and transitivity and the independence of irrelevant alternatives), leading to promortalism.

That being said, I don't think it's uncommon for those defending person-affecting views to reject the independence of irrelevant alternatives. But if you do accept it, then I think the four statements together are best described as promortalism (if you omit the "nor bad" from 3; otherwise they are inconsistent).

Right, sorry, I misread. I thought you were assuming some form of Epicureanism with concern for all future beings, not Epicureanism plus a person-affecting view.

I'd add one more: having to put your resources towards more speculative, chancy causes is more demanding.

When donating our money and time to something like bednets, the cost is mitigated by the personal satisfaction of knowing that we've (almost certainly) had an impact. When donating to some activity which has only a tiny chance of success (e.g., x-risk mitigation), most of us won't get quite the same level of satisfaction. And it's pretty demanding to have to give up not only a large chunk of your resources but also the satisfaction of having actually achieved something.

Rob Long has written a bit about this - https://experiencemachines.wordpress.com/2018/06/10/demanding-gambles/

Thanks for that link; it's an interesting article. In the context of theory within the animal movement, Singer's pragmatism isn't particularly demanding, but a more justice-oriented approach (along the lines of Regan) is. In my view, it would be a good thing, not least for the sake of diversity of viewpoints, to make more claims around demandingness rather than largely following a less demanding position. Though I do think that, because people are not used to ascribing significant moral value to other animals, anything beyond the societal baseline is considered demanding, particularly in regard to considering speciesism alongside other forms of human discrimination.

I agree the far future is overwhelmingly important. However, I don't think it's been shown that focusing on the far future really is more cost-effective, even when taking a far-future point of view. I have a degree of epistemic uncertainty with wide error bars, such that I wouldn't be too surprised if MIRI turned out to be the most cost-effective, but I also wouldn't be too surprised if it turned out that AMF was the most cost-effective. Right now, in my view, the case for the far future seems to argue that if you take a large number and multiply it by some unknown probability of success, you must still get a large number, which isn't necessarily true. I'd like organizations like MIRI to back up the claim that they have a "medium probability" of success.

I personally tend to value being able to learn about causes and being empirical about how to do good. This makes it more difficult to work in far-future causes due to the lack of feedback loops, but I don't think it's impossible (e.g., I like the approach being taken by AI Impacts and through Rethink Priorities I'm now working to try to refine my own views on this).

I think this update on my skepticism post still represents my current position somewhat well, though it is definitely due for an update.

Overall, I definitely favor spending resources on x-risk reduction efforts. I'm even comfortable with roughly 50% of the EA movement's resources being spent on it, given that I sure wouldn't want to be wrong on this issue -- extinction seems like a tremendous downside! However, I'd prefer there to be more effort spent on learning what we can about the value of these efforts and I also think it's not yet clear that poverty or animal-focused interventions are not equally or more valuable.

Lastly, as a movement, we certainly can and should do more than one thing. We can fight x-risk while also fighting malaria. I think we'd have a stronger and more robust movement this way.

I hope to write more on this in the future, eventually.

Just wanted to say that I'd be really excited to read more of your thoughts on this. As mentioned above, I think many considerations and counter-considerations against x-risk work deserve more attention and exposure in the community.

I encourage you to write up your thoughts in the near-term rather than far future! :P

I liked this solely for the pun. Solid work, James.

I agree that a lot of the value of work on x-risk/the far future is value of information. But I argued here that the cost-effectiveness distributions, for the present generation, of alternate foods for agricultural catastrophes did not overlap with AMF's. There very well could be flow-through effects from AMF to the far future, but I think it is hard to argue that they would be greater than those of actually addressing x-risk. So I think if you do value the far future, it would be even harder to argue that the distributions of alternate foods and AMF overlap. There would be a similar result for AI vs AMF if you believe the model referred to here.

It's certainly possible to generate a cost-effectiveness estimate that doesn't overlap with AMF. I'd just be concerned with how well that estimate holds up to additional rigorous scrutiny. Many such estimates tend to decline dramatically as additional considerations are explored.

See a list of reasons why not to work on reducing extinction risk here: https://80000hours.org/articles/extinction-risk/#who-shouldnt-prioritise-safeguarding-the-future

See a list of counterarguments to the long-term value thesis here: https://80000hours.org/articles/future-generations/

There are also further considerations around coordination that we're writing about in an upcoming article.

Hi John, I don't have any concrete links, but I'd start by distinguishing different kinds of far-future causes: on the one hand, those that are supported by a scientific consensus, and on the other, those that are a matter of scientific controversy. An example of the former would be global warming (which isn't even that far-future for some parts of the world), while an example of the latter would be the risks related to the development of AI.

Now, in contrast to that, we have existing problems in the world: from poverty and hunger, to animal suffering across the board, to existing problems related to climate change, etc. While I wouldn't necessarily prioritize these causes over future-oriented charities (say, climate-related research), it is worth keeping in mind that investing in the reduction of existing suffering may have an impact on the reduction of future suffering as well (e.g. by increasing the number of vegans, we may impact the ethics of the human diet in the future). The impact of such changes is much easier to assess than the impact of research in an area that concerns risks which are extremely hard to predict. Hence, I don't think research on AI risks is futile (not at all); I just find it important to have clear assessment criteria, just as in any other domain of science: what counts as an effective and efficient research strategy, how future assessments of the currently funded projects will proceed (in order to determine how much has been done within these projects and whether a different approach would be better), whether the given cause is already sufficiently funded in comparison to other causes, etc.

Thanks for writing this up. Reading between the lines a little, I am also increasingly frustrated by the feeling that near-term projects are being squeezed out of EA. I've been asking myself when (I think it's a 'when' rather than an 'if') EA will become so far-future heavy that there's no point in my participating. I give it 2 years.

There are perhaps a couple of bigger conversations to be had here. Are the different causes friends or enemies? Often it feels like the latter, and this is deeply disappointing. We do compete over scarce resources (e.g. money), but we should be able to cooperate at a broader societal level (post forthcoming). Further, if/when would it make sense for those of us who feel irked by what seems to be an exclusionary, far-futurist tilt to split off and start doing our own thing?

I should probably mention that I have raised similar concerns before in this post: 'the marketing gap and a plea for moral inclusivity'.

Thanks for the link, Michael - I'd missed that post, and it's indeed related to the current one.

Thanks, Joey, for writing this up. My worry is that making any hard rules for what counts as representative may do more harm than good, if only due to the deep (rational) disagreements that may arise on any particular issue. The example Michael mentions is a case in point: while I don't necessarily disagree that research on AI safety is worthy of pursuit (though see the disagreements of Yann LeCun, the head of AI research at Facebook, with Bostrom's arguments), I find the transparency of the criteria used by EA organizations to decide which projects to fund unsatisfactory, to the point of endangering the EA movement and its reputation when it comes to the claim that EA is about effective paths to reducing suffering. The primary problem here, as I argued in this post, is that it remains unclear why the currently funded projects should count as effective and efficient scientific research.

In view of this, I find it increasingly frustrating to associate myself with the EA movement and its recent development, especially since the efficiency of scientific research is the very topic of my own research. The best I can do is to treat this as an issue of peer disagreement, where I keep it open that I might be wrong after all. However, this also means we have to keep an open dialogue, since either side in the disagreement may turn out to be wrong, but this doesn't seem easy. For instance, as soon as I mention any of these issues on this forum, a few downvotes tend to pop up, with no counterargument provided (edit: this current post ironically turned out to be another case in point ;)

So altogether, I'm not sure I feel comfy associating myself with the EA community, though I do deeply care about the idea of effective charity and the effective reduction of suffering. And introducing a rule-book which would claim, for instance, that EAs support the funding of research on AI safety would make me feel just as uncomfy, not because of the idea in principle, but because of its current execution.

EDIT: Just wanted to add that the proposal for community-building organizations to strive for cause indifference sounds like a nice solution.

Hi Joey, thanks for raising this with such specific suggestions for how this should be done differently.

I won't respond to the specific Handbook concerns again, since people can easily find my previous responses in the comment threads that you link to.

I think that part of the problem was caused by the general trend that you're discussing, but also that I made mistakes in editing the Handbook, which I'm sorry for. In particular, I should have:

  • Consulted more widely before releasing the Handbook

  • Made clearer that the handbook was curated and produced by CEA

  • Included more engaging content related to global poverty and animal welfare.

I've tried to fix all of these mistakes, and we are currently working on producing a new edition which includes additional content (80,000 Hours' Problem Framework, Sentience Institute's Foundational question summaries (slightly shortened), David Roodman's research for GiveWell on the worm wars).

[I'm a CEA staff member, but writing as an individual and a local group founder/organizer]

"EA chapter building is currently fairly tightly controlled" - what aspect of this do you see as tightly controlled? Funding? Advising?

As someone who helped start a local group before there were funds, written resources, or other advising for starting or running an EA group, I see how those things would be helpful, but don't see them as essential. The only request I can remember to focus on a particular cause area was back when GWWC was solely focused on global poverty and invited the Boston group to affiliate, which we decided not to do.

I'm all for EA movement-building orgs doing a better job at supporting local groups and people who are thinking of starting one. But I wouldn't want people to come away from this post with the understanding that they're somehow restricted from starting a group, or that they'll only be able to do so if they support the right cause. My guess is that most EA groups were founded by people who saw a gap and decided to start something, not people who were tapped on the shoulder by a movement-building organization.


So, my personal experience of starting a chapter was a long time ago, but what I have heard from people more involved with chapters currently is that there has been social pressure towards focusing on certain cause areas, and funding pressure along with it. For example, a sense that chapters are much more likely to get funding, attention, etc., from movement-building organizations if they are more far-future focused. I think chapters can of course run without the support of any major organization, but the culture of chapters will change if support is conditional over the long term. As far as I know, no one has been specifically told not to run a chapter based on a different cause focus, and I agree this is not the reason most groups start (but it can make a big difference in which groups grow).

"Tightly controlled" also refers to what competing chapter-building or movement-building organizations would have to go through to work in this space. For example, the recent post on Leverage explicitly urges caution towards a conference being run by a different organization. I have heard from other organizations about similar frustrations and coordination problems when trying to work in the outreach space.

Hi Joey, thank you for writing this.

I think calling this a problem of representation is actually understating the problem here.

EA has (at least to me) always been a community that inspires, encourages, and supports people to use all the information and tools available to them (including their individual priors, intuitions, and sense of morality) to reach a conclusion about what causes and actions are most important for them to take to make a better world (and, of course, to then take those actions).

Even if 90% of experienced EAs / EA community leaders currently converge on the same conclusion as to where value lies, I would worry that a strong focus on that issue would be detrimental. We'd be at risk of losing the emphasis on cause prioritisation - arguably the most useful insight that EA has provided to the world.

  • We'd risk losing the ability to support people through cause prioritisation (coaching, EA or otherwise, should not pre-empt the answers or have ulterior motives)
  • We'd risk creating a community that is less able to switch its focus to the most important thing
  • We'd risk stifling useful debate
  • We'd risk creating a community that does not benefit from collaboration between people working in different areas
  • etc.

(Note: it's probably worth adding that if 90% of experienced EAs / EA community leaders converged on the same conclusion on causes, my intuitions would suggest that this is as likely to be evidence of founder effects / group-think as it is evidence for that cause. I expect this is because I see a huge diversity in people's values and thinking, and a difficulty in reaching strong conclusions in ethics and cause prioritisation.)

Would it make sense to have a separate entity for some aspects of global poverty and animal suffering? This is already the case for charity evaluation (GiveWell, ACE). It's also more or less already the case for EA Funds and could easily be extended to EA Grants (with a separate donation pool for each cause area). I can also envision a new career advice organization that provides people interested in global poverty and animal suffering with coaching by people very familiar with and experienced in those areas. (80,000 Hours has problem profiles, career reviews, and interviews related to both of those areas, but their coaching seems to focus primarily on other areas.) To be clear, I'm not proposing that EA outreach (as opposed to cause-specific outreach) be formally split between different organizations (since I think that's likely to be harmful). I'm also not proposing that EA infrastructure (the EA Forum, EA Global, GWWC etc.) be split up (since there's less of a tradeoff between supporting cause areas for general infrastructure). But I do think that when there is a significant tradeoff (due to the function being resource intensive), it would be good for there to be a separate entity so that those who prioritize different cause areas can also have that function for their preferred area. (It seems to me it would be difficult to do this within a single organization since that organization would understandably want to prioritize the cause area(s) it felt were most effective.)

Should we expect a difference between materials aimed at beginners and those aimed at more involved EAs? I think it makes sense to adopt a clearer point of view toward cause areas if you expect your audience to be familiar with the most common arguments in effective altruism. In my opinion, it's more important for beginner-oriented materials to be "representative."

One easy way you could get a sample that's both broadly representative and also weights more involved EAs more heavily is to make the survey available to everyone on the forum, but to weight all responses by the square root of the respondent's karma. Karma is obviously an imperfect proxy, but it seems much easier to get than people's donation histories, and it doesn't seem biased in any particular direction. The square root is so that the few people with the absolute highest karma don't completely dominate the survey.
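As a rough sketch (the karma numbers and the yes/no question below are entirely made up for illustration), the weighting could look something like this:

```python
import math

# Hypothetical survey responses: (respondent's forum karma, 1 if they
# prioritise cause X, else 0). These numbers are made up for illustration.
responses = [
    (400, 1),
    (100, 0),
    (25, 1),
    (0, 0),
]

# Weight each response by the square root of the respondent's karma, so that
# more involved users count for more without the highest-karma users
# dominating. (Note that zero-karma respondents get zero weight under this scheme.)
weighted_votes = sum(math.sqrt(karma) * answer for karma, answer in responses)
total_weight = sum(math.sqrt(karma) for karma, _ in responses)

share = weighted_votes / total_weight if total_weight else 0.0
print(f"Karma-weighted share prioritising cause X: {share:.2f}")  # 0.71
```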

I think EA Forum karma isn't the best proxy, because a lot of the people who are particularly engaged in EA do not spend much time on the forum and instead focus on more action-relevant things for their org. The EA Forum will be biased towards people more interested in research and community-related things as opposed to direct action. For example, New Incentives is a very EA-aligned org working directly on poverty, but they spend most of their time doing cash transfers in Nigeria instead of posting on the forum.

To build on your idea, though, I think forming some sort of index of involvement would avoid any one particular metric biasing the results. I think including karma in the index makes sense, along with length of involvement, hours per week spent on EA, percent of income donated, etc.
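To make that concrete, here is a rough sketch of what such an index could look like; the caps and the equal weighting are hypothetical choices for illustration, not something anyone has agreed on:

```python
def involvement_index(karma, years_involved, hours_per_week, percent_donated,
                      karma_cap=1000, years_cap=10, hours_cap=20, percent_cap=10):
    """Combine several involvement signals into a single 0-1 score.

    Each signal is capped and normalised to [0, 1], then the signals are
    averaged with equal weights. All caps and weights here are illustrative.
    """
    signals = [
        min(karma, karma_cap) / karma_cap,
        min(years_involved, years_cap) / years_cap,
        min(hours_per_week, hours_cap) / hours_cap,
        min(percent_donated, percent_cap) / percent_cap,
    ]
    return sum(signals) / len(signals)

# Example: a long-time donor who rarely posts on the forum still scores
# reasonably high, which karma alone would miss.
print(involvement_index(karma=50, years_involved=6, hours_per_week=2, percent_donated=10))  # 0.4375
```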

I'm working on a project to scale up volunteer work opportunities with all kinds of EA organizations. Part of what I want to do is develop a system for EA organizations to delegate tasks to volunteers, including writing blog posts. This could help EA orgs like New Incentives get more of their content onto the EA Forum, such as research summaries and progress updates. Do you think orgs would find this valuable?
