
RandomEA comments on Problems with EA representativeness and how to solve it - Effective Altruism Forum


Comment author: RandomEA 04 August 2018 06:12:11PM * 42 points

Here are ten reasons you might choose to work on near-term causes. The first five are reasons you might think near term work is more important, while the latter five are reasons you might work on near term causes even if you think long term future work is more important.

  1. You might think the future is likely to be net negative. Click here for why one person initially thought this and here for why another person would be reluctant to support existential risk work (it makes space colonization more likely, which could increase future suffering).

  2. Your view of population ethics might cause you to think existential risks are relatively unimportant. Of course, if your view were merely a standard person-affecting view, it would be subject to the response that work on existential risk is high value even if only the present generation is considered. However, you might go further and adopt an Epicurean view under which it is not bad for a person to die a premature death (meaning that death is only bad to the extent it inflicts suffering on oneself or others).

  3. You might have a methodological objection to applying expected value to cases where the probability is small. While the author attributes this view to Holden Karnofsky, Karnofsky now puts much more weight on the view that improving the long term future is valuable.

  4. You might think it's hard to predict how the future will unfold and what impact our actions will have. (Note that the post is from five years ago and may no longer reflect the views of the author.)

  5. You might think that AI is unlikely to be a concern for at least 50 years (perhaps based on your conversations with people in the field). Given that ongoing suffering can only be alleviated in the present, you might think it's better to focus on that for now.

  6. You might think that when there is an opportunity to have an unusually large impact in the present, you should take it even if that impact is smaller than the expected impact of directing the same resources to long term future causes.

  7. You might think that the shorter feedback loops of near term causes allow us to learn lessons that may help with the long term future. For example, Animal Charity Evaluators may help us get a better sense of how to estimate cost-effectiveness with relatively weak empirical evidence, Wild Animal Suffering Research may help us learn how to build a new academic field, and the Good Food Institute may help us gain valuable experience influencing major economic and political actors.

  8. You might feel like you are a bad fit for long term future causes because they require more technical expertise (making it hard to contribute directly) and are less funding constrained (making it hard to contribute financially).

  9. You might feel a spiritual need to work on near term causes. Relatedly, you might feel that you're more likely to keep doing direct work over the long term if you can stay motivated by videos of animal suffering (similar to how you might donate a smaller portion of your income now because you think doing so makes you more likely to keep giving over the long term).

  10. As you noted, you might think there are public image or recruitment benefits to near term work.

Note: I do not necessarily agree with any of the above.

Comment author: JesseClifton 05 August 2018 02:20:35AM 7 points

Nice comment; I'd also like to see a top-level post.

One quibble: Several of your points risk conflating "far-future" with "existential risk reduction" and/or "AI". But there is far-future work that is non-x-risk focused (e.g. Sentience Institute and Foundational Research Institute) and non-AI-focused (e.g. Sentience Institute) which might appeal to someone who shares some of the concerns you listed.

Comment author: Milan_Griffes 04 August 2018 06:23:28PM 7 points

This is great – consider making it a standalone post?

Comment author: Peter_Hurford 04 August 2018 07:58:09PM 3 points

Seconded!

Comment author: RandomEA 04 August 2018 08:10:38PM 4 points

I'll consider expanding it and converting it into its own post. Out of curiosity, to what extent does the Everyday Utilitarian article still reflect your views on the subject?

Comment author: Ben_Todd 05 August 2018 04:36:52AM 16 points

It's a helpful list and I think these considerations deserve to be better known.

If you were going to expand it further, it might be useful to add more about the counterarguments to these points. As you note in a few cases, the original proponents of some of these points now work on long-term-focused issues.

I also agree with the comment above that it's important to distinguish between what we call "the long-term value thesis" and the idea that reducing extinction risks is the key priority. You can believe in the long-term value thesis but think there's better ways to help the future than reducing extinction risks, and you can reject the long-term value thesis but still think extinction risk is a top priority.

Comment author: Denise_Melchin 05 August 2018 08:22:27AM 1 point

I also agree with the comment above that it's important to distinguish between what we call "the long-term value thesis" and the idea that reducing extinction risks is the key priority. You can believe in the long-term value thesis but think there's better ways to help the future than reducing extinction risks, and you can reject the long-term value thesis but still think extinction risk is a top priority.

Agreed. Calling x-risk reduction a non-near-term-future cause strikes me as bad terminology.

Comment author: RandomEA 05 August 2018 11:35:58AM 5 points

I plan on posting the standalone post later today. This is one of the issues that I will do a better job addressing (as well as stating when an argument applies only to a subset of long term future/existential risk causes).

Comment author: Ben_Todd 05 August 2018 07:54:21PM 11 points

As a further illustration of the difference, applied to your first point: the idea that the future might be net negative is only a reason against reducing extinction risk; it might be more reason to focus on improving the long-term future in general. This is what the s-risk people often think.

Comment author: Tobias_Baumann 06 August 2018 08:53:16AM 7 points

Agreed. As someone who prioritises s-risk reduction, I find it odd that long-termism is sometimes considered equivalent to x-risk reduction. It is legitimate if people think that x-risk reduction is the best way to improve the long-term, but it should be made clear that this is based on additional beliefs about ethics (rejecting suffering-focused views and not being very concerned about value drift), about how likely x-risks in this century are, and about how tractable it is to reduce them, relative to other ways of improving the long-term. I for one think that none of these points is obvious.

So I feel that there is a representativeness problem between x-risk reduction and other ways of improving the long-term future (not necessarily only s-risk reduction), in addition to an underrepresentation of near-term causes.

Comment author: RandomEA 06 August 2018 09:48:35AM 2 points

I'm aware of this and also planning on addressing it. One of the reasons that people associate the long term future with x-risk reduction is that the major EA organizations that have embraced the long term future thesis (80,000 Hours, Open Phil etc.) all consider biosecurity to be important. If your primary focus is on s-risks, you would not put much effort into biorisk reduction. (See here and here.)

Comment author: Ben_Todd 07 August 2018 04:21:43AM 2 points

I agree the long-term value thesis and the aim of reducing extinction risk often go together, but I think it would be better if we separated them conceptually.

At 80k we're also concerned that there might be better ways to help the future, which is one reason why we highly prioritise global priorities research.

Comment author: pmelchor 11 August 2018 10:45:31PM 5 points

I think there is an 11th reason why someone may want to work on near-term causes: while we may be replaceable by the next generations when it comes to working on the long-term future, we are irreplaceable when it comes to helping people / sentient beings who are alive today. In other words: influencing what may happen 100 years from now can be done by us, our children, our grandchildren and so on; however, only we can help, say, the 700 million people living in extreme poverty today.

I have not come across the counterarguments to this one: has it been discussed in previous posts or related material? Or maybe it is a basic question in moral philosophy 101 and I am just not knowledgeable enough :-)

Comment author: Carl_Shulman 12 August 2018 10:12:06PM * 3 points

The argument is that some things in the relatively near term have lasting effects that cannot be reversed by later generations. For example, if humanity goes extinct as a result of war with weapons of mass destruction this century, before it can become more robust (e.g. by being present on multiple planets, creating lasting peace, etc.), then there won't be any future generations to act in our stead (at least not for the many millions of years it would take for another species to follow in our footsteps, if that happens before the end of the Earth's habitability).

Likewise, if our civilization were replaced this century by unsafe AI with stable, less morally valuable ends, then future generations over millions of years would be controlled by AIs pursuing those same ends.

This period appears exceptional over the course of all history so far in that we might be able to destroy civilization, or permanently worsen its prospects, as a result of new technologies, but before we have reached a stable technological equilibrium or dispersed through space.

Comment author: pmelchor 15 August 2018 02:46:04PM 0 points

Thanks, Carl. I fully agree: if we are convinced it is essential that we act now to counter existential risks, we must definitely do that.

My question is more theoretical (feel free to not continue the exchange if you find this less interesting). Imagine we lived in a world just like ours but where the development of AI, global pandemics, etc. are just not possible: for whatever reason, those huge risks are just not there. An argument in favour of weighting the long-term future heavily could still be valid (there could be many more people alive in the future and therefore a great potential for either flourishing or suffering). But how should we weigh that against the responsibility to help people alive today, since we are the only ones who can do it (future generations will not be able to replace us in that role)?

Comment author: Carl_Shulman 15 August 2018 06:03:42PM 3 points

Imagine we lived in a world just like ours but where the development of AI, global pandemics, etc. are just not possible: for whatever reason, those huge risks are just not there.

If that were the only change, our century would still look special with regard to the possibility of lasting changes short of extinction, e.g. as discussed in this post by Nick Beckstead. There is also the astronomical waste argument: a one-year delay in interstellar colonization means losing all the galaxies that are reachable (before the expansion of the universe puts them out of reach) by colonization begun in year n-1 but not in year n. The population of our century is vanishingly small compared to future centuries, so the ability of people today to affect the colonized volume is accordingly vastly greater on a per capita basis, and the loss of reachable galaxies to delayed colonization is irreplaceable as such.
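To make the cost of delay concrete, here is a minimal sketch of the astronomical waste point (the notation is mine, not from this comment or the original papers): let G(t) be the number of galaxies still reachable if interstellar colonization begins in year t. Because cosmic expansion steadily carries galaxies beyond the reachable horizon, G is strictly decreasing, so delaying colonization from year n-1 to year n permanently forfeits

\[
\Delta_n = G(n-1) - G(n) > 0
\]

galaxies, a loss that no later, more populous generation can recover.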

So we would still be in a very special and irreplaceable position, but less so.

For our low-population generation to really not be in a special position, especially per capita, it would have to be the case that none of our actions have effects on much more populous futures as a whole. That would be very strange, but if it were true then there wouldn't be any large expected impacts of actions on the welfare of future people.

But how should we weigh that against the responsibility to help people alive today, since we are the only ones who can do it (future generations will not be able to replace us in that role)?

I'm not sure I understand the scenario. This sounds like a case where an action to do X makes no difference because future people will do X (and are more numerous and richer). In terms of Singer's drowning child analogy, that would be like a case where many people are trying to save the child and extras don't make the child more likely to be saved, i.e. extra attempts at helping have no counterfactual impact. In that case there's no point in helping (although it may be worth trying if there is enough of a chance that extra help will turn out to be necessary after all).

So we could consider a case where there are many children in the pond, say 20, and other people gathered around the pond will save 10 without your help, but 12 with your help. There are also bystanders who won't help regardless. However, there is also a child on land who needs CPR, and you are the only one who knows how to provide it. If you provide the CPR instead of pulling children from the pond, then 10+1=11 children will be saved instead of 12. I think in that case you should save the two additional children from drowning instead of the one child with CPR, even though your ability to help with CPR is more unique, since helping with CPR is less effective.

Likewise, it seems to me that if we have special reason to help current people at the expense of much greater losses to future generations, it would be because of flow-through effects, or some kind of partiality (like favoring family over strangers), or some other reason to think the result is good (at least by our lights), rather than just that future generations cannot act now (by the same token, billions of people could but don't intervene to save those dying of malaria or suffering in factory farms today).

Comment author: HaydenW 05 August 2018 01:18:53PM * 5 points

I'd add one more: having to put your resources towards more speculative, chancy causes is more demanding.

When donating our money and time to something like bednets, the cost is mitigated by the personal satisfaction of knowing that we've (almost certainly) had an impact. When donating to some activity which has only a tiny chance of success (e.g., x-risk mitigation), most of us won't get quite the same level of satisfaction. And it's pretty demanding to have to give up not only a large chunk of your resources but also the satisfaction of having actually achieved something.

Rob Long has written a bit about this - https://experiencemachines.wordpress.com/2018/06/10/demanding-gambles/

Comment author: KevinWatkinson 05 August 2018 02:07:23PM 0 points

Thanks for that link; it's an interesting article. In the context of theory within the animal movement, Singer's pragmatism isn't particularly demanding, but a more justice-oriented approach (along the lines of Regan) is. In my view it would be a good thing, not least for the sake of diversity of viewpoints, to make more claims around demandingness rather than largely following a less demanding position. Though I do think that because people are not used to ascribing significant moral value to other animals, anything beyond the societal baseline is considered demanding, particularly when speciesism is considered alongside other forms of human discrimination.

Comment author: kbog 05 August 2018 11:53:14AM * 6 points

For 5, the survey (https://arxiv.org/pdf/1705.08807.pdf) sort of ends all discussion about AI timelines. Not that it's necessarily right, just that no one is in a position to second-guess it.

For another relevant reason to think less about the future, take a look at this: https://web.stanford.edu/~chadj/IdeaPF.pdf

For 7, we can learn quite a bit from working on long term causes, and WASR is an example of that: it's stuff that won't be implemented any time soon, but we can gain feedback from the baby steps. The same thing has applied to some AI work.

Also, it seems to me that the kind of expertise here is highly domain-specific, and the lessons learned in one domain probably won't help elsewhere. I suppose that short term causes let you perform more trials after observing initial results, at least.

For 8, nontechnical people can work on political issues with long-term implications.

Lists of 10 are always fishy because the author is usually either stretching them out with poor reasons to make it to 10, or leaving out good reasons to keep it at 10. Try not to get attached to the number :)

Comment author: Peter_Hurford 05 August 2018 09:52:06PM 2 points

I do agree WASR seems pretty tractable and the near-term learning value is pretty high even if we don't have a good idea of the long-term feasibility yet. I think it's promising, but I could also see it being ruled out as impactful, and I feel like we could have a good answer in a few years.

I don't have a good sense yet on whether something like AI research has a similar feel. If it did, I'd feel more excited about it.

Comment author: Milan_Griffes 05 August 2018 09:47:48PM * 2 points

For 5, the survey (https://arxiv.org/pdf/1705.08807.pdf) sort of ends all discussion about AI timelines. Not that it's necessarily right, just that no one is in a position to second-guess it.

I don't follow what you mean by "ends all discussion."

Even if AI development researchers had a consensus opinion about AI timelines (which they don't), one could still disagree with the consensus opinion.

I suspect AI dev researcher timeline estimates vary a lot depending on whether the survey is conducted during an AI boom or AI winter.

Comment author: kbog 05 August 2018 11:04:36PM * 1 point

Well, you might disagree, but you'd have to consider yourself likely to be a better predictor than most AI experts.

The lack of consensus doesn't really change the point because we are looking at a probability distribution either way.
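To illustrate what "looking at a probability distribution either way" can mean in practice, here is a minimal sketch (the numbers are invented for illustration; they are not responses from the linked survey):

```python
# Hypothetical per-expert subjective probabilities that human-level AI
# arrives within 50 years (illustrative values, not actual survey data).
expert_forecasts = [0.90, 0.60, 0.50, 0.30, 0.05]

# A simple linear opinion pool: average the individual probabilities.
pooled = sum(expert_forecasts) / len(expert_forecasts)
print(f"Pooled P(human-level AI within 50 years) = {pooled:.2f}")  # 0.47

# The spread of the individual forecasts is itself information: wide
# disagreement means the aggregate distribution is more spread out,
# not that it is uninformative.
```

The point of the sketch is that a lack of consensus does not prevent aggregation; it just shows up as greater dispersion in the pooled distribution.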

Booms and winters are well known among researchers, and they are aware of how these cycles affect the field, so I think it's not so easy to figure out whether they're being biased.

Comment author: Milan_Griffes 06 August 2018 12:25:42AM * 4 points

I think it's important to hold "AI development research" and "AI timeline prediction-making" as two separate skillsets. Expertise in one doesn't necessarily imply expertise in the other (though there's probably some overlap).

Any good model of the quality of AI dev researcher timeline opinions needs to be able to explain why AI safety was considered a joke by the field for years, and only started to be taken seriously by (some) AI dev researchers after committed advocacy from outsiders.

Comment author: kbog 06 August 2018 12:36:20AM * 1 point

I think it's important to hold "AI development research" and "AI timeline prediction-making" as two separate skillsets. Expertise in one doesn't necessarily imply expertise in the other (though there's probably some overlap).

OK, that's true. The problem is, it's hard to tell if you are better at predicting timelines.

Any good model of the quality of AI dev researcher timeline opinions needs to be able to explain why AI safety was considered a joke by the field for years, and only started to be taken seriously by (some) AI dev researchers after committed advocacy from outsiders.

I think that's a third issue, not a matter of timeline opinions either.

Comment author: Milan_Griffes 06 August 2018 01:09:08AM 1 point

I think that's a third issue, not a matter of timeline opinions either.

Seems relevant in that if you surveyed timeline opinions of AI dev researchers 20 years ago, you'd probably get responses ranging from "200 years out" to "AGI? That's apocalyptic hogwash. Now, if you'd excuse me..."

Comment author: kbog 06 August 2018 03:12:51AM 3 points

I don't know which premise here is more at odds with the real beliefs of AI researchers - that they didn't worry about AI safety because they didn't think that AGI would be built, or that there has ever been a time when they thought it would take >200 years to do it.

Comment author: vollmer 06 August 2018 09:03:38AM 2 points

Why do you think Epicureanism implies a focus on the near term and not a focus on improving the quality of life in the long-term future?

Comment author: RandomEA 06 August 2018 11:44:30AM 2 points

I actually began to wonder this myself after posting. Specifically, it seems like an Epicurean could think s-risks are the most important cause. Hopefully Michael Plant will be able to answer your question. (Maybe EA Forum 2.0 should include a tagging feature.)

Comment author: MichaelPlant 06 August 2018 04:01:42PM 1 point

I'm not sure I see which direction you're coming from. If you're a symmetric person-affector (i.e. you reject the procreative asymmetry, the view that we're neutral about creating happy lives but against creating unhappy lives), then you don't think there's value in creating future life, good or bad. So neither x-risks nor s-risks are a concern.

Maybe you're thinking, 'Don't those with person-affecting views care about those who are going to exist anyway?' The answer is yes if you're a necessitarian (no if you're a presentist), but given that what we do changes who comes into existence, necessitarianism (which holds that you value the wellbeing of those who exist regardless of what we do) collapses, in practice, into presentism (which holds that you value the wellbeing of those who exist right now).

Vollmer, the view that cares about the quality of the long-term future, but not whether it happens at all, seems to be averagism.

Comment author: vollmer 07 August 2018 08:06:56AM * 0 points

Right, sorry, I misread. I thought you were assuming some form of Epicureanism with concern for all future beings, not Epicureanism plus a person-affecting view.

Comment author: RandomEA 06 August 2018 04:39:59PM * 0 points

A. Does that mean that, under a symmetric person-affecting Epicurean view, it's not bad if a person brings into existence someone who's highly likely to have a life filled with extreme suffering? Do you find this plausible?

B. Does that also mean that, under a symmetric person-affecting Epicurean view, there's no benefit from allowing a person who is currently enduring extreme suffering to terminate their life? Do you find this plausible?

C. Let's say a person holds the following views:

  1. It is good to increase the well-being of currently existing people and to decrease the suffering of currently existing people.

  2. It is good to increase the well-being of future people who will necessarily exist and to decrease the suffering of future people who will necessarily exist. (I'm using 'necessarily exist' in a broad sense that sets aside the non-identity problem.)

  3. It's neither good nor bad to cause a person with a net positive life to come into existence or to cause a currently existing person who would live net positively for the rest of their life to stay alive.

  4. It's bad to cause a person who would live a net negative life to come into existence and to cause a currently existing person who would live net negatively for the rest of their life to stay alive.

Does this qualify as an Epicurean view? If not, is there a name for such a view?