kbog comments on Problems with EA representativeness and how to solve it - Effective Altruism Forum

Comment author: RandomEA 04 August 2018 06:12:11PM *  42 points

Here are ten reasons you might choose to work on near-term causes. The first five are reasons you might think near-term work is more important; the latter five are reasons you might work on near-term causes even if you think long-term future work is more important.

  1. You might think the future is likely to be net negative. Click here for why one person initially thought this and here for why another person would be reluctant to support existential risk work (it makes space colonization more likely, which could increase future suffering).

  2. Your view of population ethics might cause you to think existential risks are relatively unimportant. Of course, if your view were merely a standard person-affecting view, it would be subject to the response that work on existential risk is high value even if only the present generation is considered. However, you might go further and adopt an Epicurean view under which it is not bad for a person to die a premature death (meaning that death is only bad to the extent it inflicts suffering on oneself or others).

  3. You might have a methodological objection to applying expected value to cases where the probability is small (a toy numerical sketch of this kind of comparison appears after this list). While the author attributes this view to Holden Karnofsky, Karnofsky now puts much more weight on the view that improving the long-term future is valuable.

  4. You might think it's hard to predict how the future will unfold and what impact our actions will have. (Note that the post is from five years ago and may no longer reflect the views of the author.)

  5. You might think that AI is unlikely to be a concern for at least 50 years (perhaps based on your conversations with people in the field). Given that ongoing suffering can only be alleviated in the present, you might think it's better to focus on that for now.

  6. You might think that when there is an opportunity to have an unusually large impact in the present, you should take it even if the impact is smaller than the expected impact of spending that money on long term future causes.

  7. You might think that the shorter feedback loops of near term causes allow us to learn lessons that may help with the long term future. For example, Animal Charity Evaluators may help us get a better sense of how to estimate cost-effectiveness with relatively weak empirical evidence, Wild Animal Suffering Research may help us learn how to build a new academic field, and the Good Food Institute may help us gain valuable experience influencing major economic and political actors.

  8. You might feel like you are a bad fit for long term future causes because they require more technical expertise (making it hard to contribute directly) and are less funding constrained (making it hard to contribute financially).

  9. You might feel a spiritual need to work on near term causes. Relatedly, you might feel like you're more likely to do direct work long term if you can feel motivated by videos of animal suffering (similar to how you might donate a smaller portion of your income because you think it's more likely to result in you giving long term).

  10. As you noted, you might think there are public image or recruitment benefits to near term work.
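A toy numerical sketch of the kind of comparison at issue in 3; the numbers are entirely made up and are only meant to show why naive expected-value reasoning with tiny probabilities can feel suspect:

```python
# Toy illustration of the "tiny probability, huge payoff" worry behind 3.
# All numbers are invented; nothing here reflects anyone's real estimates.

sure_value = 1.0           # e.g. one life saved with near certainty
sure_prob = 1.0

speculative_value = 1e30   # an astronomically large hypothetical future benefit
speculative_prob = 1e-15   # at a minuscule probability of success

ev_sure = sure_prob * sure_value
ev_speculative = speculative_prob * speculative_value

print(f"EV of the sure thing:      {ev_sure:.1f}")
print(f"EV of the speculative bet: {ev_speculative:.1e}")
# The speculative bet dominates by many orders of magnitude, which is exactly
# the kind of result the methodological objection is uneasy about.
```

Whether a comparison like this should settle the question is exactly what the methodological objection disputes.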

Note: I do not necessarily agree with any of the above.

Comment author: kbog  (EA Profile) 05 August 2018 11:53:14AM *  6 points

For 5, the survey (https://arxiv.org/pdf/1705.08807.pdf) sort of ends all discussion about AI timelines. Not that it's necessarily right, just that no one is in a position to second-guess it.

For another relevant reason to think less about the future, take a look at this: https://web.stanford.edu/~chadj/IdeaPF.pdf

For 7, we can learn quite a bit from working on long term causes, and WASR is an example of that: it's stuff that won't be implemented any time soon, but we can gain feedback from the baby steps. The same thing has applied to some AI work.

Also, it seems to me that the kind of expertise here is highly domain-specific, and the lessons learned in one domain probably won't help elsewhere. I suppose that short term causes let you perform more trials after observing initial results, at least.

For 8, nontechnical people can work on political issues with long-term implications.

Lists of 10 are always fishy because the author is usually either stretching them out with poor reasons to make it to 10, or leaving out good reasons to keep it at 10. Try not to get attached to the number :)

Comment author: Peter_Hurford  (EA Profile) 05 August 2018 09:52:06PM 2 points

I do agree WASR seems pretty tractable and the near-term learning value is pretty high even if we don't have a good idea of the long-term feasibility yet. I think it's promising, but I could also see it being ruled out as impactful, and I feel like we could have a good answer in a few years.

I don't have a good sense yet of whether something like AI research has a similar feel. If it did, I'd feel more excited about it.

Comment author: Milan_Griffes 05 August 2018 09:47:48PM *  2 points

> For 5, the survey (https://arxiv.org/pdf/1705.08807.pdf) sort of ends all discussion about AI timelines. Not that it's necessarily right, just that no one is in a position to second-guess it.

I don't follow what you mean by "ends all discussion."

Even if AI development researchers had a consensus opinion about AI timelines (which they don't), one could still disagree with the consensus opinion.

I suspect AI dev researcher timeline estimates vary a lot depending on whether the survey is conducted during an AI boom or AI winter.

Comment author: kbog  (EA Profile) 05 August 2018 11:04:36PM *  1 point

Well, you might disagree, but you'd have to consider yourself likely to be a better predictor than most AI experts.

The lack of consensus doesn't really change the point because we are looking at a probability distribution either way.

Booms and winters are well known among researchers; they're aware of how these cycles affect the field, so I think it's not so easy to tell whether their estimates are actually being biased by them.
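To sketch what "looking at a probability distribution either way" can mean in practice, with hypothetical per-expert numbers rather than the actual survey data, one can pool individual timeline forecasts into a single mixture and read off a probability for any horizon:

```python
# Sketch: pooling disparate expert AI-timeline forecasts into one distribution.
# The per-expert figures below are hypothetical, not taken from the 2017 survey.
import numpy as np

# (median forecast year for human-level AI, spread in years) for each expert,
# modelled here, purely for illustration, as a normal distribution.
expert_forecasts = [(2045, 15), (2060, 20), (2075, 25), (2110, 40), (2200, 60)]

def prob_by_year(year, forecasts, n_samples=100_000, seed=0):
    """Probability of arrival by `year` under an equal-weight mixture of experts."""
    rng = np.random.default_rng(seed)
    samples = np.concatenate(
        [rng.normal(median, spread, n_samples) for median, spread in forecasts]
    )
    return float(np.mean(samples <= year))

print(prob_by_year(2068, expert_forecasts))  # chance of arrival within 50 years of 2018
```

Disagreement between respondents then shows up as spread in the mixture rather than as a reason to discard the aggregate, which is the sense in which the lack of consensus doesn't change the point.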

Comment author: Milan_Griffes 06 August 2018 12:25:42AM *  4 points

I think it's important to hold "AI development research" and "AI timeline prediction-making" as two separate skillsets. Expertise in one doesn't necessarily imply expertise in the other (though there's probably some overlap).

Any good model of the quality of AI dev researcher timeline opinions needs to be able to explain why AI safety was considered a joke by the field for years, and only started to be taken seriously by (some) AI dev researchers after committed advocacy from outsiders.

Comment author: kbog  (EA Profile) 06 August 2018 12:36:20AM *  1 point

> I think it's important to hold "AI development research" and "AI timeline prediction-making" as two separate skillsets. Expertise in one doesn't necessarily imply expertise in the other (though there's probably some overlap).

OK, that's true. The problem is, it's hard to tell whether you are better at predicting timelines than they are.

> Any good model of the quality of AI dev researcher timeline opinions needs to be able to explain why AI safety was considered a joke by the field for years, and only started to be taken seriously by (some) AI dev researchers after committed advocacy from outsiders.

I think that's a third issue, not a matter of timeline opinions either.

Comment author: Milan_Griffes 06 August 2018 01:09:08AM 1 point

> I think that's a third issue, not a matter of timeline opinions either.

Seems relevant in that if you surveyed timeline opinions of AI dev researchers 20 years ago, you'd probably get responses ranging from "200 years out" to "AGI? That's apocalyptic hogwash. Now, if you'd excuse me..."

Comment author: kbog  (EA Profile) 06 August 2018 03:12:51AM 3 points

I don't know which premise here is more at odds with the real beliefs of AI researchers: that they didn't worry about AI safety because they didn't think AGI would be built, or that there has ever been a time when they thought it would take >200 years to do it.