Comment author: Michael_PJ 09 February 2017 12:52:23PM 1 point [-]

Thanks for this - I'm pretty sure I'm guilty of doing this carelessly, and I agree that it's actually not great.

Comment author: RobBensinger 07 February 2017 10:38:52PM 4 points [-]

Anonymous #8:

If I could change the effective altruism community tomorrow, I would move it somewhere other than the Bay Area, or at least make it more widely known that moving to the Bay is defecting in a tragedy of the commons and makes you Bad.

If there were large and thriving EA communities all over the place, nobody would need to move to the Bay, we'd have better outreach to a number of communities, and fewer people would have to move a long distance, get US visas, or pay a high rent in order to get seriously involved in EA. The more people move to the Bay, the harder it is to be outside the Bay, because of the lack of community. If everyone cooperated in developing relatively local communities, rather than moving to the Bay, there'd be no need to move to the Bay in the first place. But we, a community that fangirls over 'Meditations on Moloch' (http://slatestarcodex.com/2014/07/30/meditations-on-moloch/) and prides itself on working together to get shit done, can't even cooperate on this simple thing.

I know people who are heartbroken and depressed because they need community and all their partners are in the Bay and they want to contribute, but they can't get a US visa or they can't afford Bay Area rent levels, so they're stuck friendless and alone in whatever shitty place they were born in. This should not be a hard problem to solve if we apply even a little thought and effort to it; any minimally competent community could pull this off.

Comment author: Michael_PJ 08 February 2017 12:06:17AM 3 points [-]

There's a lot of EA outside the Bay! The Oxford/London cluster in particular is quite nice (although I live there, so I'm biased).

Comment author: RobBensinger 07 February 2017 11:04:24PM 7 points [-]

Anonymous #39:

Level of involvement: I'm not an EA, but I'm EA-adjacent and EA-sympathetic.

EA seems to have picked all the low-hanging fruit and doesn't know what to do with itself now. Standard health and global poverty feel like trying to fill a bottomless pit. It's hard to get excited about GiveWell Report #3543 about how we should be focusing on a slightly different parasite and that the cost of saving a life has gone up by $3. Animal altruism is in a similar situation, and is also morally controversial and tainted by culture war. The benefits of more long-shot interventions are hard to predict, and some of them could also have negative consequences. AI risk is a target for mockery by outsiders, and while the theoretical arguments for its importance seem sound, it's hard to tell whether an organization is effective in doing anything about it. And the space of interventions in politics is here-be-dragons.

The lack of salient progress is a cause of some background frustration. Some of those who think their cause is best try to persuade others in the movement, but to little effect, because there's not much new to say to change people's minds; and that contributes to the feeling of stagnation. This is not to say that debate and criticism are bad; being open to them is much better than the alternative, and the community is good at being civil and not getting too heated. But the motivation for them seems to draw more from ingrained habits and compulsive behavior than from trying to expose others to new ideas. (Because there aren't any.)

Others respond to the frustration by trying to grow the movement, but that runs into the real (and in my opinion near-certain) dangers of mindkilling politics, stifling PR, dishonesty (Sarah Constantin's concerns), and value drift.

And others (there's overlap between these groups) treat EA as a social group, whether that means house parties or memes. Which is harmless fun in itself, but hardly an inspiring direction for the movement.

What would improve the movement most is a wellspring of new ideas of the quality that inspired it to begin with. Apart from that, it seems quite possible that there's not much room for improvement; most tradeoffs seem to not be worth the cost. That means that it's stuck as it is, at best -- which is discouraging, but if that's the reality, EAs should accept it.

Comment author: Michael_PJ 07 February 2017 11:58:51PM 0 points [-]

I agree that we're in danger of having picked all the low-hanging fruit. But I think there's room to fix this.

Comment author: tjmather 05 February 2017 09:53:45PM *  4 points [-]

One possible area for exploration is around Schistosomiasis prevention, as reinfection rates appear to be high after deworming campaigns. PMA2020 has launched an annual survey to measure the impact of Schistosomiasis control programs in Uganda.

Johns Hopkins University/Center for Communication Programs in Uganda will be conducting a mass media campaign to promote Schistosomiasis prevention in fall 2017 before deworming day. The 2017 PMA2020 survey should be able to measure changes in knowledge, attitudes and practices after the mass media campaign. If there is funding in place, the 2018 PMA2020 survey may be able to measure the impact of the mass media campaign on actual infection rates.

Does anyone have ideas for exploration around Schistosomiasis prevention? With the PMA2020 survey, there is a unique opportunity for data collection to help evaluate potential Schistosomiasis prevention programs.

Disclosure: I am helping fund both the data collection and the mass media program in Uganda.

Comment author: Michael_PJ 06 February 2017 11:37:16PM 0 points [-]

I think this is a case where we're unlikely to be able to offer anything beyond what the academic community is going to do. I think the best way to improve exploration around schistosomiasis prevention would probably be to just fund some more PhD students!

Comment author: Ben_Todd 05 February 2017 05:31:16PM 7 points [-]

Thanks for the post. I broadly agree.

There are some more remarks on "gaps" in EA here: https://80000hours.org/2015/11/why-you-should-focus-more-on-talent-gaps-not-funding-gaps/

Two quick additions:

1) I'm not sure spending on RCTs is especially promising. Well-run RCTs that actually have power to update you can easily cost tens of millions of dollars, so you'd need to be considering spending hundreds of millions for it to be worth it. We're only just getting to this scale. GiveWell has considered funding RCTs in the past and rejected it, I think for this reason (though I'm not sure).

2) It might be interesting for someone to think more about multi-armed bandit problems, since it seems like they could be a good analogy for cause selection. An approximate solution is to exploit your best opportunity 90% of the time and randomly select another opportunity to explore the remaining 10% of the time. https://en.wikipedia.org/wiki/Multi-armed_bandit
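
(For concreteness, here is a minimal sketch of the 90%/10% rule described above, usually called an epsilon-greedy strategy. The "causes", their payoffs, the noise model, and epsilon=0.1 are made-up assumptions for illustration, not estimates about real opportunities.)

# Minimal epsilon-greedy bandit sketch (illustrative assumptions only:
# the arms and their payoffs below are invented, not real cause estimates).
import random

def epsilon_greedy(true_means, n_rounds=10000, epsilon=0.1):
    """Exploit the empirically best arm ~90% of the time; explore a random arm ~10%."""
    n_arms = len(true_means)
    counts = [0] * n_arms        # how often each arm has been tried
    estimates = [0.0] * n_arms   # running mean payoff per arm
    total = 0.0
    for _ in range(n_rounds):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        reward = random.gauss(true_means[arm], 1.0)               # noisy observed payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return estimates, total

# Three hypothetical causes with mean payoffs unknown to the decision-maker.
print(epsilon_greedy([1.0, 1.5, 2.0]))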

Comment author: Michael_PJ 06 February 2017 11:35:08PM 0 points [-]

1) I nearly added a section about whether exploration is funding- or talent-constrained! In short, I'm not sure, and I suspect it's different in different places. It sounds like OPP is probably talent-constrained, but other orgs may differ. In particular, if we wanted to try some of my other suggestions for improving exploration, like building institutions to start new orgs, then that's potentially quite funding-intensive.

2) I'm not sure whether multi-armed bandits actually model our situation, since I'm not sure if you can incorporate situations where you can change the efficiencies of your actions. What does "improving exploration capacity" look like in a multi-armed bandit? There may also be complications because we don't even know the size of the option set.
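
(To make the modelling question concrete, here is one hypothetical way to bolt "exploration capacity" onto the standard bandit: an extra action that forgoes this round's payoff but shrinks the noise on future observations. The names and numbers are illustrative assumptions, not a standard formulation.)

# Hypothetical bandit variant: an "invest in exploration capacity" action
# that pays nothing now but reduces observation noise later. All names and
# numbers are illustrative assumptions, not a standard model.
import random

def bandit_with_capacity(true_means, n_rounds=5000, epsilon=0.1,
                         invest_prob=0.05, noise_decay=0.99):
    n_arms = len(true_means)
    counts = [0] * n_arms
    estimates = [0.0] * n_arms
    noise_sd = 2.0                    # how noisily payoffs are observed
    total = 0.0
    for _ in range(n_rounds):
        if random.random() < invest_prob:
            noise_sd *= noise_decay   # better "exploration capacity": sharper future signals
            continue                  # this round produces no direct payoff
        if random.random() < epsilon:
            arm = random.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = random.gauss(true_means[arm], noise_sd)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return estimates, total, noise_sd

print(bandit_with_capacity([1.0, 1.5, 2.0]))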

Comment author: Daniel_Eth 06 February 2017 03:40:07AM 4 points [-]

Agreed that we should be doing more exploration. I think one reason there hasn't been as much is that it's a harder sell. "Give me money that I can use to save lives - I've already found a method that works" is a more convincing plea than "give me money so I can sit around and think of an altruistic way to spend other people's money - I swear I'll work effectively at this." Of course, big established organizations like OPP can do this, but I think the hard sell creates a barrier to entry.

Comment author: Michael_PJ 06 February 2017 11:28:58PM 1 point [-]

Exploration also carries a significant risk of failure, which can be off-putting. I don't think there's any way around that except to be somewhat tolerant of failure. But not so tolerant that people don't try hard!

Comment author: Alex_Barry 05 February 2017 06:04:32PM 9 points [-]

Good post!

Animal Charity Evaluators also has the Animal Advocacy Research Fund, which has $1,000,000 to give out over 3 years to fund research; you should probably count that as money spent on exploration.

Depending on what you mean by 'direct work', x-risk orgs could also be counted as currently doing mostly exploration, or at least they don't fit very neatly into the dichotomy. Still, even with these additions, I doubt the total would rise above ~$20 million a year, which would probably not be enough to change your conclusion.

Comment author: Michael_PJ 06 February 2017 11:27:41PM 1 point [-]

I'm unsure where the balance should lie quantitatively. I think that $100 million would probably be too much, and $10 million is probably too low.

I agree that x-risk work doesn't fit nicely into this: it's not even clear whether you'd want to count the output of research as "actually" what you want or as VoI.

Comment author: Ben_Todd 06 February 2017 03:27:59PM 3 points [-]

We've considered wrapping it into the problem framework in the past, but it can easily get confusing. Informativeness is also more a feature of how you go about working on a cause than of which cause you're focused on.

The current way we show that we think VOI is important is by listing Global Priorities Research as a top area (though I agree that doesn't quite capture it). I also talk about it often when discussing how to coordinate with the EA community (VOI is a bigger factor from the community perspective than from the individual perspective).

Comment author: Michael_PJ 06 February 2017 11:24:48PM 0 points [-]

I think I agree with this - it's usually the case that one particular sub-problem in an area is particularly informative to work on.

However, I think it's at least possible that some areas are systematically very informative to work on. For example, if the primary work is research, then you should expect to mainly be outputting information. AI research might be like this.


EA should invest more in exploration

[Epistemic status: strongly stated, weakly held] When faced with problems that involve ongoing learning, most strategies involve a balance between "exploration" and "exploitation". Exploration means taking opportunities that increase your knowledge about how good your opportunities are, whereas exploitation means putting resources into what you currently believe is the best...
Comment author: Michael_PJ 27 October 2016 07:20:54PM *  10 points [-]

This concerns me because "EA" is such a vaguely defined group.

Here are some clearly defined groups:

  • The EA FB group
  • The EA forum
  • Giving What We Can

All of these have a clear definition of membership and a clear purpose. I think it is entirely sensible for groups like this to have some kinds of rules, and processes for addressing and potentially ejecting people who don't conform to those rules. Because the group has a clear membership process, I think most people will accept that being a member of the group means acceding to the rules of the group.

"EA", on the other hand, is a post hoc label for a group of people who happened to be interested in the ideas of effective altruism. One does not "apply" to be an "EA". Nor does can we meaningfully revoke membership except by collectively refusing to engage with someone.

I think that attempts to police the borders of a vague group like "EA" can degenerate badly.

Firstly, since anyone who is interested in effective altruism has a plausible claim to be a member of "EA" under the vague definition, there will continue to be many people using the label with no regard for any "official" definition.

Secondly (and I hope this won't happen), such a free-floating label is very vulnerable to political (ab)use. We open ourselves up to arguments about whether or not someone is a "true" EA, or schisms between various "official" definitions. At risk of bringing up old disagreements, the arguments about vegetarian catering at last year's EA Global were already veering in this direction.

This seems to me to have been a common fate for vague group nouns over the years, with feminism being the most obvious example. We don't want to have wars between the second- and third-wave EAs!

My preferred solution is to avoid "EA" as a noun. Apart from the dangers I mentioned above, its origin as a label for an existing group of people gives it all sorts of connotations that are only really valid historically: rationalist thinking style, frank discussion norms, appreciation of contrarianism ... not to mention being white, male, and highly educated. But practically, having such a label is just too useful.

The only other suggestion I can think of is to make a clearly defined group for which we have community norms. For lack of a better name, we could call it "CEA-style EA". Then the CEA website could include a page that describes the core values of "CEA-style EAs" and some expectations of behaviour. At that point we again have a clearly defined group with a clear membership policy, and policing the border becomes a much easier job.

In practice, you probably wouldn't want an explicit application process; rather, it would be something you can claim for yourself - unless the group arbiter (CEA) has actively decreed that you cannot. Indeed, even if someone has never claimed to be a "CEA-style EA", declaring that they do not meet the standard can send a powerful signal.
