
EA Grants applications are now open

I’m announcing that the Centre for Effective Altruism’s (CEA) EA Grants project has now been reopened for public applications. If you have a project that you think is worth funding, you can fill out our short application here. Applications close on Sunday, October 14. We expect to run regular...
Comment author: HaydnBelfield 21 August 2018 08:25:27PM 1 point

"Cause areas shouldn't be tribes"

"We shouldn't entrench existing cause areas"

"Some methods of increasing representativeness have the effect of entrenching current cause areas and making intellectual shifts harder."

Does this mean you wouldn't be keen on e.g. "cause-specific community liaisons" who mainly talk to people with specific cause-prioritisations, maybe have some money to back projects in 'their' cause, etc.? (I'm thinking of something analogous to an Open Philanthropy Project Program Officer.)

Comment author: Kerry_Vaughan 21 August 2018 11:04:48PM 2 points

Does this mean you wouldn't be keen on e.g. "cause-specific community liaisons" who mainly talk to people with specific cause-prioritisations, maybe have some money to back projects in 'their' cause, etc.? (I'm thinking of something analogous to an Open Philanthropy Project Program Officer.)

I don't think I would be keen on this as stated. I would be keen on a system by which CEA talks to more people with a wider variety of views, but entrenching particular people or particular causes seems likely to be harmful to the long-term growth of the community.

Comment author: Justis 19 August 2018 09:08:31PM 6 points

Disclosure: I copyedited a draft of this post, and do contract work for CEA more generally.

I don't think that longtermism is a consensus view in the movement.

The 2017 EA Survey results had more people saying poverty was the top priority than AI and non-AI far future work combined. Similarly, AMF and GiveWell got by far the most donations in 2016, according to that same survey. While I agree that someone can be a longtermist and think that practicality concerns prioritize near-term good work for now anyway, I don't think this is a very compelling explanation for these survey results.

As a first pass heuristic, I think EA leadership would guess correctly about community-held views more often if they held the belief "the modal EA-identifying person cares most about solving suffering that is happening in the world right now."

Comment author: Kerry_Vaughan 21 August 2018 10:55:53PM 6 points

I agree that I might be wrong about this, but it's worth noting that I wasn't trying to make a claim about the modal EA. When talking about the emerging consensus I was implicitly referring to the influence-weighted opinion of EAs or something like that. This could be an area where I don't have access to a representative sample of influential EAs which would make it likely that the claim is false.

Comment author: weeatquince 15 August 2018 11:50:28PM * 26 points

We would like to hear suggestions from forum users about what else they might like to see from CEA in this area.

Here is my two cents. I hope it is constructive:


1.

The policy is excellent but the challenge lies in implementation.

Firstly I want to say that this post is fantastic. I think you have got the policy correct: that CEA should be cause-impartial, but not cause-agnostic and CEA’s work should be cause-general.

However I do not think it looks, from the outside, like CEA is following this policy. Some examples:

  • EA London staff had concerns that they would need to be more focused on the far future in order to receive funding from CEA.

  • You explicitly say on your website: "We put most of our credence in a worldview that says what happens in the long-term future is most of what matters. We are therefore more optimistic about others who roughly share this worldview."[1]

  • The example you give of the new EA handbook

  • There is a close association with 80,000 Hours, which is explicitly focusing much of its effort on the far future.

These are all quite subtle things, but collectively they give an impression that CEA is not cause impartial (that it is x-risk focused). Of course this is a difficult thing to get correct. It is difficult to balance saying 'our staff members believe cause X is important' (a useful factoid that should definitely be said) with putting across a strong front of cause impartiality.


2.

Suggestion: CEA should actively champion cause impartiality

If you genuinely want to be cause impartial I think most of the solutions to this are around being super vigilant about how CEA comes across. Eg:

  • Have a clear internal style guide that sets out to staff good and bad ways to talk about causes

  • Have 'cause impartiality' as a staff value

  • If you do an action that does not look cause impartial (say EA Grants mostly grants money to far future causes) then just acknowledge this and say that you have noted it and explain why it happened.

  • Public posts like this one setting out what CEA believes

  • If you want to do lots of "prescriptive" actions split them off into a sub project or a separate institution.

  • Apply the above retroactively (remove lines from your website that make it look like you are only future focused)

Beyond that, if you really want to champion cause impartiality you may also consider extra things like:

  • More focus on cause prioritisation research.

  • Hiring people who value cause impartiality / cause prioritisation research / community building, above people who have strong views on what causes are important.


3.

Being representative is about making people feel listened to.

Your section on representativeness feels like you are trying to pin down a way of finding an exact number, so that you can say we have this many articles on topic x and this many on topic y and so on. I am not sure this is quite the correct framing.

Things like the EA handbook should (as a lower bound) have enough of a diversity of causes mentioned that the broader EA community does not feel misrepresented but (as an upper bound) not so much that CEA staff [2] feel like it is misrepresenting them. Anything within this range seems fine to me. (Eg. with the EA handbook both groups should feel comfortable handing this book to a friend.) Although I do feel a bit like I have just typed 'just do the thing that makes everyone happy' which is easier said than done.

I also think that "representativeness" is not quite the right issue anyway. The important thing is that people in the EA community feel listened to and feel like what CEA is doing represents them. The % of content on different topics is only part of that. The other parts of the solution are:

  • Coming across like you listen: see the aforementioned points on championing cause impartiality. Also expressing uncertainty, mentioning that there are opposing views, giving two sides to a debate, etc.

  • Listening -- ie. consulting publicly (or with trusted parties) wherever possible.

If anything, getting these two things correct is more important than getting the exact percentage of your work to be representative.


Sam :-)


[1] https://www.centreforeffectivealtruism.org/a-three-factor-model-of-community-building

[2] Unless you have reason to think that there is a systematic bias in staff, eg if you actively hired people because of the cause they cared about.

Comment author: Kerry_Vaughan 18 August 2018 12:25:39AM * 11 points

Thanks Sam! This is really helpful. I'd be interested in talking on Skype about this sometime soon (just emailed you about it). Some thoughts below:

Is longtermism a cause?

One idea I've been thinking about is whether it makes sense to treat longtermism/the long-term future as a cause.

Longtermism is the view that most of the value of our actions lies in what happens in the future. You can hold that view and also hold the view that we are so uncertain about what will happen in the future that doing things with clear positive short-term effects is the best thing to do. Peter Hurford explains this view nicely here.

I do think that longtermism as a philosophical point of view is emerging as an intellectual consensus in the movement. Yet, I also think there are substantial and reasonable disagreements about what that means practically speaking. I'd be in favor of us working to ensure that people entering the community understand the details of that disagreement.

My guess is that while CEA is very positive on longtermism, we aren't anywhere near as positive on the cause/intervention combinations that longtermism typically suggests. For example, personally speaking, if it turned out that recruiting ML PhDs to do technical AI-Safety didn't have a huge impact I would be surprised but not very surprised.

Threading the needle

My feeling as I've been thinking about representativeness is that getting this right requires threading a very difficult needle because we need to optimize against a large number of constraints and considerations. Some of the constraints include:

  • Cause areas shouldn't be tribes -- I think cause area allegiance is operating as a kind of tribal signal in the movement currently. You're either in the global poverty tribe, the X-risk tribe, or the animal welfare tribe, and people tend to defend the views of the tribe they happen to be associated with. I think this needs to stop if we want to build a community that can actually figure out how to do the most good and then do it. Focusing on cause areas as the unit of analysis for representativeness entrenches the tribal concern, but it's hard to get away from because it's an easy-to-understand unit of analysis.
  • We shouldn't entrench existing cause areas -- we should be aiming for an EA that has the ability to shift its consensus on the most pressing problems as we learn more. Some methods of increasing representativeness have the effect of entrenching current cause areas and making intellectual shifts harder.
  • Cause-impartiality can include having a view -- cause impartiality means that you do an impartial calculation of impact to determine what to work on. Such a calculation should lead to developing views on what causes are most important. Intellectual progress probably includes decreasing our uncertainty and having stronger views.
  • The view of CEA staff should inform, but not determine our work -- I don't think it's realistic or plausible for CEA to take actions as if we have no view on the relative importance of different problems, but it's also the case that our views shouldn't substantially determine what happens.
  • CEA should sometimes exercise leadership in the community -- I don't think that social movements automatically become excellent. Excellence typically has to be achieved on purpose by dedicated, skilled actors. I think CEA will often do work that represents the community, but will sometimes want to lead the community on important issues. The allocation of resources across causes could be one such area for leadership although I'm not certain.

There are also some other considerations around methods of improving representativeness. For example, consulting established EA orgs on representativeness concerns has the effect of entrenching the current systems of power in a way that may be bad, but that gives you a sense of the consideration space.

CEA and cause-impartiality

Suggestion: CEA should actively champion cause impartiality

I just wanted to briefly clarify that I don't think CEA taking a view in favor of longtermism or even in favor of specific causes that are associated with longtermism is evidence against us being cause-impartial. Cause-impartiality means that you do an impartial calculation of the impact of the cause and act on the basis of that. This is certainly what we think we've done when coming to views on specific causes although there's obviously room for reasonable disagreement.

I would find it quite odd if major organizations in EA (even movement building organizations) had no view on what causes are most important. I think CEA should be aspiring to have detailed, nuanced views that take into account our wide uncertainty, not no views on the question.

Making people feel listened to

I broadly agree with your points here. Regularly talking to and listening to more people in the community is something that I'm personally committed to doing.

Your section on representativeness feels like you are trying to pin down a way of finding an exact number, so that you can say we have this many articles on topic x and this many on topic y and so on. I am not sure this is quite the correct framing.

Just to clarify, I also don't think trying to find a number that defines representativeness is the right approach, but I also don't want this to be a purely philosophical conversation. I want it to drive action.

Comment author: Khorton 15 August 2018 02:00:04PM 2 points

We do however recognize that when consulting others it’s easy to end up selecting for people with similar views and this can leave us with blind spots in particular areas. We are thinking about how to expand the range of people we get advice from. While we cannot promise to enact all suggestions, we would like to hear suggestions from forum users about what else they might like to see from CEA in this area.

It seems like you currently only consult people for EA Global content. Do you want to get advice on how to have a wider range of consultants for EA Global content, or are you asking for something else?

Comment author: Kerry_Vaughan 15 August 2018 09:06:50PM * 4 points

We're asking for feedback on who we should consult with in general, not just for EA Global.

In particular, the usual process of seeking advice from people we know and trust is probably producing a distortion where we aren't hearing from a true cross-section of the community, so figuring out a different process might be useful.

Comment author: remmelt 15 August 2018 02:36:30PM 7 points

What are some open questions that you’d like to get input on here (preferably of course from people who have enough background knowledge)?

This post reads to me like an explanation of why your current approach makes sense (which I find mostly convincing). I’d be interested in what assumptions you think should be tested the most here.

Comment author: Kerry_Vaughan 15 August 2018 09:01:16PM 4 points

The biggest open questions are:

1) In general, how can we build a community that is both cause impartial and also representative?

2) If we want to aim for representativeness, what reference class should we target?

Comment author: Milan_Griffes 15 August 2018 04:17:46PM * 4 points

it’s unclear what reference class we should be using when making our work more representative... The best solution is likely some hybrid approach, but it’s unclear precisely how such an approach might work.

Could you say more about what CEA is planning to do to get more clarity about who it should represent?

Comment author: Kerry_Vaughan 15 August 2018 08:58:48PM 2 points

At the moment our mainline plan is this post with a request for feedback.

I've been talking with Joey Savoie and Tee Barnett about the issue. I intend to consult others as well, but I don't have a concrete plan for who to contact.


CEA on community building, representativeness, and the EA Summit

(This post was written by Kerry Vaughan and Larissa Hesketh-Rowe with contributions from other members of the CEA staff) There has been discussion recently about how to approach building the EA community, in light of last weekend’s EA Summit and this post on problems with EA representativeness and how...
Comment author: Milan_Griffes 09 February 2018 01:27:23AM 0 points

When is the next round of EA grants opening?

Are you considering accepting applications on a rolling basis?

Comment author: Kerry_Vaughan 11 February 2018 05:27:33PM 2 points

Currently planning to open EA Grants applications by the end of the month. I plan for the application to remain open so that I can accept applications on a rolling basis.

Comment author: weeatquince 22 December 2017 03:19:17PM * 3 points

This is fantastic. Thank you for writing this up. Whilst reading I jotted down a number of thoughts, comments, questions and concerns.

.

ON EA GRANTS

I am very excited about this and very glad that CEA is doing more of this. How to best move funding to the projects that need it most within the EA community is a really important question that we have yet to solve. I saw a lot of people with some amazing ideas looking to apply for these grants.

1

"with an anticipated budget of around £2m"

I think it is quite plausible that £2m is too low for the year. Not having enough funding increases the costs to applicants (time spent applying) and to you (time spent assessing) relative to the benefits (funding moved), especially if there are applicants above the bar for funding whom you cannot afford to fund. Also, I had this thought before reading that one of your noted mistakes was that you "underestimated the number of applications"; it feels like you might still be making this mistake.

2

"mostly evaluating the merits of the applicants themselves rather than their specific plans"

Interesting decision. Seems reasonable. However I think it does have a risk of reducing diversity, and I would be concerned that applicants would be judged on their ability to philosophise in an academic Oxford manner, etc.

Best of luck with it

.

OTHER THOUGHTS

3

"encouraging more people to use Try Giving,"

Could CEA comment or provide advice to local group leaders on whether they would want local groups to promote the GWWC pledge or the Try Giving pledge, or when one might be better than the other? To date the advice seems to have been to push the Pledge as much as possible and not Try Giving.

4

"... is likely to be the best way to help others."

I do not like the implication that there is a single answer to this question regardless of individuals' moral frameworks (utilitarian / non-utilitarian / religious / etc.) or skills and background. Where the mission is to have an impact as a "global community of people...", the research should focus on supporting those people to do whatever has the biggest impact given their positions.

5 Positives

"Self-sorting: People tend to interact with others who they perceive are similar to themselves"

This is a good thing to have picked up on.

"Community Health"

I am glad this is a team.

"CEA’s Mistakes"

I think it is good to have this written up.

6

"Impact review"

It would have been interesting to see an estimates for costs (time/money) as well as for the outputs of each team.

.

WELL DONE FOR 2017. GOOD LUCK FOR 2018!

Comment author: Kerry_Vaughan 02 January 2018 09:58:25PM 2 points

I think it is quite plausible that £2m is too low for the year. Not having enough funding increases the costs to applicants (time spent applying) and to you (time spent assessing) relative to the benefits (funding moved), especially if there are applicants above the bar for funding whom you cannot afford to fund. Also, I had this thought before reading that one of your noted mistakes was that you "underestimated the number of applications"; it feels like you might still be making this mistake.

That's fair. My thinking in choosing £2m was that we would want to fund more projects than we had money to fund last year, but that we would have picked much of the low-hanging fruit, so there'd be less to fund.

In any case, I'm not taking that number too seriously. We should fund all the projects worth funding and raise more money if we need it.
