Comment author: Evan_Gaensbauer 05 May 2018 02:38:05PM 2 points

Based on a suggestion from Michael Plant, I'm collapsing the few posts I made on the EA Forum on this topic into one. Since a comment made by user Ronja_Lutz will be deleted alongside the post it was under, I'm reproducing it here for ease of response later.

Thank you - this sounds like it will be very valuable for our local reading/discussion group! In previous discussions, we were struggling a bit with the term "suffering", since we couldn't find a clear definition for it (we read Thomas Metzinger's paper, but didn't find it very useful). Do you have any recommendations for that, too?


Reducing Wild Animal Suffering Literature Library: Introductory Materials, Philosophical & Empirical Foundations

These reading modules were put together by members of the group Wild Animal Welfare Project Discussion as part of the RWAS Literature Library Project. This series of articles and essays together lays out crucial considerations explaining and underpinning the reduction of wild animal suffering (RWAS) as a potential focus area for effective...

Wild Animal Welfare Project Discussion: A One-Year Strategic Review

Summary: One year ago, I started the Facebook group Wild Animal Welfare Project Discussion to coordinate R&D projects in the network of wild animal suffering reducers in effective altruism, and as part of a broader project of figuring out how to coordinate and develop causes within effective altruism. This is...
Comment author: RomeoStevens 04 May 2018 10:14:54PM 1 point

I wasn't commenting on the overall intention but on enumerations of causal levers outlined by economists in the talks given. I was objecting to the frame that these causal levers are obfuscated. I think presenting them as such is a way around them being low status to talk about directly.

Comment author: Evan_Gaensbauer 04 May 2018 10:36:29PM 1 point

Thanks for the context. That makes a lot of sense. I've undone my downvote on your parent comment, upvoted it, and also upvoted the above. (I think it's important, as awkward as it might be, for rationalists and effective altruists to explicate their reasoning at various points throughout their conversation, and how they update at the end, to create a context of rationalists intending their signals to be clear and received without ambiguity. It's hard to get humans to treat each other with excellence, so if our monkey brains force us to treat each other like mere reinforcement learners, rationalists might as well be transparent and honest about it.)

It would appear the causal levers aren't obfuscated. Which ones do you expect are the most underrated?

Comment author: RyanCarey 02 May 2018 11:32:16PM 1 point

I think this has turned out really well Max. I like that this project looks set to aid with movement growth while improving the movement's intellectual quality, because the content is high-quality and representative of current EA priorities. Maybe the latter is the larger benefit, and probably it will help everyone to feel more confident in accelerating the movement growth over time, and so I hope we can find more ways to have a similar effect!

Comment author: Evan_Gaensbauer 04 May 2018 07:57:48PM 2 points

What the CEA says are current EA priorities is at odds with what many others think EA's current priorities are. By putting more emphasis on x-risk reduction, the CEA appears to be smuggling what it thinks the proportional distribution of EA's priorities should be into a message about what EA's priorities actually are, in a way that undermines the perspective of thousands of effective altruists. So the idea that this handbook will increase the movement's intellectual quality rests on definitions of quality and representation for EA that many effective altruists don't share. I think this and future editions of the EA Handbook should be regarded by the community as drafts from the CEA until the CEA carries out a project of getting as broad and substantial a swathe of feedback from important community actors as it can. This doesn't have to be a program of populist democracy where each self-identified effective altruist gets a vote. But the CEA could run the EA Handbook by EA organizations that are just as crucial to effective altruism as the CEA but don't have the phrase 'effective altruism' in their name, like ACE, GiveWell, or any other organizations which are cause-specific but are community pillars nonetheless.

Comment author: Maxdalton 04 May 2018 05:02:53PM 6 points

(Copying across some comments I made on Facebook which are relevant to this.)

Thanks for the passionate feedback, everyone. Whilst I don’t agree with all of the comments, I’m sorry for the mistakes I made. Since some of the comments above make similar points, I’ll try to give general replies in some main-thread comments. I’ll also be reaching out to some of the people in the thread above to try to work out the best way forward.

My understanding is that the main worry that people have is about calling it the Effective Altruism Handbook vs. CEA’s Guide to Effective Altruism or similar. For the reasons given in my reply to Scott above, I think that calling it the EA Handbook is not a significant change from before: unless we ask Ryan to take down the old handbook, then whatever happens, there will be a CEA-selected resource called the EA Handbook. For reasons given above and below, I think that the new version of the Handbook is better than the old. I think that there is some value in explicitly replacing the old version for this reason, and since “EA Handbook” is a cleaner name. However, I do also get people’s worries about this being taken to represent the EA community as a whole. For that reason, I will make sure that the title page and introduction make clear that this is a project of CEA, and I will make clear in the introduction that others in the community would have selected different essays.

My preferred approach would then be to engage with people who have expressed concern, and see if there are changes we can make that alleviate their concerns (such as those we already plan to make based on Scott’s comment). If it appears that we can alleviate most of those concerns whilst retaining the value of the Handbook from CEA’s perspective, it might be best to call it the Centre for Effective Altruism’s EA Handbook. Otherwise, we would rebrand. I’d be interested to hear in comments whether there are specific changes (articles to add/take away/design things) that would reassure you about this being called the EA Handbook.

In this comment I’ll reply to some of the more object-level criticisms. I want to apologize for how this seemed to others, but also give a clearer sense of our intentions. I think that it might seem that CEA has tried merely to push AI safety as the only thing to work on. We don’t think that, and that wasn’t our intention. Obviously, poorly realized intentions are still a problem, but I want to reassure people about CEA’s approach to these issues.

First, re there not being enough discussion of portfolios/comparative advantage, this is mentioned in two of the articles (“Prospecting for Gold” and “What Does (and Doesn't) AI Mean for Effective Altruism?”). However, I think that we could have emphasised this more, and I will see if it’s possible to include a full article on coordination and comparative advantage.

Second, I’d like to apologise for the way the animal and global health articles came across. Those articles were commissioned at the same time as the long-term future article, and they share a common structure: What’s the case for this cause? What are some common concerns about that cause? Why might you choose not to support this cause? The intention was to show how many assumptions underlie a decision to focus on any cause, and to map out some of the debate between the different cause areas, rather than to illicitly push the long-term future. It looks like this didn’t come across, sorry. We didn’t initially commission sub-cause profiles on government, AI and biosecurity, which explains why those more specific articles follow a different structure (mostly talks given at EA Global).

Third, I want to explain some of the reasoning behind including several articles on AI. AI risk is a more unusual area, which is more susceptible to misinterpretation than global health or animal welfare. Partly for this reason, we thought that it was sensible to include several articles on this topic, with the intention that this would provide more needed background and convey more of the nuance of the idea. I will talk with some of the commenters above to discuss if it makes sense to do some sort of merge so that AI dominates the contents page less.

Comment author: Evan_Gaensbauer 04 May 2018 07:51:37PM 4 points

What about the possibility that the Centre for Effective Altruism represents the community by editing the EA Handbook to reflect what the community values in spite of what the CEA concludes, excludes evaluations on which it currently diverges from the community, and still calls it the 'EA Handbook' instead of 'CEA's Guide to EA'? Obviously this wouldn't carry EA forward with what the CEA thinks is maximum fidelity, but it's clear many think the CEA is trying to spread the EA message with infidelity, while acting as though it's the only actor in the movement others can trust to carry that message. That not only looks hypocritical but undermines faith in the CEA.

Altering the handbook so it's more of a compromise between multiple actors in EA would redeem the reputation of the CEA. Without that, the CEA can't carry EA forward with fidelity at all, because the rest of the movement wouldn't cooperate with it. In the meantime, the CEA and everyone else can hammer out what we think is the most good here on the EA Forum. If broader conclusions are drawn that line up with the CEA's evaluation, based on a consensus that the CEA had the best arguments behind its perspective, those can be included in the next edition of the EA Handbook. Again, from the CEA's perspective, that might seem like deliberately compromising the fidelity of EA in the short term to appease others. But again, from the perspective of the CEA's current critics, the reason they're criticizing the 2nd edition of the EA Handbook is that they perceive themselves as protecting the fidelity of EA from the Centre for Effective Altruism.

This could solve other contentious issues in EA, such as consideration of both s-risks and x-risks from AI. The EA Handbook could be published as close to identically as possible in multiple languages, which would prevent the CEA from selling EA one way in English and the EAF selling it another way in German, creating trust issues that would down the road become sources of conflict, not unlike the criticism the EA Handbook, 2nd edition, is receiving now. Ultimately, this would be the CEA making a relatively short-term compromise to ensure the long-term fidelity of EA by demonstrating itself to be a delegate and representative agency the EA community can still have confidence in.

Comment author: DavidMoss 03 May 2018 08:12:59PM 8 points

My anecdata is that it's very high, since people are heavily influenced by such norms and (imagined) peer judgement.

Cutting the other way, however, people who are brought into EA by such social effects (e.g. because they were hanging around friends who were EA, so they became involved too, rather than in virtue of always having had intrinsic EA belief and motivation) would be much more vulnerable to value drift once those social pressures change. I think this is behind a lot of the cases of value drift I've observed.

When I was systematically interviewing EAs for a research project, this distinction between social-network EAs and always-intrinsic EAs was one of the clearest and most important distinctions that arose. One might imagine that social-network EAs would be disproportionately less involved, more peripheral members, whereas the always-intrinsic EAs would be more core, but actually the tendency was roughly the reverse. The social-network EAs were often very centrally positioned in higher staff positions within orgs, whereas the always-intrinsic EAs were often off independently doing whatever they thought was most impactful, without being very connected.

Comment author: Evan_Gaensbauer 03 May 2018 10:36:37PM 2 points

It appears the best of both worlds might be to seed a local EA presence where the initial social network is composed of individuals who were always intrinsically motivated by EA and who were also friends. I wouldn't be surprised if that's the story behind many local EA communities which became well-organized independently of one another. Of course, if this is the key to building local EA presences as social networks that tend toward lower rates of value drift, the kind of data we're collecting so far won't be applicable to what we want to learn for long. The anecdata of EAs who have been in the community since before there was significant investment in and direction of movement growth won't be relevant when we're trying to systematize that effort in a goal-driven fashion. As EA enters another stage of organization, it's a movement structured fundamentally differently from how it organically emerged from a bunch of self-reflective do-gooders finding each other on the internet 5+ years ago.

Comment author: Denise_Melchin 02 May 2018 10:24:25AM 1 point

I don’t think of having a (very) limited pool of funders who judge your project as such a negative thing. As it’s been pointed out before, evaluating projects is very time intensive.

You’re also implicitly assuming that there’s little information in the rejection of funders. I think if you have been rejected by 3+ funders, where you hopefully got a good sense for why, you should seriously reconsider your project.

Otherwise you might fall prey to the unilateralist’s curse - most people think your project is not worth funding, possibly because it has some risk of causing harm (either directly or indirectly by stopping others from taking up a similar space) but you only need one person who is not dissuaded by that.

Comment author: Evan_Gaensbauer 03 May 2018 03:57:00AM 2 points

I think if you have been rejected by 3+ funders, where you hopefully got a good sense for why, you should seriously reconsider your project.

As Peter hints at below, and as I've mentioned in another comment, the problem appears to be that as soon as smaller donors receive info that a project's funding application was rejected by a more influential funder, such as the EA Grants, they reject it too. So what some projects are experiencing isn't serial rejection by three independent funders, but rejection after it becomes common knowledge that the first funder rejected them. The problem appears to be that the funders with the most money or the best affective reputation in EA are implicitly assumed to have the soundest approaches for assessing projects as well, which shouldn't be the case.

Comment author: Denkenberger 02 May 2018 10:44:25PM 1 point

I haven't seen the launch of 2018 EA grants - could you link to it?

Comment author: Evan_Gaensbauer 03 May 2018 03:51:42AM 0 points

I heard a round of EA Grants applications had opened for this year, but that appears not to currently be the case according to the EA Grants website. I was mistaken. I did hear from community members, though not directly from anyone at the CEA, that there will be more EA Grants, and I assume applications will open at some point, but the CEA hasn't said when.

Comment author: Gregory_Lewis 02 May 2018 06:10:23PM 4 points

Thanks for the even-handed explication of an interesting idea.

I appreciate the example you gave was more meant as illustration than proposal. I nonetheless wonder whether further examination of the underlying problem might lead to ideas drawn tighter to the proposed limitations.

You note this set of challenges:

  1. Open Phil targets larger grantees
  2. EA funds/grants have limited evaluation capacity
  3. Peripheral EAs tend to channel funding to more central groups
  4. Core groups may have trouble evaluating people, which is often an important factor in whether to fund projects.

The result is that a good person (but one not known to the right people) with a good small idea is nonetheless left out in the cold.

I'm less sure about #2 - or rather, whether this is the key limitation. Max Dalton wrote on one of the FB threads linked:

In the first round of EA Grants, we were somewhat limited by staff time and funding, but we were also limited by the number of projects we were excited about funding. For instance, time constraints were not the main limiting factor on the percentage of people we interviewed. We are currently hiring for a part-time grants evaluator to help us to run EA Grants this year[...]

FWIW (and non-resiliently), I don't look around and see lots of promising but funding starved projects. More relevantly, I don't review recent history and find lots of cases of stuff rejected by major funders then supported by more peripheral funders which are doing really exciting things.

If not, then the idea here (in essence, of crowd-sourcing evaluation to respected people in the community) could help. Yet it doesn't seem to address #3 or #4.

If most of the money (even from the community) ends up going through the 'core' funnel, then a competitive approach would be advocacy to these groups to change their strategy, instead of providing a parallel route and hoping funders will come.

More importantly, if funders generally want to 'find good people', crowd-sourced project evaluation only helps so much. For people more on the periphery of the community, this uncertainty from funders will remain even if the anonymised feedback on the project is very positive.

Per Michael, I'm not sure what this idea has over (say) posting a 'pitch' on this forum, doing a kickstarter, etc.

Comment author: Evan_Gaensbauer 02 May 2018 09:08:34PM 3 points

Edit: I heard a round of EA Grants applications had opened for this year, but that appears not to currently be the case according to the EA Grants website. I was mistaken. I did hear from community members, though not directly from anyone at the CEA, that there will be more EA Grants, and I assume applications will open at some point, but the CEA hasn't said when.

It should be noted the EA Grants and the EA Funds are different programs with different issues. Last year the EA Grants were limited by staff time, but I don't recall anyone directly saying that was the case with the EA Funds. There is another round of EA Grants this year, but no data has come out about that yet. I expect the CEA is putting more staff time on it to solve the most obvious flaw with the EA Grants last year.

Each of the EA Funds has been operating separately. Last year, when there were infrequent updates about the EA Funds, it turned out the CEA was experiencing technical delays in implementing the EA Funds website. Since then, while it's charitably assumed (as I think is fair) each of the fund managers might be too busy with their day jobs at the Open Philanthropy Project to afford as much attention to fund management, neither the CEA nor Open Phil has confirmed such speculation. The Funds also vary in their performance. Lewis Bollard has continually made many smaller grants to several smaller projects from the Animal Welfare Fund, contrasted with Nick Beckstead, who has made only one grant from each of the two funds he manages, the Far Future Fund and the EA Community Fund. I contacted the CEA, and they let me know they intend to release updates on the Far Future Fund and EA Community Fund (which I assume will include disclosures of grants they've been tabling the last few months) by July.

FWIW (and non-resiliently), I don't look around and see lots of promising but funding starved projects. More relevantly, I don't review recent history and find lots of cases of stuff rejected by major funders then supported by more peripheral funders which are doing really exciting things.

Per Michael, I'm not sure what this idea has over (say) posting a 'pitch' on this forum, doing a kickstarter, etc.

One problem for smaller organizations with smaller, less experienced teams is that they don't know how to independently and effectively pitch or raise funds for their project, even when they're good people with good ideas. Compounding this is a sense of dejection among nascent community projects once they've been rejected for grants by the big funders, especially among otherwise qualified EA community members who don't know how to navigate the non-profit sector. This is feedback I've gotten from community members who know of projects which didn't get off the ground, and that they faltered quietly might be why they go unnoticed. That stated, I don't think there is a ton of promising but funding-starved projects around.

On the flip side, I've heard some community members say they're overlooked by donors who are earning to give after they've been overlooked by, e.g., the EA Grants. This is apparently based on the reasoning that, since as individual donors they don't have the bandwidth to evaluate projects, they should defer to the apparently expert judgement of the CEA; and since the CEA didn't fund the project, individual would-be donors conclude a project isn't fit to receive funding from them either. This creates a ludicrous Catch-22 in which projects won't get funding from smaller donors until they have authentic evidence of the quality of their project in the form of donations from big donors, which, if the projects had it, would mean they wouldn't need to approach the smaller donors in the first place. This isn't tricky epistemology, or even the CEA unwittingly creating perverse incentives. Given the EA Grants said they didn't have the bandwidth to evaluate a lot of potentially valuable projects, for other donors to decide against donating to small projects because those projects didn't receive EA Grants is unsound. It's just lazy reasoning, because smaller donors don't have the bandwidth to properly evaluate projects either.

Ultimately I think we shouldn't hold single funders like the CEA and Open Phil primarily accountable for this state of affairs; the community needs to independently organize to better connect funding with promising projects. I think this is a problem in demand of a solution, but something like a guide on how to post pitches or successfully crowdfund a project would work better than creating a brand-new EA crowdfunding platform. Joey Savoie recently wrote a post about how to write posts on the EA Forum to get new causes into EA, as a long-time community member who himself has lots of experience writing similar pitches.

Unfortunately, advocating for core funding groups to change their strategy has practical costs apparently so high that appeals like this on the EA Forum feel futile. Direct advocacy to change strategy is too simplistic, and long essays on the EA Forum that lay out the epistemological differences of individual effective altruists who diverge from the CEA or Open Phil receive little to no feedback. I think from the inside these organizations focus so narrowly on maximizing goal satisfaction that they don't have the time to alter their approach in light of critical feedback from the community, all the while feeling it's important to carry on with the very same approaches others in the community are unhappy with. So while I think in this instance a crowdfunding platform is not the right solution, advocating for changes to existing funds seems noncompetitive as well, and designing other parallel routes for funding is something I'd encourage effective altruists to do.
