Comment author: Arepo 01 February 2018 12:39:46AM 6 points [-]

I also feel that, perhaps not now but if they grow much more, it would be worth sharing the responsibility among more than just one person per fund. They wouldn't have to disagree vociferously on many subjects, just provide a basic sanity check on controversial decisions (and spreading the work might speed things up if research time is a limiting factor).

Comment author: Evan_Gaensbauer 01 February 2018 09:38:25PM 12 points [-]

I've received feedback from multiple points in the community that the EA Funds haven't been managed in as timely or as professional a manner as some would prefer. One apparent factor is that the fund managers are all program officers at the Open Philanthropy Project, a job which, from the fund managers' perspective, is most of the time more crucial than anything that can be done with the EA Funds. Thus, doing more than a full-time work-equivalent (I don't know how much Open Phil staff work each week) may mean management of the EA Funds gets overlooked. Ben West also made a recent post in the 'Effective Altruism' Facebook group asking about the EA Funds, and the response from the Centre for Effective Altruism (CEA) was that they hadn't had a chance to update the EA Funds webpage with data on what grants had been made in recent months.

Given that at the current level of funding the EA Funds aren't being mismanaged, but rather are being more neglected than donors and effective altruists would like, I'd say it might already be time to assign more managers to the funds. Picking Open Phil program officers to run the funds was the best bet for the community to begin with, as they had the best reputation for acumen going in, but if in practice it turns out Nick, Elie and Lewis only have enough time to manage grants at Open Phil (most of the time), it's only fair to donors that CEA assign more fund managers. What's more, I wouldn't want the attention of Open Phil program officers to be any more divided than it needs to be, as I consider their work more important than the management of the EA Funds as is.

If the apparent lack of community engagement regarding the EA Funds is on the part of the CEA team responsible for keeping the webpage updated, whose time may also be divided and dedicated to more important CEA projects than the EA Funds at any given point, that needs to be addressed. I understand the pressure of allocating enough money to project management that it gets done very effectively while, as an effective non-profit, not wanting to let overhead expand too much and result in inefficient uses of donor money. If that's the case for CEA staff dividing their time between the EA Funds and more active projects, I think it'd be appropriate for CEA to hire a dedicated communications manager for the EA Funds overall, and/or someone who will update the webpage with greater frequency. This could probably be done with one additional full-time-equivalent staff hire or less. If it's not a single new position at CEA, a part-time-equivalent CEA staffer could have their responsibilities extended to ensuring there's a direct channel between the EA Funds and the EA community.

In the scope of things, such as the money moved through EA overall, EA Funds management may seem a minor issue. But given its impact on values integral to EA, like transparency and accountability, as well as on ensuring high-trust engagement between EA donors and EA organizations, options like those I've listed above seem important to implement. If they aren't implemented, I'd think there's a greater need for adding external oversight to ensure anything is being done with the EA Funds.

Comment author: avacyn 28 January 2018 08:03:54PM 2 points [-]

Nice! I really like the idea of EAs getting ahead by coordinating in unconventional ways.

The ideas in "Building an EA social safety net" could be indirectly encouraged by just making EA a tighter community with more close friendships. I'm pretty happy giving an EA friend a 0-interest loan, but I'd be hesitant to do that for a random EA. By e.g. organizing social events where close friendships could form, more stuff like that would happen naturally. Letting these things happen naturally also makes them harder to exploit.

Comment author: Evan_Gaensbauer 29 January 2018 08:47:23PM 1 point [-]

One issue with this sort of thinking is that in practice setting up lots of events sometimes doesn't lead to people becoming this close. Some local effective altruism communities have members being roommates, working at the same organizations, and doing all the social stuff together. That waxes and wanes with how well organized it all is. Lots of EA community organizers will move from where they're from to another city, e.g., Berkeley, and on both ends switching the person who takes the role of de facto event organizer means organization stagnates while someone else gets used to doing it all. That this apparently happens often means sustaining a space in which close friendships are likely to occur is hard. Doing it consistently over multiple years appears, in hindsight, to be hard. I don't know how good the evidence is for optimal methods of doing this.

Comment author: Joey 15 January 2018 01:51:43AM 3 points [-]

So I think we agree on some things and disagree on others. I think that getting large EA organizations to adopt the cause definitely helps but is not necessary. Animal rights as a whole, for example, is not mentioned at all by GiveWell or GWWC and is listed as a 2nd-tier area by 80,000 Hours (bit.ly/2DdxCqQ), but it is still pretty clearly endorsed by EA as a whole. If by EA orgs you mean EA orgs of any size, I do think that most cause areas accepted by the EA movement will get organizations started in them in time. I think that causes like wild animal suffering and positive psychology are decent examples of causes that have gotten some traction without major pre-existing organizations endorsing them. It might also come down to disagreements about definitions of "in EA".

I almost put your blogs into this post as a positive example of what I wish people would do, but I wanted to keep the post shorter. In general, I think your efforts on mental health have updated more than a few EAs in positive directions towards it, including myself. There has been some related external content and research on this topic in part because of your posts, and I would put a nontrivial chance on some EAs in the next 1-5 years focusing exclusively on this cause area and starting something in it. In general, I would expect adoption of new causes to be fairly slow and to start with small numbers of people and maybe one organization before expanding onto the standard go-to EA list.

I think if I were to guess what is holding back mental health / positive psych as a cause area, it would be the lack of a really strong, concrete charity to donate to. By strong charity, I mean strong cost-effectiveness analysis, but also a focus on a narrow set of interventions, a decent evidence base/track record, strong M&E, and being decently investigated by an external EA party (it would not have to be an org; it could be an individual). Something like Strong Minds might be a good fit for this.

Comment author: Evan_Gaensbauer 16 January 2018 03:07:28AM *  0 points [-]

I made this same point in the 'Effective Altruism' Facebook group a while ago, if anyone wants to follow the other public conversation on the topic. I wonder if it would be a good idea to post on the EA Forum summarizing these kinds of points and requesting evaluations or reviews of charities that rigorously implement effective positive psychology interventions.

Comment author: MichaelPlant 12 January 2018 12:11:13AM *  8 points [-]

I worry you've missed the most important part of the analysis. If we think about what it means for a "new cause to be accepted by the effective altruism movement", that would probably be either:

  1. It becomes a cause area touted by EA organisations like GiveWell, CEA, or GWWC. In practice, this involves convincing the leadership of those organisations. If you want to get a new cause in via this route, that's the end goal you need to achieve; writing good arguments is a means to that end.

  2. You convince individual EAs to change what they do. To a large extent, this also depends on convincing EA-org leadership, because that's who people look to for confirmation that a new cause has been vetted. It isn't necessarily stupid for individual EAs to defer to expert judgement: they might think "Oh, well if so-and-so aren't convinced about X, there's probably a reason for it".

This seems as good a time as any to re-plug the stuff I've done. I think these mostly meet your criteria, but fail in some key ways.

I first posted about mental health and happiness 18 months ago and explained why poverty is less effective than most think and mental health more effective. I think I was, at the time, lacking a particular charity recommendation (I now think Basic Needs and Strong Minds look like reasonable picks); I agree it's important that new cause suggestions have a 'shovel-ready' project.

I argued that you, whoever you are, probably don't want to donate to the Against Malaria Foundation. I explained why it's probably a mistake for EAs to focus too much on 'saving lives' at the expense of either 'improving lives' or 'saving humanity'.

Back in August I explained why drug policy reform should be taken seriously as a new cause. I agree that lacks a shovel-ready project too, but, if anything, I think there was too much depth and rigour there. I'm still waiting for anyone to tell me where my EV calcs have gone wrong and why drug policy reform wouldn't be more cost-effective than anything in GiveWell's repertoire.

Comment author: Evan_Gaensbauer 16 January 2018 03:00:04AM *  2 points [-]

I think this is missing some prior steps in how a cause can be built up in the effective altruism movement. For example, a focus on risks of astronomical future suffering ("s-risks") and on reducing wild animal suffering (RWAS), both largely inspired in EA by Brian Tomasik's work, have found success in the German-speaking world and increasingly globally throughout the movement. These are causes which have both largely developed without attention from either the Open Philanthropy Project (Open Phil) or the Centre for Effective Altruism (CEA) and its satellite projects (e.g., GWWC, 80,000 Hours, etc.).

Since the beginning of effective altruism, global poverty alleviation and global health have been the biggest focus areas. As the movement grew, I witnessed causes being developed through a mix of online coordination at the global level, through social networks like Facebook, mailing lists, and fora like LessWrong, and locally or regionally through non-profit organizations focused on outreach and research. This was the case for both AI safety and farm animal welfare, which proportionally didn't have nearly the representation in EA five years ago that they have now.

Certainly smaller focus areas like s-risk reduction and RWAS are receiving much less attention than others in EA. However, the fact that, across multiple organizations, each of those causes is funded by between $100k and $1 million USD, largely from individual effective altruists, is proof of concept that a cause can be built up without being touted by CEA or Open Phil. And it's not as if the trajectory of these causes looks bleak. They've been building growth momentum for years, and they're not showing signs of slowing. So how much success they achieve in the near future will provide more data about what's possible in getting a new cause into EA. What's more, at least RWAS is a cause that's on Open Phil's radar, so it's not like grants or endorsements of these causes from Open Phil or CEA couldn't happen in the future.

In general, I think developing a cause within the effective altruism community often precedes greater focus on it from the movement's flagship organizations, and that the process of development often follows the kinds of steps Joey outlined above. Obviously there could be more to the process than just that. I'm working on a post to introduce a project which builds on the kinds of steps Joey pointed out, and which you've already taken, to organize and coordinate causes in effective altruism.

Comment author: mhpage 10 January 2018 12:04:57PM 13 points [-]

This comment is not directly related to your post: I don't think the long-run future should be viewed as a cause area. It's simply where most sentient beings live (or might live), and therefore it's a potential treasure trove of cause areas (or problems) that should be mined. Misaligned AI leading to an existential catastrophe is an example of a problem that impacts the long-run future, but there are so, so many more. Pandemic risk is a distinct problem. Indeed, there are so many more problems even if you're just thinking about the possible impacts of AI.

Comment author: Evan_Gaensbauer 16 January 2018 02:43:44AM 2 points [-]

I agree with Jacy. Another point I'd add is that effective altruism is a young movement, focused on making updates and changing its goals as new and better information can be integrated into our thinking. This leads to various causes, interventions, and research projects in the movement undergoing changes which make them harder to describe.

For example, for a long time in EA, "existential risk reduction" was associated primarily with AI safety. In the last few years ideas from Brian Tomasik have materialized in the Foundational Research Institute and their focus on "s-risks" (risks of astronomical suffering). At the same time, organizations like Allfed are focused on mitigating existential risks which could realistically happen in the medium-term future, i.e., the next few decades, but the interventions themselves aren't as focused on the far future, e.g., the next few centuries and beyond.

However, x-risk and s-risk reduction dominate in EA through AI safety research as the favoured intervention, with a focus motivated by astronomical stakes. Lumping that all together could be called a "far future" focus. Meanwhile, 80,000 Hours advocates using the term "long-run future" for a focus on risks extending from the present to the far future which depend on policy regarding all existential risks, including s-risks.

I think finding accurate terminology for the whole movement to use is a constantly moving target in effective altruism. Obviously using common language optimally would be helpful, but debating and then coordinating usage of common terminology also seems like it'd be a lot of effort. As long as everyone is roughly aware of what each other is talking about, I'm unsure how much of a problem this is. It seems professional publications out of EA organizations, as longer reports which can afford the space to define terms, should do so. The EA Forum is still a blog, and since it's regarded as lower-stakes, I think it makes sense to be tolerant of differing terminology, although of course clarifications or expansions upon definitions should be posted in the comments, as above.

Comment author: Milan_Griffes 03 October 2017 01:22:58AM 2 points [-]

Minor thing: it'd be helpful if people who downvoted commented with their reason why.

Comment author: Evan_Gaensbauer 03 October 2017 03:10:49AM *  3 points [-]

Presumably it's because they either don't think this sort of drug policy reform is a priority, or, more likely, they don't think an announcement for conferences exclusive to what is still only a minor cause in the effective altruism community justifies its own post on the EA Forum.

Based on our investigation so far, US drug policy reform appears to be an impactful and tractable cause area.

Some users might just not visit the Forum often enough to have heard of Enthea's work before, so you could edit the post and add some hyperlinks to your other posts on the EA Forum so everyone will know the context of this post.

Comment author: Evan_Gaensbauer 26 August 2017 09:24:51PM 6 points [-]

Another specific part of life that isn't replicable for lots of effective altruists as compared to others is being fully able-bodied, or being in good health. One common but largely unspoken facet of life is that lots of people have problems with physical or mental illness which either cost money or hinder their ability to earn money as they otherwise would have been able to. So, including opportunity costs, the costs of health problems can be quite steep. This is the number one thing I think would affect all kinds of people, and so it's a primary consideration when accounting for what necessary, fixed costs would need to be added to a budget beyond the template provided above.

In response to EAGx Relaunch
Comment author: Evan_Gaensbauer 04 August 2017 07:17:55AM 1 point [-]

What's the timeframe in which CEA will be accepting applications to host/organize EAGx events?

Comment author: Linch 13 July 2017 05:56:57AM 0 points [-]

This seems like a perfectly reasonable comment to me. Not sure why it was heavily downvoted.

Comment author: Evan_Gaensbauer 14 July 2017 08:09:15AM 0 points [-]

Talking about people in the abstract, or in a tone that casts them as some kind of "other", is to generalize and stereotype. Or maybe generalizing and stereotyping people others them, and makes them too abstract to empathize with. Whatever the direction of causality, there are good reasons people might take my comment poorly. There are lots of skirmishes online in effective altruism between causes, and I expect most of us don't like all being lumped together in a big bundle, because it feels like under those circumstances at least a bunch of people in your ingroup will feel strawmanned. That's what my comment reads like. That wasn't my intention.

I'm just trying to be frank. On the Effective Altruism Forum, I try to follow Grice's maxims because I think writing in that style heuristically optimizes the fidelity of our words to the sort of epistemic communication standards the EA community aspires to, especially as inspired by the rationality community. I could do better on the maxims of quantity and manner/clarity sometimes, but I think I do a decent job on here. I know this isn't the only thing people will value in discourse. However, there are lots of competing standards for what the most appropriate discourse norms are, and nobody is establishing to others how their norms would not just maximize the satisfaction of their own preferences, but maximize the total or average satisfaction of what everyone values out of discourse. That seems like the utilitarian thing to do.

The effects of ingroup favouritism between competing cause selections in the community don't seem healthy for the EA ecosystem. If we want to get very specific, here's how finely the EA community can be sliced up by cause-selection-as-group-identity:

  • vegan, vegetarian, reducetarian, omnivore/carnist
  • animal welfarist, animal liberationist, anti-speciesist, speciesist
  • AI safety, x-risk reducer (in general), s-risk reducer
  • classical utilitarian, negative utilitarian, hedonic utilitarian, preference utilitarian, virtue ethicist, deontologist, moral intuitionist/none-of-the-above
  • global poverty EAs; climate change EAs?; social justice EAs...?

The list could go on forever. Everyone feels like they're representing not only their own preferences in discourse, but sometimes even those of future generations, all life on Earth, tortured animals, or fellow humans living in agony. Unless as a community we make a conscientious effort to reach some shared discourse norms which are mutually satisfactory to multiple parties or individual effective altruists, however they see themselves, communication failure modes will keep happening. There's strawmanning and steelmanning, and then there are representations of concepts in EA which fall in between.

I think if we as a community expect everyone to impeccably steelman everyone all the time, we're being unrealistic. Rapid growth of the EA movement is what organizations from various causes seem to be rooting for. That means lots of newcomers who aren't going to read all the LessWrong Sequences or Doing Good Better before they start asking questions and contributing to the conversation. When they get downvoted for not knowing the archaic codex that is evolved EA discourse norms, which aren't written down anywhere, they're going to exit fast. I'm not going anywhere, but if we aren't willing to be more charitable to people we at first disagree with than they are to us, this movement won't grow. People might be belligerent about, or alarmed by, the challenges EA presents to their moral worldview, but they're still curious. Spurning doesn't lead to learning.

All of the above refers only to specialized discourse norms within effective altruism. This would be on top of the complexity of effective altruists' private lives, all the usual identity politics, and otherwise the common decency and common sense we would expect of posters on the forum. All of that can already be difficult for diverse groups of people as is. But for all of us to go around assuming the illusion of transparency makes things fine and dandy with regard to how a cause is represented, without openly discussing it, is to expect too much of each and every effective altruist.

Also, as of this comment, my parent comment above has net positive 1 upvote, so it's all good.

Comment author: DavidNash 10 July 2017 09:04:47PM 0 points [-]

Might it be that 80k recommend X-risk because it's neglected (even within EA), and that if more than 50% of EAs had X-risk as their highest priority it would no longer be as neglected?

Comment author: Evan_Gaensbauer 10 July 2017 09:47:05PM -1 points [-]

I don't think that'd be the case, because from the inside perspective of someone already prioritizing x-risk reduction, that cause can appear at least thousands of times more important than literally anything else. This is based on an idea formulated by philosopher Nick Bostrom: astronomical stakes (the speaker in the linked video is Niel Bowerman, not Nick Bostrom). The ratio x-risk reducers think is appropriate for resources dedicated to x-risk relative to other causes is arbitrarily high. Lots of people think the argument is missing some important details, or ignoring major questions, but I think from their own inside view x-risk reducers probably won't be convinced by that. More effective altruists could try playing the double crux game to find the source of disagreement over typical arguments for far-future causes. Otherwise, x-risk reducers would probably maintain that, ideally, as many resources as possible ought to be dedicated to x-risk reduction, but in practice they may endorse other viewpoints receiving support as well.
