Comment author: Robert_Wiblin 21 May 2018 02:55:42PM 2 points

Like you, at 80,000 Hours we view the relative impact of money vs talent to be specific to particular problems and potentially particular approaches too.

First you need to look for what activities you think are most impactful, and then see what your money can generate vs your time.

Comment author: RandomEA 21 May 2018 07:26:51PM 1 point

First you need to look for what activities you think are most impactful, and then see what your money can generate vs your time.

This statement could be interpreted as suggesting that people should use a two-step process: first, choose a problem based on how pressing it is and then second, decide how to contribute to solving that problem.* That two-step approach would be a bad idea because some people may be able to make a greater impact working on a less pressing problem if they are especially effective at addressing that problem. Because of this, information about how pressing different problems are relative to each other should not be used to choose a single problem; instead, it should be used as background information when comparing careers across problems.

*I doubt that's what you actually meant since you wrote the linked article that discusses personal fit. But I figured some people might be unfamiliar with that article, so I thought it'd be worthwhile to note the issue.

Comment author: RandomEA 21 May 2018 12:28:43AM *  10 points

Here's what Lewis Bollard had to say about the talent vs. funding issue when asked about it on the 80,000 Hours podcast (in September 2017):

Robert Wiblin: My impression is that fa… animal welfare organisations, at least the ones that I’m aware of that are associated with Effective Altruism, are often among the most funding constrained. They often feel like they’re most limited by access to money. Does this suggest that people who are concerned with animal welfare should be more inclined to do earning to give and, perhaps, rather than work in the area, instead make money and give it away?

Lewis Bollard: I don’t think so. I think that was true until two years ago, or until eighteen months ago, when we started grantmaking in this field. I think the situation has dramatically improved in terms of funding, largely because of Open Phil entering this field, but also because there are a number of other very generous donors who’ve either entered the field or significantly increased their giving in the last two years.

Right now I think there is a bigger talent gap than financial gap for farm animal welfare groups. That’s not to say it will always be that way, and I certainly do think that for someone whose aptitude or inclination is heavily toward earning to give, it could still well make sense. If someone has great quantitative skills and enjoys working at a hedge fund, then I would say earn to give. That could still be a really powerful way, and we will need more and more funders over time to continue scaling up the movement. But all things equal, I would encourage someone to focus more on the talent piece now, because I do think that things have really flipped in the last few years, and I’m pretty optimistic that the funding will continue to grow in this space for animal welfare.

Robert Wiblin: What makes you confident about that? You don’t expect to be fired in the next few years?

Lewis Bollard: First, I hope I won’t be fired, but I think there’s a deep commitment from the Open Philanthropy Project to continue strong funding in this space, to continue funding on at least the level we’re funding currently and hopefully more.

I’ve also just seen a number of new large-ish funders coming online. Just in the last two years I would say the number of funders giving more than two hundred thousand dollars a year has doubled, and I’ve started to see real interest from some other major potential funders.

I think it’s natural that, as this issue has gained public prominence, a lot of potential donors, or people who have great wealth, have realised that this is something important and something where they can make a great difference.

Comment author: MichaelPlant 13 May 2018 11:05:26PM 13 points

I appreciate the write up and think founding charities could be a really effective thing to do.

I do wonder if this might be an overly rosy picture for a couple of reasons.

  1. Are there any stories of EAs failing to start charities? If there aren't, that would be a bit strange and I'd want to know why there were no failures. If there are, what happened and why didn't they work? I'm a bit worried about a survivorship effect making it falsely look like starting charities is easy. (On a somewhat related note, your post may prompt me to finally write up something about my own unsuccessful attempt to start a startup.)

  2. One is that some of the charities you mention are offshoots/sister charities of each other - GWWC and 80k, Charity Science Health and Fortify Health. This suggests to me it might be easier to found a second charity than a first one. OPP and GiveWell also fit this mold.

  3. Including AMF is, in some sense, a bit odd, because it wasn't (I gather) founded with the intention of being the most effective charity. I say it's odd because, if it hadn't existed, the EA world would have found another charity that it deemed to be the most effective. Unless AMF thought they would be the most effective, they sort of 'got lucky' in that regard.

Comment author: RandomEA 14 May 2018 04:56:20AM 6 points

One is that some of the charities you mention are offshoots/sister charities of each other - GWWC and 80k, Charity Science Health and Fortify Health. This suggests to me it might be easier to found a second charity than a first one. OPP and GiveWell also fit this mold.

It's also worth noting that Animal Charity Evaluators started as an 80,000 Hours project and that the Good Food Institute was the brainchild of the Mercy for Animals leadership team.

Comment author: RandomEA 13 May 2018 06:42:10PM 2 points

This is somewhat off-topic but it's relevant enough that I thought I'd raise it here.

What is the most impactful volunteering opportunity for a non-EA who prioritizes more conventional causes (including global poverty) and who lacks specialized skills? Basically, I'm seeking a general recommendation for non-EAs who ask how they can most effectively volunteer. I recognize that the recommended volunteering for a non-EA will be much less impactful than the recommended volunteering for an EA, but I think it can sometimes be worthwhile to spread a less impactful idea to a larger number of people (e.g. The Life You Can Save).

The standard view seems to be that volunteering in a low-skill position produces as much value for an organization as donating the amount necessary for them to hire a minimum wage worker as a replacement. While this may be correct as a general matter, I think there are likely exceptions:

  1. An organization may feel that volunteer morale will greatly decrease if some people doing the same work as the volunteers, for the same number of hours, are paid.

  2. An organization may be unwilling to hire people to do the work for ideological reasons.

  3. An organization may be unwilling to hire people to do the work because doing so would look bad to the public.

  4. An organization may feel that passion about the cause is extremely important and that the best way to select for passion is to only accept people who will work for free.

  5. An all-volunteer organization may lack the infrastructure to pay employees, meaning that it would have to pay a high initial cost before hiring its first employee.

Thus, it seems plausible to me that there is some relatively high impact organization with appeal to non-EAs where a person without specialized skills can have a significant impact. Does anyone know of a volunteering opportunity like this?

Comment author: RandomEA 13 May 2018 02:48:20PM 3 points

The Humane League (THL) is an ACE-recommended charity. THL runs the Fast Action Network, an online group which sends out easy, one-minute actions two or three times per week, including signing petitions, posting on social media, and emailing decision makers, as part of campaigns to mitigate factory farming. You can sign up to join the Fast Action Network in the United States here, in the United Kingdom here, and for a Spanish version of the Fast Action Network here.

Mercy for Animals (which was ACE-recommended for 2014, 2015, and 2016) runs a similar program called Hen Heroes.

Comment author: Joey 06 May 2018 06:11:43PM 3 points

Say a person could check a box and commit to being vegan for the rest of their life: do you think that would be an ethical/good thing for someone to do, given what we know about average recidivism among vegans?

Comment author: RandomEA 07 May 2018 11:03:07AM *  4 points

It could turn out to be bad. For example, say she pledges in 2000 to "never eat meat, dairy, or eggs again." By 2030, clean meat, dairy, and eggs become near universal (something she did not anticipate in 2000). Her view in 2030 is that she should be willing to order non-vegan food at restaurants since asking for vegan food would make her seem weird while being unlikely to prevent animal suffering. If she takes her pledge seriously and literally, she is tied to a suboptimal position (despite only intending to prevent loss of motivation).

This could happen in a number of other ways:

  1. She takes the Giving What We Can Further Pledge* intending to prevent herself from buying unnecessary stuff but the result is that her future self (who is just as altruistic) cannot move to a higher cost of living location.

  2. She places her donation money into a donor-advised fund intending to prevent herself from spending it non-altruistically later but the result is that her future self (who is just as altruistic) cannot donate to promising projects that lack 501(c)(3) status.

  3. She chooses a direct work career path with little flexible career capital intending to prevent herself from switching to a high earning career and keeping all the money but the result is that her future self (who is just as altruistic) cannot easily switch to a new cause area where she would be able to have a much larger impact.

It seems to me that actions that bind you can constrain you in unexpected ways despite your intention being to only constrain yourself in case you lose motivation. Of course, it may still be good to constrain yourself because the expected benefit from preventing reduced altruism due to loss of motivation could outweigh the expected cost from the possibility of preventing yourself from becoming more impactful. However, the possibility of constraining actions ultimately being harmful makes me think that they are distinct from actions like surrounding yourself with like-minded people and regularly consuming EA content.

*Giving What We Can does not push people to take the Further Pledge.

Comment author: Gregory_Lewis 05 May 2018 01:06:42AM *  7 points

It's very easy for any of us to call "EA" as we see it and naturally make claims about the preferences of the community. But this would be very clearly circular. I'd be tempted to defer to the EA Survey. AI was the top cause for only 16% of EA Survey respondents. Even among those employed full-time in a non-profit (maybe a proxy for full-time EAs), it was the top priority of 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare.

As noted in the fb discussion, it seems unlikely full-time non-profit employment is a good proxy for 'full-time EAs' (i.e. those working full time at an EA organisation - E2Gers would be one of a few groups who should also be considered 'full-time EAs' in the broader sense of the term).

For this group, one could stipulate that every group which posts updates to the EA newsletter is an EA group (I looked at the last half-dozen or so issues, so any group which didn't have an update is excluded, but the omissions are likely minor). Totting up a headcount of staff (I didn't correct for FTE, and excluded advisors/founders/volunteers/freelancers/interns; all of these decisions could be challenged) and recording the prevailing focus of each org gives something like this:

  • 80000 hours (7 people) - Far future
  • ACE (17 people) - Animals
  • CEA (15 people) - Far future
  • CSER (11 people) - Far future
  • CFI (10 people) - Far future (I only included their researchers)
  • FHI (17 people) - Far future
  • FRI (5 people) - Far future
  • Givewell (20 people) - Global poverty
  • Open Phil (21 people) - Far future (mostly)
  • SI (3 people) - Animals
  • CFAR (11 people) - Far future
  • Rethink Charity (11 people) - Global poverty
  • WASR (3 people) - Animals
  • REG (4 people) - Far future [Edited after Jonas Vollmer kindly corrected me]
  • FLI (6 people) - Far future
  • MIRI (17 people) - Far future
  • TYLCS (11 people) - Global poverty

Totting this up, I get roughly two thirds of people working at orgs which focus on the far future (66%), 22% on global poverty, and 12% on animals. Although it is hard to work out the AI | far future proportion, I'm pretty sure it is the majority, so 45% AI wouldn't be wildly off-kilter if we thought the EA handbook should represent the balance of 'full time' attention.

I doubt this should be the relevant metric for how to divvy up space in the EA handbook. It also seems unclear how considerations of representation should play into selecting content, or, if they should, which community is the key one to proportionately represent.

Yet I think I'd be surprised if it wasn't the case that, among those working 'in' EA, the majority work on the far future, and a plurality work on AI. It also agrees with my impression that those most involved in the EA community skew strongly towards the far future cause area in general and AI in particular. I think they do so, bluntly, because these people have better access to the balance of reason, which in fact favours these being the most important things to work on.

Comment author: RandomEA 05 May 2018 12:15:07PM 2 points

I think your list undercounts the number of animal-focused EAs. For example, it excludes Sentience Politics, which provided updates through the EA newsletter in September 2016, January 2017, and July 2017. It also excludes the Good Food Institute, an organization which describes itself as "founded to apply the principles of effective altruism (EA) to change our food system." While GFI does not provide updates through the EA newsletter, its job openings are mentioned in the December 2017, January 2018, and March 2018 newsletters. Additionally, it excludes organizations like the Humane League, which, while not explicitly EA, has been described as having a "largely utilitarian worldview." Though the Humane League does not provide updates through the EA newsletter, its job openings are mentioned in the April 2017, February 2018, and March 2018 newsletters.

Perhaps the argument for excluding GFI and the Humane League (while including direct work organizations in the long term future space) is that relatively few people in direct work animal organizations identify as EAs (while most people in direct work long term future organizations identify as EA). If this is the reason, I think it'd be good for someone to provide evidence for it. Also, if the idea behind this method of counting is to look at the revealed preference of EAs, then I think people earning to give have to be included, especially since earning to give appears to be more useful for farm animal welfare than for long term future causes.

(Most of the above also applies to global health organizations.)

Comment author: adamaero 03 May 2018 06:34:33PM *  0 points

I also believe there are two broad types of EAs today, so this is interesting. Although I am a little confused about some of your meaning. Can you make some of those into complete sentences?

2) How are these different between Type 1 and Type 2?

4) "Evidence is more direct" in what regard or context??

Lastly, the list seems skewed, favoring Type 2.

Comment author: RandomEA 04 May 2018 04:44:18AM *  0 points

2) How are these different between Type 1 and Type 2?

To me, it cannot be seriously disputed that improving the lives of currently alive humans is good, that improving the welfare of current and future animals is good, and that preventing the existence of farm animals who would live overall negative lives is good.

By contrast, I think that you can make a plausible argument that there is no moral value to ensuring that a person who would live a happy life comes into existence (though as noted above, you can make the case for reducing global catastrophic risks without relying on that benefit).

4) "Evidence is more direct" in what regard or context??

It's easier to measure the effectiveness of the program being implemented by a global health charity, the effectiveness of that charity at implementing the program, and the effectiveness of an animal charity at securing corporate pledges than it is to measure the impact of biosecurity and AI alignment organizations.

Comment author: Alex_Barry 03 May 2018 03:52:01PM *  2 points

I am somewhat confused by the framing of this comment: you start by saying "there are two types of EA", but the points all seem to be about the properties of different causes.

I don't think there are 'two kinds' of EAs in the sense that you could easily tell in advance which group people were going to fall into; rather, all of your characteristics just follow as practical considerations from how important people find the longtermist view. (But I do think "A longtermist viewpoint leads to very different approach" is correct.)

I'm also not sure how similar the global poverty and farm animal welfare groups actually are. There seem to be significant differences in terms of the quality of evidence used and how established they are as areas. Points 3, 4, 7, 9 and 10 seem to have pretty noticeable differences between global poverty and farm animal welfare.

Comment author: RandomEA 04 May 2018 04:31:38AM 2 points

Just to clarify, when I say that my sense is that there are two types of EA, I mean that I sense that there are two types of effective altruism, not that I sense that there are two types of effective altruists.

I agree that there are substantial differences between global poverty and farm animal welfare (with global poverty being more clearly Type 1). But it seems to me that those are differences of degree, while the differences between global poverty/farm animal welfare and biosecurity/AI alignment are differences of kind.

Comment author: RandomEA 03 May 2018 06:30:24AM *  9 points

The shift from Doing Good Better to this handbook reinforces my sense that there are two types of EA:

Type 1:

  1. Causes: global health, farm animal welfare

  2. Moral patienthood is hard to seriously dispute

  3. Evidence is more direct (RCTs, corporate pledges)

  4. Charity evaluators exist (because evidence is more direct)

  5. Earning to give is a way to contribute

  6. Direct work can be done by people with general competence

  7. Economic reasoning is more important (partly due to donations being more important)

  8. More emotionally appealing (partly due to being more able to feel your impact)

  9. Some public knowledge about the problem

  10. More private funding and a larger preexisting community

Type 2:

  1. Causes: AI alignment, biosecurity

  2. Moral patienthood can be plausibly disputed (if you're relying on the benefits to the long term future; however, these causes are arguably important even without considering the long term future)

  3. Evidence is more speculative (making prediction more important)

  4. Charity evaluation is more difficult (because impact is harder to measure)

  5. Direct work is the way to contribute

  6. Direct work seems to benefit greatly from specific skills/graduate education

  7. Game theory reasoning is more important (of course, game theory is technically part of economics)

  8. Less emotionally appealing (partly due to being less able to feel your impact)

  9. Little public knowledge about the problem

  10. Less private funding and a smaller preexisting community
