Comment author: MichaelPlant 13 May 2018 11:05:26PM 13 points

I appreciate the write up and think founding charities could be a really effective thing to do.

I do wonder if this might be an overly rosy picture, for a few reasons.

  1. Are there any stories of EAs failing to start charities? If there aren't, that would be a bit strange, and I'd want to know why there were no failures. If there are, what happened and why didn't they work? I'm a bit worried about a survivorship effect making it falsely look like starting charities is easy. (On a somewhat related note, your post may prompt me to finally write up something about my own unsuccessful attempt to start a startup.)

  2. Some of the charities you mention are offshoots/sister charities of each other - GWWC and 80k, Charity Science Health and Fortify Health. This suggests to me it might be easier to found a second charity than a first one. OPP and GiveWell also fit this mold.

  3. Including AMF is, in some sense, a bit odd, because it wasn't (I gather) founded with the intention of being the most effective charity. I say it's odd because, if it hadn't existed, the EA world would have found another charity that it deemed to be the most effective. Unless AMF thought they would be the most effective, they sort of 'got lucky' in that regard.

Comment author: RandomEA 14 May 2018 04:56:20AM 6 points

> Some of the charities you mention are offshoots/sister charities of each other - GWWC and 80k, Charity Science Health and Fortify Health. This suggests to me it might be easier to found a second charity than a first one. OPP and GiveWell also fit this mold.

It's also worth noting that Animal Charity Evaluators started as an 80,000 Hours project and that the Good Food Institute was the brainchild of the Mercy for Animals leadership team.

Comment author: RandomEA 13 May 2018 06:42:10PM 2 points

This is somewhat off-topic but it's relevant enough that I thought I'd raise it here.

What is the most impactful volunteering opportunity for a non-EA who prioritizes more conventional causes (including global poverty) and who lacks specialized skills? Basically, I'm seeking a general recommendation for non-EAs who ask how they can most effectively volunteer. I recognize that the recommended volunteering for a non-EA will be much less impactful than the recommended volunteering for an EA, but I think it can sometimes be worthwhile to spread a less impactful idea to a larger number of people (e.g. The Life You Can Save).

The standard view seems to be that volunteering in a low-skill position produces as much value for an organization as donating the amount necessary for them to hire a minimum wage worker as a replacement. While this may be correct as a general matter, I think there are likely exceptions:

  1. An organization may feel that volunteer morale will greatly decrease if some people are paid to do the same work, for the same number of hours, as the volunteers.

  2. An organization may be unwilling to hire people to do the work for ideological reasons.

  3. An organization may be unwilling to hire people to do the work because doing so would look bad to the public.

  4. An organization may feel that passion about the cause is extremely important and that the best way to select for passion is to only accept people who will work for free.

  5. An all-volunteer organization may lack the infrastructure to pay employees, meaning that it would have to pay a high initial cost before hiring its first employee.

Thus, it seems plausible to me that there is some relatively high impact organization with appeal to non-EAs where a person without specialized skills can have a significant impact. Does anyone know of a volunteering opportunity like this?

Comment author: RandomEA 13 May 2018 02:48:20PM 3 points

The Humane League (THL) is an ACE-recommended charity. THL runs the Fast Action Network, an online group which sends out easy, one-minute actions two or three times per week, such as signing petitions, posting on social media, or emailing decision makers, as part of campaigns to mitigate factory farming. You can sign up to join the Fast Action Network in the United States here, in the United Kingdom here, and for the Spanish version here.

Mercy for Animals (which was ACE-recommended for 2014, 2015, and 2016) runs a similar program called Hen Heroes.

Comment author: Joey 06 May 2018 06:11:43PM 3 points

Say a person could check a box and commit to being vegan for the rest of their life: do you think that would be an ethical/good thing for someone to do, given what we know about average recidivism among vegans?

Comment author: RandomEA 07 May 2018 11:03:07AM 4 points

It could turn out to be bad. For example, say she pledges in 2000 to "never eat meat, dairy, or eggs again." By 2030, clean meat, dairy, and eggs have become near universal (something she did not anticipate in 2000). Her view in 2030 is that she should be willing to order non-vegan food at restaurants, since asking for vegan food would make her seem weird while being unlikely to prevent animal suffering. If she takes her pledge seriously and literally, she is tied to a suboptimal position (despite having intended only to prevent loss of motivation).

This could happen in a number of other ways:

  1. She takes the Giving What We Can Further Pledge* intending to prevent herself from buying unnecessary stuff but the result is that her future self (who is just as altruistic) cannot move to a higher cost of living location.

  2. She places her donation money into a donor-advised fund intending to prevent herself from spending it non-altruistically later but the result is that her future self (who is just as altruistic) cannot donate to promising projects that lack 501(c)(3) status.

  3. She chooses a direct work career path with little flexible career capital intending to prevent herself from switching to a high earning career and keeping all the money but the result is that her future self (who is just as altruistic) cannot easily switch to a new cause area where she would be able to have a much larger impact.

It seems to me that actions that bind you can constrain you in unexpected ways despite your intention being to only constrain yourself in case you lose motivation. Of course, it may still be good to constrain yourself because the expected benefit from preventing reduced altruism due to loss of motivation could outweigh the expected cost from the possibility of preventing yourself from becoming more impactful. However, the possibility of constraining actions ultimately being harmful makes me think that they are distinct from actions like surrounding yourself with like-minded people and regularly consuming EA content.

*Giving What We Can does not push people to take the Further Pledge.

Comment author: Gregory_Lewis 05 May 2018 01:06:42AM 7 points

> It's very easy for any of us to call "EA" as we see it and naturally make claims about the preferences of the community. But this would be very clearly circular. I'd be tempted to defer to the EA Survey. AI was only the top cause of 16% of the EA Survey. Even among those employed full-time in a non-profit (maybe a proxy for full-time EAs), it was the top priority of 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare.

As noted in the fb discussion, it seems unlikely full-time non-profit employment is a good proxy for 'full-time EAs' (i.e. those working full time at an EA organisation - E2Gers would be one of a few groups who should also be considered 'full-time EAs' in the broader sense of the term).

For this group, one could stipulate that every group which posts updates to the EA newsletter is an EA group (I looked at the last half-dozen or so newsletters, so any group which didn't have an update is excluded, but this is likely minor). Totting up a headcount of staff (I didn't correct for FTE, and excluded advisors/founders/volunteers/freelancers/interns - all of these decisions could be challenged) and recording the prevailing focus of each org gives something like this:

  • 80000 hours (7 people) - Far future
  • ACE (17 people) - Animals
  • CEA (15 people) - Far future
  • CSER (11 people) - Far future
  • CFI (10 people) - Far future (I only included their researchers)
  • FHI (17 people) - Far future
  • FRI (5 people) - Far future
  • GiveWell (20 people) - Global poverty
  • Open Phil (21 people) - Far future (mostly)
  • SI (3 people) - Animals
  • CFAR (11 people) - Far future
  • Rethink Charity (11 people) - Global poverty
  • WASR (3 people) - Animals
  • REG (4 people) - Far future [Edited after Jonas Vollmer kindly corrected me]
  • FLI (6 people) - Far future
  • MIRI (17 people) - Far future
  • TYLCS (11 people) - Global poverty

Totting this up, I get ~two thirds of people working at orgs which focus on the far future (66%), 22% on global poverty, and 12% on animals. Although it is hard to work out what proportion of the far-future work is AI-specific, I'm pretty sure it is the majority, so 45% AI wouldn't be wildly off-kilter if we thought the EA handbook should represent the balance of 'full time' attention.
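For what it's worth, here is a minimal script reproducing those percentages from the headcounts as listed (the grouping simply follows the list above; as noted, FTE is not corrected for):

```python
# Recompute the cause-area proportions from the headcounts listed above.
headcounts = {
    # 80K, CEA, CSER, CFI, FHI, FRI, Open Phil, CFAR, REG, FLI, MIRI
    "Far future": [7, 15, 11, 10, 17, 5, 21, 11, 4, 6, 17],
    # GiveWell, Rethink Charity, TYLCS
    "Global poverty": [20, 11, 11],
    # ACE, SI, WASR
    "Animals": [17, 3, 3],
}
total = sum(sum(staff) for staff in headcounts.values())  # 189
for cause, staff in headcounts.items():
    n = sum(staff)
    print(f"{cause}: {n}/{total} = {n / total:.0%}")
# Far future: 124/189 = 66%
# Global poverty: 42/189 = 22%
# Animals: 23/189 = 12%
```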

I doubt this should be the relevant metric for how to divvy up space in the EA handbook. It also seems unclear how considerations of representation should play into selecting content, or, if they should, which community is the key one to proportionately represent.

Yet I think I'd be surprised if it wasn't the case that among those working 'in' EA, the majority work on the far future, and a plurality work on AI. It also agrees with my impression that the most involved in the EA community strongly skew towards the far future cause area in general and AI in particular. I think they do so, bluntly, because these people have better access to the balance of reason, which in fact favours these being the most important things to work on.

Comment author: RandomEA 05 May 2018 12:15:07PM 2 points

I think your list undercounts the number of animal-focused EAs. For example, it excludes Sentience Politics, which provided updates through the EA newsletter in September 2016, January 2017, and July 2017. It also excludes the Good Food Institute, an organization which describes itself as "founded to apply the principles of effective altruism (EA) to change our food system." While GFI does not provide updates through the EA newsletter, its job openings are mentioned in the December 2017, January 2018, and March 2018 newsletters. Additionally, it excludes organizations like the Humane League, which, while not explicitly EA, have been described as having a "largely utilitarian worldview." Though the Humane League does not provide updates through the EA newsletter, its job openings are mentioned in the April 2017, February 2018, and March 2018 newsletters.

Perhaps the argument for excluding GFI and the Humane League (while including direct work organizations in the long term future space) is that relatively few people in direct work animal organizations identify as EAs (while most people in direct work long term future organizations identify as EA). If this is the reason, I think it'd be good for someone to provide evidence for it. Also, if the idea behind this method of counting is to look at the revealed preference of EAs, then I think people earning to give have to be included, especially since earning to give appears to be more useful for farm animal welfare than for long term future causes.

(Most of the above also applies to global health organizations.)

Comment author: adamaero 03 May 2018 06:34:33PM 0 points

I also believe there are two broad types of EAs today, so this is interesting. However, I'm a little confused about some of your meaning. Can you make some of those points into complete sentences?

2) How are these different between Type 1 and Type 2?

4) "Evidence is more direct" in what regard or context??

Lastly, the list seems skewed, favoring Type 2.

Comment author: RandomEA 04 May 2018 04:44:18AM 0 points

> 2) How are these different between Type 1 and Type 2?

To me, it cannot be seriously disputed that improving the lives of currently alive humans is good, that improving the welfare of current and future animals is good, and that preventing the existence of farm animals who would live overall negative lives is good.

By contrast, I think that you can make a plausible argument that there is no moral value to ensuring that a person who would live a happy life comes into existence (though as noted above, you can make the case for reducing global catastrophic risks without relying on that benefit).

4) "Evidence is more direct" in what regard or context??

It's easier to measure the effectiveness of the program being implemented by a global health charity, the effectiveness of that charity at implementing the program, and the effectiveness of an animal charity at securing corporate pledges than it is to measure the impact of biosecurity and AI alignment organizations.

Comment author: Alex_Barry 03 May 2018 03:52:01PM 2 points

I am somewhat confused by the framing of this comment: you start by saying "there are two types of EA", but the points all seem to be about the properties of different causes.

I don't think there are 'two kinds' of EAs in the sense that you could easily tell in advance which group people will fall into; rather, all of your characteristics just follow as practical considerations from how important people find the longtermist view. (But I do think "A longtermist viewpoint leads to very different approach" is correct.)

I'm also not sure how similar the global poverty and farm animal welfare groups actually are. There seem to be significant differences in terms of the quality of evidence used and how established they are as areas. Points 3, 4, 7, 9 and 10 seem to have pretty noticeable differences between global poverty and farm animal welfare.

Comment author: RandomEA 04 May 2018 04:31:38AM 2 points

Just to clarify, when I say that my sense is that there are two types of EA, I mean that I sense that there are two types of effective altruism, not that I sense that there are two types of effective altruists.

I agree that there are substantial differences between global poverty and farm animal welfare (with global poverty being more clearly Type 1). But it seems to me that those differences are more differences of degree, while the differences between global poverty/farm animal welfare and biosecurity/AI alignment are more differences of kind.

Comment author: RandomEA 03 May 2018 06:30:24AM 9 points

The shift from Doing Good Better to this handbook reinforces my sense that there are two types of EA:

Type 1:

  1. Causes: global health, farm animal welfare

  2. Moral patienthood is hard to seriously dispute

  3. Evidence is more direct (RCTs, corporate pledges)

  4. Charity evaluators exist (because evidence is more direct)

  5. Earning to give is a way to contribute

  6. Direct work can be done by people with general competence

  7. Economic reasoning is more important (partly due to donations being more important)

  8. More emotionally appealing (partly due to being more able to feel your impact)

  9. Some public knowledge about the problem

  10. More private funding and a larger preexisting community

Type 2:

  1. Causes: AI alignment, biosecurity

  2. Moral patienthood can be plausibly disputed (if you're relying on the benefits to the long term future; however, these causes are arguably important even without considering the long term future)

  3. Evidence is more speculative (making prediction more important)

  4. Charity evaluation is more difficult (because impact is harder to measure)

  5. Direct work is the way to contribute

  6. Direct work seems to benefit greatly from specific skills/graduate education

  7. Game theory reasoning is more important (of course, game theory is technically part of economics)

  8. Less emotionally appealing (partly due to being less able to feel your impact)

  9. Little public knowledge about the problem

  10. Less private funding and a smaller preexisting community

Comment author: John_Maxwell_IV 26 April 2018 07:47:20AM 10 points

Thanks for doing this!

Under "Have you received career coaching from 80,000 Hours?" there are 3 options: "I have received career coaching", "I have not received career coaching, but would like to", and "None of the Above". I think if "None of the Above" was replaced by "I have not received career coaching, and would not like to" then you'd more accurately measure people in that category.

IMO the EA survey is a super powerful tool that's currently underused. Here's an idea bank for future surveys:

  • Ask what skillsets people in the community are attempting to build, and what career paths they are trying to move into. Maybe we can forecast talent gaps in advance and build/recruit for those skills, or identify if there's a glut of people moving into a particular area. Bonus: In order to help people coordinate to avoid gluts, also ask people how dedicated they are to their current career path/how much career capital they've built. Then if I'm in an overpopulated career area, and I know I have less career capital for this area than the average, I know I'm one of the people who is best-positioned to move out of it. (You might even set people from the survey up with each other in order to overcome coordination challenges of this type.)

  • A lightweight method for facilitating comparative advantage trades: In addition to asking people what career they are personally working on, also ask them what careers they think more EAs should work on. Then have EAs who are just getting started with the movement and feeling directionless look over the freeform responses for ideas. That way I can continue in a career path I have comparative advantage for while still getting to influence how our collective career capital is allocated on the margin.

  • You could also ask people if they are open to being contacted by EA organizations that are recruiting for their skills. 80K says talent gaps are big and junior hires are valued at over $1M by EA orgs. I'm guessing a lot of hires currently happen through networking, which is a relatively inefficient process. Using the EA survey as a talent clearinghouse could generate millions of dollars of value on an annual basis. I assume you'd first want to talk to EA orgs to see if a process like this might work for them. I can think of a few advantages of this relative to using LinkedIn: career profiles optimized for what EA orgs are interested in, avoid sketchiness of unsolicited LinkedIn messages, probably a more comprehensive and up-to-date database of potential hires. You could still use mutual connections on FB/LinkedIn to measure involvement & dig up references. One complication is you'd want to separate the survey into "professional" and "personal" sections to control what information potential employers see, but I think the potential upside is worth it.

  • Add calibration questions.

  • Ask people which causes they've changed their minds about and why.

  • Ask EAs about their biggest productivity bottlenecks.

  • Ask people what mental health issues they suffer from.

  • Ask people how much $ they have in donor-advised funds etc. that they are saving up for future giving opportunities, and what circumstances would trigger donation. In general, it'd be nice to know how the community as a whole currently balances giving now vs giving later. Asking people about the circumstances that would cause them to donate could also help counter unendorsed donation procrastination.

  • How many people read/contribute to online EA discussions? Why or why not?

  • What factors are holding people back from being more involved in EA? Why do people choose not to work for EA organizations?

  • LW and SSC surveys might have more ideas. (A number of the above ideas are things I remember from the LW survey that I wish the EA survey had.)

Comment author: RandomEA 03 May 2018 04:53:06AM 2 points

> Ask people which causes they've changed their minds about and why.

I second this. Specifically, I think people should be asked what their preferred cause area was when they first got involved in EA. This would allow us to know the proportion of long term future people who first got involved in EA through global health, which is information that would be useful for a number of different reasons.

Comment author: Gregory_Lewis 02 May 2018 06:10:23PM 4 points

Thanks for the even-handed explication of an interesting idea.

I appreciate the example you gave was meant more as illustration than proposal. I nonetheless wonder whether further examination of the underlying problem might lead to ideas more tightly tailored to the limitations you identify.

You note this set of challenges:

  1. Open Phil targets larger grantees
  2. EA funds/grants have limited evaluation capacity
  3. Peripheral EAs tend to channel funding to more central groups
  4. Core groups may have trouble evaluating people, which is often an important factor in whether to fund projects.

The result is that a good person (but one not known to the right people) with a good small idea is nonetheless left out in the cold.

I'm less sure about #2 - or rather, whether this is the key limitation. Max Dalton wrote in one of the FB threads linked:

> In the first round of EA Grants, we were somewhat limited by staff time and funding, but we were also limited by the number of projects we were excited about funding. For instance, time constraints were not the main limiting factor on the percentage of people we interviewed. We are currently hiring for a part-time grants evaluator to help us to run EA Grants this year[...]

FWIW (and non-resiliently), I don't look around and see lots of promising but funding-starved projects. More relevantly, I don't review recent history and find lots of cases of projects rejected by major funders, then supported by more peripheral funders, which have gone on to do really exciting things.

If my impression is wrong, then the idea here (in essence, crowd-sourcing evaluation to respected people in the community) could help. Yet it doesn't seem to address #3 or #4.

If most of the money (even from the community) ends up going through the 'core' funnel, then a competitive approach would be advocacy to these groups to change their strategy, instead of providing a parallel route and hoping funders will come.

More importantly, if funders generally want to 'find good people', the crowd-sourced project evaluation only helps so much. For people more on the periphery of the community, this uncertainty from funders will remain even if the anonymised feedback on the project is very positive.

Per Michael, I'm not sure what this idea has over (say) posting a 'pitch' on this forum, doing a kickstarter, etc.

Comment author: RandomEA 03 May 2018 04:46:46AM 1 point

> If most of the money (even from the community) ends up going through the 'core' funnel, then a competitive approach would be advocacy to these groups to change their strategy, instead of providing a parallel route and hoping funders will come.

I should have been clearer in my classification of donors. Other than institutional sources (Open Phil, EA Grants, EA Funds), I see three primary categories:

  1. EAs who are only willing to give to charities recommended by GiveWell or ACE [what I meant when I said peripheral EAs]

  2. EAs who are willing to give to other organizations where the impact is less concrete but who do not know enough to know which project ideas are good [there may be many earning to give people in this category]

  3. EAs who are willing to give to other organizations where the impact is less concrete and do know enough to know which project ideas are good [this is the category from which evaluators would be drawn]

My concern is that people in category 2 have to rely on the choices of institutional donors to guide them. I want people in category 2 to know about projects that are viewed highly by people in category 3 but rejected by institutional donors.

> More importantly, if funders generally want to 'find good people', the crowd-sourced project evaluation only helps so much. For people more on the periphery of the community, this uncertainty from funders will remain even if the anonymised feedback on the project is very positive.

Under the proposed system, an evaluator can endorse a project idea and/or the person. In order for a proposal to appear on the platform, there would have to be at least n idea endorsements and m personal endorsements. Thus, potential donors would know for all proposals that there are at least m core EAs who think the person is sufficiently competent.
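To make that gating rule concrete, here is a minimal sketch of how the endorsement threshold might be implemented; the names, data structure, and threshold values are all hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical minimum endorsement counts (the "n" and "m" above);
# the actual values would be a design choice for the platform.
N_IDEA_ENDORSEMENTS = 3
M_PERSONAL_ENDORSEMENTS = 2

@dataclass
class Proposal:
    title: str
    idea_endorsers: set = field(default_factory=set)      # evaluators endorsing the idea
    personal_endorsers: set = field(default_factory=set)  # evaluators endorsing the person

    def is_listed(self) -> bool:
        """A proposal appears on the platform only once it has at least
        n idea endorsements and m personal endorsements."""
        return (len(self.idea_endorsers) >= N_IDEA_ENDORSEMENTS
                and len(self.personal_endorsers) >= M_PERSONAL_ENDORSEMENTS)

# Example: three idea endorsements but only one personal endorsement,
# so the proposal is not yet shown to potential donors.
p = Proposal("Hypothetical project")
p.idea_endorsers |= {"evaluator_a", "evaluator_b", "evaluator_c"}
p.personal_endorsers |= {"evaluator_a"}
print(p.is_listed())  # False
```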
