My Cause Selection: Michael Dickens

Cross-posted to my blog.

Last edited 2015-09-24.

In this essay, I provide my reasoning about the arguments for and against different causes and try to identify which one does the most good. I give some general considerations on cause selection and then lay out a list of causes followed by a list of organizations. I break up considerations on these causes and organizations into five categories: Size of Impact; Strength of Evidence; Tractability; Neglectedness/Room for More Funding; Learning Value. This roughly mirrors the traditional Importance; Tractability; Neglectedness criteria. I identify which cause areas look most promising. Then I examine a list of organizations working in these cause areas and narrow down to a few finalists. In the last section, I directly compare these finalists against each other and identify which organization looks strongest.

You can skip to Conclusions to see summaries of why I prioritize the finalists I chose, why I did not consider any of the other charities as finalists, and my decision about who to fund.

If you decide to change where you donate as a result of reading this, please tell me! You can send me a message on this site or email me: mdickens93 [at] gmail [dot] com.

TL;DR

I chose these three finalists:

Based on everything I considered, REG looks like the strongest charity because it produces a large donation multiplier and it directs donations to both MIRI and ACE (as well as other effective charities).

General Considerations

Purpose of This Document

To date, my thinking on cause prioritization has been insufficiently organized or rigorous. This is an attempt to lay out all the considerations in my head for and against different causes and organizations and get some clarity about who to support.

This was originally inspired by conversations with Buck Shlegeris about the importance of cause prioritization, which he makes a good case for here:

(Buck makes some non-obvious claims here but I agree with the main thesis that we should spend more effort on cause prioritization.)

EAs spend a tenth as much time discussing cause prioritization as they should. Cause prioritization is obviously incredibly important. If given perfect information you could know that you should be donating to [cause area 1] and you’re actually donating to [cause area 2], then you are doing probably at least an order of magnitude less good than you could be, and I’m only even granting you that much credit because donating to EA charities in [cause area 1] might raise the profile of EA and get more people to donate to [cause area 2] in the future.

If EAs were really interested in doing as much good as they could, then they would want to put their beliefs about cause prioritization under incredible scrutiny. I’m earning to give this year, and I plan to give about 25% of my income. If I could spend a month of my year full time researching cause prioritization, and I thought I was 80% likely to be right about my current cause area, and I thought that this had a 50% chance of changing my cause area from my current cause area to a better one if I were wrong about cause prioritization right now, then it would be worth it for me to do that. […]

If EAs wanted to help others, they would all maintain a written list of all the strongest arguments against their cause areas from other EAs, and they’d all have their list of rebuttals. Ideally, I’d be able to write a really good document on cause prioritization and sell it for $100, because it would save other EAs so much time figuring this out themselves.
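Buck's back-of-the-envelope in the quote above can be sketched numerically. All the numbers below are either his illustrative figures (80% confidence, 50% chance of correction, an order-of-magnitude gap between cause areas, 25% of income donated) or placeholder assumptions of my own (the income figure):

```python
# Rough sketch of the value-of-research estimate in the quote above.
# All inputs are illustrative assumptions, not measured values.

annual_donations = 25_000        # placeholder: 25% of a hypothetical $100k income
p_wrong_now = 0.20               # chance the current cause area is not the best
p_research_fixes_it = 0.50       # chance a month of research corrects the error
multiplier_if_better = 10        # an order-of-magnitude gap between cause areas

# Expected gain: with probability p_wrong * p_fix, donations become ~10x as valuable,
# i.e. a gain of (multiplier - 1) times the baseline donation.
expected_gain = (annual_donations * p_wrong_now * p_research_fixes_it
                 * (multiplier_if_better - 1))

# Altruistic opportunity cost: roughly a month of foregone donations.
research_cost = annual_donations / 12

print(f"Expected gain from a month of research: ${expected_gain:,.0f}")
print(f"Opportunity cost: ${research_cost:,.0f}")
```

Under these assumptions the expected gain (about $22,500) exceeds the opportunity cost (about $2,100) by roughly a factor of ten, which is why the research looks clearly worth doing even if the input numbers are off by quite a bit.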

What I Value

I value having enjoyable experiences and avoiding unpleasant experiences. If I value these experiences for myself, then it’s reasonable for me to value them in general. That’s the two-sentence version of why I’m a hedonistic utilitarian.

I have a few more specific beliefs that I believe follow from hedonistic utilitarianism but that a lot of people disagree with, so they are worth stating explicitly:

  • Pleasurable and painful experiences in non-humans have moral value. "Non-humans" includes non-human animals, computer simulations of sentient beings, artificial biological beings, and anything else that can experience pleasure and suffering.
    • Corollary: I am persuaded by the empirical evidence that chickens, pigs, and probably fish feel pain. Insects seem less likely to feel pain but it’s still conceivable.
  • Future beings have equal moral status to existing beings. We might discount them by their probability of existence, but we should not discount them solely because they exist in the future.
  • The best possible outcome would be to fill the universe with beings that experience as much joy as possible for their entire lives. I refer to this outcome as universal eudaimonia (sometimes called a utilitronium shockwave).
    • I have heard only one reasonably compelling argument that this is not the best possible outcome. It may be the case that giving two beings the exact same experience has no more moral value than one being having the experience. I do not see why this would be true, but I am sufficiently confused about this that I do not put much credence in my intuitions. If this claim is true, then we should fill the universe with beings that are optimized for both happiness and diversity of experiences rather than just for happiness. I’m confused about this but hopefully it’s possible to make a good decision about cause prioritization without solving the hard problem of consciousness.

I am not perfectly confident that hedonistic utilitarianism is true–I have some normative uncertainty. At the same time, I do not know what it would mean for hedonistic utilitarianism to be false (I don’t see how suffering could not be inherently bad, and I don’t see how anything other than suffering could be inherently bad). I am open to arguments that it is false, but I am unpersuaded by arguments of the form “utilitarianism produces an unintuitive result in this contrived thought experiment,” and almost all arguments take this form.

Terminology

This document is not optimized to be easy to read for people who aren’t familiar with popular effective altruist causes and organizations and has a lot of jargon and abbreviations. That said, I want people to be able to understand what I’m talking about, so I’m happy to offer clarification on specific terms or concepts in the comments section.

Personal Bias

Although I try to be as cause-neutral as possible, I feel some emotions that push me in the direction of one cause or another. Throughout this document I try to acknowledge any such feelings. This opens me to criticism along the lines of, “Your arguments for this cause are strong but you are emotionally biased against it; you should consider it more carefully.”

My Writing Process

I wrote this document over time as I researched different causes and organizations. I generally speak about choosing a charity in the future tense, because when I wrote most of this, I had not yet chosen where to donate. While reading this, imagine you are exploring the ideas with me, moving through all the major considerations and reaching a decision near the end of the document.

I found the process of writing this extremely valuable. I quickly identified which fundamental questions I needed to answer, and I wrote separate essays to answer a couple of important fundamental questions. Writing this document clarified for me what issues I need to think about when choosing a cause, and I learned a lot about what different organizations are doing and the arguments for and against them. Writing down your mental models is helpful for clarifying them and examining them from a distance.

I spent about 100 hours producing this document and ultimately changed my mind about where to donate. Even making conservative assumptions about how much I will donate and how much better my choice is now than it would have been, this time spent was worth over $100/hour (and I suspect it’s probably worth more like $500/hour). That said, I found this time enjoyable and probably wouldn’t have put in nearly as much work if it hadn’t been fun. For anyone else who finds this sort of work fun, I strongly encourage you to do it and publish your results in detail.
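The $100/hour figure above can be reconstructed with a conservative back-of-the-envelope. Both inputs here are placeholder assumptions for illustration, not figures from my actual finances:

```python
# Back-of-the-envelope for the value of the ~100 hours spent on this document.
# Both inputs are placeholder assumptions chosen to be conservative.

hours_spent = 100
future_donations = 100_000     # hypothetical: total donations affected by this choice
improvement_fraction = 0.10    # conservative: the new choice is 10% more effective

value_created = future_donations * improvement_fraction
value_per_hour = value_created / hours_spent

print(f"Value per hour: ${value_per_hour:,.0f}")
```

With these conservative inputs the time comes out at $100/hour; assuming larger affected donations or a bigger effectiveness gap between my old and new choices pushes the figure toward the $500/hour end of my estimate.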

I had a few ideas that emerged as a result of working on cause prioritization, and I wrote them as separate essays:

In Is Preventing Human Extinction Good?, I examine the likely effects of the long-term survival of humanity and consider whether they are good or bad. I come out in favor of preventing extinction being good in expectation, and more likely good than bad.

In On Values Spreading, I discuss the value of values spreading and come to the (weak) conclusion that preventing global catastrophic risks looks more important.

In Charities I Would Like to See, I propose a few ideas for potentially high-impact interventions.

Things I Still Don’t Understand

I still have a few areas of uncertainty. I took a position on these questions, but I have weak confidence about my position. I’d like to see more work in these areas and will continue to think about them.

  • Could high-leverage values spreading be more important than global catastrophic risk reduction?
  • Can we actually have success with far-future interventions when we don’t have good feedback loops?
  • How much evidence should I require before supporting a speculative cause?
  • When is supporting a meta-charity not worth it?

Acknowledgments

I have many people to thank for helping me produce this document.

Thanks to Nick Beckstead, Daniel Dewey, Ruairi Donnelly, Eric Herboso, Victoria Krakovna, Howie Lempel, Toby Ord, Tobias Pulver, Joey Savoie, Carl Shulman, Nate Soares, Pablo Stafforini, Brian Tomasik, Emily Cutts Worthington, and Eliezer Yudkowsky for answering my questions about their work and discussing ideas with me.

Thanks to Jacy Anthis, Linda Dickens, Jeff Jordan, Kelsey Piper, Buck Shlegeris, and Claire Zabel for reviewing drafts and helping me develop my thoughts on cause selection.

If I inadvertently left out anyone else, then I apologize, and thanks to you, too.

Causes

Global Poverty

Among global poverty charities, the Against Malaria Foundation (AMF) probably has the strongest case that it saves lives effectively. There’s strong evidence that it helps humans in the short run, but I have some concerns about its larger effects. Does AMF (or other global poverty charities) negatively impact wild animals? Does making humans better off hurt the far future? My best guess on both these questions is “no,” but I have a lot of uncertainty about them, so the case for AMF is not as clear-cut as it first appears. That said, if every potentially high-impact but more speculative cause that I consider has insufficient evidence that it’s effective, I may donate to AMF. I consider this something of a fallback position: AMF is the strongest charity unless another charity can show that it’s better.

I do not discuss global poverty charities in depth here because I do not believe I have much to add to GiveWell’s extensive analysis.

Factory Farming

Some charities such as The Humane League work to prevent animals from suffering on factory farms. There’s a plausible case that some such charities do much more good than GiveWell top charities (perhaps by an order of magnitude or more), although the supporting evidence here is much weaker.

As with global poverty, reducing factory farming may be net harmful in the short run. Reducing factory farming might reduce speciesism and spread good values in the long term, but this claim is highly speculative. So the choice is not between speculative far-future causes and a proven intervention against factory farming: the case for the long-term benefits of reducing factory farming is on no firmer ground than the case for global catastrophic risk charities. I discuss values spreading as a separate cause below.

Charities against factory farming do not serve as a fallback position in the same way that GiveWell top charities do, because the evidence in their favor is a lot weaker. The state of this evidence is improving, and funding studies on animal advocacy could be highly effective; see my discussion of Animal Charity Evaluators.

Far Future (General)

Almost all utility lives in the far future. Thus, it’s likely that the most effective interventions are ones that positively affect the far future. But this line of reasoning has a major problem: it’s not at all obvious how to positively affect the far future. Some, such as Jeff Kaufman, believe this is sufficient reason to focus on short-term interventions instead.

Short-term interventions such as direct cash transfers will always have stronger evidence in their favor than far future interventions. But the far future is so overwhelmingly important that I believe our best bet is to support far future causes whenever we can find charities with reasonably good indicators of their effectiveness (e.g. success at achieving short-term goals or competent leadership). It’s conceivable that we won’t be able to find any sufficiently reliable charities (and this was my impression when I first investigated the issue a few years ago), but it’s worth trying.

I used to prefer short-term interventions with clear supporting evidence–I supported GiveWell top charities and, later, ACE top charities and ACE itself. But after a few conversations with Carl Shulman, Pablo Stafforini, Buck Shlegeris and others, I started more seriously considering the fact that almost all value lives in the far future, and the best interventions are probably those that focus on it. This is not to say that we should donate to whatever charity can give a naive argument that it has the highest expected value. When I discuss specific far-future charities below, I look for indicators on whether their activities are effective. I would not give to a far-future charity unless it had compelling evidence that its activities would be impactful. Obviously this evidence will not be as strong as the evidence in favor of GiveWell top charities, but there still exist far-future charities with better and worse evidence of impact.

We should be cautious about being lenient with a cause area’s strength of evidence. Jeff Kaufman explains:

People succeed when they have good feedback loops. Otherwise they tend to go in random directions. This is a problem for charity in general, because we’re buying things for others instead of for ourselves. If I buy something and it’s no good I can complain to the shop, buy from a different shop, or give them a bad review. If I buy you something and it’s no good, your options are much more limited. Perhaps it failed to arrive but you never even knew you were supposed to get it? Or it arrived and was much smaller than I intended, but how do you know. Even if you do know that what you got is wrong, chances are you’re not really in a position to have your concerns taken seriously.

[…]

[With AI risk, the] problem is we really really don’t know how to make good feedback loops here. We can theorize that an AI needs certain properties not to just kill us all, and that in order to have those properties it would be useful to have certain theorems proved, and go work on those theorems. And maybe we have some success at this, and the mathematical community thinks highly of us instead of dismissing our work. But if our reasoning about what math would be useful is off there’s no way for us to find out. Everything will still seem like it’s going well.

AI risk and other speculative causes don't have good feedback loops, but that doesn't mean we know nothing about whether we're succeeding. And there's reason to believe we should support speculative causes anyway. As Nick Beckstead writes:

My overall impression is that the average impact of people doing the most promising unproven activities contributed to a large share of the innovations and scientific breakthroughs that have made the world so much better than it was hundreds of years ago, despite the fact that they were a small share of all human activity.

The best interventions are probably those that significantly affect the far future, although probably many (or even most) far-future interventions do nothing useful. We should try to improve the far future, but be careful about naive claims of high cost-effectiveness and look for indicators that far-future charities are competent.

Values Spreading to Improve the Far Future

Some people propose focusing on spreading values now to increase the probability that the far future has beneficial results. I have never seen any strong reason to believe that anything we do now will affect far future values–if the case for organizations reducing global catastrophic risks is tenuous, then the case for values spreading is no better.

In Vegan Advocacy and Pessimism about Wild Animal Welfare, Carl Shulman points out that vegan advocacy could be bad for animals in the short-run (although Brian Tomasik believes it has positive short-run effects), so the main benefit comes from values spreading; but the benefits of values spreading are unclear.

I discuss this subject in more depth in On Values Spreading. I conclude that there are good arguments that values spreading is the most effective activity, but there are also serious considerations against it, and global catastrophic risk reduction looks more important.

Global Catastrophic Risk Reduction

I include existential risks as a type of global catastrophic risk (GCR). Nick Bostrom has argued that existential risks are substantially worse than other GCRs. Nick Beckstead disagrees, and I find Beckstead’s case persuasive.

It appears to be the case that either (1) working on GCR reduction in general is the best thing to do, in which case there may be multiple different cause areas within GCR that are more effective than any non-GCR causes; or (2) working on GCR is not the best thing to do, in which case all cause areas within GCR are similarly ineffective. (In case (1), lots of GCR interventions may still be ineffective, for example, research on preventing alien hamster invasions.)

Is preventing human extinction good?

This section was getting so in-depth that I moved it into a separate article. In summary: there are a few reasons why the impact of humanity on the far future could be negative; but overall, it looks like humanity's impact has a positive expected value (and will probably be positive), so it's highly valuable to ensure that human civilization continues to exist.

Size of Impact

Preventing global catastrophic risk is a sufficiently important problem that fairly small efforts in the right direction can have much larger long-term effects than GiveWell-recommended charities (see Beckstead’s dissertation “On the Overwhelming Importance of Shaping the Far Future”). Successful GCR interventions probably have a bigger positive impact than anything else except possibly ensuring that the beings controlling the far future have good values (although I believe GCR reduction is probably more important, for reasons discussed above).

Strength of Evidence

Previously, I had major concerns about whether any GCR interventions had any effect. However, with Open Phil’s recent research into GCRs, I am more confident that there will emerge opportunities with sufficiently strong evidence of effectiveness that they make good giving targets. Open Phil has high standards of rigor, and I trust it to recommend interventions that have strong arguments in their favor.

Due to the haste consideration, I want to seriously consider donating to GCR interventions this year or next year. The evidence for the effectiveness of GCR organizations is uniformly much weaker than the evidence for GiveWell top charities, but this does not rule them out as contenders for the best cause area. Their overwhelming importance means I am willing to be more lenient about their strength of evidence than I would be for proximate interventions.

Neglectedness

GCR reduction as a cause is highly neglected right now, but more large donors are showing an interest in the topic, so it’s plausible that it will receive more funding in the future. Even so, funding it now may provide more information for future donors and help the field grow more quickly. Additionally, there’s the haste consideration: we don’t know when a major global catastrophe will occur or how long it will take to prepare for, so we should begin preparing as early as possible. Something like unfriendly AI is probably at least two decades away, but it will probably take more than two decades to develop solid theory around friendly AI.

Tractability

It’s not obvious what counts as evidence that a GCR intervention is working. I discuss this specifically with regard to the individual organizations that I consider.

AI Safety

Preventing unfriendly AI might successfully avert human extinction, which would have an extremely large impact. Furthermore, building a friendly AI is plausibly more important than any other GCR if it enables us to produce astronomically good outcomes that we would not be able to produce otherwise.

Friendly AI and Non-Human Animals

Given that non-human animals may dominate the expected value of the far future, it’s important that an AI gives them appropriate consideration. I discuss this issue in a few places in this document. Here I have a couple of additional quick points:

  • An AI-controlled future could be much better or much worse than a human-controlled future because a superintelligent AI would have more power to shape the universe than unaided humans.
  • Research on cooperation between goal-agents is possibly more valuable than research on encoding human values, because it’s probably less likely to lead to a far future that’s very bad for animals.

GCRs Other than AI Safety

I do not know of any good ways to fund organizations putting work into individual GCRs other than AI safety. I agree with Open Phil’s assessment that biosecurity and geoengineering are highly promising (although I have only briefly investigated these areas, so much of my confidence comes from Open Phil’s position and not from my own research). I do not see reason to believe that some GCR other than AI risk is substantially more important; no GCR looks much more likely than AI risk, and right now it looks much easier to efficiently support efforts to improve AI safety than to support work on other major GCRs. I expect that there’s perhaps some geoengineering research worth funding, but I don’t have the expertise to identify it and I don’t know how to find geoengineering research that’s funding-constrained.

FLI has published a list of organizations working on biotechnology, although if I tried to read through this list and find ones worth supporting, I would do a poor job; if someone with some domain knowledge looked through these to find ones potentially worth donating to, that could be highly valuable. I believe I understand AI safety well enough to roughly assess organizations in the field, but this is not the case with any other GCR.

If I did come to the conclusion that some specific GCR other than AI safety was the most important, I should probably try to use Open Phil’s research to learn more. I discuss this potential giving opportunity in “Open Phil-Recommended GCR Charities” below. I would also encourage other EAs, especially those with some knowledge about some relevant field such as biosecurity, to explore the available options and publicly write about any good giving opportunities they find.

If a sufficiently strong giving opportunity arises in a field of GCR reduction other than AI safety, I will seriously consider it; but at present I don’t see any.

Movement Building

Effective charities that work to reduce GCRs may be many times better than global poverty charities, so organizations that create new EAs may largely be valuable insofar as they create new donations to GCR charities. If I thought global poverty were the best cause, then meta-organizations that attempt to grow the donation base may be even better. But if GCR reduction is vastly more important, then movement-building charities produce most of their value from a small set of donors who support GCR reduction.

It’s possible that there exist movement-building organizations that produce a sufficiently large benefit to outweigh donations to effective GCR charities. I discuss this possibility when looking at individual movement-building organizations below. But in the general case, I expect that donating directly to the best object-level charity will have a higher impact than donating to movement building organizations.

There are a few additional concerns with supporting movement building; Peter Hurford discusses the most important ones here.

Meta-Research

Meta-research (which mostly means charity evaluation, although it also could include things like Charity Science's research on fundraising strategies) potentially has a lot of value if it successfully discovers new interventions with bigger impact or room for more funding than any current interventions that are popular among EAs. It's hard to predict when this will actually happen, and it depends on the extent to which you believe EAs have already identified the best interventions. But I'm generally optimistic about efforts to produce new knowledge.

Organizations

Here I briefly discuss the major considerations for and against every organization I have seriously considered. The organizations are grouped roughly by category and otherwise listed in no particular order.

I do not discuss a number of potentially promising organizations because surface signs show that they are unlikely to be the most effective charity, and I couldn’t find good enough information about them to feel confident about donating to them.

Machine Intelligence Research Institute (MIRI)

Emotional disclosure: I feel vaguely uncomfortable about MIRI. Originally I was bothered by Eliezer’s lack of concern for animals and worried that he would make decisions to benefit humans at the expense of other conscious minds; MIRI’s new director Nate Soares does seem to give appropriate value to non-human animals, so this is less of a concern. I also was bothered by how hard it was to tell if it was doing anything good. Today, it is more transparent and produces more tangible results. This second concern may still be significant, but it is over-weighted in my emotional response to MIRI. I still have an intuitive reaction that AI research isn’t as good as actually helping people or animals, but I try to ignore this because I don’t believe it’s rational.

The evidence for MIRI’s effectiveness is considerably weaker than for GiveWell top charities. I have some concerns, but I see a number of reasons to expect that MIRI is succeeding at achieving its short-term goals, which gives me confidence in its organizational competence. It doesn’t have some of the same problems I see with FLI (which I discuss in the separate section on FLI), which makes me prefer MIRI over FLI.

Strength of Evidence

MIRI is trying to improve outcomes in the future, so it’s not clear what qualifies as evidence that MIRI is currently doing a good job. We can’t get direct evidence without predicting the future, so here are a few things I look for:

  1. Its researchers and leadership appear competent and devoted to the problem.
  2. It has high research output and its research is well-regarded by others in the field.
  3. It successfully convinces other AI researchers that alignment is an important problem.
  4. Respected AI researchers endorse MIRI as effective.
  5. It is transparent and makes an effort to publicly disclose its activities, accomplishments, and failures.
  6. Its researchers care about non-human animals.

Competence

Based on personal conversations with MIRI researchers and reading their public writings, I get the impression that they have a strong grasp on which sub-problems in AI safety are important and how to make progress on them. I have only had fairly limited personal interactions with MIRI researchers; the most extensive interaction I had was when I attended a MIRIx workshop where we discussed their paper “Robust Cooperation in the Prisoner’s Dilemma”. The problem this paper attempts to solve has clear relevance to AI safety–we would like superintelligent agents to cooperate with us and with each other on real-world prisoner’s dilemmas–and the paper makes obvious steps toward solving this problem while also outlining what remains unsolved.

More broadly, the items listed on MIRI’s technical agenda look like important and urgent problems. At the very least, MIRI appears competent at identifying significant research problems that it needs to solve. My impression is that MIRI is doing a better job than anyone else at identifying the important problems, although this is difficult to justify explicitly.

We have to consider how competent MIRI is compared to other researchers we could fund: perhaps if some other people were working on the sorts of problems that MIRI works on, they would solve them much more quickly and efficiently. I find this somewhat unlikely. I have read a lot of writings by Eliezer Yudkowsky, Luke Muehlhauser, and Nate Soares (Eliezer is the founder and senior researcher, Luke is the ex-director, and Nate is current director), and they strike me as intelligent people with strong analytic skills and a good grasp of the AI alignment problem. I briefly looked through the FLI grantees, and MIRI’s research plan seems more obviously important for AI safety than many of the grantees.

Published research

Although MIRI published little before 2014, it has started publishing more papers since then. I haven't engaged much with its research papers, but a cursory examination shows that they are probably relevant and valuable. It looks like most of MIRI's papers are purely self-published, but a few have been accepted to respected conferences (including AAAI-15), although I don't know how high a bar this is. This is another of MIRI's weak points–there's no clear evidence that other AI researchers respect its publications. MIRI papers are rarely cited by anyone other than MIRI itself, and I would feel more confident about MIRI if it received more citations. This is not a strong negative signal because AI safety is such a small field, but it's certainly not a positive signal either.

Influence

Nate has discussed the fact that AI researchers appear more concerned about safety than they used to be, although it is unclear whether MIRI has had any causal role in bringing this about. Alyssa Vance lists a few prominent academics who are familiar or involved with MIRI’s work. I would like to see this trend continue–AI safety remains a small field, and few AI researchers work on safety full-time.

I don’t know much about this, but my understanding is that in the past year or so, FLI has done a good deal more than MIRI to generate academic interest in AI safety; but MIRI had done more in previous years, and FLI probably wouldn’t exist (or at least wouldn’t be concerned about AI) if it weren’t for MIRI. This suggests that MIRI has done a reasonably good job in the past of raising concern for AI safety, which is a good sign for MIRI’s competence. It certainly could have been much more successful–MIRI has existed for over a decade and AI safety has only recently begun gaining momentum. The idea of AI safety sounds prima facie absurd, so I’d expect it to be hard to convince people that it matters, but perhaps someone other than MIRI could still have done a better job raising concern. (Today FLI seems to be doing a better job, although this may largely come from the fact that MIRI is focusing less on advocacy and more on research.)

Endorsements

Stuart Russell has publicly endorsed the importance of AI safety work and serves as a research advisor to MIRI. The advisory board consists of professors and AI researchers. I don’t know what sort of relationship the advisors have with MIRI or to what extent serving as an advisor acts as an implicit endorsement of MIRI’s competence.

From what I have seen, MIRI is fairly lacking in endorsements from respected AI researchers. I do not know how likely it would be to get endorsements if it were doing valuable work, so I don’t know how concerning this is, but it certainly counts as evidence against MIRI’s effectiveness.

Nate has claimed that when he discusses the problems MIRI is working on with AI researchers, they agree that the problems are important:

I talk to industry folks fairly regularly about what they’re working on and about what we’re working on. Over and over, the reaction I get to our work is something along the lines of “Ah, yes, those are very important questions. We aren’t working on those, but it does seem like we’re missing some useful tools there. Let us know if you find some answers.”

Or, just as often, the response we get is some version of “Well, yes, that tool would be awesome, but getting it sounds impossible,” or “Wait, why do you think we actually need that tool?” Regardless, the conversations I’ve had tend to end the same way for all three groups: “That would be a useful tool if you can develop it; we aren’t working on that; let us know if you find some answers.”

Given that Nate is obviously motivated to believe that AI researchers value the work he’s doing, he could be cherry-picking or misinterpreting people’s claims here (I doubt he would do this deliberately but he may do it subconsciously or accidentally). It’s also possible that people exaggerate how important they believe his research is for the sake of politeness. He does not provide any specific quotes or name any researchers who endorse MIRI’s work as important, so I do not consider his claims here to be strong evidence.

Transparency

MIRI makes some effort to make itself more transparent:

  • It publishes monthly newsletters that describe its research and activities
  • It writes annual reviews
  • It occasionally writes up explanations of its papers on the MIRI blog

As far as I know, it was not doing any of these things three years ago, so this shows promise.

Even better, it has a detailed guide to what technical problems MIRI is researching and a technical agenda explaining why it works on the problems it does. These materials were published relatively recently, so MIRI is increasing transparency.

Concern for Animals

Strictly speaking, this doesn’t have anything to do with MIRI’s skill at AI safety work, but one of my major concerns with friendly AI research is that it could lead to the development of an AI that benefits humans at the expense of non-human animals. In a separate essay, I come to the conclusion that GCR reduction is probably valuable even considering its impact on non-human animals. Even so, I feel better about people doing AI safety research if they care about animals and are therefore motivated to do research that will not end up harming animals.

I am somewhat more optimistic because Nate Soares, the current director of MIRI, appears to place high value on non-human animals; I have spoken with him about this issue, and he agrees it would be bad if an AI did not respect the interests of non-human animals and that it’s a genuine concern. I briefly investigated the positions of most of the other full-time MIRI employees. From what I can glean from public information, it looks like Rob Bensinger places adequate value on non-human animals and has a good understanding of why it’s silly to not be vegetarian. Patrick LaVictoire apparently cares about animals, and Katja Grace talks as though she cares about animals but I find her arguments against vegetarianism concerning2 (Rob Bensinger has counterarguments). Eliezer Yudkowsky doesn’t believe animals are morally relevant at all. I don’t know about the rest of MIRI’s staff.

Room for More Funding

Based on MIRI’s fundraising goals and current funds raised, I expect that it has substantial room for more funding. It has laid out a fairly coherent plan for how it could use additional funds, and Nate believes it could effectively use up to $6 million. Although I am less confident about its ability to usefully deploy an additional $6 million than an additional, say, $1 million, it is unlikely to raise that much in the near future; I expect it to continue to have a substantial funding gap.

AI safety is attracting considerably more attention: Elon Musk has donated $10 million, and other donors or grantmakers may put more money into the field. This is still fairly uncertain, and I don’t want to count on it happening; plus, I expect MIRI to have a better idea of which problems matter than most AI researchers or grantmakers (MIRI researchers have been working full-time on AI safety for a while), so funding MIRI probably matters more than funding AI safety research in general.

I’m concerned that FLI did not make larger grants to MIRI; this reflects negatively on MIRI’s potential room for funding. I suspect FLI is being too conservative about making grants, but it has more information than I do, so it’s hard to say. This is one of my primary concerns with MIRI. I’ve tried to find out more information about FLI’s decision here, but its grantmaking process involved confidential information, so there’s a limit to what I can learn.

Future of Humanity Institute (FHI) and Centre for the Study of Existential Risk (CSER)

Both of these organizations are potentially high value, but representatives of both organizations have claimed that they are not currently funding constrained.

Neil Bowerman from FHI:

I would argue that FHI is not currently funding constrained….We could of course still use money productively to hire a communications/events person, more researchers and to extend our runway, however at present I would suggest that funding x-risk-oriented movement building, for example through Kerry Vaughan and Daniel Dewey’s new projects, is a better use of funds than donating to FHI for EA-aligned funding. source

Sean O hEigeartaigh from CSER:

We’re not funding constrained in the large at the moment, having had success in several grants. We have good funding for postdoc positions and workshops for our initial projects. Most of our funding has some funder constraints, and so we may need small scale funding over the coming months for ‘centre’ costs that fall between the cracks, depending on what our current funders agree for their funds to cover – one example is an academic project manager position to aid my work. source

Both of these people posted comments on a Facebook thread after Eliezer said these organizations were funding-constrained. Apparently a good way to find information about an organization is to make public, incorrect claims about it.

Edited 2015-09-21 to add: The fact that these organizations claim they don’t have room for more funding makes me more confident that they’re optimizing for actually reducing existential risk rather than optimizing for personal success. If one of them does become substantially funding-constrained in the near future, I consider it fairly likely that it will be the best giving opportunity.

Future of Life Institute

FLI organized the “Future of AI” conference on AI safety and funded AI research projects that cover a somewhat broader range than MIRI’s research does. It plans to expand into biosecurity work, but at the time of this writing it has not gotten beyond the early stages.

Size of Impact

I expect the median FLI grant to be less effective than the same amount of money given to MIRI, but due to its breadth it may hit upon a small number of extremely effective grants that end up making a large difference. That said, the broader approach of FLI looks more reasonable to fund for someone who doesn’t have strong confidence that MIRI is effective at reducing AI risk.

Some of FLI’s AI grants are probably highly effective. However, I find some of them concerning. Some of the research projects attempt to make progress on inferring human values. If the inferred human values are harmful (more specifically, they do not assign sufficient value to non-human animals or other sorts of non-human minds), the AI could produce very bad outcomes such as filling the universe with wild-animal suffering. I think this is more likely not to happen than to happen, but it’s a substantial concern, and it’s an argument in favor of spreading good values to ensure that if AI researchers create a superintelligent AI, they give it good values.

I do not have the same concern with MIRI: I have spoken to Nate Soares about this issue, and he agrees that encoding human values (as they currently exist) in an AI would be a bad idea, in part because it might give insufficient weight to non-human animals.

Room for More Funding

FLI recently received $10 million from Elon Musk and an additional $1 million from the Open Philanthropy Project. From Open Phil’s writeup:

After working closely with FLI during the receipt and evaluation of proposals, we determined that the value of high quality project proposals submitted was greater than the available funding. Consequently, we made a grant of $1,186,000 to FLI to enable additional project proposals to be funded.

It sounds like Open Phil gave FLI exactly as much money as it believed FLI needed to fund the most promising research proposals. This makes me believe that FLI has no room for more funding. Even if FLI had wanted to fund more grants, I don’t believe my donations could actually enable it to do so.

Suppose FLI has $X and would like to have $(X+A+B+C). Open Phil believes FLI should have $(X+A+B). If I do nothing, Open Phil will give $(A+B) to FLI. If I give $A to FLI, Open Phil will give $B, so either way FLI ends up with $(X+A+B). I cannot give money to FLI after Open Phil does, because FLI will have finished making grants by then. I believe this model approximately describes the situation during the previous round of grantmaking and probably describes future rounds as well; so my donations would only serve to reduce the amount of money that Open Phil gives to FLI.
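The displacement dynamic above can be sketched numerically. This is a minimal illustration with hypothetical dollar figures (in thousands); the function name is my own, not anything from Open Phil’s writeup:

```python
def fli_final_funds(x, open_phil_target, my_donation):
    """Open Phil tops FLI up to its target level after observing other
    donations, so FLI's final funds are unchanged by my gift (as long as
    my gift doesn't exceed the gap Open Phil intended to fill)."""
    open_phil_gift = max(0, open_phil_target - (x + my_donation))
    return x + my_donation + open_phil_gift

# Hypothetical figures: FLI starts with X = $1,000k; Open Phil wants it
# at X + A + B = $1,500k. Whether I give $0 or $200k, FLI ends at $1,500k;
# my donation only shrinks Open Phil's grant.
print(fli_final_funds(1000, 1500, 0))    # 1500
print(fli_final_funds(1000, 1500, 200))  # 1500
```

The only way my donation changes FLI’s total is if it exceeds the entire gap Open Phil planned to fill, which is implausible for a small donor.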

At the time of this writing, Open Phil has not produced any recommendations on GCR interventions that small donors can viably support, and probably won’t for a while. In fact, it’s not even clear that it has any plans to do so. I looked through Open Phil’s published materials and could not find anything on this.

(Edited 2015-09-16 to clarify.)

But if Open Phil does produce recommendations for small donors, it’s likely that one or some of these recommendations will represent better giving opportunities than any existing GCR charity that I could identify on my own.

Size of Impact

GCRs have possibly the largest impact of any cause area; the case for this has been made before and does not need to be repeated here. Presumably, Open Phil recommendations will have as large an impact as any other organizations working in the GCR space, although it’s fairly likely that Open Phil will not find any organizations that have a higher impact than organizations like MIRI or FHI that are well known in EA circles.

Waiting for Open Phil means losing out on any value that could be generated between now and then, including both direct effects and learning value. The haste consideration weighs heavily in favor of supporting organizations now rather than waiting for Open Phil.

Strength of Evidence

I expect strength of evidence to be the main benefit of Open Phil-recommended organizations over current organizations. Although Open Phil focuses on more speculative causes than GiveWell classic, it still does extensive research into cause areas, and I would expect it to recommend specific interventions if it has strong reason to believe that they are effective. Right now, the organizations working on GCR reduction have only weak evidence of impact, and Open Phil will likely change this.

Room for More Funding

Although Open Phil-recommended GCR organizations may be the best giving opportunities in the world, I have major concerns about their neglectedness. Right now Good Ventures has more money than it knows how to move, and it could fill the room for more funding for all of Open Phil’s recommendations on GCR reduction. If I donate to GCR, it may only displace donations by Good Ventures. I see this as a major argument against waiting for Open Phil recommendations. It’s possible that Open Phil will find massively scalable opportunities in this space, but it does not seem likely that it will find anything so scalable that it can absorb any funds Good Ventures directs at it and still have room for more funding.

GiveWell/Open Philanthropy Project

(Here I use GiveWell to refer to both classic GiveWell and Open Phil.)

Size of Impact

(Edited 2015-09-16 to expand on my reasoning.)

I consider it likely that Open Phil’s work on GCRs will find interventions that are more effective than anything EAs are currently doing. But it seems rather unlikely that their other current focus areas (except possibly factory farming) will produce anything as effective. Over the next 5-10 years the existing institutions working on GCR reduction may run out of room for more funding as the EA movement grows and/or GCR reduction efforts attract more interest, in which case Open Phil-type work of seeking out new interventions would be especially valuable; but I don’t think we’re there yet. It’s also unclear to what extent GiveWell can use additional funds from small donors to produce recommendations more quickly.

If I believe that GCR interventions are much more effective in expectation than most other sorts of interventions (which I do), then Open Phil’s effectiveness gets diluted whenever it works on anything other than GCR reduction. I understand that Open Phil/Good Ventures want to fund a broader range of interventions, and that may make sense for someone with as much money as Good Ventures; but if I believe they are leaving funding gaps in GCR interventions then I can probably have a bigger impact by funding those interventions directly rather than by supporting Open Phil.

Strength of Evidence

GiveWell appears to apply much more rigor and clear thinking to charity analysis than anyone else. I trust its judgment more than my own in many cases. I am concerned that it does not place sufficient attention on sentient beings other than humans. Open Phil recently committed $5 million to factory farming, which I find promising but ultimately much too limited. Good Ventures recently committed to a $25 million grant to GiveDirectly; it’s plausible that the factory farming grant will do perhaps two or three orders of magnitude more good per dollar than the GiveDirectly grant and should be receiving a lot more money. If GiveWell as an organization shared my values about the importance of animals, I might be more likely to support it, but their current spending patterns make me reluctant.

Room for More Funding

(Edited 2015-09-16 to clarify.)

Good Ventures currently pays for a large portion of GiveWell’s operating expenses, and GW has no apparent need for funding. It wants to maintain independence from Good Ventures by keeping other sources of funding, but I do not find this consideration very important. If Good Ventures stops backing them and GiveWell finds itself in need of funding, I will reconsider donating.

I don’t know how much influence Good Ventures has over GiveWell’s activities. I’m considerably less confident in Good Ventures’ cause prioritization skills than GiveWell’s, so I prefer that GiveWell has primary control over Open Phil’s cause selection, although I don’t know how much this matters. I wish GiveWell were more transparent about this.

Animal Charity Evaluators (ACE)

Emotional disclosure: I feel a strong positive affect surrounding ACE, so I may end up overrating its value. The thought of giving money to ACE makes me feel like I’m making a big difference on an emotional level, and this feeling probably biases me in ACE’s favor.

Size of Impact

ACE has three plausible routes to impact that I can see.

  1. It could discover new effective interventions.
  2. It could produce stronger evidence for known interventions and thus persuade more donors to direct money there.
  3. It could produce strong evidence that known interventions are ineffective and thus direct money elsewhere.

On #1: I expect there are highly effective methods of helping animals that are not yet tractable or even well-understood, such as environmental interventions to reduce wild-animal suffering. ACE cares about wild-animal suffering and would likely do research on under-examined and potentially high-impact topics such as this if it had a lot more funding. It’s unlikely that my funding would push ACE over the edge to where it decides to invest more in research of this sort; but it cannot do this research unless it has more funding, and it cannot get more funding unless people like me provide it. ACE is also small enough that if I requested that it do more research in some area, it would probably be willing to entertain the possibility.

On #2: I have met several people who do not donate to animal charities solely because they think the evidence for them is too weak. If ACE produced higher-quality research supporting animal charities, this would almost certainly persuade some people to donate to them; but I don’t know how much money would be directed this way.

On #3: If ACE discovers that popular interventions are much less effective than previous evidence showed, to the point where GW top charities look more effective and more donors start giving to GW charities instead, the maximum impact here would be the impact had by marginal donations to GW top charities. This is a much smaller impact than the plausible impact of effective animal charities. Certainly if current interventions are ineffective then we want to know, but discovering this is much less valuable than discovering that some factory farming intervention is definitively 10x more impactful than a GW top charity. Changing impact from 0x to 1x is much less important than changing from 1x to 10x.

Edited 2015-09-16 to add: ACE may find that some types of interventions are ineffective while others are effective and thus direct funding to the more effective interventions. This would be about as valuable as #2, and possibly more effective because this sort of evidence might be able to move more money. Thanks to Carl Shulman for raising this possibility.

It is unclear how to reason about the probability of #1 and #3 or the effect size of #1 and #2, so I do not know much about the expected impact of each. Number 1 certainly has the highest upside and makes me the most optimistic about the value of donations to ACE. I would like to see rigorous work on interventions to help animals on a massive scale (such as wild animals or animals in the far future). Right now, we’re nowhere close to being able to produce this sort of work, but the best way I can see to push us in that direction is to support ACE.

As I explain in “Is Preventing Human Extinction Good?”, I see good reason to be optimistic about the long-term impact of humanity on all sentient life. In the words of Carl Sagan, “If we do not destroy ourselves, we will one day venture to the stars” (and biologically engineer animals to be happy). Thus it looks like ensuring humanity continues to survive is more important than reducing wild-animal suffering in the medium-term future.

It’s possible that spreading concern for wild animals will have a massive effect on the far future; but it’s not at all clear that ACE research will ever have this effect, even if it does research on reducing wild animal suffering. I discuss my general concerns with values spreading in “On Values Spreading”. Even so, I believe ACE has a decent chance of being the most effective charity. It’s not too unlikely that if ACE had substantially more funding, it would find an intervention or interventions that are more effective than anything that currently receives funding. This makes supporting ACE look like a promising option.

Strength of Evidence

ACE does not have as strong a reputation as GiveWell, although it is a much newer and smaller organization so this is to be expected. The interactions I have had with employees and volunteers at ACE have left me with a strong positive impression of their competence and concern for the problems they are attempting to solve. Their research results have not been nearly as in-depth as GiveWell’s, but ACE acknowledges this. This is largely a product of the lack of studies that have been done on animal advocacy. ACE is making some efforts to improve the state of research, and these efforts look promising. I spoke to Eric Herboso3 about this, and he had clearly put some thought into how ACE can improve the state of research.

Room for More Funding

ACE appears to have strong ability to absorb more funding. Right now it has a budget of only about $150,000 a year–not nearly enough to do the sort of large randomized controlled trials that it wants. I expect ACE could expand its budget several-fold without having much diminishing marginal effectiveness. Additionally, if it expanded, it could broaden its scope, putting more effort into researching wild animal suffering or other speculative but potentially high-impact causes.

Learning Value

ACE does research and publicly publishes its results, so I believe donations to ACE have particularly high learning value. Peter Hurford has argued “when you’re in a position of high uncertainty, the best response is to use a strategy of exploration rather than a strategy of exploitation.” I expect donations to ACE to produce more valuable knowledge than donations almost anywhere else, which makes me optimistic about the value of donations to ACE. In particular, I expect ACE to produce substantially more valuable research per dollar spent than GiveWell.

Animal Ethics (AE) and Foundational Research Institute (FRI)

Both these organizations do high-level research and values spreading for fairly unconventional but important values like concern for wild animals. I wouldn’t be surprised if one of these turned out to be the best place to donate, but I don’t know much about their activities or room for more funding and I’ve had difficulty finding information. The only thing I can see them publicly doing is publishing essays. While I find these essays valuable to read, I don’t have a good picture of how much good this actually does.

A note to these organizations: if you were more transparent about how you use donor funds, I would more seriously consider donating.

Giving What We Can (GWWC)

I’m skeptical about the value of creating new EAs because the 2014 EA survey showed that the average donation size was rather small. However, Giving What We Can members are probably substantially better than generic self-identified EAs because GWWC carefully tracks members’ donations. I can’t find any more recent data, but from 2013 it looks like members have a fairly strong track record of keeping the pledge.

At present, only a tiny fraction of GWWC members’ donations go toward GCR-reduction or animal-focused organizations, which may be much higher value than global poverty charities. Based on GWWC’s public data, it has directed $92,000 to far-future charities so far (and apparently $0 to animal charities, which I find surprising). If we extrapolate from GWWC’s (speculative) expected future donations, current members will direct about $287,000 to far-future charities. That’s less than GWWC’s total costs of $443,000, but the additional donations to global poverty charities may make up for this. I’m skeptical that GWWC will have as large a future impact as it expects to have (a 60:1 fundraising ratio seems implausibly high), and it’s not clear how many of its donations would have happened anyway. I know a number of people who signed the GWWC pledge but would have donated just as much if they hadn’t. (I don’t know how common this is in general.) Additionally, I don’t see a clear picture of how donations to GWWC translate into new members. GWWC might raise more money than Charity Science or Raising for Effective Giving (both discussed below), but I have a lot more uncertainty about it, which makes me more skeptical.

These various factors make me inclined to believe that directly supporting GCR reduction or high-learning-value organizations will have greater impact than supporting GWWC.

Charity Science

Charity Science has successfully raised money for GiveWell top charities (it claims to have raised $9 for every $1 spent) through a variety of fundraising strategies. It has helped individuals run Christmas fundraisers and created a Shop for Charity browser extension that allows you to donate 5% of your Amazon purchases at no cost to you. It has plans to explore other methods of fundraising such as applying the REG model to other niches and convincing people to put charities in their wills.

Size of Impact

Right now Charity Science focuses on raising money for GiveWell top charities. Its fundraising model looks promising–it tries a lot of different fundraising methods, so I think it’s likely to find effective ones–but I expect that the best charities are substantially higher-impact than GiveWell top charities, so this leads me to believe that donations to Charity Science are not as impactful as donations to highly effective far-future-oriented charities. I spoke with Joey Savoie, and he has considered doing research on effective interventions to help non-human animals. This is promising, and I may donate to Charity Science in the future if it ever focuses on this, but for now its activities look less valuable than ACE or REG (see below).

Edited 2015-09-16 to add: Carl Shulman points out that Charity Science’s 9:1 fundraising ratio substantially undervalues the opportunity cost of staff time, so the effective fundraising ratio is less than this. This looks like a bigger problem for Charity Science than for the other fundraising charities I consider.

Room for More Funding

Based on Charity Science’s August 2015 monthly report, it looks like it could use new funding to scale up and broaden its activities. It has enough ideas about activities to pursue that I believe it could deploy substantially more funds without experiencing much diminishing marginal utility.

Learning Value

Donations to Charity Science will likely have high value in terms of learning how to effectively raise funds. I’m uncertain about how valuable this is; I feel more confident about the value of learning about object-level interventions and I’m somewhat wary of movement growth as a cause, largely for reasons Peter Hurford discusses here.

Raising for Effective Giving (REG)

Size of Impact

In 2014, REG had a fundraising ratio of 10:1, about the same as Charity Science’s. I am somewhat more optimistic about the value of REG’s fundraising than Charity Science’s because REG has successfully raised money for far future and animal charities in addition to GiveWell recommendations. For details, see REG’s quarterly transparency reports. In the conclusion, I look at REG’s fundraising in more detail (including how much it raises for far future and animal charities) to try to assess how much value it has.

Strength of Evidence

Edited 2015-09-21.

The case for REG’s effectiveness appears pretty straightforward: it has successfully persuaded lots of poker players to donate money to good causes. Along with other movement-building charities, REG faces a concern about counterfactuals: how many REG-attributed donations would have happened anyway? I believe this is a serious concern for Giving What We Can–many people who signed the pledge would have donated the same amount anyway (I’m in this category, as are many of my friends).

REG’s case here looks much better than that of the other EA movement-building charities I’ve considered. REG focuses its outreach on poker players, most of whom were previously uninvolved in EA. Even if they were going to donate substantial sums prior to joining REG, they almost certainly would have given to much less effective charities.

Room for More Funding

REG is small and has considerable room to expand. It has specific ideas about things it would like to do but can’t because it doesn’t have enough money. I expect REG could effectively make use of an additional $100,000 per year, and perhaps considerably more than that. This is not a lot of room for more funding (GiveWell moves millions of dollars per year to each of its top charities), but it’s enough that I expect REG could effectively use donations from me, and probably from anyone else who might decide to donate to it as a result of reading this.

REG receives funding through the Effective Altruism Foundation (EAF, formerly known as GBS Switzerland). You can donate through REG’s donations page, and the funds are earmarked for REG; alternatively, you can donate to EAF directly and earmark your donations for REG.

Learning Value

Added 2015-09-17.

REG looks less exploratory than Charity Science, so it probably has somewhat lower learning value; but it is still pursuing an unusual fundraising model with a lot of potential to expand (especially into other niches). On balance, REG appears to have fairly strong learning value, and I want to see what sorts of results it can produce moving forward.

Other Organizations

I know of a handful of other organizations that might be highly effective, but that I don’t have much to say about. For these, I don’t have a strong sense of whether what they do is valuable, and they look sufficiently unlikely to be the best charity that I didn’t think they were worth investigating further at this time. I have included a brief note about why I’m not further investigating each charity.

  • Global Priorities Project: insufficiently transparent about activities
  • 80,000 Hours: unclear whether it has a positive effect
  • Direct Action Everywhere: evidence of effectiveness is too murky
  • Nonhuman Rights Project: weak evidence of effectiveness, unclear what partial success looks like
  • EA Ventures: insufficiently transparent about who gets money

Conclusions

I have selected three finalist charities that are all plausibly the best, but they are in substantially different fields and therefore difficult to compare.

Brief explanations for charities I’m not supporting

Here I list all the charities I considered that are not finalists and briefly explain why I have chosen not to support them.

  • GiveWell-recommended global poverty charities: small effect size relative to GCR reduction
  • ACE-recommended veg outreach charities: small effect size relative to GCR reduction; weak evidence
  • FHI, CSER: limited room for more funding
  • FLI: weaker case than MIRI; concerns about encoding human values; Open Phil will fill room for more funding
  • Future Open Phil-recommended GCR interventions: Good Ventures/other donors may fill room for more funding; money now is worth substantially more than money in a few years
  • GiveWell/Open Phil: Good Ventures will fill room for more funding; less valuable than ACE
  • Animal Ethics, Foundational Research Institute: too much uncertainty about whether they’re doing anything effective
  • GWWC: unclear value
  • Charity Science: raises for less effective charities than REG

Finalist Comparison

I have narrowed the list of considered charities to three finalists:

  • Machine Intelligence Research Institute (MIRI)
  • Animal Charity Evaluators (ACE)
  • Raising for Effective Giving (REG)

Here I give the advantages of each of them over the others.

In Favor of MIRI over ACE

  • GCR reduction probably matters more than helping animals in the short term or spreading concern for animals, and AI safety looks like the most important and neglected GCR.

In Favor of ACE over MIRI

  • I have a little more confidence that ACE leadership is good at achieving its goals.
  • ACE has better learning value. Due to the nature of its work, its activities produce a lot of new information, and ACE researchers are trying hard to make this information high-value.
  • ACE looks more funding-constrained, and animal welfare will probably continue to be an unpopular cause for longer than AI safety will. Similarly, funding now could do a lot to help ACE expand, whereas MIRI has stronger momentum.

In Favor of REG: Weighted Donation Multiplier

Edited 2015-09-17.

To get an idea of the value of REG’s fundraising, I looked at the charities for which they have raised money and assigned weightings to them based on how much impact I expect they have. I created two different sets of weightings: one where I assume AI safety is the most impactful intervention (with MIRI as the most effective charity) and one where I assume animal welfare/values spreading is highest leverage (with ACE as the most effective charity). The AI model reflects my current best guesses, but I created the animal model to see what sorts of results I would get.

This table shows how much money REG raised in each category over its four quarters of existence to date (in thousands of dollars), taken from its lovely transparency reports:

Name 2014Q3 2014Q4 2015Q1 2015Q2 Total
GBS 18 126 4 30 178
ACE 0 25 0 0 25
animal (veg) 0 100 0 0 100
speculative 0 25 0 20 45
MIRI 0 0 0 53 53
Other 20 93 53 50 216
Total 38 369 57 153 617

I used these fundraising numbers and assumed REG’s expenses through 2015Q2 are $100,000, extrapolating from 2014’s expenses of $52,318.

For my two models I used the following weights:

Category AI-Model Weight Animal-Model Weight
GBS 0.2 0.2
ACE 0.5 1
veg advocacy 0.2 0.3
speculative 0.3 0.5
MIRI 1 0.2
Other 0.1 0.1
  • Veg advocacy includes charities that promote vegetarianism and meat reduction. All charities in this category were ACE recommendations or ACE standout charities.
  • Speculative includes unconventional organizations that share my concern for non-human animals, including Animal Ethics and the Nonhuman Rights Project.
  • Other includes everything else; most of this money went to GiveWell top charities.

For GBS, I conservatively assume that all money directed toward GBS goes to activities other than REG (and I give these activities a weight of 0.2). Accounting for GBS funding going back to REG involves some complications, so to be conservative I ignore any compounding effects that occur this way.

The categories in this model may well vary much more in effectiveness than the weights I have listed here suggest. I kept all the weights relatively close together because I do not have strong confidence about how much good each of these categories does. I might be able to make an inside-view argument that, say, MIRI is 1000x more effective than anything else on this list, but from the outside view, I shouldn't let such an argument carry too much weight.

In the AI/MIRI model, I found that $10 of REG expenditures produced about $16 of weighted donations; in the ACE/animal model, every $10 spent produced $15 of weighted donations. This means that $10 to REG produced about $16 in equivalent donations to MIRI in the first model, and $15 in equivalent donations to ACE in the second model.[4]
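For readers who want to check the arithmetic, here is a small sketch reproducing the calculation from the tables above (amounts in thousands of dollars, using the assumed $100,000 expense figure):

```python
# Reproduce the weighted fundraising multiplier from the tables above.
# Amounts raised by category over 2014Q3-2015Q2, in thousands of dollars.
raised = {"GBS": 178, "ACE": 25, "veg": 100, "speculative": 45,
          "MIRI": 53, "Other": 216}
expenses = 100  # assumed REG expenses through 2015Q2, in thousands

# Per-category weights under the two models.
weights = {
    "AI model":     {"GBS": 0.2, "ACE": 0.5, "veg": 0.2,
                     "speculative": 0.3, "MIRI": 1.0, "Other": 0.1},
    "Animal model": {"GBS": 0.2, "ACE": 1.0, "veg": 0.3,
                     "speculative": 0.5, "MIRI": 0.2, "Other": 0.1},
}

for model, w in weights.items():
    weighted = sum(raised[k] * w[k] for k in raised)
    per_ten = 10 * weighted / expenses  # weighted dollars per $10 spent
    print(f"{model}: ${per_ten:.0f} weighted per $10 spent "
          f"(ratio {weighted / expenses:.1f}:1)")
# AI model: $16 weighted per $10 spent (ratio 1.6:1)
# Animal model: $15 weighted per $10 spent (ratio 1.5:1)
```

This matches the figures in the text: roughly $16 in MIRI-equivalent donations per $10 under the AI model and $15 in ACE-equivalent donations under the animal model.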

When we weight the charities that REG has produced donations for, its fundraising ratio drops from 10:1 to a much more modest 1.5:1. Donating to REG instead of directly to an object-level charity produces an additional level of complexity, which means my money has more opportunities to fail to do good. A 1.5:1 fundraising ratio is probably high enough to outweigh my uncertainty about REG’s impact, but not by a wide margin.

But there’s another argument working in REG’s favor. I have considerable uncertainty about whether it’s more important to support values spreading-type interventions like what ACE or Animal Ethics does, or to support GCR reduction like MIRI. GCR reduction looks a little more important, but it’s a tough question. The fact that REG produces a greater-than-one multiplier using both a MIRI-dominated model and an ACE-dominated model means that if I donate to REG, I produce a positive multiplier either way. If I choose to donate to either MIRI or ACE, I could get it wrong; but if I donate to REG, in some sense I’m guaranteed to “get it right” because donations to REG probably produce greater than $1 in both MIRI-equivalent and ACE-equivalent donations.

I don’t want to put too much value on this fundraising ratio because there are various reasons why it could be off by a lot. It appears to show that REG fundraising is valuable even if you discount most of the charities it raises money for, which was my main intention. This alone is not sufficient to demonstrate REG’s effectiveness to my mind, but its leadership looks competent and its model has reasonably strong learning value.

A caveat: just because REG has raised a lot of funds for MIRI and animal charities in the past doesn’t mean it will continue to do so. But it raised these funds from a number of different people and over multiple quarters, so this is good reason to believe that it will continue to find donors interested in supporting MIRI and ACE/animal charities. Additionally, Ruairi Donnelly, REG’s Executive Director, has said to me in private communication that REG is meeting with more donors who want to fund far-future oriented work and that he hopes REG will move more money to these causes in the future.

There’s a concern about whether REG will continue to raise as much money per dollar spent as it has in the past. I expect REG to experience diminishing returns, although it is a new and very small organization so returns should not diminish much in the near future. I don’t have a strong sense for the size of the market of poker players who might be interested in donating to effective causes. It looks considerably bigger than REG’s current capacity so REG has some room to scale up, but I don’t know how long this will continue to be true. If REG’s fundraising ratio dropped to 5:1 and it didn’t increase funding to far-future charities, I would probably not donate to it; but it seems unlikely that it will drop that much in the near future.

Decision

Edited 2015-09-24. I had previously written that I didn’t know if I was going to donate this year.

Based on all these considerations, it looks like Raising for Effective Giving is the best charity to fund. My main concern here is falling into a meta trap. One possible solution here is to split donations 50/50 between meta- and object-level organizations. If I were to do this, I would give 50% to REG and 50% to MIRI. But I believe the EA movement could afford to be more meta-focused right now, so I feel comfortable giving 100% of my donations to REG.

I plan on directing my entire donation budget this year to REG. I will make the donation by the end of October unless I am persuaded otherwise by then. I am continuing to seek out reasons to change my mind and I’m open to criticism and to arguments that I should give somewhere else.

How to Donate

Added 2015-09-21.

You can donate to REG through the GBS Switzerland website (English, Swiss). If you live in the United States, you can make your donation tax-deductible by giving to GiveWell and asking it to forward the money to REG.

Where I’m Most Likely to Change My Mind

Added 2015-09-21.

  • Values spreading might be more important to fund than GCR reduction.
  • REG might not have as large a donation multiplier as it appears to.
  • Many of the charities REG directs donations to might be worse relative to the best object-level charity than I assumed, so donating directly to the best charity would have greater impact.
  • Current far-future-focused interventions might have too-weak evidence supporting them.

I’ve had conversations with people who believe each of these, and while I’m unpersuaded right now, I find their positions plausible.

Notes

  1. REG’s fundraising ratio is less than 1:1 for both MIRI and ACE, but I still consider it more valuable than direct donations to either MIRI or ACE individually. I explain why in the section on Raising for Effective Giving and in the conclusion.

  2. How to assess whether a person gives adequate concern to non-human animals could be the subject of an entire additional essay, but I don’t have a clear enough picture of how to do this to write well on the subject. My general impression is that people who claim to care about animals but have some justification for non-vegetarianism probably don’t actually care as much about animals as they say they do. They sometimes claim that the time and effort spent not eating animal products could be better spent donating to efficient charity (or something), but then don’t make trivial but hugely beneficial choices such as eating cow meat instead of chicken meat. I’m somewhat more convinced by people who eat animals but donate a lot of money to charities like The Humane League; I understand that vegetarianism is harder for some people than others, but actions signal beliefs more strongly than words do.

  3. Eric Herboso used to work at ACE as the Director of Communications; he’s currently earning to give while volunteering for ACE part time.

  4. It’s probably a coincidence that both models ended up with about the same weighted fundraising ratio. MIRI received about as much funding as ACE plus speculative non-human-focused charities, so these balance out in the two models.

Comments (67)

Comment author: Owen_Cotton-Barratt 28 September 2015 03:32:51PM 3 points

Global Priorities Project: insufficiently transparent about activities

I wanted to say thank you for this. There's always a tradeoff between reporting on what you're doing and getting on and doing more stuff, but this was a good reminder to look at whether we're getting the balance right, and I think we're going to devote a bit more effort to transparency.

Comment author: MichaelDickens  (EA Profile) 28 September 2015 06:09:10PM 0 points

That's good to hear! GPP looks promising and I'd like to see it talk more publicly about its activities.

Comment author: John_Maxwell_IV 23 September 2015 02:20:49PM 1 point

Another reason to like REG: I expect bringing more poker players in to the EA movement will be good for our culture if poker is effective rationality training. (But I still think a profession where people are paid to make accurate predictions, say successful stock pickers, could be even better.)

Comment author: MichaelDickens  (EA Profile) 24 September 2015 06:10:26PM 0 points

How big of a difference do you think this makes? I don't expect that bringing in high-rationality people is a particularly big consideration (I wouldn't fund it over something like MIRI or even AMF) although I agree that it's a small bonus.

Comment author: John_Maxwell_IV 25 September 2015 03:37:16AM 3 points

"I don't expect that bringing in high-rationality people is a particularly big consideration" - this is probably a point where we disagree. I've previously written about this here, here, and, in relation to the idea of effective altruists pursuing systemic change, in the comments of this post.

Let's contrast top gamblers like competitive poker players with bottom tier gamblers: people who play the lottery, even though it's negative expected value, and happen to win. Let's say the same amount will be donated either way, so the difference is just whether it's going to be directed by top tier or bottom tier gamblers. Imagine yourself reading over a cause selection piece from a top poker player vs a lottery winner... which cause selection piece do you anticipate learning something from? Being persuaded by? Which type of donor are you more confident will actually improve the world with their money vs doing something that sounds nice but isn't actually very effective, or worse, amounts to shooting ourselves in the foot in the long run?

I sometimes wish people in the EA movement would taboo the concept of EA. EA isn't some magic pixie dust you can sprinkle on someone such that they are automatically effective at doing good for the world. There's a sense in which the wisdom of the EA movement as a whole is the sum of collective wisdom of the people who are in the movement. Adding wise people has the potential to improve the movement's wisdom on the margin.

Comment author: MichaelDickens  (EA Profile) 25 September 2015 05:49:39AM 0 points

Imagine yourself reading over a cause selection piece from a top poker player vs a lottery winner... which cause selection piece do you anticipate learning something from?

That's actually a really good point. I had been considering that most rational people don't do much good, so you need more than just rationality. But for something like REG where you're drawing in charitable and altruistic people, it's extremely valuable for those people to be as rational as possible.

Comment author: JeffMJordan 22 September 2015 05:46:26AM 2 points

Really quick question: I was wondering why the 1.5:1 ratio is enough to outweigh your uncertainty about REG's impact?

Comment author: MichaelDickens  (EA Profile) 22 September 2015 05:54:01AM 3 points

Well, that's certainly a concern. I'm made more confident by the fact that REG directs funding to multiple charities that are good candidates for top charity, and I believe their model has reasonably good learning value. Plus 1.5:1 is sufficiently higher than 1:1 that I believe it's more likely to have a positive multiplicative effect from an outside view.

Comment author: JeffMJordan 22 September 2015 06:02:52AM 2 points

I'm not sure I understand. I would think that in the face of uncertainty it would be better to divide donations in accordance to how likely we find each model.

Comment author: iamasockpuppet 22 September 2015 05:49:35AM 2 points

Surely that depends on the level of uncertainty?

Comment author: Ben_Todd 19 September 2015 08:21:51PM 1 point

Great job writing this up Michael. I'd like to see many more people explaining their reasoning like this.

I was a bit surprised, however, to see 80,000 Hours listed as "unclear whether it has a positive effect" when the charity you conclude is best, REG, likely wouldn't exist if it weren't for 80,000 Hours.

Similarly, your other finalist, ACE, is a spin-off of 80,000 Hours.

https://80000hours.org/about/impact/new-organisations/

A number of other organisations on your shortlist have also been boosted by us, including: CSER (received seed funding from someone etg in part due to us), Charity Science (Joey and Xio are 80k plan changes, they also received seed funding from etg donors, they recently hired someone who decided to work in EA orgs due in part to 80k), GWWC (recently hired someone who switched to EA orgs in part due to us), FHI (now managed by Niel Bowerman, an 80k plan change)...

Comment author: impala 20 September 2015 03:36:31AM 8 points

This sounds worryingly close to claiming credit for all "etg donors", all EAs' careers and all EA organisations that have had some contact with EA organizations. Of course people like Jonas Vollmer are going to say nice things about 80,000 Hours when asked, and it would be impolitic for any organisation to challenge this, so I'll say it: I don't think all of GBS Switzerland's activities can be classed as counterfactually dependent on 80,000 Hours getting funding. Likewise the volunteers who founded Effective Animal Activism (the predecessor of ACE) or CSER or Effective Fundraising (the predecessor of Charity Science) might have done so at some point anyway, for all I know, and it's hard to buy their saying otherwise as unbiased.

This isn't to single out 80,000 Hours as the only organisation with these murky counterfactuals, it's only jumping off your comment. I've likewise heard people say that people were running fundraisers before Charity Science started recruiting people to do so and that people were giving (or, if students, planning to) before signing up to Giving What We Can's list, and that neither organisation can claim credit for everything these people then go on to do.

Comment author: Ben_Todd 20 September 2015 06:23:24AM 3 points

I agree the counterfactuals are murky, so I'd never say it was 100% due to us. Nevertheless, I think we played a significant role.

We also certainly don't claim credit for all etg donors, only those who say they were influenced by us and made a significant plan change (something like 25-50% of the total).

Comment author: MichaelDickens  (EA Profile) 20 September 2015 04:34:35AM 3 points

Thanks for writing this Ben! I would like to see more representatives from orgs giving cases like this one where you go beyond saying "we're high impact" and explain why you believe you're the most high impact.

Here are the main reasons why I didn't consider 80K further:

  1. Based on my prior knowledge of 80K and the brief time I spent investigating it, I didn't see good evidence that 80K played an important causal role in pushing people toward substantially better careers. Similarly, I don't see much reason to believe that those organizations you listed wouldn't have happened without 80K.

  2. Some of 80K's recommendations confuse me and seem wrong. I agree with Peter Hurford's recent post about the importance of earning to give. I'm concerned because 80K's current stance on etg looks fairly obviously wrong, and everyone I've talked to about this whose opinion I highly respect has agreed that it looks fairly obviously wrong. More generally, 80K's public info on career recommendations focus more on individual fit and don't say much about how much good different careers do or how to do maximal good through those careers.

  3. It seems dubious that 80K could continue to have as large an impact as you claim it has had in the past.

  4. 80K is funded by YC and does not have clear room for more funding. I don't know what 80K could do if they had more money from me that it can't do now, and my donations may displace the donations of other donors.

I am open to considering donating to 80K more seriously if you can address these concerns and also give good reason to believe that the way 80K directs people's careers is likely to have a larger positive impact on the long term future than e.g. reducing AI risk. It's not obvious to me that 80K has a multiplicative effect in the same way REG does.

Comment author: Ben_Todd 20 September 2015 06:42:49AM 1 point

Hi Michael,

On 1) have you seen our evaluation documents? https://80000hours.org/about/impact/ Why don't you think we're moving people towards higher impact careers?

On organisations in particular you say "I don't see much reason to believe that those organisations wouldn't have happened without 80k". The founders of those organisations say they likely wouldn't have existed otherwise, why do you think the founders are wrong?

With ACE in particular: we came up with the idea, it was started by an intern working at 80k and initially housed within 80k, we introduced them to their first seed donors, and an 80k team member continues to play a role on the board.

2) I'm not sure Peter Hurford and 80k actually disagree on the proportion of people who should do etg. We say 15-25% in the long run. He says 50% or perhaps higher, but then in the comments he clarifies (in the reply to AGB) that he means 50% of those choosing between etg and direct work, not counting those going into academia, policy, grantmaking, etc. If you suppose 50% of people will do that, then Peter thinks 25% of people should etg all considered, in line with our estimate.

There's also a few other differences in how we each frame the question and define etg which could easily explain remaining differences (see Will's comments). I also listed a bunch of problems with Peter's arguments on the thread which he didn't yet address.

Our career research in general is highly focused on which careers do the most good (in the past we've mainly received criticism that we focus on personal fit too little - it's quite hard to say what's best to focus on if you want to maximise long-run impact https://80000hours.org/2014/10/interview-holden-karnofsky-on-the-importance-of-personal-fit/). We only list personal fit as one factor in our key principles: https://80000hours.org/career-guide/basics/ Our career reviews discuss impact just as much as personal fit: https://80000hours.org/career-guide/profiles/

On 3), that's a very big claim. Why? I expect the vast majority of 80k's impact lies in the future. There's the potential to develop a GiveWell-analogue but for career choice for all socially motivated graduates.

On 4), YC only provides $100,000 of funding once, so being YC-funded doesn't mean we never need to fundraise again.

However, it's true we haven't publicly said we have room for more funding, so you have no way to know. I think we do have a large room for more funding though.

I think we have a multiplicative effect exactly like REG does, except we direct people to better careers rather than directing money.

Comment author: MichaelDickens  (EA Profile) 21 September 2015 06:51:43PM 6 points

On 1)

Why don't you think we're moving people towards higher impact careers?

That's not exactly what I said. What I said is that I don't think there's strong evidence that 80K is moving people toward higher impact careers.

80K's impact page lists a bunch of career changes that people made after talking to 80K. But it's not clear how many of these changes would have happened anyway or how much value 80K provided in the process. You also have to consider things that aren't happening. If 80K claims credit for money donated by people who are now earning to give, then it should also subtract money not donated by people who now aren't earning to give. The value of a career change isn't from the value of the person's career but from the marginal difference between their current career and their counterfactual career.

80K has moved people in lots of different directions and there's no clear pattern I can see from public data. I'd expect that some careers are considerably more important and neglected than others, and 80K should be pushing people toward these careers in general, but I don't see this happening.

On 3), if you believe most of 80K's impact comes from helping start new effective charities (which you sort of imply I should believe when you claim that ACE and REG would not exist without 80K), then we should expect this effect to get a lot weaker in the future. I don't think 80K played as big a role in creating ACE and REG as you say it did (there was a lot of demand for something like ACE when it came about so something similar probably would have arisen soon), but even if it did, creating new effective charities has rapidly diminishing marginal returns. The space of possible highly-effective charities (i.e. ones that are much more effective than top global poverty charities) is not that big.

On 4), there's a huge gulf between "We don't yet have all the money we could ever use" and "Giving us more funding would let us continue to be as effective as we have been with current funds." You really only claim the former, but you have to establish the latter for 80K to be the best place to donate.

Comment author: Ben_Todd 24 September 2015 05:34:51AM 1 point

Hey Michael,

It's better to look at the evaluations rather than the list of studies if you want to get a systematic picture of career changes.

e.g. here: https://80000hours.org/2014/05/plan-change-analysis-and-cost-effectiveness/#what-were-the-changes

The most common changes are:

  1. More people earning to give
  2. More people setting up or working in effective altruist charities
  3. More people building career capital
  4. More people working on xrisk

These are things people very unusually do otherwise, so it's very unlikely they would happen without effective altruism or 80,000 Hours. Of course, it's hard to untangle 80k's impact on career choices from the rest of the EA movement, but it seems likely that 80k gets a substantial fraction of the impact. First, we're the main group doing career stuff within the movement. Second, we've done a huge amount to boost the EA movement (e.g. we were the first org to use the term publicly), so if the EA movement has a lot of impact, then a significant fraction is due to us.

Note that a similar objection applies to the other charities you propose: e.g. if REG / Charity Science / GWWC / GiveWell didn't exist, much of the impact would happen anyway eventually because the other groups would eventually step in to compensate. But that doesn't mean none of them are having much impact.

If 80K claims credit for money donated by people who are now earning to give, then it should also subtract money not donated by people who now aren't earning to give. The value of a career change isn't from the value of the person's career but from the marginal difference between their current career and their counterfactual career.

Of course, we address this in the evaluation.

In short, I think in many of the cases the effectiveness boosts are very large, so when you subtract the impact they would have had anyway, it's less than 10%. It depends on your view of how good "standard career choice" is.

On 3), if you believe most of 80K's impact comes from helping start new effective charities (which you sort of imply I should believe when you claim that ACE and REG would not exist without 80K), then we should expect this effect to get a lot weaker in the future.

I'd say our impact comes from giving people better information about how to have a social impact in their career, and so redirecting them into higher impact career paths.

You can try to quantify a component of that by looking at additional donations due to our members, number of new organisations founded, or other measures.

So, new organisations founded is just one component of our impact that's relatively tractable to analyse. More often, people assess us in terms of extra donations for charity raised from more people pursuing earning to give. Our estimate is that those earning to give will donate an extra $10m+ of counterfactually-adjusted funds to high-impact charities within the next 3 years because of us. I think either of these methods mean we've been very cost-effective in the past (historical financial costs are under $500k), and that's ignoring over half the plan changes.

https://80000hours.org/2015/07/update-how-many-extra-donations-have-we-caused/

even if it did, creating new effective charities has rapidly diminishing marginal returns. The space of possible highly-effective charities (i.e. ones that are much more effective than top global poverty charities) is not that big.

It seems really unclear to me how close we are to that margin. Bear in mind explicitly effective altruist funding is under 0.04% of total US philanthropy. It seems like we could expand the number of organisations a great deal before hitting substantially diminishing returns. In particular when you consider how little research has been done, relatively speaking, it's unlikely we discovered the best things already.

If we did run out of ideas for new organisations, 80k could move its focus to scaling up existing orgs. (Many people who've changed plans due to 80k have gone to work at existing EA orgs rather than found new ones). Or we could just encourage everyone to earn to give and donate to top global poverty charities.

Also, why do you expect entrepreneurial talent in EA to hit diminishing returns faster than donations? If anything, I expect we'll hit diminishing returns to additional donations faster than to talent, because funding gaps are so much easier to resolve than talent gaps (e.g. one billionaire could flood EA with money tomorrow). And that means REG also doesn't have as much upside as it looks, because in the future it won't be able to direct the money as effectively.

There's a huge gulf between "We don't yet have all the money we could ever use" and "Giving us more funding would let us continue to be as effective as we have been with current funds."

Of course. I actually think we're going to be more effective with future funds because we're getting better and better at changing plans, so our cost per plan change is falling. This is because our main focus in the past was learning and research, which is only just starting to pay off. There's a lot more to say here though!

Comment author: Denise_Melchin 24 September 2015 11:04:09AM 7 points

"More people building career capital" ... "These are things people very unusually do otherwise"

Why do you think it's unusual for people to build career capital?

Comment author: Ben_Todd 06 October 2015 07:00:28AM 3 points

True, that one's an exception. The other 3/4 are unusual otherwise though.

Comment author: MichaelDickens  (EA Profile) 24 September 2015 06:06:14PM 0 points

It feels like we're getting off track here. You originally claimed that 80K played a large role in creating REG and ACE (the implication presumably being that I should donate to 80K). Now we're talking about the strength of evidence on how 80K has changed people's career paths.

Although its evidence is weaker than I'd like, I'm still fairly confident that 80K has a positive impact, and I'm glad it exists. I just don't see that it's the best place to donate. Are you trying to convince me that 80K's activities are valuable, or that I should donate to it? If it's the former, I already believe that. If the latter, you need to:

  1. show why 80K has a higher impact than anything else
  2. do more to make the strength of evidence supporting 80K more robust
  3. demonstrate that 80K can effectively use marginal funds

Now that's a pretty high bar, but I'm donating a lot of money and I want to make sure I direct it well.

Comment author: Ben_Todd 06 October 2015 07:13:25AM 2 points

Nitpick: robust evidence doesn't seem necessary - weak evidence of high upside potential should also count.

Comment author: Ben_Todd 06 October 2015 07:10:01AM 1 point

Hi Michael,

In the original document you say next to 80k "unclear whether it has a positive effect". So I was starting there.

REG and ACE are relevant because they're examples of the value of the plan changes we cause. If you think 80k is changing plans such that more high-impact organisations are created, then it's likely 80k is also effective. (Though it may of course still not be worth funding due to a lack of RFMF, but that's not what you said initially.)

The closest we've got recently to publicly arguing the case for 80k is here: https://80000hours.org/2015/08/plans-for-the-coming-year-may-2015/

Of course there's a lot more to talk about. I'm always happy to answer more questions or share details about how marginal funds would be used via email.

Comment author: Robert_Wiblin 20 September 2015 06:27:56AM 1 point

We have to debate back and forth and figure out this EtG thing properly.

I think Hurford's points about EtG are obviously really wrong. I find it baffling so many people are convinced.

See my comment here: http://effective-altruism.com/ea/mk/peter_hurford_thinks_that_a_large_proportion_of/515.

That smart people who have thought about this can have such different views is worrying.

Comment author: RyanCarey 24 September 2015 09:07:13PM 3 points

Since it's so sensitive to what "from a good university" or "altruistically motivated" mean, it would make more sense to argue over a few hypothetical marginal case studies.

Comment author: MichaelDickens  (EA Profile) 24 September 2015 05:34:15PM 1 point

That smart people who have thought about this can have such different views is worrying.

I'm not too worried about this; it's just a hard problem. That means we should perhaps invest more into solving it.

Comment author: Joey 18 September 2015 07:03:02PM 3 points

Great post! Just a quick clarification: I definitely think AR research is worth doing, but it would be better under a different organization/brand/startup. I think it's valuable to keep an organization fairly focused on doing a few things well, and AR research is definitely not in the CS scope.

Comment author: TopherHallquist 18 September 2015 02:14:43AM *  6 points [-]

Thanks for writing this, Michael. More people should write up documents like these. I've been thinking of doing something similar, but haven't found the time yet.

I realized reading this that I haven't thought much about REG. It sounds like they do good things, but I'm a bit skeptical re: their ability to make good use of the marginal donation they get. I don't think a small budget, by itself, is strong evidence that they could make good use of more money. Can you talk more about what convinced you that they're a good giving opportunity on the margin? (I'm thinking out loud here, don't mean this paragraph to be a criticism.)

Re: ACE's recommended charities. I know you know I think this, but I think it's better for the health of ACE if their supporters divide their money between ACE and its recommended charities, even if the evidence for its recommended charities isn't currently as strong as I'd like. But I admit this is based on a fuzzy heuristic, not a knock-down argument.

Re: MIRI. Setting aside what I think of Yudkowsky, I think you may be overlooking the fact that "competence" is relative to what you're trying to accomplish. Luke Muehlhauser accomplished a lot in terms of getting MIRI to follow nonprofit best practices, and from what I've read of his writing, I expect he'll do very well in his new role as an analyst for GiveWell. But there's a huge gulf between being competent in that sense, and being able to do (or supervise other people doing) groundbreaking math and CS research.

Nate Soares seems as smart as you'd expect a former Google engineer to be, but would I expect him to do anything really groundbreaking? No. Would I expect even the couple of actual PhDs MIRI hired recently to do anything really groundbreaking? They might, but I don't see why you'd think it likely.

In a way, it was easier to make a case for MIRI back when they did a lot of advocacy work. Now that they're billing themselves as a research institute, I think they've set a much higher bar for themselves, and when it comes to doing research (as opposed to advocacy) they've got much less of a track record to go on.

Comment author: tobiaspulver 21 September 2015 01:57:47AM 8 points [-]

I realized reading this that I haven't thought much about REG. It sounds like they do good things, but I'm a bit skeptical re: their ability to make good use of the marginal donation they get. I don't think a small budget, by itself, is strong evidence that they could make good use of more money. Can you talk more about what convinced you that they're a good giving opportunity on the margin? (I'm thinking out loud here, don't mean this paragraph to be a criticism.)

Thanks for bringing this up, Topher!

As Michael said, there are various things we would do if we had more funding.

1) REG’s ongoing operations need to be funded. Currently, we have around 6 months of reserves (at the current level of expenses), but ideally we would like to have 12 months. This would enable us to make use of more (sometimes unexpected) opportunities and to try things because we wouldn’t have to constantly be focused on our own funding situation.

2) We could potentially achieve (much) better results with REG by having additional people working on it. The best illustration of this is probably one person we met (by going to poker stops) with a strong PR & marketing background who has been working in the poker industry for 10 years now (few people have this person's level of expertise and network in the poker world). This person would like to work with us, but we had to decline her for the moment, even though we think it would (clearly) be worth it to hire her. Another thing we would like to do is hire someone to organise more charity tournaments, establish partnerships with industry-leading organisations or strengthen existing ones, improve member communications, and do social media. There are already several candidates who could do this, but we are hesitant to make this investment since we lack the appropriate funding.

3) Another way we would use additional funds is by working on various REG “extensions”. We are about to set up two REG expansions, but we won’t have enough resources to make the most out of even these two – and there are many more potentially really promising REG expansions that could be done. (The first of the two REG expansions that is likely going to be spread among the respective community in a few days is “DFS Charity”, a REG for Daily Fantasy Sports, an industry that is currently growing substantially and with a fair share of people with a similar (quantitative) mindset as poker players have. The preliminary website can be found at dfscharity.org – please don't share it widely yet.)

I hope this helped!

Comment author: RyanCarey 20 September 2015 09:12:23PM *  1 point [-]

In a way, it was easier to make a case for MIRI back when they did a lot of advocacy work. Now that they're billing themselves as a research institute, I think they've set a much higher bar for themselves, and when it comes to doing research (as opposed to advocacy) they've got much less of a track record to go on.

To put this in context, the emerging consensus is that publicly advocating for x-risk reduction in the area of AI is counterproductive, and it is better to network with researchers directly, something that may be best done by performing relevant research.

Comment author: Tom_Ash  (EA Profile) 18 September 2015 02:29:28PM *  0 points [-]

In a way, it was easier to make a case for MIRI back when they did a lot of advocacy work. Now that they're billing themselves as a research institute, I think they've set a much higher bar for themselves, and when it comes to doing research (as opposed to advocacy) they've got much less of a track record to go on.

What are the best groups that are specifically doing advocacy for (against?) AI risk, or existential risks in general?

Comment author: TopherHallquist 20 September 2015 09:06:05PM 0 points [-]

If I had to guess, I would guess FLI, given their ability to at least theoretically use the money for grant-making. Though after Elon Musk's $10 million donation, this cause area seems to be short on room for more funding.

Comment author: RyanCarey 20 September 2015 09:08:51PM *  3 points [-]

Although FLI were only able to grant a very small fraction of the funds that researchers applied for, and many organisations have scope for expansion beyond the grants they received.

Comment author: MichaelDickens  (EA Profile) 18 September 2015 03:34:28AM 0 points [-]

Can you talk more about what convinced you that they're a good giving opportunity on the margin?

I asked Tobias Pulver about this specifically. He told me about their future plans and how they'd like to use marginal funds. They have things that they would have done if they'd had more money but couldn't do. I don't know if they're okay with me speaking about this publicly but I invite Tobias or anyone else at REG to comment on this.

I know you know I think this, but I think it's better for the health of ACE if their supporters divide their money between ACE and its recommended charities, even if the evidence for its recommended charities isn't currently as strong as I'd like.

If ACE thought this was best, couldn't it direct some of the funds I donate to its top charities? (This is something I probably should have considered and investigated, although it's moot since I'm not planning on donating directly to ACE.)

Would I expect even the couple actual PhDs MIRI hired recently to do anything really ground breaking? They might, but I don't see why you'd think it likely.

AI safety is such a new field that I don't expect you need to be a genius to do anything groundbreaking. MIRI researchers are probably about as intelligent as most FLI grantees. But I expect them to be better at AI safety research because MIRI has been working on it for longer and has a stronger grasp of the technical challenges.

Comment author: TopherHallquist 20 September 2015 09:15:22PM 1 point [-]

AI safety is such a new field that I don't expect you need to be a genius to do anything groundbreaking.

They claim to be working on areas like game theory, decision theory, and mathematical logic, which are all well-developed fields of study. I see no reason to think those fields have lots of low-hanging fruit that would allow average researchers to make huge breakthroughs. Sure, they have a new angle on those fields, but does a new angle really overcome their lack of an impressive research track-record?

But I expect them to be better at AI safety research because MIRI has been working on it for longer and has a stronger grasp of the technical challenges.

Do they have a stronger grasp of the technical challenges? They're certainly opinionated about what it will take to make AI safe, but their (public) justifications for those opinions look pretty flimsy.

Comment author: Vincent_deB 18 September 2015 03:21:42PM 1 point [-]

Can you talk more about what convinced you that they're a good giving opportunity on the margin?

I asked Tobias Pulver about this specifically. He told me about their future plans and how they'd like to use marginal funds. They have things that they would have done if they'd had more money but couldn't do. I don't know if they're okay with me speaking about this publicly but I invite Tobias or anyone else at REG to comment on this.

I heard - albeit second-hand, and last year - of two people involved, Lukas Gloor and Tobias Pulver, saying that they thought the minimal share of GBS/EAF manpower - 1.5 FTEs - that was being invested in REG was sufficient.

Comment author: ClaireZabel 17 September 2015 09:15:06PM 6 points [-]

These comments are copied from some of the original ones I made when reviewing Michael's post. My views are my own, not GiveWell's.

I have never seen any strong reason to believe that anything we do now will affect far future values–if the case for organizations reducing global catastrophic risks is tenuous, then the case for values spreading is no better.

I think the case for values spreading is quite a bit better. Reducing global catastrophic risks is pretty bimodal: either the catastrophe happens, or it doesn't. You can try to measure the risk being reduced, sometimes, but doing so isn't straightforward, obvious, or something we have experience in.

We have lots of experience tracking value change. We can see it happen in incremental parts in the near-future. You don't need special tools or access to confidential information to do a decent poll on values changing.

The strongest objection to this, I think, is that values changing in the short term won't necessarily affect the long-term trajectory of our values, or at least not in a predictable way. In contrast, preventing an x-risk in the short term at least allows for the possibility of doing stuff in the far future (and it seems plausible that GCRs might also change far-future trajectory).

Another consideration is that values may become vastly more or less mutable if we develop technology that allows for certain types of self-modification, or an AI that enforces values that are programmed into it. Depending on how you believe this might happen, you might believe spreading good values before those technologies develop is vastly more or less important, exactly because the likelihood of those values affecting the far future increases.

I do not see reason to believe that some GCR other than AI risk is substantially more important; no GCR looks much more likely than AI risk, and right now it looks much easier to efficiently support efforts to improve AI safety than to support work on other major GCRs.

I think a lot of GCRs could be more tractable than AI risk (possibly by a large margin) if someone went through the work of identifying more opportunities to fund risk reduction for those GCRs, then made it available to small donors.

Comment author: MichaelDickens  (EA Profile) 17 September 2015 09:20:54PM 4 points [-]

I think a lot of GCRs could be more tractable than AI risk (possibly by a large margin) if someone went through the work of identifying more opportunities to fund risk reduction for those GCRs, then made it available to small donors.

This is definitely an important point. I think that if someone did identify opportunities like this, that's one of the most likely reasons why I might change where I donate. Right now it doesn't look like any GCR is substantially more important/tractable/neglected than AI risk (biosecurity is probably a bigger risk but not by a huge margin, geoengineering might be more tractable but not for small donors), but this could change in the future.

Comment author: Carl_Shulman 17 September 2015 02:30:41AM *  8 points [-]

It's good that you are sharing the research effort you put into this so that others can critique it, use/reference it, and build on it.

I have assorted comments below with quotes they are responding to.

But after a few conversations with Carl Shulman, Pablo Stafforini, Buck Shlegeris and others, I started more seriously considering the fact that almost all value lives in the far future, and the best interventions are probably those that focus on it.

Some context for my view: while I think there is a strong case for this in terms of additive total utilitarianism, I think the dominance is far weaker when one takes value pluralism and normative uncertainty into account. GiveWell staff have sometimes talked about whether decisions would be recommended if one valued the entire future of civilization at a 'mere' 5 or 10 times the absolute value of a century of the world as it is today. Value pluralism is one reason to apply such a heuristic.

Nick Bostrom has argued that existential risks are substantially worse than other GCRs. Nick Beckstead disagrees, and I find Beckstead’s case persuasive.

The argument there being that for most risks GCR versions are much more likely than direct existential risk versions, and GCRs have some chance of knock-on existential harms. Note that AI risk was excepted there, and has been noted as unusual in having a closer link between GCR and existential risk than others.

organizations that create new EAs may largely be valuable insofar as they create new donations to GCR charities

New staff members and entrepreneurs are very important in many cases. E.g. the EA movement has supplied a lot of GiveWell/OpenPhil staff, and founders for things like Charity Science and ACE which you mention.

Nate has discussed the fact that AI researchers appear more concerned about safety than they used to be, although it is unclear whether MIRI has had any causal role in bringing this about.

Some of this is definitely the recent surge of progress in AI, e.g. former AAAI president Eric Horvitz mentions that this was important for him and others.

For MIRI's causal influence some key elements I would highlight are:

  • The Singularity Summit playing a key causal role in getting Max Tegmark interested and the FLI created.
  • Bringing the issue to Stuart Russell's attention, resulting in Stuart's activity on the issue, including discussion in the most popular AI textbook, his involvement with the FLI grant program, etc.
  • Contributing substantially to Nick Bostrom's publication of Superintelligence, which played a key role in getting Elon Musk involved (and thus funding the FLI grant program), and eliciting favorable reviews from various others (e.g. Stephen Hawking, Bill Gates, etc.).
  • The technical agenda helping to demonstrate some approaches that could work.
  • Drawing attention to the issue by a number of the academic researchers who have taken FLI grants, and some of OpenPhil's advisors.
  • Causing OpenPhil to be quite familiar with the issues, and ultimately to enter the area after seeing the results of the FLI conference, getting a sense of expert opinion, etc, as discussed on their website.

It sounds like Open Phil gave FLI exactly as much money as it believed it needed to fund the most promising research proposals.

ETA: Open Philanthropy has just put up a detailed summary of the reasoning behind the FLI grant, which may be helpful. They also talk about why they have raised their priority for work on AI in this post.

This is an issue that will recur in any area where OpenPhil/GiveWell is active (which will shortly include factory farming, with the new hire and grant program). Here are two of my posts discussing the issues (the first has important comments from Holden Karnofsky about their efforts to manage 'fungibility problems').

One quote from a GiveWell piece:

If you have access to other giving opportunities that you understand well, have a great deal of context on and have high confidence in — whether these consist of supporting an established organization or helping a newer one get off the ground — it may make more sense to take advantage of your unusual position and "fund what others won't," since GiveWell's research is available to (and influences) large numbers of people.

Also, you likely won't have zero effect, but would likely shift the budget constraint, so you could think of your donation as expanding all of Good Ventures' grants roughly in proportion to their size, which will be diversified and heavy on GiveDirectly. Or at least you could do that if they all had similar diminishing returns curves. If some have flatter curves (perhaps GiveDirectly) in Good Ventures' calculus then marginal funds would go disproportionately to those.

But if Open Phil does produce recommendations for small donors, it’s likely that one or some of these recommendations will represent better giving opportunities than any existing GCR charities.

That's a surprising claim. Probably it would recommend an existing charity. Maybe what you mean is that your expected value for any given GCR charity given what you know now is less than your expectation would be for the charities OpenPhil will recommend, given knowledge of those recommendations?

Or maybe you mean that OpenPhil's recommendations are likely to be charities that exist but that you currently don't know of?

My comment was too long to fit in the 1000 word limit, so the remainder is below.

Comment author: Carl_Shulman 17 September 2015 02:30:59AM *  8 points [-]

My comment was too long, so here's the rest:

it’s plausible that the factory farming grant will do perhaps two or three orders of magnitude more good per dollar than the GiveDirectly grant and should be receiving a lot more money

Is this something like unweighted QALYs per dollar? If you are analyzing in terms of long-run effects on the animal population, as elsewhere in the piece, those QALYs are a red herring. E.g. a tiny increase in economic activity very faintly expediting economic growth will overwhelm the direct QALYs involved with future populations. From the long-run point of view, things like changes in economic output, human populations, carbon emissions, human attitudes about other animals, and such would be the relevant metrics, and these don't scale with QALYs (this is made blatantly clear if one considers things like flies and ants). From the tiny-animal focus (with no accounting for differences in nervous system scale), the large farm animals will be negligible compared to various effects on tiny wild animals. If one considers neural processes within and across animals, then the numbers will be far less extreme.

Now, as I said at the start of this comment, normative pluralism and such would suggest not allowing complete dominance of long-run QALYs over current ones, but comparisons in terms of QALYs here don't track the purported long-run impacts, and if one focused only on unweighted animal QALYs without worrying about long-run consequences it would lead one away from farm animals towards wild animals.

Good Ventures currently pays for GiveWell’s operating expenses,

Not true. Previously GiveWell had capped Good Ventures contributions at 20% of GiveWell's budget. Recently they changed it to 20% for GiveWell's top charities work, and 50% for the Open Philanthropy Project (reasoning that Good Ventures is the main customer of the latter at this time, so it is reasonable for it to bear a larger share).

I don’t know how much influence Good Ventures has over GiveWell’s activities. I’m considerably less confident in Good Ventures’ cause prioritization skills than GiveWell’s, so I prefer that GiveWell has primary control over Open Phil’s cause selection, although I don’t know how much this matters. I wish GiveWell were more transparent about this.

Well nothing is going to force Good Ventures to hand over billions of dollars if it disagrees with the OpenPhil recommendations (and last year there was some disagreement between GW and GV about allocations to the different global poverty charities). But this does seem like a serious consideration to support outside donation to OpenPhil, and I think you may be underrating this donation option.

If ACE discovers that popular interventions are much less effective than previous evidence showed, to the point where GW top charities look more effective and more donors start giving to GW charities instead, the maximum impact here would be the impact had by marginal donations to GW top charities.

You only consider the case where it finds that all the current popular animal interventions are very poor. If many or most but not all are, then it could support productive reallocation from the ones that don't work as well to the ones that work better, potentially multiplying effectiveness severalfold. That's in fact the usual justification given by people in the animal charity community for doing this kind of research, but doesn't appear at all here. So I think the whole discussion of #3 has gone awry. Also the 'several orders of magnitude' claim appears again here, and the issues with QALYs vs metrics that better track long-run changes (e.g. attitude changes, population changes, legal changes) recur.

Charity Science has successfully raised money for GiveWell top charities (it claims to have raised $9 for every $1 spent)

Although note that that is valuing staff time at below minimum wage. If you valued it at closer to opportunity cost (or salaries at other orgs) the ratio would be far lower. I still think Charity Science is promising and deserving of support because of the knowledge it has produced, and I suspect its fundraising ratios will improve, but at the moment the ratio of EA resources put in to fundraising success is still on the lower end. See this discussion on the EA facebook group.

I decided to keep all the weights relatively close together because I do not have strong confidence about how much good each of these categories do. I might be able to make an inside-view argument that, say, MIRI is 1000x more effective than anything else on this list, but from the outside view, I shouldn’t let such an argument carry too much weight.

Shouldn't the same caveat apply to your suggestions earlier in the post about the future being 1000+ times more important than present beings?

Comment author: MichaelDickens  (EA Profile) 17 September 2015 03:47:12AM *  0 points [-]

I will make a few edits to the document based on your suggestions, thanks.

I have a few more points worth discussing that I didn't want to put into the doc, so I'll comment here:

Some context for my view: while I think there is a strong case for this in terms of additive total utilitarianism, I think the dominance is far weaker when one takes value pluralism and normative uncertainty into account.

Why does value pluralism justify reducing the importance of the far future? It seems unreasonable to me to discount the far future and I find it very implausible that beings in the far future don't have moral value.

On the flow through effects of global poverty over animal charities: Flow through effects probably do outweigh short-term effects, which means economic growth, etc. may be more impactful than preventing factory farming. But flow-through effects are hard to predict. I meant that effective factory farming interventions probably have much better demonstrable short-term effects than human-focused interventions. Actions that affect wild animals probably have much bigger effects, but again we run into the problem of not knowing whether our actions are net positive or negative. I'd certainly love to see some robust evidence that doing X will have a huge positive effect on wild animals so I can seriously consider supporting X.

Shouldn't the same apply to your claims earlier in the post about the future being 1000+ times more important than present beings, and ACE recommendations being 100-1000+ times better than GiveWell Classic recommendations?

I didn't say ACE recommendations are two to three orders of magnitude better than GiveWell Classic, I said Open Phil's factory farming grant is plausibly two to three orders of magnitude better than GV's grant to GiveDirectly. There are three distinctions here. First, I'm fairly uncertain about this. Second, I expect Open Phil's grant to be based on more robust evidence than ACE-recommended charities, so I can feel more confident about its impact. Third, GD has similar strength of evidence to AMF and is probably about an order of magnitude less impactful. So the difference between a factory farming grant and AMF may be more like one or two orders of magnitude.

I weighted ACE recs as 2x as impactful as GiveWell recs; Open Phil hasn't produced anything on factory farming but my best guess is its results will have an expected value of maybe 2-5x that of current ACE recs (although I expect that the EV of ACE recs will get better if ACE gets substantially more funding), largely because a lot of ACE top charities' activities are probably useless--all their activities look reasonable, but the evidence supporting them is pretty weak so it's reasonable to expect that some will turn out not to be effective.

(EDIT: I'd also add that even if I'm fairly confident about a 100-1000x effect size difference from inside an argument, when weighting donations I should take the outside view and not let these big effect sizes carry too much weight.)

After further consideration I'm thinking I rated GiveWell recs too highly; their weighting should be more like 0.05 instead of 0.1. Most of the REG-raised money for GiveWell top charities went to AMF, although this might shift more toward GiveDirectly in the future, in which case I should give GW recs a lower weight. I would probably rate Open Phil factory farming grants at maybe 0.3-0.5, which is an order of magnitude higher than for GiveWell top charities.

When I change the GW rec weighting from 0.1 to 0.05, the weighted donations drop by about $0.1 per $1 to REG. That's enough to make REG look a little weaker, although not enough to make me want to give to an object-level charity instead.
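To make the arithmetic behind this concrete, here is a minimal sketch of the weighted-donation calculation being described. The dollars-raised-per-dollar figures and the non-GiveWell weights are hypothetical stand-ins, not REG's actual numbers; the point is only that halving the GiveWell weight from 0.1 to 0.05 removes 0.05 times whatever REG raises for GiveWell charities per dollar spent, which is about $0.10 if that figure is around $2.

```python
# Hypothetical illustration of the weighted-donation multiplier discussed above.
# All figures below are made-up stand-ins, not REG's actual fundraising data.

# Dollars raised per $1 of REG's budget, by destination (hypothetical)
raised_per_dollar = {
    "GiveWell top charities": 2.0,
    "MIRI": 1.0,
    "ACE recommendations": 1.0,
}

def weighted_value(weights):
    """Sum each destination's dollars raised, scaled by its impact weight."""
    return sum(raised_per_dollar[k] * weights[k] for k in raised_per_dollar)

# Hypothetical impact weights per dollar, before and after downweighting GW recs
weights_old = {"GiveWell top charities": 0.10, "MIRI": 1.0, "ACE recommendations": 0.2}
weights_new = dict(weights_old, **{"GiveWell top charities": 0.05})

drop = weighted_value(weights_old) - weighted_value(weights_new)
# The drop equals 0.05 * (dollars raised for GW charities per $1),
# i.e. $0.10 of weighted value per $1 spent under these assumed figures.
```

Under these assumed numbers the weighted value falls from 1.4 to 1.3 per dollar, matching the rough $0.1-per-$1 figure above.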

EDIT 2: Actually I'm not sure I should downweight GW recs from 0.1 to 0.05 because I don't know that I have strong enough outside-the-argument confidence that MIRI is 20x better than AMF in expectation. This sort of thing is really hard to put explicit numbers on since my brain can't really tell the difference between MIRI being 10x better and 100x better in expectation. My subjective perception of the probabilities of MIRI being 10x better versus 100x better feel about the same.

Comment author: Carl_Shulman 17 September 2015 04:09:38AM *  3 points [-]

Why does value pluralism justify reducing the importance of the far future? It seems unreasonable to me to discount the far future and I find it very implausible that beings in the far future don't have moral value.

Others think that we have special obligations to those with whom we have relationships or reciprocity, or whom we have harmed or been benefited by, or they adopt person-affecting views, although those are hard to make coherent. Others adopt value holism of various kinds, caring about other features of populations like the average and distribution, although for many parameterizations and empirical beliefs those still favor strong focus on the long run.

(EDIT: I'd also add that even if I'm fairly confident about a 100-1000x effect size difference from inside an argument, when weighting donations I should take the outside view and not let these big effect sizes carry too much weight.)

Right, sounds good.

Comment author: MichaelDickens  (EA Profile) 17 September 2015 04:33:30AM 2 points [-]

I find all those views really implausible so I don't do anything to account for them. On the other hand, you seem to have a better grasp of utilitarianism than I do but you're less confident about its truth, which makes me think I should be less confident.

Comment author: MichaelDickens  (EA Profile) 17 September 2015 04:36:17AM 1 point [-]

On his old blog Scott talks about how there are some people who can argue circles around him on certain subjects. I feel like you can do this to me on cause prioritization. Like no matter what position I take, you can poke tons of holes in it and convince me that I'm wrong.

Comment author: RyanCarey 17 September 2015 01:41:00PM *  4 points [-]

The fact that Carl points out flaws with arguments on all sides makes him more trustworthy!

Comment author: Buck 16 September 2015 09:47:05PM 4 points [-]

Thanks so much for writing this. I agree with your arguments and I find your conclusion fairly persuasive.

Comment author: Tom_Ash  (EA Profile) 16 September 2015 03:14:02PM 1 point [-]

Providing such an in depth writeup is really useful, thanks. At the risk of derailing into an academic philosophy discussion, here are some clarificatory questions about what you value (which I'm particularly interested in because I think your values are relatively common among EAs):

I value having enjoyable experiences and avoiding unpleasant experiences. If I value these experiences for myself, then it’s reasonable for me to value them in general. That’s the two-sentence version of why I’m a hedonistic utilitarian.

Why do you think that these are the only things of value?

Pleasurable and painful experiences in non-humans have moral value. Non-humans includes non-human animals, computer simulations of sentient beings, artificial biological beings, and anything else that can experience pleasure and suffering.

Leaving aside (presumably hypothetical) computer simulations and artificial biological beings, do you think non-humans like chickens and fish have equally bad experiences in a month in a factory farm as a human would? If not, roughly how much worse or less bad would you guess they are? (I'm talking about a similar equivalence to that described in this Facebook poll, but focusing purely on morally relevant attributes of experiences.)

The best possible outcome would be to fill the universe with beings that experience as much joy as possible for their entire lives.

Can you give an example of the ideal form of joy? Would an intense, simple experience of physical pleasure be a decent candidate? (Picking an example of such an experience could be left as an exercise for the reader.)

I am unpersuaded by arguments of the form “utilitarianism produces an unintuitive result in this contrived thought experiment”

What's the most unintuitive result that you're prepared to accept, and which gives you most pause?

Comment author: MichaelDickens  (EA Profile) 16 September 2015 04:13:02PM *  4 points [-]

The great thing about nested comments is derailments are easy to isolate. :)

Why do you think that these are the only things of value?

I don't understand what it would mean for anything other than positive and negative experiences to have value. I believe that when people say they inherently value art (or something along those lines), the reason they say this is that the thought of art existing makes them happy and the thought of art not existing makes them unhappy, and it's the happy or unhappy feelings that have actual value, not the existence of art itself. If people thought art existed but it actually didn't, that would be just as good as if art existed. Of course, when I say that, you might react negatively to the idea of art not existing even if people don't know it doesn't exist; but that's because you now know it doesn't exist, so you still experience the negative feelings associated with art not existing. If you didn't experience those feelings, it wouldn't matter.

do you think non-humans like chickens and fish have equally bad experiences in a month in a factory farm as a human would?

I expect there's a high probability (maybe 50%) that factory farms are just as bad for chickens as they are for humans, and a somewhat lower probability (maybe 25%) that they are just as bad for fish. I expect it's more likely that factory farms are worse for humans than that they're worse for chickens/fish, so in expectation, they're worse for humans, but not much worse.

I don't know how consciousness works, although I believe it's fundamentally an empirical question. My best guess is that certain types of mental structures produce heightened consciousness in a way that gives a being greater moral value, but that most of the additional neurons that humans have do not contribute at all to heightened consciousness. For example, humans have tons of brain space devoted to facial recognition, but I don't expect that we can feel greater levels of pleasure or pain as a result of having this brain space.

Can you give an example of the ideal form of joy?

The best I can do is introspect about what types of pleasure I enjoy most and how I'm willing to trade them off against each other. I expect that the happiest possible being can be much happier than any animal; I also expect that it's possible in principle to make interpersonal utility comparisons, so we could know what a super-happy being looks like. We're still a long way away from being able to do this in practice.

What's the most unintuitive result that you're prepared to accept, and which gives you most pause?

There are a lot of results that used to make me feel uncomfortable, but I didn't consider this good evidence that utilitarianism is false. They don't make me uncomfortable anymore because I've gotten used to them. Whichever result gives me the most pause is one that I haven't heard of before, so I haven't gotten used to it. I predict that the next time I hear a novel thought experiment where utilitarianism leads to some unintuitive conclusion, it will make me feel uncomfortable but I won't change my mind because I don't consider discomfort to be good evidence. Our intuitions are often wrong about how the physical world works, so why should we expect them to always be right about how the moral world works?

At some point we have to use intuition to make moral decisions--I have a strong intuition that nothing matters other than happiness or suffering, and I apply this. But anti-utilitarian thought experiments usually prey on some identifiable cognitive bias. For example, the repugnant conclusion takes advantage of people's scope insensitivity and inability to aggregate value across separate individuals.

Comment author: Vincent_deB 18 September 2015 03:23:11PM 0 points [-]

I expect there's a high probability (maybe 50%) that factory farms are just as bad for chickens as they are for humans, and a somewhat lower probability (maybe 25%) that they are just as bad for fish. I expect it's more likely that factory farms are worse for humans than that they're worse for chickens/fish, so in expectation, they're worse for humans, but not much worse.

Woah, I didn't realize that anyone thought that; it would greatly change my views if I did.

Comment author: DanielFilan  (EA Profile) 10 November 2015 04:48:03AM 0 points [-]

I plan on directing my entire donation budget this year to REG. I will make the donation by the end of October unless I am persuaded otherwise by then.

What was your final decision on this?

Comment author: MichaelDickens  (EA Profile) 10 November 2015 04:02:46PM 2 points [-]

I made the donation to REG about a week ago.

Comment author: Denkenberger 08 November 2015 04:33:16PM 0 points [-]

Impressive analysis. But what about the Global Catastrophic Risk Institute or the Copenhagen Consensus Center? Disclosure: I am an associate at GCRI.

Comment author: MichaelDickens  (EA Profile) 08 November 2015 06:18:49PM 0 points [-]

GCRI is probably worth looking into. My first impression is it's pretty similar to FHI and CSER and there's nothing that makes GCRI look clearly better than FHI/CSER. I do think it's pretty unlikely that I would end up preferring GCRI to REG/MIRI/ACE, so I haven't prioritized investigating it.

It hadn't occurred to me to look into the Copenhagen Consensus Center. Based on what I know about it, there are a few factors working against it:

  1. It doesn't appear to be funding constrained.
  2. It does prioritization work on global poverty only, which is probably not the most effective cause area.
  3. Its prioritization work on global poverty is probably not as useful as GiveWell's.
  4. It recommends interventions instead of specific charities. Implementation matters a lot--you shouldn't support a poor implementation of a good intervention. GiveWell is more useful for this reason.

The big factor in CCC's favor is it could move a lot of money (it has potentially moved about $5 billion, although this is probably optimistic). This might actually be sufficiently valuable to make CCC worth supporting. CCC is seeking public donations, but there's still the big question of how donations translate into better recommendations.

Here's a few questions I'd need to answer before feeling comfortable donating to CCC:

  1. How do donations translate into better recommendations or more money moved?
  2. How much money does it move?
  3. How much better is the money moved compared to the counterfactual?
  4. How effective is CCC's money moved compared to GiveWell top charities, or compared to my favorite charities?

Right now it looks sufficiently unlikely that CCC is the best donation target that I don't think it's worth it for me to look into more.

Comment author: Denkenberger 26 April 2016 01:52:18AM 1 point [-]

Well, GCRI is much more funding constrained than FHI or CSER.

Comment author: RyanCarey 17 September 2015 05:09:14PM 0 points [-]

Another interesting consideration is that not all of the funds raised by fundraising organisations are attributable to the existence of that organisation per se. A lot of the funds would have been raised by the organisation's founders, who would be enthusiastic networkers and fundraisers even if their organisation did not exist (and even more so if it someday ceased to exist). Moreover, fundraising for an organisation like AMF is easier if it is already better-funded, since it can use those funds to build a good reputation and deliver positive results. I wonder how sensitive the results of your analysis would be to considerations about i) direct funding improving the ease of fundraising and ii) some funds raised being attributable to the existence of fundraising individuals rather than the organisations they establish.

Comment author: RyanCarey 17 September 2015 02:09:05PM *  -2 points [-]

It's pretty harsh to defund people's organisations because they make carefully reasoned arguments that disagree with your conclusions! I'm a vegetarian and thought the arguments were strong, so it's hard to write that off as motivated reasoning. If you want to make a balanced judgement of what their blog posts say about their values, mightn't you want to do a more balanced survey of what the key players have written on a wider range of topics, rather than the one that reached your newsfeed because its claims were seemingly outrageous? It'd feel similarly unfair if people tried to discredit whatever outreach efforts I was performing because I'd made (quite good - or so I thought) arguments that organ donation was ineffective.

Comment author: MichaelDickens  (EA Profile) 17 September 2015 02:28:19PM *  2 points [-]

I assume you're referring to my discussion of MIRI.

I'm NOT saying that some MIRI employees don't care about animals, therefore they're bad at reasoning. That's NOT what I'm saying, and frankly that would be silly. Eliezer doesn't care about animals but I believe he's much smarter and probably more rational than I am.

What I AM saying is this:

  1. MIRI/FAI researchers may have a large influence on what values end up shaping the far future.
  2. Some sorts of FAI research are more likely to work out well for non-human animals than others. (I discuss this in OP.)
  3. Therefore, I should want FAI researchers to have good values, and in particular, to assign appropriate worth to non-human animals because I think this is by far the biggest potential failure mode. I want to trust that they will choose to do the sorts of research that will work out well for non-human animals.
  4. So I will attempt to assess how much value MIRI researchers assign to non-human animals, because this question is relevant to how much good I think they will produce for the far future.

This has nothing to do with my meta-level assessment of MIRI employees' reasoning abilities and everything to do with their object-level beliefs on an issue that could be critically important for the shape of the far future.

I find this consideration less important than I used to because I'm more confident that preventing human extinction is net positive, but I still thought it was worth discussing.

Comment author: RyanCarey 17 September 2015 03:47:36PM *  0 points [-]

You're sceptical of their organisation because you disagree with them about the object-level topic of animals, to which they assign less importance than you do, right?

From the reader's point of view, this kind of argument shouldn't get much weight.

Why would the future welfare of animals be important in a future world with AIs? It'd make more sense for computing resources to be used to create things that people want (like fun virtual worlds?) and that they'd optimise their use of it, rather than putting animals there, which are unlikely to be useful for any specific human purpose, except perhaps as pets. Moreover, the activities of animals themselves are not going to have any long-run impacts. For reasons related to these two, it seems to me that those who argue that being vegetarian now is not useful in the long-run are closer to the mark than those like Rob (who nonetheless are well-represented in MIRI), who argue that it is morally obligatory.

And at the bottom of all of this, the reader will note that you have converged toward MIRI's views on other topics like the importance of AI research and existential risk reduction, and there's little reason that you couldn't update your views to be closer to the average of reasonable positions around this topic.

The argument 'I won't fund this because they criticised an endeavour that I value' also creates a bad incentive, but at any rate, it seems appropriate to downweight it.

Comment author: MichaelDickens  (EA Profile) 17 September 2015 05:49:38PM *  1 point [-]

I still feel like you're misunderstanding my position but I don't know how to explain it any differently than I already have, so I'll just address some things I haven't talked about yet.

A lot of what you're talking about here is how I should change my beliefs when other smart people have different beliefs from me, which is a really complex question that I don't know how to answer in a way that makes sense. I get the impression that you think I should put more weight on the fact that some MIRI researchers don't think animals are important for the far future; and I don't think I should do that.

I already agree that wild animals probably won't exist in the far future, assuming humans survive. I also generally agree with Nate's beliefs on non-human animals and I expect that he does a good job of considering their interests when he makes decisions. And my current best guess is that MIRI is the strongest object-level charity in the world. I don't think I disagree with MIRI as much as you think I do.

Edited to add: I have seen evidence that Nate is asking questions like, "What makes a being conscious?" "How do we ensure that an AI makes all these beings well off and not just humans?" AI safety researchers need to be asking these questions.

Comment author: MichaelDickens  (EA Profile) 17 September 2015 04:37:17PM *  0 points [-]

EDIT: It looks like you heavily edited your comment so my reply here doesn't make much sense anymore.

Well, different people at MIRI have different opinions, so I don't want to treat them like a monolith. Nate explicitly agrees with me that extrapolating from human values could be really bad for non-human animals; Rob thinks vegetarianism is morally obligatory; Katja thinks animals matter but vegetarianism is probably not useful; Eliezer doesn't think animals matter.

It seems to me that they're likely right that the future welfare of animals is not particularly important (it'd make more sense that computational real estate would be used for people who could buy it, and then would try to optimise their use of it, rather than putting animals there, which are unlikely to be optimal for any purpose in particular.)

I agree, as I explain here. But I'm not that confident, and given that in expectation non-human animals currently account for maybe 99.9%[^1] of the utility of the world, it's pretty important that we get this right. I'm not remotely comfortable saying "Well, according to this wild speculation that seems prima facie reasonable, wild animals won't exist in the future, so we can safely ignore these beings that currently account for 99.9% of the utility."

might there be an update in the works that they might be less wrong about animals also than they had seemed?

I don't know what you mean by "less wrong about animals." Less wrong about what, exactly? Do you mean about how morally valuable animals are? About the probability that wild animal suffering will dominate the far future?

It's plausible that a lot of AI researchers have explicitly reasoned about why they expect safety research to be good for the far future even when you don't massively discount the value of animals. The only person I've seen discuss this publicly is Carl Shulman, and I've talked to Nate Soares about it privately so I know he's thought about it. But all of MIRI's public materials are entirely focused on why AI safety is important for humans, and make no mention of non-humans (i.e. almost all the beings that matter). Nate has adequately convinced me that he has thought about these issues but I haven't seen evidence that anyone else at MIRI has thought about them. I'm sure some of them have but I'm in the dark about it. Since hardly anyone talks publicly about this, I used "cares about animals/is veg*an" as a proxy for "will try to make sure that an AI produces a future that's good for all beings, not just humans." This is an imperfect metric but it's the best I could do in some cases. I did speak to Nate about this directly though and I felt good about his response.

Of course I did still come out strongly in favor of MIRI, and I'm supporting REG because I expect REG to produce a lot of donations to MIRI in the future.

Comment author: RyanCarey 17 September 2015 05:08:16PM *  0 points [-]

As Carl points out, it's not the case that non-human animals account for 99.9% of utility if you're using brain mass as a heuristic for the importance of each animal.

I don't know what you mean by "less wrong about animals." Less wrong about what, exactly?

About how important valuing animals is to the future? Though Katja and Robin are on a different side of the spectrum from you on this question, epistemic modesty suggests it's better to avoid penalizing them for their views.

Comment author: Peter_Hurford  (EA Profile) 17 September 2015 09:12:53PM 1 point [-]

It sounds like you and Michael just have different values. It's pretty clear that you'd only find Michael's argument viable if you share his opinion on animals. If you don't share his value, you'd place different weights on the importance of the risk of MIRI doing a lot of bad things to animals.

I disagree that "[f]rom the reader's point of view, this kind of argument shouldn't get much weight." It should get weight for readers that agree with the value, and shouldn't get weight for readers that disagree with the value.

Comment author: RyanCarey 17 September 2015 10:08:51PM *  0 points [-]

No, that's exactly the issue - I want as much as the next person to see animals have better lives. I just don't see why the ratio of animals to humans would be high in the future, especially if you weight moral consideration by brain mass or information states.

Comment author: Peter_Hurford  (EA Profile) 18 September 2015 01:07:16AM 1 point [-]

I'm just wary of making confident predictions of the far future. A lot can change in a million years...

Comment author: MichaelDickens  (EA Profile) 17 September 2015 10:26:44PM *  1 point [-]

I just don't see why the ratio of humans to animals would be high in the future

I agree with you that it probably won't be high. But I would have to be >99% confident that animals won't comprise much of the utility of the far future for me to be willing to just ignore this factor, and I'm nowhere near that confident. Maybe you're just a lot more confident than I am.

Comment author: MichaelDickens  (EA Profile) 17 September 2015 05:26:37PM 0 points [-]

As Carl points out, it's not the case that non-human animals account for 99.9% of utility if you're using brain mass as a heuristic for the importance of each animal.

That's a good point. I'd like to see what the numbers look like when you include wild animals too.

Comment author: Carl_Shulman 17 September 2015 05:53:06PM *  2 points [-]

Most of the neural mass will be wild animals, but I think more like 90% than 99.9% (the ratio has changed by orders of magnitude in recent thousands of years, and only needs to go a bit further on a log scale for human brain mass to dominate). Unless you very confidently think that a set of neurons being incorporated into a larger structure destroys almost all of their expected value, the 'small animals are dominant' logic can likewise be used to say 'small neural systems are dominant, within and between animals." If sapient populations grow rapidly (e.g. AI) then wild animals (including simulated ones) would be absolutely dwarfed on this measure. However, non-sapient artificial life might or might not use more computation than sapient artificial beings.

Also, there can be utility monsters both above and below. The number of states a brain can be in goes up exponentially as you add bits. The finite numbers it can represent (for pleasure, pain, preferences) go up super-exponentially. If you think a simple reinforcement learning Pac-Man program isn't enough for much moral value, that one needs more sensory or processing complexity, then one is allowing that the values of preferences and reward can scale depending on other features of the system. And once you allow that, it is plausible that parallel reinforcement/decision processes in a large mind will get a higher multiplier (i.e. not only will there be more neural-equivalent processes doing reinforcement updating, but each individual one will get a larger multiplier due to the system it is embedded in).

The conclusion that no existing animal will be maximally efficient at producing welfare according to a fairly impartial hedonistic utilitarianism is on much firmer ground than the conclusion that the maximally efficient production system on that ethical theory would involve exceedingly tiny minds rather than vast ones or enhanced medium-size ones, or complex systems overlapping these scales.

Comment author: Denkenberger 26 October 2015 05:11:58PM 1 point [-]

Small insects (the most common kind) have on the order of 10,000 neurons. One estimate puts the global insect population at 10^18, implying 10^22 insect neurons in total. Humans have about 10^21 neurons in total. However, smaller organisms tend to have smaller cells, so if you go by neural mass, humans might actually be dominant. Of course there are other groups of wild and domestic animals, but this gives you some idea.
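As a rough sketch of the arithmetic above (the insect figures are the comment's order-of-magnitude estimates; the ~8.6×10^10 neurons-per-human figure and 2015 world population are assumptions I'm adding, not from the comment):

```python
# Back-of-envelope comparison of total insect vs. total human neurons.
# All inputs are order-of-magnitude estimates, not measured data.

insect_population = 1e18        # one estimate of the global insect count
neurons_per_small_insect = 1e4  # "order 10,000 neurons" for small insects
insect_neurons = insect_population * neurons_per_small_insect  # ~1e22

human_population = 7e9          # roughly the 2015 world population
neurons_per_human = 8.6e10      # a common estimate for the human brain
human_neurons = human_population * neurons_per_human  # ~6e20, i.e. ~1e21

# Insects come out roughly an order of magnitude ahead on raw neuron count.
ratio = insect_neurons / human_neurons
print(f"insect/human total-neuron ratio ~ {ratio:.0f}")
```

This matches the comment's point: by neuron count insects dominate by about one order of magnitude, which is why the mass-based comparison (smaller cells in smaller organisms) could plausibly flip the conclusion.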