Comment author: MichaelDickens  (EA Profile) 12 December 2017 05:21:19PM 12 points [-]

I haven't yet gotten around to writing up where I plan on donating in 2018 (I already maxed out my 2017 donations in February), but I've been thinking along the same lines. Recently I've been leaning toward donating to these smaller, riskier organizations because I see a lot of value in helping new orgs grow and learning what they can accomplish--especially because the established charities that I like best have gotten a lot of funding recently and have room to scale up before they start to hit the limits of their funding.

Comment author: MichaelPlant 02 June 2017 12:13:34PM 7 points [-]

Thanks for this Kerry, very much appreciate the update.

Three funds I'd like to see:

  1. The 'life-improving' or 'quality of life'-type fund that tries to find the best way to increase the happiness of people whilst they are alive. My view on morality leads me to think that is what matters most. This is the area I do my research on too, so I'd be very enthusiastic to help whoever the fund manager was.

  2. A systemic change fund. Part of this would be reputational (i.e. no one could then complain EAs don't take systemic change seriously); another part would be that I'd really like to see what the fund manager would choose to give money to if it had to go to systemic change. I feel that would be a valuable learning experience.

  3. A 'moonshots' fund that supported high-risk, potentially high-reward projects. For reasons similar to 2 I think this would be a really useful way for us to learn.

My general thought is the more funds the better, presuming you can find qualified enough people to run them. It has the positive effect of demonstrating EA's openness and diversity, which should mollify our critics. As mentioned, it provides chances to learn stuff. And it strikes me as unlikely new funds would divert much money away from the current options. Suppose we had an EA environmentalism fund. I assume people who would donate to that wouldn't have been donating to, say, the health fund already. They'd probably be supporting green charities instead.

Comment author: MichaelDickens  (EA Profile) 03 June 2017 06:46:47AM 4 points [-]

Now that you mention it, I think this would be a much more interesting way to divide up funds. I have basically no idea whether AI safety or anti-factory farming interventions are more important; but given the choice between a "safe, guaranteed to help" fund and a "moonshot" fund I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense). I would much rather donate to a fund run by a person who shares my values and beliefs than a fund for a specific cause area, because I'm likely to change my mind about which cause area is best, and perhaps the fund manager will, too, and that's okay.

Some possible axes:

  1. life-improving vs. life-saving (or, similarly, total view vs. person-affecting view)
  2. safe bets vs. moonshots
  3. suffering-focused vs. "classical"
  4. short-term vs. far future

Having all possible combinations along just these axes would require 2^4 = 16 funds, though, so in practice this won't work exactly as I've described.

Comment author: Kerry_Vaughan 02 June 2017 05:00:58PM 3 points [-]

This is an interesting idea. I have a few hesitations about it, however:

  1. The number of organizations which are doing cause prioritization and not also doing EA Community Building is very small (I can't think of any off the top of my head).
  2. My sense is that Nick wants to fund both community building and cause prioritization, so splitting these might place artificial constraints on what he can fund.
  3. EA Community building has the least donations so far ($83,000). Splitting might make the resulting funds too small to be able to do much.

Comment author: MichaelDickens  (EA Profile) 03 June 2017 06:39:32AM 1 point [-]

RE #1, organizations doing cause prioritization and not EA community building: Copenhagen Consensus Center, Foundational Research Institute, Animal Charity Evaluators, arguably Global Priorities Project, Open Philanthropy Project (which would obviously not be a good place to donate, but still fits the criterion).

RE #2: if the point is to do what Nick wants, it should really be a "Nick Beckstead fund", not an EA Community fund.

Comment author: Peter_Hurford  (EA Profile) 30 May 2017 12:12:53AM 0 points [-]

Indeed, how can you even ever know which works better?

Retrospective analysis of track record? Looking into Tetlock-style research?

Comment author: MichaelDickens  (EA Profile) 30 May 2017 02:42:42PM 0 points [-]

Suppose it's 10 years in the future, and we can look back at what ACE and MIRI have been doing for the past 10 years. We now know some new useful information, such as:

  • Has ACE produced research that influenced our understanding of effective charities?
  • Has MIRI published new research that moved us closer to making AI safe?
  • Has ACE moved more money to top animal charities?

But even then, we still don't know nearly as much as we'd like. We don't know if ACE really moved money, or if that money would have been donated to animal charities anyway. Maybe MIRI took funding away from other research avenues that would have been more fruitful. We still have no idea how (dis)valuable the far future will be.

Comment author: MichaelDickens  (EA Profile) 29 May 2017 11:34:34PM 1 point [-]

I'm still undecided on the question of whether quantitative models can actually work better than qualitative analysis. (Indeed, how can you even ever know which works better?) But very few people actually use serious quantitative models to make decisions--even if quantitative models ultimately don't work as well as well-organized qualitative analysis, they're still underrepresented--so I'm happy to see more work in this area.

Some suggestions on ways to improve the model:

Account for missing components

Quantitative models are hard, and it's impossible to construct a model that accounts for everything you care about. I think it's a good idea to consider which parts of reality you expect to matter most for the impact of a particular thing, and try to model those. Whatever your model is missing, try to figure out which parts of that matter most. You might decide that some things are too hard to model, in which case you should consider how those hard-to-model bits will likely affect the outcome and adjust your decision accordingly.

Examples of major things left out:

  • The 80K model only considers impact in terms of new donations to GWWC, based on 80K's own numbers. It would be better to account for how many people 80K moves into or away from different cause areas, using your own effectiveness estimates for those causes.
  • The ACE model only looks at the value from moving money among top charities. My own model includes money moved among top charities plus new money moved to top charities plus the value of new research that ACE funds.

Sensitivity analysis

The particular ordering you found (80K > MIRI > ACE > StrongMinds) depends heavily on certain input parameters. For example, for your MIRI model, "expected value of the far future" is doing tons of work. It assumes that the far future contains about 10^17 person-years; I don't see any justification given. What if it's actually 10^11? Or 10^50? This hugely changes the outcome. You should do some sensitivity analysis to see which inputs matter the most. If any one input matters too much, break it down into less sensitive inputs.
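The kind of one-at-a-time sensitivity check being suggested can be sketched in a few lines. The model and numbers below are hypothetical placeholders, not the ones in the model under review; the point is only to show how swinging one input across orders of magnitude reveals how much work it is doing.

```python
# Toy expected-value model for a far-future intervention (illustrative
# numbers, not taken from the post): EV = P(intervention averts catastrophe)
# * person-years in the far future.
def expected_value(far_future_person_years, p_avert=1e-10):
    return p_avert * far_future_person_years

# One-at-a-time sensitivity analysis: vary the most uncertain input across
# several orders of magnitude and watch how much the output swings.
for person_years in [1e11, 1e17, 1e50]:
    ev = expected_value(person_years)
    print(f"person-years = {person_years:.0e} -> EV = {ev:.1e}")
```

If one input dominates the output like this, the conclusion is only as robust as that input, which is exactly the argument for breaking it down into less sensitive pieces.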

Comment author: MichaelDickens  (EA Profile) 24 April 2017 03:09:17AM 4 points [-]

Not sure if this is the right place to say this, but on where it links to "Donate Effectively," I think it would make more sense to link to GiveWell and ACE ahead of the EA Funds, because GiveWell and ACE are more established and time-tested ways of making good donations in global poverty and animal welfare.

(The downside is this adds complexity because now you're linking to two types of things instead of one type of thing, but I would feel much better about CEA endorsing GiveWell/ACE as the default way to give rather than its own funds, which are controlled by a single person and don't have the same requirement (or ability!) to be transparent.)

Comment author: MichaelDickens  (EA Profile) 28 April 2017 01:10:04AM 0 points [-]

Alternatively, you could have global poverty and animal welfare funds that are unmanaged and just direct money to GiveWell/ACE top charities (or maybe have some light management to determine how to split funds among the top charities).

Comment author: Daniel_Eth 24 April 2017 01:32:39AM 1 point [-]

Comment author: MichaelDickens  (EA Profile) 24 April 2017 03:11:23AM 4 points [-]

There's no shortage of bad ventures in the Valley

Every time in the past week or so that I've seen someone talk about a bad venture, they've given the same example. That suggests that there is indeed a shortage of bad ventures--or at least, ventures bad enough to get widespread attention for how bad they are. (Most ventures are "bad" in a trivial sense because most of them fail, but many failed ideas looked like good ideas ex ante.)

Comment author: MichaelDickens  (EA Profile) 04 February 2017 06:09:32PM 6 points [-]

I'm glad you're thinking about this. Investing is an important issue and I believe there's room for more discussion of the topic.

[I]t is commonly accepted by now that altruists should generally be less financially risk averse than other people. This implies that we shouldn't worry too much about diversification, but only about expected value.

False. By diversifying, you can decrease your risk at any given level of return, which also means you can increase your return at any given level of risk. (These are dual optimization problems.) You should also be concerned about correlation with other altruistic investors, and most investors put way too much money in their home country (so mostly the US and UK).
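A minimal numerical sketch of the diversification point (illustrative numbers, not from the comment): an equal-weight portfolio of two assets with the same mean and standard deviation keeps the same expected return, while its variance falls below a single asset's whenever the correlation between the two is under 1.

```python
# Two assets with identical expected return mu and standard deviation sigma.
# An equal-weight portfolio has expected return mu and variance
# sigma^2 * (1 + rho) / 2, which is below sigma^2 whenever correlation rho < 1.
def portfolio_stats(mu, sigma, rho, w=0.5):
    port_mu = w * mu + (1 - w) * mu
    port_var = (w * sigma) ** 2 + ((1 - w) * sigma) ** 2 \
        + 2 * w * (1 - w) * rho * sigma ** 2
    return port_mu, port_var

mu, sigma = 0.05, 0.15
for rho in [1.0, 0.5, 0.0]:
    m, v = portfolio_stats(mu, sigma, rho)
    print(f"rho={rho}: return={m:.3f}, variance={v:.5f} "
          f"(single asset: {sigma ** 2:.5f})")
```

Same expected return at lower variance is the free lunch; by the duality above, you can instead spend it as higher return at the same variance.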

I don't know that you are claiming this, but you sort of imply it, so to be clear: you should not believe that US stocks have higher expected returns than any other country. If anything, you should believe that the US market will perform worse than most other countries because it's substantially more expensive. Right now the US has a CAPE ratio of 26, versus 21 for non-US developed markets and 14 for emerging markets. CAPE ratio strongly predicts 10-year future market returns.
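One rough heuristic that makes the CAPE comparison concrete (an assumption on my part, not something the comment relies on) is to read the cyclically adjusted earnings yield, 1 / CAPE, as a crude estimate of expected long-run real returns:

```python
# CAPE ratios from the comment; 1 / CAPE (the cyclically adjusted earnings
# yield) is a common rough proxy for expected long-run real returns.
cape = {"US": 26, "non-US developed": 21, "emerging": 14}
for market, ratio in cape.items():
    print(f"{market}: CAPE {ratio} -> earnings yield {1 / ratio:.1%}")
```

Under this heuristic the cheaper markets imply a roughly 1 to 3 percentage point higher expected return than the US, which is the direction of the claim above.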

On the covariance-with-charities issue: I'm doubtful that this consideration matters enough to substantially change how you should invest. If your investments can perform 2 percentage points better by investing in emerging markets rather than developed markets (which they probably can), I would expect this to outweigh any benefits from increased covariance. I would need to see some sort of quantitative analysis to be convinced otherwise.

I'm also not convinced that we should actually want to increase covariance rather than decreasing it. By increasing covariance you increase expected value by expanding the tails, but I don't believe we should be risk-neutral at a global scale because marginal money put into helping the world has diminishing utility.

Similar concerns apply to investing in companies that are correlated with AI development. AI companies tend to be growth stocks, which underperform the market in the long run compared to value stocks.

Comment author: MichaelDickens  (EA Profile) 24 December 2016 08:38:08PM 13 points [-]

I'm glad that you write this sort of thing. 80K is one of the few organizations that I see writing "why you should donate to us" articles. I believe more organizations should do this because they generally know more about their own accomplishments than anyone else. I wouldn't take an organization's arguments as seriously as a third party's because they're necessarily biased toward themselves, but they can still provide a useful service to potential donors by presenting the strongest arguments in favor of donating to them.

I have written before about why I'm not convinced that I should donate to 80K (see the comments on the linked comment thread). I have essentially the same concerns that I did then. Since you're giving more elaborate arguments than before, I can respond in more detail about why I'm still not convinced.

My fundamental concern with 80K is that the evidence in its favor is very weak. My favorite meta-charity is REG because it has a straightforward causal chain of impact, and it raises a lot of money for charities that I believe do much more good in expectation than GiveWell top charities. 80K can claim the latter to some extent but cannot claim the former.

Below I give a few of the concerns I have with 80K, and what could convince me to donate.

Highly indirect impact. A lot of 80K's claims to impact rely on long chains such that your actual effect is pretty indirect. For example, the claim that an IASPC is worth £7500 via getting people to sign the GWWC pledge relies on assuming:

  • These people would not have signed the pledge without 80K.
  • These people would not have done something similarly or more valuable otherwise.
  • The GWWC pledge is as valuable as GWWC claims it is.

I haven't seen compelling evidence that any of these is true, and they all have to be true for 80K to have the impact here that it claims to have.

Problems with counterfactuals.

When someone switches from (e.g.) earning to give to direct work, 80K adds this to its impact stats. When someone else switches from direct work to earning to give, 80K also adds this to its impact stats. The only way these can both be good is if 80K is moving people toward their comparative advantages, which is a much harder claim to justify. I would like to see more effort on 80K's part to figure out whether its plan changes are actually causing people to do more good.

Questionable marketing tactics.

This is somewhat less of a concern, but I might as well bring it up here. 80K uses very aggressive marketing tactics (invasive browser popups, repeated asks to sign up for things, frequent emails) that I find abrasive. 80K justifies these by claiming that it increases sign-ups, and I'm sure it does, but these metrics don't account for the cost of turning people off.

By comparison, GiveWell does essentially no marketing but has still attracted more attention than any other EA organization, and it has among the best reputations of any EA org. It attracts donors by producing great content rather than by cajoling people to subscribe to its newsletter. For most orgs I don't believe this would work because most orgs just aren't capable of producing valuable content, but like GiveWell, 80K produces plenty of good content.

Perhaps 80K's current marketing tactics are a good idea on balance, but we have no way of knowing. 80K's metrics can only observe the value its marketing produces and not the value it destroys. It may be possible to get better evidence on this; I haven't really thought about it.

Past vs. future impact.

80K has made a bunch of claims about its historical impact. I'm skeptical that the impact has been as big as 80K claims, but I'm also skeptical that the impact will continue to be as big. For example, 80K claims substantial credit for about a half dozen new organizations. Do we have any reason to believe that 80K will cause more organizations to be created, and that they will be as effective as the ones it contributed to in the past? 80K's writeup claims that it will but doesn't give much justification. Similarly, 80K claims that a lot of benefit comes from its articles, but writing new articles has diminishing utility as you start to cover the most important ideas.

In summary, to persuade me to donate to 80K, you need to convince me that it has sufficiently high leverage that it does more good than the single best direct-work org, and it has higher leverage than any other meta org. More importantly, you need to find strong evidence that 80K actually has the impact it claims to have, or better demonstrate that the existing evidence is sufficient.
