Comment author: Michael_PJ 25 April 2017 10:51:38PM 1 point [-]

This looks pretty similar to a model I wrote with Nick Dunkley way back in 2012 (part 1, part 2). I still stand by that as a reasonable stab at the problem, so I also think your model is pretty reasonable :)

Charity population:

You're assuming a fixed pool of charities, which makes sense given the evidence gathering strategy you've used (see below). But I think it's better to model charities as an unbounded population following the given distribution, from which we can sample.

That's because we do expect new opportunities to arise. And if we believe that the distribution is heavy-tailed, a large amount of our expected value may come from the possibility of eventually finding something way out in the tails. In your model we only ever get N opportunities to get a really exceptional charity - after that we are just reducing our uncertainty. I think we want to model the fact that we can keep looking for things out in the tails, even if they maybe don't exist yet.

I do think that a lognormal is a sensible distribution for charity effectiveness. The real distribution may be broader, but that just makes your estimate more conservative, which is probably fine. I just did the boring thing and used the empirical distribution of the DCP intervention cost-effectiveness (note: interventions, not charities).
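For concreteness, here's a minimal sketch (in Python, with entirely made-up lognormal parameters) of what I mean by treating charities as an unbounded population we can keep sampling from, rather than a fixed pool of N:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters only: log-mean 0 and log-sd 1.5 give a heavy right tail.
MU, SIGMA = 0.0, 1.5

def sample_new_charities(n):
    """Draw n fresh charities from an unbounded lognormal population,
    rather than fixing a pool of N charities up front."""
    return rng.lognormal(mean=MU, sigma=SIGMA, size=n)

# Each round we can keep sampling; the best effectiveness seen so far keeps
# improving (slowly) because more of the tail gets explored over time.
best_seen = 0.0
for _ in range(100):
    best_seen = max(best_seen, sample_new_charities(10).max())
print(best_seen)
```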

Evidence gathering strategy:

You're assuming that the evaluator does a lot of evaluating: they evaluate every charity in the pool in every round. In some sense I suppose this is true, in that charities which are not explicitly "investigated" by an evaluator can be considered to have failed the first test by not being notable enough to even be considered. However, I still think this is somewhat unrealistic and is going to drive diminishing returns very quickly, since we're really just waiting for the errors on the various charities to settle down so that the best charity becomes apparent.

I modelled this as a process in which the evaluator sequentially evaluates a single charity each round, chosen at random (with replacement). This is also unrealistic, because in fact an evaluator won't waste their time with things that are obviously bad, but even with this fairly conservative strategy things turned out pretty well.
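A rough sketch of that evaluation process, with the noise level and pool size chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

N_CHARITIES = 50          # illustrative pool size
NOISE_SD = 1.0            # sd of evaluation error in log-space (assumed)
true_log_eff = rng.normal(0.0, 1.5, N_CHARITIES)   # underlying lognormal effectiveness

obs_sum = np.zeros(N_CHARITIES)
obs_count = np.zeros(N_CHARITIES)

def evaluate_one():
    """One round: pick a single charity at random (with replacement) and
    record one noisy observation of its log-effectiveness."""
    i = rng.integers(N_CHARITIES)
    obs_sum[i] += true_log_eff[i] + rng.normal(0.0, NOISE_SD)
    obs_count[i] += 1

for _ in range(500):
    evaluate_one()

# Recommend the charity with the highest mean observed log-effectiveness
# among those that have been evaluated at least once.
estimates = np.where(obs_count > 0, obs_sum / np.maximum(obs_count, 1), -np.inf)
recommended = int(np.argmax(estimates))
print(np.exp(true_log_eff[recommended]), np.exp(true_log_eff.max()))
```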

I think it's interesting to think about what happens when we model the pool more explicitly, and consider strategies like investigating the top recommendation further to reduce error.

Increasing scale with money moved:

Charity evaluators have the wonderful feature that their effectiveness scales more or less linearly with the amount of money they move (assuming that the money all goes to their top pick). This is a pretty great property, so worth mentioning.

The big caveat there is room for more funding, or saturation of opportunities. I'm not sure how best to model this. We could instead model charities as "deposits" of effectiveness that are of a fixed size when discovered and can be exhausted. I don't know how that would change things, but I'd be interested to see! In particular, I suspect it may be important how funding capacity co-varies with effectiveness. If we find a charity with a cost-effectiveness that's 1000x higher than our current best, but it can only take a single dollar, then that's not so great.
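Here's one way such a "deposits" model might look. The correlation parameter between (log) effectiveness and (log) capacity is a pure assumption, there only to show where the co-variation would enter:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_charity(rho=-0.5):
    """Sample one charity as a 'deposit': a cost-effectiveness and a funding
    capacity, correlated (in log-space) by rho. A negative rho encodes the
    worry that the very best opportunities can only absorb a little money."""
    cov = [[1.5 ** 2, rho * 1.5 * 1.0],
           [rho * 1.5 * 1.0, 1.0 ** 2]]
    log_eff, log_cap = rng.multivariate_normal([0.0, 12.0], cov)
    return np.exp(log_eff), np.exp(log_cap)

def impact_of_moving(budget, charities):
    """Fill the most effective deposits first until the budget (or the
    deposits) run out."""
    total = 0.0
    for eff, cap in sorted(charities, reverse=True):
        spend = min(budget, cap)
        total += spend * eff
        budget -= spend
        if budget <= 0:
            break
    return total

charities = [sample_charity() for _ in range(100)]
print(impact_of_moving(1e6, charities))
```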

Comment author: Peter_Hurford  (EA Profile) 25 April 2017 09:55:34PM 0 points [-]

Good question!

We have people report both household and individual income. If you have an individual income and you're comfortable disclosing that, put that as "individual income" and then report your joint income as "household income".

After that, I'd recommend that each of you disclose the full joint donation amount on your respective surveys.

From there, we can figure it out.

Thanks! We'll try to make this more clear next year and we'd love any suggestions for a better way to handle joint donations.

Comment author: Julia_Wise 25 April 2017 07:28:35PM 2 points [-]

How should a couple that donates jointly answer the donation questions? Should one of us answer with the combined income and combined donations?

Comment author: RyanCarey 25 April 2017 06:19:18PM 2 points [-]

I expect that if anything it is broader than lognormally distributed.

It might depend what we're using the model for.

In general, it does seem reasonable that direct (expected) net impact of interventions should be broader than lognormal, as Carl argued in 2011. On the other hand, it seems like the expected net impact all things considered shouldn't be broader than lognormal. For one argument, most charities probably funge against each other by at least 1/10^6. For another, you can imagine that funding global health improves the quality of research a bit, which does a bit of the work that you'd have wanted done by funding a research charity. These kinds of indirect effects are hard to map. Maybe people should think more about them.

AFAICT, the basic thing for a post like this one to get right is to compare apples with apples. Tom is trying to evaluate various charities, of which some are evaluators. If he's evaluating the other charities on direct estimates, and is not smoothing the results over by assuming indirect effects, then he should use a broader than lognormal assumption for the evaluators too (and they will be competitive). If he's taking into account that each of the other charities will indirectly support the cause of one another (or at least the best ones will), then he should assume the same for the charity evaluators.

I could be wrong about some of this. A couple of final remarks: it gets more confusing if you think lots of charities have negative value e.g. because of the value of technological progress. Also, all of this makes me think that if you're so convinced that flow-through effects cause many charities to have astronomical benefits, perhaps you ought to be studying these effects intensely and directly, although that admittedly does seem counterintuitive to me, compared with working on problems of known astronomical importance directly.

Comment author: RyanCarey 25 April 2017 05:19:29PM 0 points [-]

This is more or less what happened with EA Ventures -- lots of people thought it was a good idea, but not many promising projects showed up and not many funders actually donated to the projects we happened to find.

It seems like the character of the EA movement needs to be improved somehow (probably, as always, there are marginal improvements to be made to the implementation too), but especially the character of the movement, because arguably if EA could spawn many projects its impact would increase many-fold.

Comment author: Peter_Hurford  (EA Profile) 25 April 2017 04:56:22PM 1 point [-]

There has been hardly any analysis of other program areas (e.g. so far I haven't seen any kind of back-of-the-envelope analysis focusing on peace and security, nor any kind of "fact post" on the EA forum, nor anything similar),

80K does briefly compare deaths from health-related causes to deaths from war, but I agree it would be nice to see a more detailed, nuanced analysis that took into account Blattman and others' arguments.

Comment author: Peter_Hurford  (EA Profile) 25 April 2017 04:53:57PM 1 point [-]

I'd personally prefer if everyone who is interested in taking the full survey do so, so that we can track how beliefs and attitudes change (along with donations).

Comment author: Owen_Cotton-Barratt 25 April 2017 02:59:15PM 4 points [-]

The fact that sometimes people's estimates of impact are subsequently revised down by several orders of magnitude seems like strong evidence against evidence being normally distributed around the truth. I expect that if anything it is broader than lognormally distributed. I also think that extra pieces of evidence are likely to be somewhat correlated in their error, although it's not obvious how best to model that.
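A toy illustration of the difference (the parameters are arbitrary): additive normal error keeps estimates close to the truth, whereas error that is lognormal around the truth routinely produces estimates that later get revised down by orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(3)
true_impact = 1.0

# Additive normal error around the truth: estimates rarely stray far.
normal_estimates = true_impact + rng.normal(0.0, 0.5, 100_000)

# Lognormal (multiplicative) error around the truth: even a modest log-sd
# routinely yields estimates that are off by orders of magnitude.
lognormal_estimates = true_impact * rng.lognormal(0.0, 2.0, 100_000)

print(np.quantile(normal_estimates, [0.01, 0.99]))
print(np.quantile(lognormal_estimates, [0.01, 0.99]))  # spans several orders of magnitude
```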

Comment author: Andy_Schultz 25 April 2017 02:45:52PM 1 point [-]

How much more helpful would it be to take the full survey vs the abridged one, for those who have taken the survey in prior years? I'm willing to take the full survey if it's helpful.

Comment author: vollmer 25 April 2017 07:13:40AM *  6 points [-]

Some of those charities are developed-world charities and would likely be seen as ineffective by most EAs. However, he might not give to those charities if he were running an EA Fund (similar to how many GiveWell staff are donating to charities not recommended by GiveWell), or maybe multiple people could run the fund together.

One thing I like about Blattman's work is that he has done a lot of research on armed conflict and violence and how to prevent it (with high-quality RCTs). This area seems to be very neglected in EA:

http://www.poverty-action.org/study/peace-education-rural-liberia

http://www.poverty-action.org/study/ex-combatant-reintegration-liberia

EAs seem to focus on health most of the time (e.g. Charity Entrepreneurship almost exclusively evaluated health programs). There are lots of good reasons for focusing on health, and maybe the goal of EA is not to find all the best charities/programs but only some of them such that there's enough RFMF for the EA community as a whole. However, I'm skeptical and still think non-health approaches are very neglected in EA because:

1) There has been hardly any analysis of other program areas (e.g. so far I haven't seen any kind of back-of-the-envelope analysis focusing on peace and security, nor any kind of "fact post" on the EA forum, nor anything similar),

2) there might be a lot of additional funding available for such alternative approaches (by donors who tend to be more skeptical of GiveWell's health focus, or by donors whose funds are restricted in some way),

3) it would demonstrate to the outside world that EAs are really doing their homework instead of being easily satisfied with some easy-to-measure approaches, and this might accelerate EA movement growth and strengthen its impact and credibility in society at large (which could also increase total funding for top charities).

For these reasons, I would very much like someone like Chris Blattman to be involved with the EA Funds in some way (maybe not as a fund manager). Or some external review of GiveWell's work by someone like Blattman.

EDIT: Actually Open Phil wrote a bit about aid in fragile contexts: http://www.openphilanthropy.org/research/cause-reports/fragile-states

Comment author: BenHoffman 25 April 2017 04:55:56AM *  3 points [-]

Yep! I think it's fine for them to exist in principle, but the aggressive marketing of them is problematic. I've seen attempts to correct specific problems that are pointed out e.g. exaggerated claims, but there are so many things pointing in the same direction that it really seems like a mindset problem.

I tried to write more directly about the mindset problem here:

http://benjaminrosshoffman.com/humility-argument-honesty/

http://effective-altruism.com/ea/13w/matchingdonation_fundraisers_can_be_harmfully/

http://benjaminrosshoffman.com/against-responsibility/

Comment author: Peter_Hurford  (EA Profile) 25 April 2017 04:44:56AM *  2 points [-]

Another thing that would be encouraging would be if at least one of the Funds were not administered entirely by an Open Philanthropy Project staffer, and ideally an expert who doesn't benefit from the halo of "being an EA." For instance, Chris Blattman is a development economist with experience designing programs that don't just use but generate evidence on what works.

Chris Blattman has put together some of his principles on giving and says he personally ranks GiveDirectly #1, but otherwise believes the "means and end to human well being is good government and political rights and freedoms" and therefore gives to Amnesty International, Human Rights Watch, the ACLU, the Southern Poverty Law Center, the Democratic National Committee, Planned Parenthood, the National Immigration Law Center, and the International Rescue Committee.

In response to Open Thread #36
Comment author: Linch 25 April 2017 04:04:22AM *  0 points [-]

Do you live in the South Bay (south of San Francisco)?

Did you recently move here and want to be plugged in to what EAs around here are doing and thinking? Did you recently learn about effective altruism and want to know what the heck it's about? Well, join South Bay Effective Altruism's first fully newbie-friendly meetup!

We'll discuss cause prioritization, what cause areas YOU are interested in, and how we can help each other do the most good!

https://www.facebook.com/events/305401856547678/?active_tab=discussion

https://www.meetup.com/South-Bay-Effective-Altruism/events/239444560/

The actual meetup will be this Friday at 7pm, but you can also comment here or message me at email[dot]Linch[at]gmail[dot]com to be in the loop for future events.

Comment author: Peter_Hurford  (EA Profile) 25 April 2017 02:11:02AM 0 points [-]

That's a good question. I did intend "three fund managers" to mean "the three fund managers we have right now", but I could also see the optimal number of people being 2-3.

Comment author: jiwoonhwang 25 April 2017 12:03:15AM 0 points [-]

What do you think about an index-fund type of management structure (passive management)?

In stock investing, managers of actively managed funds generally cannot 'beat the market' and achieve a higher return than the market average, i.e. an index fund (see, for example, A Random Walk Down Wall Street by Burton Malkiel).

There are certainly (statistically significantly) very effective fund managers like Warren Buffett. But it is exceedingly difficult to select effective fund managers in advance, particularly if an objective metric of performance is hard to obtain (which is especially the case when comparing QALY-calculable and non-QALY-calculable causes).

For example, the EA Fund could 'mirror vote', allocating proportionally based on past donation data registered on eahub.org. (Of course, some vetting would be needed to verify that an EA Hub profile is genuine; for example, perhaps only long-term EA Global participants' donation claims should be counted. There should also be due diligence on the receiving charities.) Alternatively, donations from people who do not earmark could be distributed proportionally based on the earmarking data from EA Fund donors who do earmark. (Again, it seems reasonable to only allow earmarking to charities that have been accepted by the EA Fund managers; i.e. the charities that can receive donations would be pre-selected by fund managers, as GiveWell does, while the percentage split of the fund's money would be decided by the thousands of donors who choose to earmark.)
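A minimal sketch of what that proportional ("mirror") allocation might look like; the charity names and amounts are hypothetical, and the `approved` set stands in for the fund managers' pre-selection step:

```python
from collections import defaultdict

def allocate_unearmarked(earmarked_donations, unearmarked_total, approved):
    """Split an unearmarked pot across manager-approved charities, in
    proportion to what earmarking donors gave each of them."""
    totals = defaultdict(float)
    for charity, amount in earmarked_donations:
        if charity in approved:            # the fund managers' pre-selection
            totals[charity] += amount
    pool = sum(totals.values())
    if pool == 0:
        return {}
    return {c: unearmarked_total * amt / pool for c, amt in totals.items()}

# Hypothetical earmarking data: (charity, amount) pairs.
earmarked = [("AMF", 500.0), ("GiveDirectly", 300.0), ("AMF", 200.0)]
print(allocate_unearmarked(earmarked, 10_000.0, {"AMF", "GiveDirectly"}))
```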

Comment author: Tom_Ash  (EA Profile) 24 April 2017 11:11:28PM 0 points [-]

Is anyone familiar with the philosophical literature on that? My understanding is that it's controversial.

Separately, what's the connection to moral realism?

Comment author: Michael_PJ 24 April 2017 10:53:05PM 2 points [-]

I found the analogy with confidence games thought-provoking, but it could have been a bit shorter.

Comment author: Michael_PJ 24 April 2017 10:51:54PM 4 points [-]

The point I was trying to make is that while GiveWell may not have acted "satisfactorily", they are still well ahead of many of us. I hadn't "inferred" that GiveWell had audited themselves thoroughly - it hadn't even occurred to me to ask, which is a sign of just how bad my own epistemics are. And I don't think I'm unusual in that respect. So GiveWell gets a lot of credit from me for doing "quite well" at their epistemics, even if they could do better (and it's good to hold them to a high standard!).

I think that making the final decision on where to donate yourself often offers only an illusion of control. If you're getting all your information from one source you might as well just be giving them your money. But it does at least keep more things out in the open, which is good.

Re-reading your post, I think I may have been misinterpreting you - am I right in thinking that you mainly object to the marketing of the EA Funds as the "default choice", rather than to their existence for people who want that kind of instrument? I agree that the marketing is perhaps overselling them at the moment.

Comment author: Michael_PJ 24 April 2017 10:42:40PM 1 point [-]

Yes, in case it wasn't clear, I think I agree with many of your concrete suggestions, but I think the current situation is not too bad.

Comment author: Ben_West  (EA Profile) 24 April 2017 09:59:15PM 0 points [-]
  1. Are there blocks of rooms reserved at some hotel?
  2. Are there "informal" events planned for around the official event? (I.e. should everyone plan to land Thursday night and leave Sunday night or would it make sense to leave earlier/stay later?)

Thanks!
