Comment author: MichaelPlant 13 October 2017 10:57:38AM *  9 points [-]

Hello Tom, thanks very much for this write up. Three comments:

I very much admire your ability to self-criticise, but I think you're being overly harsh on yourself. It didn't turn out as well as you hoped, but you couldn't have known that in advance, which was the point. I think this is a good example of what is sometimes called 'hits-based charity': EAs trying new things with a high expected value but a low probability of success. I also hesitate to call this a failure because, as you noted, quite a few lessons were learnt. I think your (only?) substantial mistake was in having too high expectations about what a part-time student group could achieve. Perhaps you took "EAs", who are typically smart, conscientious and driven, as your reference group, rather than "student club/society", which no one really expects to be very productive or world-changing.

On reflection, I wonder if OxPrio fell into a sort of research no-man's land. It was too detailed for average student EAs to engage with, but perhaps not in-depth enough to attract critical commentary and engagement from full-time researchers, such as those at CEA or GiveWell, whose research you were, to some extent, replicating. I'm not sure who you thought the target audience of your research was.

I think a contributing factor to not having much local, Oxford university engagement is that you'd selected a team. Presumably the people who would be most interested in OxPrio's research applied. I imagine many of the people who applied, but whom you rejected from the team, then decided, as a standard psychological reflex, that they didn't want to be involved further (disclaimer: I applied and was rejected, but ended up being really curious about what OxPrio was doing anyway). Hence the selection process alienated much of your intended audience. I don't have a suggestion for what would have been better; I just think this is worth factoring in.

Comment author: ThomasSittler 13 October 2017 05:13:17PM 4 points [-]

Thanks for the comment! I substantially disagree. The Project created some benefits, but (with low confidence) I don't think the costs were worth it. I'm seeing a lot of people note the benefits of the Project and conclude that the Project was net-positive, without engaging with the costs and/or counterfactuals.

Regarding the selection, I disagree that this was a substantial effect. But it's something we should discuss in person.

Comment author: MichaelPlant 01 October 2017 12:50:13AM 1 point [-]

All this was hard to follow.

Comment author: ThomasSittler 01 October 2017 11:49:58AM 0 points [-]

I tried hard, but I agree it's still hard to follow :/

Comment author: Roxanne_Heston  (EA Profile) 30 September 2017 03:33:07PM *  2 points [-]

My guess would be that the main cost of EA grants was CEA staff time, not the £500,000. Would you agree? And what do you think the ratio is, very roughly?

It depends on the value you place on CEA staff time. Internally we value the average CEA staff hour at ~$75 ($50-$150, depending on the nature of the work), i.e. roughly £56, so 840 hours * £56/hour = £47,040 in opportunity cost, plus real staff costs. This suggests that staff time wasn't the main cost, unless you think the counterfactual uses of time would have been far more impactful than our average.
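The arithmetic above can be sketched out explicitly. This is a minimal sketch of the comment's own figures, assuming £56/hour is the GBP equivalent of the ~$75/hour valuation; the comparison against the £500,000 in grants is my addition, using the grant total mentioned earlier in the thread.

```python
# Back-of-the-envelope opportunity cost of CEA staff time on EA Grants.
staff_hours = 840          # hours of CEA staff time, per the comment
value_per_hour_gbp = 56    # ~£56/hour, i.e. ~$75 at the implied exchange rate

opportunity_cost_gbp = staff_hours * value_per_hour_gbp
print(opportunity_cost_gbp)  # 47040

# Compare with the £500,000 granted: staff time is under a tenth of the total.
print(round(opportunity_cost_gbp / 500_000, 3))  # 0.094
```

On these numbers, the time cost is small relative to the money granted, which is the point being made.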

This is a description of the counterfactual; not an evaluation. Would you like to venture a guess as to whether spending time on EA grants was better than this counterfactual? :)

Really tricky for me to say, especially because I have an incentive to think this was the right choice. That being said, it does seem right to me, primarily because of the haste consideration:

As I noted elsewhere in the piece, "about one quarter of the projects wouldn’t have happened at all, and the rest would have received less time." This makes the immediate multipliers pretty high. We spent about 0.42 years of CEA staff time and gained (really rough guess) 10 years of counterfactual EA time. Since a lot of people we sponsored are doing movement-building work in some form, I expect their activities to have multipliers, too.
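As a hypothetical illustration of the multiplier being described: 840 staff hours is about 0.42 staff-years if one assumes a ~2,000-hour working year (my assumption, not stated in the comment), and the 10 years of counterfactual EA time is the comment's "really rough guess".

```python
# Rough multiplier implied by the comment's figures.
staff_hours = 840
hours_per_year = 2000                 # assumed full-time working year
staff_years_spent = staff_hours / hours_per_year   # ~0.42 staff-years

counterfactual_ea_years_gained = 10   # "really rough guess" from the comment

multiplier = counterfactual_ea_years_gained / staff_years_spent
print(round(multiplier, 1))  # 23.8
```

So on these inputs the immediate return is roughly 24 years of EA time per staff-year spent, before counting any further movement-building multipliers.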

The counterfactual activities are higher-risk, but aim at long-run value similar to what we expect the recipients to produce. (e.g. Theron Pummer is writing introductory EA content and trying to engage academics.)

Comment author: ThomasSittler 30 September 2017 09:00:14PM *  3 points [-]

Internally we value the average CEA staff hour at ~$75

This is $75 roughly in "EA money" (i.e. OpenPhil's last dollar), yes? It's significantly lower than I thought. However, I suspect that this intuition was biased (upward), because I more often think in terms of "non-EA money". In non-EA money, CEA time would have a much higher nominal value. But if you think EA money can be used to buy good outcomes very cost-effectively (even at the margin) then $75 could make sense.

Comment author: ThomasSittler 30 September 2017 09:34:52AM *  3 points [-]

Thanks for this post, Roxanne! I found the grants spreadsheet especially interesting.

My guess would be that the main cost of EA grants was CEA staff time, not the £500,000. Would you agree? And what do you think the ratio is, very roughly?

When it comes to the CEA staff time counterfactual, you write:

Had CEA staff not worked on this program, we would have accelerated progress on writing collated EA content, built out the EA events infrastructure, and worked on plans for EA academic engagement.

This is a description of the counterfactual; not an evaluation. Would you like to venture a guess as to whether spending time on EA grants was better than this counterfactual? :)

Comment author: ThomasSittler 15 September 2017 08:36:42AM 1 point [-]

the model is to be read as if the initiative polls well

Have you thought about some cheap ways to get more information on how this is likely to poll (even poor quality info) ?

Comment author: ThomasSittler 15 September 2017 08:34:52AM 0 points [-]

The well-being improvement estimates seem to come from small pilot studies with no control group, showing very large impacts. I don't have enough background to guess how large these impacts are relative to other known treatments or placebo. The smoking impacts come from Johnson et al. 2017 (N = 15), the depression impacts come from Carhart-Harris et al. 2016 (N = 12).

In response to Open Thread #38
Comment author: Denkenberger 22 August 2017 10:20:51PM 3 points [-]

I have found it helpful, in talking about donating large percentages of salary, to be able to point out how many people make similar amounts of sacrifice. One comparison that has been made is with being vegetarian. But this is hard to compare, and still only a few percent of people. More common is people taking a 10% pay cut for the positive impact of their job, or donating 10% of their free time (which I am saying is roughly 40 hours per week if one has a full-time job (comments here)). I tried to get some rough estimates of the rates of these behaviors, but has anyone else done it more rigorously, or would anyone like to?

Comment author: ThomasSittler 24 August 2017 09:31:26PM *  1 point [-]
Comment author: Peter_Hurford  (EA Profile) 03 July 2017 02:41:30AM 7 points [-]

Can you speak to any plans on the horizon other than changing the domain name, even if long term?

Comment author: ThomasSittler 22 August 2017 07:20:56PM *  1 point [-]

From CEA's funding prospectus:

We have big plans over the next year, including the following:

Launch a chapter portal with resources for local groups (90% confident this will happen)

Launch a community portal with a layered discussion forum and aggregated effective altruism-relevant content from the web (70%)

Publish a series of long-form essays that methodically explain the foundations of effective altruism (50%)

Comment author: rhys_lindmark 18 August 2017 04:38:31PM 2 points [-]

I definitely agree that information on these topics is ripe for aggregation/curation.

My instinct is to look to the VC/startup community for some insight here, specifically around uncertainty (they're in the business of "predicting/quantifying/derisking uncertain futures/projects"). Two quick examples:

I would expect an "EA-focused uncertainty model" to include gates that map a specific project through time given models of macro future trends.

Comment author: ThomasSittler 18 August 2017 08:50:51PM 2 points [-]

Could you give specific examples of how ideas from VCs or startups could contribute a novel insight to EA prioritisation? Your links weren't very helpful on their own.

Comment author: ThomasSittler 17 August 2017 08:09:05PM *  3 points [-]

You may be interested in Owen's talk Prospecting for Gold. Among other things it has a mathematical formalisation of the ITN framework.
