In response to EA Funds Beta Launch
Comment author: Brian_Tomasik 02 March 2017 03:29:03AM *  13 points [-]

Open Phil currently tries to set an upper limit on the proportion of an organization’s budget they will provide, in order to avoid dependence on a single funder. In the case where EA Funds generates recurring donations from a large number of donors, Fund Managers may be able to fully fund an organization already identified, saving the organization from spending additional time raising funds from many small donors individually.

It seems like in practice, donations from EA Funds are extremely correlated with OPP's own donations. That is, if OPP decided to stop funding a charity, presumably the EA Funds fund would also stop donating, because the charity no longer looks sufficiently promising. So the risk involved in depending on getting fully funded by OPP + EA Funds is seemingly about as high as the risk of depending on getting fully funded by just OPP. In this case, either fully funding a charity isn't a good thing, or OPP should already be doing it.

This comment isn't very important -- just an observation about argument 1.3.

Comment author: Robert_Wiblin 02 March 2017 07:03:59PM 13 points [-]

I love EA Funds, but my main concern is that as a community we are getting closer and closer to a single point of failure. If OPP reaches the wrong conclusion about something, there are now fewer independent donors forming their own views to correct it. This was already true because of how much people used the views of OPP and its staff to guide their own decisions.

We need some diversity (or outright randomness) in funding decisions for robustness.

Comment author: SoerenMind 13 February 2017 02:39:10PM 3 points [-]

If the funding for a problem with known total funding needs (e.g. creating drug x which costs $1b) goes up 10x, its solvability will go up 10x too - how do you resolve that this will make problems with low funding look very intractable? I guess the high neglectedness makes up for it. But this definition of solvability doesn't quite capture my intuition.

Comment author: Robert_Wiblin 28 February 2017 10:22:57PM 0 points [-]

Don't the shifts in solvability and neglectedness perfectly offset one another in such a case? Can you write out the case you're considering in more detail?
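For what it's worth, under the simple model where the problem is solved linearly in money up to a known total need, the offset is exact. A toy sketch (my own formalisation for illustration, not the official 80,000 Hours definitions — the function names and figures are made up):

```python
# Toy model: assume the problem is solved linearly in money up to a known
# total need T, so the fraction solved at funding level R is R / T.

T = 1_000_000_000  # total funding needed, e.g. $1b to create drug x

def solvability(R):
    # % of the problem solved per doubling of resources:
    # doubling R solves an extra R / T of the problem
    return R / T

def neglectedness(R):
    # % increase in resources per extra dollar, proportional to 1 / R
    return 1 / R

# Funding rises 10x: solvability rises 10x, neglectedness falls 10x,
# and their product stays fixed at 1 / T (up to float rounding).
before = solvability(10_000_000) * neglectedness(10_000_000)
after = solvability(100_000_000) * neglectedness(100_000_000)
print(before, after)
```

Under this linear assumption the product is 1 / T regardless of the current funding level, which is why a 10x funding rise leaves solvability × neglectedness unchanged; the intuition only comes apart if returns to money are non-linear.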

Comment author: mhpage 24 February 2017 10:59:22AM 13 points [-]

and I wonder how the next generation of highly informed, engaged critics (alluded to above) is supposed to develop if all substantive conversations are happening offline.

This is my concern (which is not to say it's Open Phil's responsibility to solve it).

Comment author: Robert_Wiblin 25 February 2017 06:37:47AM 1 point [-]

and I wonder how the next generation of highly informed, engaged critics (alluded to above) is supposed to develop if all substantive conversations are happening offline.

Comment author: RomeoStevens 23 February 2017 10:25:12PM *  10 points [-]

I'm skeptical. The trajectory you describe is common among a broad class of people as they age, grow in optimization power, and consider sharp course corrections less. They report a variety of stories about why this is so, which makes me skeptical of any particular story being causal.

To be clear, I also recognize the high cost of public discourse. But some of those costs are not necessary, borne only because EAs are pathologically scrupulous. As a result, letting people shit-talk various things without response causes more worry than is warranted. Naysayers are an unavoidable part of becoming a large optimization process.

There was a thread on Marginal Revolution many years ago about why more economists don't do the blogging thing given that it seems to have resulted in outsize influence for GMU. Cowen said his impression was that many economists tried, quickly 'made fools of themselves' in some minor way, and stopped. Being wrong publicly is very very difficult. And increasingly difficult the more Ra energy one has acquired.

So, three claims.

  • Outside view says we should be skeptical of our stories about why we do things, even after we try to correct for this.
  • Inability to only selectively engage with criticism will lead to other problems/coping strategies that might be harmful.
  • Carefully shepherding the optimization power one has already acquired is a recipe for slow calcification along hard-to-detect dimensions. The principles section is an outline of a potential future straitjacket.

Comment author: Robert_Wiblin 23 February 2017 11:16:43PM *  9 points [-]

I don't find the view that publishing a lot of internal thinking for public consumption and feedback is a poor use of time to be implausible on its face. Here are some reasons:

  1. By the time you know enough to write really useful things, your opportunity cost is high (more and better grants, coaching staff internally, etc).
  2. Thoughtful and informative content tends to get very little traffic anyway because it doesn't generate controversy. Most traffic will go to your most dubious work, thereby wasting your time, other people's time and spreading misinformation. I've benefitted greatly from GiveWell/OpenPhil investing in public communication (including this blog post for example) but I think I'm in a small minority that arguably shouldn't be their main focus given the amount of money they have available for granting. If there are a few relevant decision-makers who would benefit from a piece of information, you can just quickly email it to them and they'll understand it without you having to explain things in great detail.
  3. The people with expertise who provide the most useful feedback will email you or meet you eventually anyway - and often end up being hired. I'd say 80% of the usefulness of feedback/learning I've received has come from 5% of providers, who can be identified as the most informed critics pretty quickly.
  4. 'Transparency' and 'engaging with negative public feedback' are applause lights in egalitarian species and societies, like 'public parks', 'community' and 'families'. No one wants to argue against these things, so people who aren't in senior positions remain unaware of their legitimate downsides. And many people enjoy tearing down those they believe to be powerful and successful for the sake of enforced egalitarianism, rather than positive outcomes per se.
  5. The personal desire for attention, and to be adulated as smart and insightful, already pushes people towards public engagement even when it's an inferior use of time.

This isn't to say that, overall, people share too much of the industry expertise they have - there are plenty of forces in the opposite direction - but I don't come with a strong presupposition that they share far too little either.


In some cases, if a problem is harder, humanity should invest more in it, but you should be less inclined to work on it

One criterion in the 80,000 Hours problem framework is 'solvability'. All else equal, it is more effective to dedicate yourself to a problem if a larger percentage of the problem will be solved by each additional person working on it. So far so good. However, this can lead to something counterintuitive. Here is an... Read More
Comment author: Robert_Wiblin 17 February 2017 12:12:10AM *  6 points [-]

I broadly agree with this and am often pleased to see people go into party politics, government bureaucracies or advocacy on particular policy areas. The skills and connections they gain will hopefully be useful in the long term.

The interesting questions remaining to me here are: i) how much leverage do you get through political engagement vs direct work, aiming to include in your sample people who try and fail; ii) how worrying is it to find yourself working on a controversial issue, both because you'll have to fight against opponents and because you might be on the wrong side. Tough questions to answer!

Comment author: Robert_Wiblin 15 February 2017 08:57:05PM 7 points [-]

Always pleased to see people collating information like this!

Comment author: Robert_Wiblin 08 February 2017 05:47:15AM 3 points [-]

Looks spot on to me, nice work folks! :)

Comment author: Ben_Todd 06 February 2017 03:27:59PM 3 points [-]

We've considered wrapping it into the problem framework in the past, but it can easily get confusing. Informativeness is also more a feature of how you go about working on the cause than of which cause you're focused on.

The current way we show that we think VOI is important is by listing Global Priorities Research as a top area (though I agree that doesn't quite capture it). I also talk about it often when discussing how to coordinate with the EA community (VOI is a bigger factor when considering the community perspective than individual perspective).

Comment author: Robert_Wiblin 06 February 2017 06:49:25PM *  4 points [-]

The 'Neglectedness' criterion already gets you a pretty big tilt in favour of working on underexplored problems. But value of information is an important factor in choosing what project to work on within a problem area.

Comment author: Gregory_Lewis 29 January 2017 08:12:34PM *  8 points [-]

I don't see the merit of upbraiding 80k for aggregating various sources of 'EA philanthropic advice' because one element of this relies on political views one may disagree with. Not including Cockburn's recommendations whilst including all the other OpenPhil staffers' also implies political views others would find disagreeable. It's also fairly clear from the introduction that the post (at least for non-animal charities) was canvassing all relevant recommendations rather than editorializing.

That said, it is perhaps unwise to translate 'advice from OpenPhil staffers' into 'EA recommendations'. OpenPhil is clear about how it licenses itself to try and 'pick hits', which may involve presuming or taking a bet on a particular hot-button political topic (e.g. immigration, criminal justice, abortion), being willing to take a particular empirical bet in the face of divided expertise, and so forth. For these reasons OpenPhil is not a 'GiveWell for everything else', and its staffers' recommendations, although valuable for them to share and for 80k to publicise, should carry the health warning that they are often conditional on quite large and non-resilient conjunctions of complicated convictions - which may not represent 'expert consensus' on these issues.

Comment author: Robert_Wiblin 29 January 2017 10:22:36PM 2 points [-]

Note that we say when describing this source at the beginning of the post that:

"[We refer to] Open Philanthropy Project’s suggestions for individual donors. ... Though note that “These are reasonably strong options in causes of interest, and shouldn’t be taken as outright recommendations.”"

We then consistently refer to these throughout the post as 'suggestions' only, rather than as 'recommendations', the term used for the other sources.
