Comment author: Ben_Todd 30 December 2016 12:07:50AM 2 points [-]

Hi Vipul,

I was planning to write up the results, but haven't been able to fit it in yet. Most of the information is confidential, so it needs some care.

Comment author: vipulnaik 18 September 2017 01:31:05AM 2 points [-]

I'm following up regarding this :).

Comment author: Zeke_Sherman 06 September 2017 02:36:39PM *  1 point [-]

The effective altruism subreddit is growing in traffic: https://i.imgur.com/3BSLlgC.png (August figures are 2.5k and 9.5k)

Pageviews of the EA Wikipedia page, by contrast, are not changing much: https://tools.wmflabs.org/pageviews/?project=en.wikipedia.org&platform=all-access&agent=user&start=2015-07&end=2017-08&pages=Effective_altruism
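
For anyone who wants to reproduce the Wikipedia numbers without the web tool, here is a minimal sketch that pulls the same monthly pageview counts from the Wikimedia Pageviews REST API (the same data source the pageviews tool above queries; the User-Agent string below is just a placeholder):

```python
import requests

# Monthly pageviews for the "Effective altruism" article, matching the
# parameters in the link above: en.wikipedia, all access methods, human
# ("user") traffic only, July 2015 through August 2017.
URL = (
    "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
    "en.wikipedia/all-access/user/Effective_altruism/monthly/"
    "2015070100/2017083100"
)

resp = requests.get(URL, headers={"User-Agent": "pageviews-check/0.1 (example)"})
resp.raise_for_status()

for item in resp.json()["items"]:
    # "timestamp" is formatted YYYYMMDDHH; "views" is the total for that month.
    print(item["timestamp"][:6], item["views"])
```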

Comment author: vipulnaik 06 September 2017 06:50:01PM 0 points [-]

The subreddit stats used to be public (or rather, moderators could choose to make them public) but that option was removed by Reddit a few months ago.

https://www.reddit.com/r/ModSupport/comments/6atvgi/upcoming_changes_view_counts_users_here_now_and/

I discussed Reddit stats a little bit in this article: https://www.wikihow.com/Understand-Your-Website-Traffic-Variation-with-Time

Comment author: vipulnaik 06 September 2017 06:47:16PM 0 points [-]

I have been using PredictionBook for recording predictions related to GiveWell money moved; see http://effective-altruism.com/ea/xn/givewell_money_moved_in_2015_a_review_of_my/#predictions-for-2016 for links to the predictions. Unfortunately, searching on PredictionBook itself does not turn up all the predictions, because its search relies on Google, which does not index all the pages (or at least does not surface them in search results).

Comment author: vipulnaik 04 July 2017 07:00:00AM 3 points [-]

Do you foresee any changes being made to the moderation guidelines on the forum? Now that CEA's brand name is associated with it, do you think that could mean forbidding the posting of content that is deemed "not helpful" to the movement, similar to what we see on the Effective Altruists Facebook group?

If there are no anticipated changes to the moderation guidelines, how do you anticipate CEA navigating reputational risks from controversial content posted to the forum?

Comment author: vipulnaik 23 April 2017 12:21:13AM *  5 points [-]

Thanks again for writing about the situation of the EA Funds, and thanks also to the managers of the individual funds for sharing their allocations and the thoughts behind them. In light of the new information, I want to raise some concerns regarding the Global Health and Development fund.

My main concern about this fund is that it's not really a "Global Health and Development" fund -- it's much more GiveWell-centric than global health- and development-centric. The decision to allocate all fund money to GiveWell's top charity reinforces some of my concerns, but the GiveWell-centrism is actually clear from the fund description itself.

From the description, it seems to be serving largely as a backup to GiveWell Incubation Grants (in cases where e.g. Good Ventures chooses not to fund the full amount) and as additional funding for GiveWell top charities.

This fund will support charities that the fund manager believes may be better in expectation than those recommended by GiveWell, a charity evaluator focused on outstandingly effective giving opportunities. For example, by pooling the funds of many individual donors, the fund could support new, but very promising global health charities in getting off the ground (e.g. Charity Science Health or No Lean Season). These organizations may not be able to meet GiveWell’s rigorous evaluation criteria at the moment, but may be able to meet the criteria in the future. If no such options are available, the fund will likely donate to GiveWell for granting. This means we think there is a strong likelihood that the fund will be at least as good as donating in accordance with GiveWell’s recommendations, but could be better in expectation.

Both of the cited examples are recipients of GiveWell Incubation Grants and are in the pipeline for evaluation by GiveWell for top charity status. Even setting aside the actual grantees, the value of the fund, according to the fund manager, is framed in terms of its value to GiveWell (emphasis mine):

Nonetheless, donating to this fund is valuable because it helps demonstrate to GiveWell that there is donor demand for higher-risk, higher-reward global health and development giving opportunities.

The GiveWell-centric nature of the fund is fine, except that the fund's name suggests that it is a general fund for global health and development, not one affiliated with any particular institution.

Even beyond the GiveWell-as-an-organization-centered nature of the fund, there is a sense in which the fund reinforces the association of global health and development with quantifiable-and-low-risk, linear, easy buys. That association makes sense in the context of GiveWell (whose job it is to recommend linear-ish buys) but seems out of place to me here. Again quoting from the page about the fund:

Interventions in global health and development are generally tractable and have strong evidence to support them.

There are two distinct senses in which the statement could be interpreted:

  • There is large enough room for more funding for interventions in global health that have a strong evidence base, so that donors who want to stick to things with a strong evidence base won't run out of stuff to buy (i.e., lots of low-hanging fruit)
  • There's not much scope in global health for high-risk but high-expected value investments, because any good buy in global health would have a strong evidence base

I'd agree with the first interpretation, but the second interpretation seems quite false (a look at the Gates Foundation's portfolio shows a fair number of risky, nonlinear efforts, including new vaccine development, storage and surveillance technology breakthroughs, breakthroughs in toilet technology, etc.). The framing of the sentence, however, most naturally suggests the second interpretation, and moreover may lead the reader to a careless conflation of the two. It seems to me like there's a lot of conflation in the EA community (and penumbra) between "global health and development" and "GiveWell current and potential top charities", and the setup of this EA Fund largely reflects that. So in that sense, my criticism isn't just of the fund but of what seems to me an implicit conflation.

Similar issues exist with two of the other funds: the animal welfare fund and the far future fund, but I think they are less concerning there. With "animal welfare" and "far future", the way the terms are used in EA Funds and in the EA community is different from the picture they'll conjure in the minds of people in general. But as far as I know, there isn't as much of an established, cohesive existing infrastructure of organizations, funding sources, etc. that is at odds with the EA community.* Whereas with global health and development, you have things like the WHO, the Gates Foundation, the Global Fund, and even an associated academic discipline, so the appropriation of the term for a fund that's somewhat of a GiveWell satellite seems jarring to me.

Some longer-term approaches that I think might help (obviously these aren't changes you can make quickly):

(a) Rename the funds so that the names capture more specifically the sort of things the funds are doing. For example, if a fund is only being used for last-mile delivery of interventions (as opposed to, say, vaccine development), that can be specified in the fund name.

(b) Possibly have multiple funds within the same domain (e.g., global health & development) that capture different kinds of use cases (intervention delivery versus biomedical research), with fund managers who have relevant experience in those domains. For example, it's possible that somebody with experience at the Gates Foundation, the Global Fund, the WHO, IHME, etc. could do fund allocation better for some use cases in some domains of global health and development.

Anyway, these are my thoughts. I'm not a contributor (or potential contributor, in the near term) to the funds, so take this with the appropriate amount of salt.

*It could be that if I had deeper knowledge of mainstream animal welfare and animal rights, or of mainstream far future stuff (like climate change) then I would find these jarring as well.

Comment author: vipulnaik 22 April 2017 03:53:10PM 4 points [-]

I appreciate the information being posted here, in this blog post, along with all the surrounding context. However, I don't see the information on these grants on the actual EA Funds website. Do you plan to maintain a grants database on the EA Funds website, and/or list all the grants made from each fund on the fund page (or linked from it)? That way, anybody can check in at any time to see how much money has been raised, and how much has been allocated and where.

The Open Philanthropy Project grants database might be a good model, though your needs may differ somewhat.
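
To make the suggestion concrete, here is a minimal sketch of what one record in such a grants database might look like; the field names, amounts, grantee, and URL are purely illustrative assumptions, not drawn from EA Funds or the Open Philanthropy Project:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Grant:
    """One row of a hypothetical public grants database for a fund."""
    fund: str          # e.g. "Global Health and Development"
    recipient: str     # grantee organization
    amount_usd: float  # grant size in US dollars
    date_awarded: date
    writeup_url: str   # link to the reasoning behind the grant

# Illustrative entry only -- not a real grant.
grants = [
    Grant("Global Health and Development", "Example Charity",
          50_000.0, date(2017, 4, 1), "https://example.org/writeup"),
]

# With records like these, totals raised vs. allocated can be shown at any time.
allocated = sum(g.amount_usd for g in grants)
print(f"Total allocated so far: ${allocated:,.0f}")
```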

Comment author: kbog  (EA Profile) 03 January 2017 04:09:37AM *  1 point [-]

.

Comment author: vipulnaik 16 April 2017 08:44:04PM 1 point [-]

Comment author: vipulnaik 17 March 2017 03:33:58PM 2 points [-]

Commenting here to avoid a misconception that some readers of this post might have. I wasn't trying to "spread effective altruism" to any community with these editing efforts, least of all the Wikipedia community (it's also worth noting that the Wikipedia community that participates in these debates is basically disjoint from the people who actually read those specific pages in practice -- many of the latter don't even have Wikipedia accounts).

Some of the editing activities were related to effective altruism in these two ways: (1) The pages we edited, and the content we added, were disproportionately (though not exclusively) of interest to people in and around the EA-sphere, and (2) I selected some of the topics we worked on based on EA-aligned interests (an example would be global health and disease timelines).

Comment author: John_Maxwell_IV 01 March 2017 11:52:33PM *  17 points [-]

Interesting post.

I wonder if it'd be useful to make a distinction between the "relatively small number of highly engaged, highly informed people" and "insiders".

I could easily imagine this causal chain:

  1. Making your work open acts as an advertisement for your organization.

  2. Some of the people who see the advertisement become highly engaged & highly informed about your work.

  3. Some of the highly engaged & informed people form relationships with you beyond public discourse, making them "insiders".

If this story is true, public discourse represents a critical first step in a pipeline that ends with the creation of new insiders.

I think this story is quite plausibly true. I'm not sure the EA movement would have ever come about without the existence of GiveWell. GiveWell's publicly available research regarding where to give was a critical part of the story that sold people on the idea of effective altruism. And it seems like the growth of the EA movement led to growth in the number of insiders, whose opinions you say you value.

I can easily imagine a parallel universe "Closed Philanthropy Project" with the exact same giving philosophy, but no EA movement that grew up around it due to a lack of publicly available info about its grants. In fact, I wouldn't be surprised if many foundations already had giving philosophies very much like OpenPhil's, but we don't hear about them because they don't make their research public.

I didn't quite realize when I signed up just what it meant to read ten thousand applications a year. It's also the most important thing that we do because one of Y Combinator's great innovations was that we were one of the first investors to truly equalize the playing field to all companies across the world. Traditionally, venture investors only really consider companies who come through a personal referral. They might have an email address on their site where you can send a business plan to, but in general they don't take those very seriously. There's usually some associate who reads the business plans that come in over the transom. Whereas at Y Combinator, we said explicitly, "We don't really care who you know or if you don't know anyone. We're just going to read every application that comes in and treat them all equally."

Source. Similarly, Robin Hanson thinks that a big advantage academics have over independent scholars is the use of open competitions rather than personal connections in choosing people to work with.

So, a power law distribution in commenter usefulness isn't sufficient to show that openness lacks benefits.

As an aside, I hadn't previously gotten a strong impression that OpenPhil's openness was for the purpose of gathering feedback on your thinking. GiveWell was open with its research for the purpose of advising people where to donate. I guess now that you are partnering with Good Ventures, that is no longer a big goal. But if the purpose of your openness has changed from advising others to gathering advice yourself, this could probably be made more explicit.

For example, I can imagine OpenPhil publishing a list of research questions on its website for people in the EA community to spend time thinking & writing about. Or highlighting feedback that was especially useful, to reinforce the behavior of leaving feedback/give examples of the kind of feedback you want more of. Or something as simple as a little message at the bottom of every blog post saying you welcome high quality feedback and you continue to monitor for comments long after the blog post is published (if that is indeed true).

Maybe the reason you are mainly gathering feedback from insiders is simply that only insiders know enough about you to realize that you want feedback. I think it's plausible that the average EA puts commenting on OpenPhil blog posts in the "time wasted on the internet" category, and it might not require a ton of effort to change that.

To relate back to the Y Combinator analogy, I would expect that Y Combinator gets many more high-quality applications through the form on its website than the average VC firm does, and this is because more people think that putting their info into the form on Y Combinator's website is a good use of time. It would not be correct for a VC firm to look at the low quality of the applications they were getting through the form on their website and infer that a startup funding model based on an online form is surely unviable.

More broadly speaking, this seems similar to just working to improve the state of online effective altruism discussion in general, which maybe isn't a problem that OpenPhil feels well-positioned to tackle. But I do suspect there is relatively low-hanging fruit here.

Comment author: vipulnaik 02 March 2017 12:17:15AM *  1 point [-]

Great points! (An upvote wasn't enough appreciation, hence the comment as well).

Comment author: DonyChristie 24 February 2017 08:04:23PM -1 points [-]

Here's my submission. :)

Comment author: vipulnaik 26 February 2017 02:10:09AM 0 points [-]

Hi Dony,

The submission doesn't qualify as serious, and it was also past the deadline, so we won't be considering it.
