Comment author: vipulnaik 04 July 2017 07:00:00AM 3 points

Do you foresee any changes being made to the moderation guidelines on the forum? Now that CEA's brand name is associated with it, do you think that could mean forbidding the posting of content that is deemed "not helpful" to the movement, similar to what we see on the Effective Altruists Facebook group?

If there are no anticipated changes to the moderation guidelines, how do you anticipate CEA navigating reputational risks from controversial content posted to the forum?

Comment author: vipulnaik 23 April 2017 12:21:13AM 5 points

Thanks again for writing about the situation of the EA Funds, and thanks also to the managers of the individual funds for sharing their allocations and the thoughts behind it. In light of the new information, I want to raise some concerns regarding the Global Health and Development fund.

My main concern about this fund is that it's not really a "Global Health and Development" fund -- it's much more GiveWell-centric than global health- and development-centric. The decision to allocate all fund money to GiveWell's top charity reinforces some of my concerns, but this is actually clear from the fund description itself.

From the description, it seems to be serving largely as a backup to GiveWell Incubation Grants (in cases where e.g. Good Ventures chooses not to fund the full amount) and as additional funding for GiveWell top charities.

This fund will support charities that the fund manager believes may be better in expectation than those recommended by GiveWell, a charity evaluator focused on outstandingly effective giving opportunities. For example, by pooling the funds of many individual donors, the fund could support new, but very promising global health charities in getting off the ground (e.g. Charity Science Health or No Lean Season). These organizations may not be able to meet GiveWell’s rigorous evaluation criteria at the moment, but may be able to meet the criteria in the future. If no such options are available, the fund will likely donate to GiveWell for granting. This means we think there is a strong likelihood that the fund will be at least as good as donating in accordance with GiveWell’s recommendations, but could be better in expectation.

Both the cited examples are recipients of GiveWell Incubation Grants, and in the pipeline for evaluation by GiveWell for top charity status. Even setting aside actual grantees, the value of the fund, according to the fund manager, is framed in terms of its value to GiveWell (emphasis mine):

Nonetheless, donating to this fund is valuable because it helps demonstrate to GiveWell that there is donor demand for higher-risk, higher-reward global health and development giving opportunities.

The GiveWell-centric nature of the fund would be fine, except that the fund's name suggests a general fund for global health and development, not one affiliated with any particular institution.

Even beyond the GiveWell-as-an-organization-centered nature of the fund, there is a sense in which the fund reinforces the association of global health and development with quantifiable-and-low-risk, linear, easy buys. That association makes sense in the context of GiveWell (whose job it is to recommend linear-ish buys) but seems out of place to me here. Again quoting from the page about the fund:

Interventions in global health and development are generally tractable and have strong evidence to support them.

There are two distinct senses in which the statement could be interpreted:

  • There is large enough room for more funding for interventions in global health that have a strong evidence base, so that donors who want to stick to things with a strong evidence base won't run out of stuff to buy (i.e., lots of low-hanging fruit)
  • There's not much scope in global health for high-risk but high-expected value investments, because any good buy in global health would have a strong evidence base

I'd agree with the first interpretation, but the second interpretation seems quite false (looking at the Gates Foundation's portfolio shows a fair amount of risky, nonlinear efforts including new vaccine development, storage and surveillance technology breakthroughs, breakthroughs in toilet technology, etc.). The framing of the sentence, however, most naturally suggests the second interpretation, and moreover, may lead the reader to a careless conflation of the two. It seems to me like there's a lot of conflation in the EA community (and penumbra) between "global health and development" and "GiveWell current and potential top charities", and the setup of this EA Fund largely reflects that. So in that sense, my criticism isn't just of the fund but of what seems to me an implicit conflation.

Similar issues exist with two of the other funds, the animal welfare fund and the far future fund, but I find them less concerning there. With "animal welfare" and "far future", the way the terms are used in EA Funds and in the EA community differs from the picture they'll conjure in the minds of people in general, but as far as I know, there isn't an established, cohesive existing infrastructure of organizations, funding sources, etc. that is at odds with the EA community's usage.* Whereas with global health and development, you have things like the WHO, the Gates Foundation, the Global Fund, and even an associated academic discipline, so the appropriation of the term for a fund that's somewhat of a GiveWell satellite seems jarring to me.

Some longer-term approaches that I think might help (obviously these aren't changes you can make quickly):

(a) Rename funds so that the names capture more specifically the sort of things the funds are doing. E.g., if a fund is only being used for last-mile delivery of interventions (as opposed to, say, vaccine development), that can be specified in the fund name.

(b) Possibly have multiple funds within the same domain (e.g., global health & development) that capture different kinds of use cases (intervention delivery versus biomedical research) and have fund managers with relevant experience in those domains. E.g., it's possible that somebody with experience at the Gates Foundation, Global Fund, WHO, IHME, etc. could allocate funds better in some domains of global health and development for some use cases.

Anyway, these are my thoughts. I'm not a contributor (or potential contributor, in the near term) to the funds, so take with appropriate amount of salt.

*It could be that if I had deeper knowledge of mainstream animal welfare and animal rights, or of mainstream far future stuff (like climate change) then I would find these jarring as well.

Comment author: vipulnaik 22 April 2017 03:53:10PM 4 points

I appreciate the information being posted here, in this blog post, along with all the surrounding context. However, I don't see the information on these grants on the actual EA Funds website. Do you plan to maintain a grants database on the EA Funds website, and/or list all the grants made from each fund on the fund page (or linked to from it)? That way anybody can check in at any time to see how much money has been raised, and how much has been allocated and where.

The Open Philanthropy Project grants database might be a good model, though your needs may differ somewhat.

Comment author: kbog 03 January 2017 04:09:37AM 1 point

Here are stats for the EA subreddit.

In March and April the group was created and advertised/linked from elsewhere.

Comment author: vipulnaik 16 April 2017 08:44:04PM 1 point
Comment author: vipulnaik 17 March 2017 03:33:58PM 2 points

Commenting here to avoid a misconception that some readers of this post might have. I wasn't trying to "spread effective altruism" to any community with these editing efforts, least of all the Wikipedia community (it's also worth noting that the Wikipedia community that participates in these debates is basically disjoint from the people who actually read those specific pages in practice -- many of the latter don't even have Wikipedia accounts).

Some of the editing activities were related to effective altruism in two ways: (1) the pages we edited, and the content we added, were disproportionately (though not exclusively) of interest to people in and around the EA-sphere, and (2) I selected some of the topics based on EA-aligned interests (an example would be global health and disease timelines).

Comment author: John_Maxwell_IV 01 March 2017 11:52:33PM 17 points

Interesting post.

I wonder if it'd be useful to make a distinction between the "relatively small number of highly engaged, highly informed people" vs "insiders".

I could easily imagine this causal chain:

  1. Making your work open acts as an advertisement for your organization.

  2. Some of the people who see the advertisement become highly engaged & highly informed about your work.

  3. Some of the highly engaged & informed people form relationships with you beyond public discourse, making them "insiders".

If this story is true, public discourse represents a critical first step in a pipeline that ends with the creation of new insiders.

I think this story is quite plausibly true. I'm not sure the EA movement would ever have come about without the existence of GiveWell. GiveWell's publicly available research on where to give was a critical part of the story that sold people on the idea of effective altruism. And it seems like the growth of the EA movement led to growth in the number of insiders, whose opinions you say you value.

I can easily imagine a parallel universe "Closed Philanthropy Project" with the exact same giving philosophy, but no EA movement that grew up around it due to a lack of publicly available info about its grants. In fact, I wouldn't be surprised if many foundations already had giving philosophies very much like OpenPhil's, but we don't hear about them because they don't make their research public.

I didn't quite realize when I signed up just what it meant to read ten thousand applications a year. It's also the most important thing that we do because one of Y Combinator's great innovations was that we were one of the first investors to truly equalize the playing field to all companies across the world. Traditionally, venture investors only really consider companies who come through a personal referral. They might have an email address on their site where you can send a business plan to, but in general they don't take those very seriously. There's usually some associate who reads the business plans that come in over the transom. Whereas at Y Combinator, we said explicitly, "We don't really care who you know or if you don't know anyone. We're just going to read every application that comes in and treat them all equally."

Source. Similarly, Robin Hanson thinks that a big advantage academics have over independent scholars is the use of open competitions rather than personal connections in choosing people to work with.

So, a power law distribution in commenter usefulness isn't sufficient to show that openness lacks benefits.

As an aside, I hadn't previously gotten a strong impression that OpenPhil's openness was for the purpose of gathering feedback on your thinking. GiveWell was open with its research for the purpose of advising people where to donate. I guess now that you are partnering with Good Ventures, that is no longer a big goal. But if the purpose of your openness has changed from advising others to gathering advice yourself, this could probably be made more explicit.

For example, I can imagine OpenPhil publishing a list of research questions on its website for people in the EA community to spend time thinking & writing about. Or highlighting feedback that was especially useful, to reinforce the behavior of leaving feedback/give examples of the kind of feedback you want more of. Or something as simple as a little message at the bottom of every blog post saying you welcome high quality feedback and you continue to monitor for comments long after the blog post is published (if that is indeed true).

Maybe the reason you are mainly gathering feedback from insiders is simply that only insiders know enough about you to realize that you want feedback. I think it's plausible that the average EA puts commenting on OpenPhil blog posts in the "time wasted on the internet" category, and it might not require a ton of effort to change that.

To relate back to the Y Combinator analogy, I would expect that Y Combinator gets many more high-quality applications through the form on its website than the average VC firm does, and this is because more people think that putting their info into the form on Y Combinator's website is a good use of time. It would not be correct for a VC firm to look at the low quality of the applications they were getting through the form on their website and infer that a startup funding model based on an online form is surely unviable.

More broadly speaking this seems similar to just working to improve the state of online effective altruism discussion in general, which maybe isn't a problem that OpenPhil feels well-positioned to tackle. But I do suspect there is relatively low-hanging fruit here.

Comment author: vipulnaik 02 March 2017 12:17:15AM 1 point

Great points! (An upvote wasn't enough appreciation, hence the comment as well).

Comment author: DonyChristie 24 February 2017 08:04:23PM -1 points

Here's my submission. :)

Comment author: vipulnaik 26 February 2017 02:10:09AM 0 points

Hi Dony,

The submission doesn't qualify as serious, and it was past the deadline, so we won't be considering it.

Comment author: vipulnaik 24 February 2017 09:17:06PM 13 points

(1) Frustrating vagueness and seas of generality: This post, as well as many other posts you have recently written, struck me as fairly vague. Even the posts where you were trying to be concrete were really hard for me to parse, and it was difficult to get a grip on your precise arguments.

I didn't really reflect on this much with the previous posts, but reading your current post sheds some light: the vagueness is not a bug, from your perspective, it's a corollary of trying to make your content really hard for people to take issue with. And I think therein lies the problem. I think of specificity, falsifiability, and concreteness as keys to furthering discourse and helping actually converge on key truths and correcting error. By glorifying the rejection of these virtues, I think your writing does a disservice to public discourse.

For a point of contrast, several posts from GiveWell and Open Phil struck me as sufficiently specific that they added value to a conversation -- notice how most of these posts make a large number of very concrete claims and highlight their opposition to very specific other parties, which makes them targets of criticism and insult but really helps delineate an issue and push conversations forward. I'm interested in seeing more of this sort of stuff and less of overly cautious, diplomatic posts like yours.

Comment author: vipulnaik 25 February 2017 05:38:56AM 6 points

One point to add: the frustratingly vague posts tend to get FEWER comments than the specific, concrete posts.

From my list, the posts I identified as clearly vague:

  • one got 1 comment (a question that hasn't been answered)
  • one got 1 comment (a single sentence praising the post)
  • one got 6 comments
  • one got 8 comments

In contrast, the posts I identified as sufficiently specific (even though they tended toward the fairly technical side):

  • one got 17 comments
  • one got 14 comments
  • one got 27 comments
  • one got 7 comments

If engagement is any indication, then people really thirst for specific, concrete content. But that's not necessarily in contradiction with Holden's point, since his goal isn't to generate engagement. In fact, comment engagement can even be viewed negatively in his framework, because it means more effort is necessary to respond to and keep up with comments.
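To make the comparison above concrete, here is a minimal sketch (using only the comment counts listed in this comment; the variable names are mine) of the gap in average engagement between the two groups of posts:

```python
# Comment counts from the posts cited above, in the order listed.
vague_post_comments = [1, 1, 6, 8]
specific_post_comments = [17, 14, 27, 7]

def mean(xs):
    """Arithmetic mean of a non-empty list of numbers."""
    return sum(xs) / len(xs)

print(mean(vague_post_comments))     # average for the vague posts: 4.0
print(mean(specific_post_comments))  # average for the specific posts: 16.25
```

On these numbers, the specific posts drew roughly four times as many comments on average, though with only four posts per group this is suggestive rather than conclusive.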

Comment author: vipulnaik 24 February 2017 09:16:53PM 9 points

Thank you for the illuminating post, Holden. I appreciate you taking the time to write this, despite your admittedly busy schedule. I found much to disagree with in the approach you champion in the post, which I attempt to articulate below.

In brief: (1) Frustrating vagueness and seas of generality in your current post and recent posts, (2) Overstated connotations of expertise with regards to transparency and openness, (3) Artificially filtering out positive reputational effects, then claiming that the reputational effects of openness are skewed negative, (4) Repeatedly shifting the locus of blame to external critics rather than owning up to responsibility.

I'll post each point as a reply comment to this since the overall comment exceeds the length limits for a comment.

Comment author: vipulnaik 24 February 2017 09:17:50PM 3 points

(4) Repeatedly shifting the locus of blame to external critics rather than owning up to responsibility: You keep alluding to the costs of publishing your work, yet you give no examples of how such costs have negatively affected Open Phil, or of the specific monetary, emotional, or other damages you have incurred (this is related to (1), where I am critical of your frustrating vagueness). This vagueness makes your claims about the risks of openness frustrating to evaluate in your case.

As a more general claim about being public, though, your framing strikes me as misguided. The main obstacle to writing up stuff for the public is simply that writing it up takes a lot of time, but this is mostly a limitation on the part of the writer: the writer doesn't have a clear picture of what he or she wants to say, doesn't have a clear idea of how to convey the idea, or lacks the time and resources to put things together. Failure to do this is a failure on the part of the writer. Blaming readers for continually misinterpreting one's writing, or for carrying out witch hunts, is simply failing to take responsibility.

A more humble framing would highlight this fact, and some of its difficult implications, e.g.: "As somebody in charge of a foundation that is spending ~$100 million a year and recommending tens of millions in donations by others, I need to be very clear in my thinking and reasoning. Unfortunately, I have found that it's often easier and cheaper to spend millions of dollars in grants than write up a clear public-facing document on the reasons for doing so. I'm very committed to writing publicly where it is possible (and you can see evidence of this in all the grant writeups for Open Phil and the detailed charity evaluations for GiveWell). However, there are many cases where writing up my reasoning is more daunting than signing off on millions of dollars in money. I hope that we are able to figure out better approaches to reducing the costs of writing things up."

Comment author: vipulnaik 24 February 2017 09:17:41PM 7 points

(3) Artificially filtering out positive reputational effects, then claiming that the reputational effects of openness are skewed negative.

"By "public discourse," I mean communications that are available to the public and that are primarily aimed at clearly describing one's thinking, exploring differences with others, etc. with a focus on truth-seeking rather than on fundraising, advocacy, promotion, etc."

If you exclude from public discourse any benefits pertaining to fundraising, advocacy, and promotion, then you are essentially stacking the deck against public discourse -- now any reputational or time-sink impacts are likely to be negative.

Here's an alternate perspective. Any public statement should be thought of both in terms of the object-level points it is making (specifically, the information it is directly providing or what it is trying to convince people of), and secondarily in terms of how it affects the status and reputation of the person or organization making the statement, and/or their broader goals. For instance, when I wrote one such post, my direct goal was to provide information about web traffic to the Effective Altruism Forum and what the patterns tell us about effective altruism movement growth, but an indirect goal was to highlight the value of using data-driven analytics, and in particular website analytics, something I've championed in the past. Whether you choose to label the public statement as "fundraising", "advocacy", or whatever is somewhat beside the point.
