Comment author: AGB 28 April 2017 07:04:56PM * 4 points

Trying to square this circle, because I think these observations are pretty readily reconcilable. My second-hand vague recollections from speaking to people at the time are:

  1. The programming had a moderate slant towards AI risk because we got Elon.
  2. The participants were generally very bullish on AI risk and other far-future causes.
  3. The 'Global poverty is a rounding error' crowd was a disproportionately present minority.

Any one of these in isolation would likely have been fine, but the combination left some people feeling various shades of surprised/bait-and-switched/concerned/isolated/unhappy. I think the combination is consistent with both what Ben said and what Kerry said.

Further, (2) and (3) aren't surprising if you think about the way San Francisco EAs are drawn differently to EAs globally; SF is by some margin the largest AI hub, so committed EAs who care a lot about AI disproportionately end up living and working there.

Note that EAG Oxford, organised by the same team in the same month with the same private opinions, didn't have the same issues, or at least it didn't to the best of my knowledge as a participant who cared very little for AI risk at the time. I can't speak to EAG Melbourne but I'd guess the same was true.

While (2) and (3) aren't really CEA's fault, there's a fair challenge as to whether CEA should have anticipated (2) and (3) given the geography, and therefore gone out of their way to avoid (1). I'm moderately sympathetic to this argument but it's very easy to make this kind of point with hindsight; I don't know whether anyone foresaw it. Of course, we can try to avoid the mistake going forward regardless, but then again I didn't hear or read anyone complaining about this at EAG 2016 in this way, so maybe we did?

Comment author: BenHoffman 28 April 2017 08:22:28PM 2 points

I think 2016 EAG was more balanced. But I don't think the problem in 2015 was apparent lack of balance per se. It might have been difficult for the EAG organizers to sincerely match the conference programming to promotional EA messaging, since their true preferences were consistent with the extent to which things like AI risk were centered.

The problem is that to the extent to which EA works to maintain a smooth, homogeneous, uncontroversial, technocratic public image, it doesn't match the heterogeneous emphases, methods, and preferences of actual core EAs and EA organizations. This is necessarily going to require some amount of insincerity or disconnect between initial marketing and reality, and represents a substantial cost to that marketing strategy.

Comment author: Kerry_Vaughan 27 April 2017 08:56:51PM 4 points

Two years ago many attendees at the EA Global conference in the San Francisco Bay Area were surprised that the conference focused so heavily on AI risk, rather than the global poverty interventions they’d expected.

EA Global 2015 had one panel on AI (in the morning, on day 2) and one triplet of talks on Global Poverty (in the afternoon, on day 2). Most of the content was not cause-specific.

People remember EA Global 2015 as having a lot of AI content because Elon Musk was on the AI panel, which made it loom very large in people's minds. So, while it's fair to say that more attention ended up on AI than on global poverty, it's not fair to say that the content focused more on AI than on global poverty.

Comment author: BenHoffman 28 April 2017 02:48:27AM 1 point

The featured event was the AI risk thing. My recollection is that there was nothing else scheduled at that time so everyone could go to it. That doesn't mean there wasn't lots of other content (there was), nor do I think centering AI risk was necessarily a bad thing, but I stand by my description.

Comment author: Elizabeth 26 April 2017 11:40:14PM 1 point

I can see it clearly now; not sure if I was inattentive or something went wrong the first time I loaded the page.

Comment author: BenHoffman 27 April 2017 06:01:47AM 0 points

I also originally saw the reply attributed to a different comment on mobile.

Comment author: Elizabeth 26 April 2017 11:48:48PM 5 points

I'm shocked that no one has commented on Elie Hassenfeld distributing 100% of the money to GiveWell's top charity. Even if he didn't run GiveWell, this just seems like adding an extra step to giving to GiveWell. But given that one of the main arguments for the funds was to let smaller projects get funded quickly and with less overhead, giving 100% to one enormous charity with many large donors is clearly failing at that goal.

I would guess that $300k simply isn't worth Elie's time to distribute in small grants, given the enormous funds available via Good Ventures and even GiveWell's direct and directed donations. It seems to me the obvious thing to do is have the fund managed by someone who has the time to do so, rather than create another way to give money to GiveWell.

Comment author: BenHoffman 27 April 2017 01:24:10AM * 3 points

I would guess that $300k simply isn't worth Elie's time to distribute in small grants, given the enormous funds available via Good Ventures and even GiveWell's direct and directed donations.

This is consistent with the optionality story in the beta launch post:

If the EA Funds raises little money, they can spend little additional time allocating the EA Funds’ money but still utilize their deep subject-matter expertise in making the allocation. This reduces the chance that the EA Funds causes fund managers to use their time ineffectively and it means that the lower bound of the quality of the donations is likely to be high enough to justify donations even without knowing the eventual size of the fund.

However, I do think this suggests that, to the extent that GiveWell is already a known and trusted institution, for global poverty in particular it's more important to get a fund manager with uniquely relevant expertise than the fund manager with the most expertise.

Comment author: BenHoffman 27 April 2017 01:21:08AM * 2 points

On the other hand, it does seem worthwhile to funnel money through different intermediaries sometimes if only to independently confirm that the obvious things are obvious, and we probably don't want to advocate contrarianism for contrarianism's sake. If Elie had given the money elsewhere, that would have been strong evidence that the other thing was valuable and underfunded relative to GW top charities (and also worrying evidence about GiveWell's ability to implement its founders' values). Since he didn't, that's at least weak evidence that AMF is the best global poverty funding opportunity we know about.

Overall I think it's good that Elie didn't feel the need to justify his participation by doing a bunch of make-work. This is still evidence that channeling money through Elie probably gives a false impression of additional optimizing power, but I think that should have been our strong prior anyhow.

Comment author: BenHoffman 27 April 2017 01:13:59AM 1 point

Or to simply say "for global poverty, we can't do better than GiveWell, so we recommend you just give them the money".

Comment author: Fluttershy 22 April 2017 09:53:26PM 3 points

A more detailed discussion of the considerations for and against concluding that EA Funds had been well received would have been helpful if the added detail had been spent examining people's concerns re: conflicts of interest and centralization of power, i.e. concerns which were commonly expressed but not resolved.

I'm concerned with the framing that you updated towards it being correct for EA Funds to persist past the three-month trial period. If there was support to start out with, and you mostly didn't gather more support later on relative to what one would expect, then your prior on whether EA Funds is well received should be stronger, but you shouldn't update in favor of it being well received based on more recent data. This may sound like a nitpick, but it is actually a crucially important consideration if you've framed things as if you'll continue with the project only if you update in the direction of having more public support than before.

I also dislike that you emphasize that some people "expressed confusion at your endorsement of EA Funds". Some people may have felt that way, but your choice of wording both downplays the seriousness of some people's disagreements with EA Funds, while also implying that critics are in need of figuring something out that others have already settled (which itself socially implies they're less competent than others who aren't confused). This is a part of what some of us mean when we talk about a tax on criticism in EA.

Comment author: BenHoffman 27 April 2017 01:09:16AM * 2 points

I also dislike that you emphasize that some people "expressed confusion at your endorsement of EA Funds". Some people may have felt that way, but your choice of wording both downplays the seriousness of some people's disagreements with EA Funds, while also implying that critics are in need of figuring something out that others have already settled (which itself socially implies they're less competent than others who aren't confused).

I definitely perceived the sort of strong exclusive endorsement and pushing EA Funds got as a direct contradiction of what I'd been told earlier, privately and publicly - that this was an MVP experiment to gauge interest and feasibility, to be reevaluated after three months. If I'm confused, I'm confused about how this wasn't just a lie. My initial response was "HOW IS THIS OK???" (verbatim quote). I'm willing to be persuaded, of course. But, barring an actual resolution of the issue, simply describing this as confusion is a pretty substantial understatement.

ETA: I'm happy with the update to the OP and don't think I have any unresolved complaint on this particular wording issue.

Comment author: Kerry_Vaughan 10 February 2017 11:55:25PM 5 points

My guess is that the optimal solution has people like Nick controlling quite a bit of money since he has a strong track record and strong connections in the space. Yet, the optimal solution probably has an upper limit on how much money he controls for purposes of viewpoint diversification and to prevent power from consolidating in too few hands. I'm not sure whether we've reached the upper limit yet, but I think we will if EA Funds moves a substantial amount of money.

How can we build these incentives and selection pressures, as well as, on the object level, getting better ideas into EA orgs? Diversifying funding would help, but mostly it seems like it would require CEA to care about this problem a lot and take a lot of effort.

I agree that this is worth being concerned about and I would also be interested in ways to avert this problem.

My hope is that as we diversify the selection of fund managers, EA Funds creates an intellectual marketplace of fund managers writing about why their funding strategies are best and convincing people to donate to them. Then our defense against entrenching the power of established groups (e.g. CEA) is that people can vote with their wallets if they think established groups are getting more money than makes sense.

Comment author: BenHoffman 27 April 2017 12:58:05AM * 0 points

Tell me about Nick's track record? I like Nick and I approve of his granting so far, but "strong track record" isn't at all how I'd describe the case for giving him unrestricted funds to grant; it seems entirely speculative, based on shared values and judgment. If Nick has a verified track record of grants turning out well, I'd love to see it, and it should probably be in the promotional material for EA Funds.

Comment author: AGB 22 April 2017 12:51:34PM 14 points

Things don't look good regarding how well this project has been received

I know you say that this isn't the main point you're making, but I think it's the hidden assumption behind some of your other points, and it was a surprise to read this. Will's post introducing the EA Funds is the 4th most upvoted post of all time on this forum. Most of the top-rated comments on his post, including at least one which you link to as raising concerns, say that they are positive about the idea. Kerry then presented some survey data in this post. All those measures of support are kind of fuzzy and prone to weird biases, but putting it all together I find it much more likely than not that the community as a whole is positive about the funds. An alternative and more concrete angle would be money received into the funds, which was just shy of CEA's target of $1m.

Given all that, what would 'well-received' look like in your view?

If you think the community is generally making a mistake in being supportive of the EA Funds, that's fine, and obviously you can/should make arguments to that effect. But if you are making the empirical claim that the community is not supportive, I want to know why you think that.

Comment author: BenHoffman 27 April 2017 12:54:08AM * 2 points

Will's post introducing the EA Funds is the 4th most upvoted post of all time on this forum.

Generally I upvote a post because I am glad that the post has been posted in this venue, not because I am happy about the facts being reported. Your comment has reminded me to upvote Will's post, because I'm glad he posted it (and likewise Tara's) - thanks!

Comment author: Michael_PJ 24 April 2017 10:51:54PM 5 points

The point I was trying to make is that while GiveWell may not have acted "satisfactorily", they are still well ahead of many of us. I hadn't "inferred" that GiveWell had audited themselves thoroughly - it hadn't even occurred to me to ask, which is a sign of just how bad my own epistemics are. And I don't think I'm unusual in that respect. So GiveWell gets a lot of credit from me for doing "quite well" at their epistemics, even if they could do better (and it's good to hold them to a high standard!).

I think that making the final decision on where to donate yourself often offers only an illusion of control. If you're getting all your information from one source you might as well just be giving them your money. But it does at least keep more things out in the open, which is good.

Re-reading your post, I think I may have been misinterpreting you - am I right in thinking that you mainly object to the marketing of the EA Funds as the "default choice", rather than to their existence for people who want that kind of instrument? I agree that the marketing is perhaps overselling at the moment.

Comment author: BenHoffman 25 April 2017 04:55:56AM * 3 points

Yep! I think it's fine for them to exist in principle, but the aggressive marketing of them is problematic. I've seen attempts to correct specific problems that are pointed out, e.g. exaggerated claims, but there are so many things pointing in the same direction that it really seems like a mindset problem.

I tried to write more directly about the mindset problem here:

http://benjaminrosshoffman.com/humility-argument-honesty/

http://effective-altruism.com/ea/13w/matchingdonation_fundraisers_can_be_harmfully/

http://benjaminrosshoffman.com/against-responsibility/
