Comment author: BenHoffman 06 May 2017 08:28:31PM *  1 point [-]

There was a recent post by 80,000 Hours (which annoyingly I now can't find) describing how their founders' approaches to doing good have evolved and updated over the years. Is that something you'd like to see more of?

Yes! More clear descriptions of how people have changed their mind would be great. I think it's especially important to be able to identify which things we'd hoped would go well but didn't pan out - and then go back and make sure we're not still implicitly pitching that hope.

Comment author: AGB 07 May 2017 11:42:00AM 2 points [-]

I found the post; I was struggling before because it's actually part of their career guide rather than a blog post.

Comment author: BenHoffman 01 May 2017 11:56:50PM *  5 points [-]

EffectiveAltruism.org's Introduction to Effective Altruism allocates most of its words to what's effectively an explanation of global poverty EA. A focus on empirical validation, explicit measurement and quantification, and power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.

Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.

If you click "Donate Effectively," you end up on the EA Funds site, which presents the four Fund categories as generic products you might want to allocate a portfolio between. Two of the four products are in effect just letting Nick Beckstead do what he thinks is sensible with the money, which as I've said above is a good idea but a very large leap from the anti-Playpump pitch. "Trust friendly, sensible-seeming agents and empower them to do what they think is sensible" is a very, very different method than "check everything because it's easy to spend money on nice-sounding things of no value."

The GWWC site and Facebook page have a similar dynamic. I mentioned in this post that the page What We Can Achieve mainly references global poverty (though I've been advised that this is an old page pending an update). The GWWC Facebook page seems like it's mostly global poverty stuff, and some promotion of other CEA brands.

It's very plausible to me that in-person EA groups often don't have this problem because individuals don't feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.

Comment author: AGB 06 May 2017 01:55:28PM *  1 point [-]

Thanks for digging up those examples.

EffectiveAltruism.org's Introduction to Effective Altruism allocates most of its words to what's effectively an explanation of global poverty EA. A focus on empirical validation, explicit measurement and quantification, and power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.

I think 'many methods of doing good fail' has wide applications outside of Global Poverty, but I acknowledge the wider point you're making.

Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.

This is a problem I definitely worry about. There was a recent post by 80,000 Hours (which annoyingly I now can't find) describing how their founders' approaches to doing good have evolved and updated over the years. Is that something you'd like to see more of?

It's very plausible to me that in-person EA groups often don't have this problem because individuals don't feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.

This is a true dynamic, but to be specific about one of the examples I had in mind: a little before your post was written I was helping someone craft a general 'intro to EA' that they would give at a local event, and we both agreed, without even needing to discuss it, to make the heterogeneous nature of the movement central to the mini speech. The discussion we had was more about 'which causes and which methods of doing good should we list given limited time', rather than 'which cause/method would provide the most generically effective pitch'.

We didn't want to do the latter for the reason I already gave; coming up with a great 5-minute poverty pitch is worthless-to-negative if the next person a newcomer talks to is entirely focused on AI, and with a diversity of cause areas represented among the 'core' EAs in the room that was a very real risk.

Comment author: Fluttershy 23 April 2017 06:44:56PM 1 point [-]

It just seems like the simplest explanation of your observed data is 'the community at large likes the funds, and my personal geographical locus of friends is weird'.

And without meaning to pick on you in particular (because I think this mistake is super-common), in general I want to push strongly towards people recognising that EA consists of a large number of almost-disjoint filter bubbles that often barely talk to each other and in some extreme cases have next-to-nothing in common. Unless you're very different to me, we are both selecting the people we speak to in person such that they will tend to think much like us, and like each other; we live inside one of the many bubbles. So the fact that everyone I've spoken to in person about the EA funds thinks they're a good idea is particularly weak evidence that the community thinks they are good, and so is your opposing observation.

I'd say this is correct. The EA Forum itself has such a selection effect, though it's weaker than the ones either of our friend groups have. One idea would be to do a survey, as Peter suggests, though this makes me feel slightly uneasy given that a survey may weight the opinions of people who have considered the problem less, or who feel less strongly about it, equally with the opinions of others. A relevant factor here is that it sometimes takes people a fair bit of reading or reflection to develop a sense for why integrity is particularly valuable from a consequentialist's perspective, and then to link this up to why EA Funds continuing has the consequence of showing people that projects which are reported on and marketed using relatively lower-integrity methods can succeed despite (or even because of?) this.

I'd also agree that, at the time of Will's post, it would have been incorrect to say:

The community is probably net-neutral to net-negative on the EA funds, but Will's post introducing them is the 4th most upvoted post of all time

But what we likely care about is whether or not the community is positive on EA Funds at the moment, which may or may not be different from whether it was positive on EA Funds in the past.

My view is further that the community's response to this sort of thing is partly a function of how debates on honesty and integrity have been resolved in the past; if lack of integrity in EA has been an issue in the past, the sort of people who care about integrity are less likely to stick around in EA, such that the remaining population of EAs will have fewer people who care about integrity, which itself affects how the average EA feels about future incidents relating to integrity (such as this one), and so on. So, on some level I'm positing that the public response to EA Funds would be more negative if we hadn't filtered certain people out of EA by having an integrity problem in the first place.

Comment author: AGB 30 April 2017 12:22:24PM 0 points [-]

(Sorry for the slower response, your last paragraph gave me pause and I wanted to think about it. I still don't feel like I have a satisfactory handle on it, but also feel I should reply at this point.)

this makes me feel slightly uneasy given that a survey may weight the opinions of people who have considered the problem less, or who feel less strongly about it, equally with the opinions of others.

This makes total sense to me, and I do currently perceive something of an inverse correlation between how hard people have thought about the funds and how positively they feel about them. I agree this is a cause for concern. The way I would describe that situation from your perspective is not 'the funds have not been well-received', but rather 'the funds have been well-received but only because too many (most?) people are analysing the idea in a superficial way'. Maybe that is what you were aiming for originally and I just didn't read it that way.

But what we likely care about is whether or not the community is positive on EA Funds at the moment, which may or may not be different from whether it was positive on EA Funds in the past.

True. That post was only a couple of months before this one though; not a lot of time for new data/arguments to emerge or opinions to change. The only major new data point I can think of since then is the funds raising ~$1m, which I think is mostly orthogonal to what we are discussing. I'm curious whether you personally perceive a change (drop) in popularity in your circles?

My view is further that the community's response to this sort of thing is partly a function of how debates on honesty and integrity have been resolved in the past; if lack of integrity in EA has been an issue in the past, the sort of people who care about integrity are less likely to stick around in EA, such that the remaining population of EAs will have fewer people who care about integrity, which itself affects how the average EA feels about future incidents relating to integrity (such as this one), and so on. So, on some level I'm positing that the public response to EA Funds would be more negative if we hadn't filtered certain people out of EA by having an integrity problem in the first place.

This story sounds plausibly true. It's a difficult one to falsify though (I could flip all the language and get something that also sounds plausibly true), so, turning it over in my head for the past few days, I'm still not sure how much weight to put on it.

Comment author: BenHoffman 27 April 2017 12:54:08AM *  2 points [-]

Will's post introducing the EA funds is the 4th most upvoted post of all time on this forum.

Generally I upvote a post because I am glad that the post has been posted in this venue, not because I am happy about the facts being reported. Your comment has reminded me to upvote Will's post, because I'm glad he posted it (and likewise Tara's) - thanks!

Comment author: AGB 30 April 2017 10:48:40AM 1 point [-]

That seems like a good use of the upvote function, and I'm glad you try to do things that way. But my nit-picking brain generates a couple of immediate thoughts:

  1. I don't think it's a coincidence that a development you were concerned about was also one where you forgot* to apply your general rule. In practice I think upvotes track 'I agree with this' extremely strongly, even though lots of people (myself included) agree that ideally they shouldn't.

  2. In the hypothetical where there's lots of community concern about the funds but people are happy they have a venue to discuss it, I expect the top-rated comments to be those expressing those concerns. This possibility is what I was trying to address in my next sentence:

Most of the top rated comments on his post, including at least one which you link to as raising concerns, say that they are positive about the idea.

*Not sure if 'forgot' is quite the right word here, just mirroring your description of my comment as 'reminding' you.

Comment author: BenHoffman 28 April 2017 08:22:28PM 2 points [-]

I think 2016 EAG was more balanced. But I don't think the problem in 2015 was apparent lack of balance per se. It might have been difficult for the EAG organizers to sincerely match the conference programming to promotional EA messaging, since their true preferences were consistent with the extent to which things like AI risk were centered.

The problem is that, to the extent that EA works to maintain a smooth, homogeneous, uncontroversial, technocratic public image, it doesn't match the heterogeneous emphases, methods, and preferences of actual core EAs and EA organizations. This is necessarily going to require some amount of insincerity or disconnect between initial marketing and reality, and represents a substantial cost of that marketing strategy.

Comment author: AGB 30 April 2017 10:28:51AM 3 points [-]

The problem is that, to the extent that EA works to maintain a smooth, homogeneous, uncontroversial, technocratic public image, it doesn't match the heterogeneous emphases, methods, and preferences of actual core EAs and EA organizations.

If this is basically saying 'we should take care to emphasize that EAs have wide-ranging disagreements of both values and fact that lead them to prioritise a range of different cause areas', then I strongly agree. In the same vein, I think we should emphasize that people who self-identify as 'EAs' represent a wide range of commitment levels.

One reason for this is that depending which university or city someone is in, which meetup they turn up to, and who exactly they talk to, they'll see wildly different distributions of commitment and similarly differing representation of various cause areas.

With that said, I'm not totally sure if that's the point you're making, because my personal experience in London is that we've been going out of our way to make the above points for a while; what's an example of marketing which you think works to maintain a homogeneous public image?

Comment author: Kerry_Vaughan 28 April 2017 05:58:28PM *  4 points [-]

We didn't offer any alternative events during Elon's panel because we (correctly) perceived that there wouldn't be demand for going to a different event, and putting someone on stage with few people in the audience is not a good way to treat speakers.

We had to set up an overflow room for people who didn't make it into the main room during the Elon panel, and even the overflow room was standing room only.

I think this is worth pointing out because of the preceding sentence:

However, EA leadership tends to privately focus on things like AI risk.

The implication is that we aimed to bias the conference towards AI risk and against global poverty because of some private preference for AI risk as a cause area.[1]

I think we can be fairly accused of aiming for Elon as an attendee and not some extremely well known global poverty person.

However, with the exception of Bill Gates (who we tried to get), I don't know of anyone in global poverty with anywhere close to the combination of a) general renown and b) reachability. So, I think trying to get Elon was probably the right call.

Given that Elon was attending, I don't see what reasonable options we had for more evenly distributing attention between plausible causes. Elon casts a big shadow.

[1] Some readers contacted me to let me know that they found this sentence confusing. To clarify, I do have personal views on which causes are higher impact than others, but the program design of EA Global was not an attempt to steer EA on the basis of those views.

Comment author: AGB 28 April 2017 07:04:56PM *  8 points [-]

Let me try to square this circle, because I think these observations are pretty readily reconcilable. My second-hand, vague recollections from speaking to people at the time are:

  1. The programming had a moderate slant towards AI risk because we got Elon.
  2. The participants were generally very bullish on AI risk and other far-future causes.
  3. The 'Global poverty is a rounding error' crowd was a disproportionately-present minority.

Any one of these in isolation would likely have been fine, but the combination left some people feeling various shades of surprised/bait-and-switched/concerned/isolated/unhappy. I think the combination is consistent with both what Ben said and what Kerry said.

Further, (2) and (3) aren't surprising if you think about how San Francisco EAs differ from EAs globally; SF is by some margin the largest AI hub, so committed EAs who care a lot about AI disproportionately end up living and working there.

Note that EAG Oxford, organised by the same team in the same month with the same private opinions, didn't have the same issues, or at least it didn't to the best of my knowledge as a participant who cared very little for AI risk at the time. I can't speak to EAG Melbourne but I'd guess the same was true.

While (2) and (3) aren't really CEA's fault, there's a fair challenge as to whether CEA should have anticipated (2) and (3) given the geography, and therefore gone out of their way to avoid (1). I'm moderately sympathetic to this argument but it's very easy to make this kind of point with hindsight; I don't know whether anyone foresaw it. Of course, we can try to avoid the mistake going forward regardless, but then again I didn't hear or read anyone complaining about this at EAG 2016 in this way, so maybe we did?

Comment author: RyanCarey 23 April 2017 11:08:18PM *  19 points [-]

Some feedback on your feedback (I've only quickly read your post once, so take it with a grain of salt):

  • I think that this is more discursive than it needs to be. AFAICT, you're basically arguing that decision-making and trust in the EA movement are over-concentrated in OpenPhil.
  • If it was a bit shorter, then it would also be easier to run it by someone involved with OpenPhil, which prima facie would be at least worth trying, in order to correct any factual errors.
  • It's hard to do good criticism, but starting out with long explanations of confidence games and Ponzi schemes is not something that makes the criticism likely to be well-received. You assert that these things are not necessarily bad, so why not just zero in on the thing that you think is bad in this case?
  • So maybe this could have been split into two posts?
  • Maybe there are more upsides of having somewhat concentrated decision-making than you let on? Perhaps cause prioritization will be better? Since EA Funds is a movement-wide scheme, perhaps reputational trust is extra important here, and the diversification would come from elsewhere? Perhaps the best decision-makers will naturally come to work on this full-time.

You may still be right, though I would want some more balanced analysis.

Comment author: AGB 24 April 2017 08:01:24AM *  9 points [-]

(Disclosure: I read this post, thought it was very thorough, and encouraged Ben to post it here.)

It's hard to do good criticism, but starting out with long explanations of confidence games and Ponzi schemes is not something that makes the criticism likely to be well-received. You assert that these things are not necessarily bad, so why not just zero in on the thing that you think is bad in this case?

Just to balance this, I actually liked the Ponzi scheme section. I think that making the claim 'aspects of EA have Ponzi-like elements and this is a problem' without carefully explaining what a Ponzi scheme is, and without explaining that Ponzi schemes don't necessarily require people with bad intentions, would have had the potential to be much more inflammatory. As written, this piece struck me as fairly measured.

Also, since the claims are aimed at a potentially-flawed general approach/mindset rather than being specific to current actions, zeroing in too much might be net counterproductive in this case; there's some balance to strike here.

Comment author: Fluttershy 22 April 2017 09:53:26PM 3 points [-]

A more detailed discussion of the considerations for and against concluding that EA Funds had been well received would have been helpful if the added detail had been spent examining people's concerns re: conflicts of interest and centralization of power, i.e. concerns which were commonly expressed but not resolved.

I'm concerned with the framing that you updated towards it being correct for EA Funds to persist past the three month trial period. If there was support to start out with and you mostly didn't gather more support later on relative to what one would expect, then your prior on whether EA Funds is well received should be stronger but you shouldn't update in favor of it being well received based on more recent data. This may sound like a nitpick, but it is actually a crucially important consideration if you've framed things as if you'll continue on with the project only if you update in the direction of having more public support than before.

I also dislike that you emphasize that some people "expressed confusion at your endorsement of EA Funds". Some people may have felt that way, but your choice of wording both downplays the seriousness of some people's disagreements with EA Funds, while also implying that critics are in need of figuring something out that others have already settled (which itself socially implies they're less competent than others who aren't confused). This is a part of what some of us mean when we talk about a tax on criticism in EA.

Comment author: AGB 22 April 2017 11:20:50PM 2 points [-]

I'm concerned with the framing that you updated towards it being correct for EA Funds to persist past the three month trial period. If there was support to start out with and you mostly didn't gather more support later on relative to what one would expect...

In the OP Kerry wrote:

The donation amounts we’ve received so far are greater than we expected, especially given that donations typically decrease early in the year after ramping up towards the end of the year.

CEA's original expectation of donations could just have been wrong, of course. But I don't see a failure of logic here.

Re. your last paragraph, Kerry can confirm or deny, but I think he's referring to the fact that a bunch of people were surprised to see, e.g., GWWC start recommending the EA Funds and close down the GWWC trust recently (I'm not sure if there were other cases), when CEA hadn't actually officially given the funds a 'green light' yet. So he's not referring to the same set of criticisms you are talking about. I think 'confusion at GWWC's endorsement of EA Funds' is a reasonable description of how I felt when I received this e-mail, at the very least*; I like the funds, but prominently recommending something that is in beta and might be discontinued at any minute seemed odd.

*I got the e-mail from GWWC announcing this on 11th April. I got CEA's March 2017 update saying they'd decided to continue with the funds later on the same day, but I think that goes to a much narrower list and in the interim I was confused and was going to ask someone about it. Checking now it looks like CEA actually announced this on their blog on 10th April (see below link), but again presumably lots of GWWC members don't read that.

https://www.centreforeffectivealtruism.org/blog/cea-update-march-2017/

Comment author: Fluttershy 22 April 2017 08:20:20PM 4 points [-]

In one view, the concept post had 43 upvotes, the launch post had 28, and this post currently has 14. I don't think this is problematic in itself, since this could just be an indication of hype dying down over time, rather than of support being retracted.

Part of what I'm tracking when I say that the EA community isn't supportive of EA Funds is that I've spoken to several people in person who have said as much--I think I covered all of the reasons they brought up in my post, but one recurring theme throughout those conversations was that writing up criticism of EA was tiring and unrewarding, and that they often didn't have the energy to do so (though one offered to proofread anything I wrote in that vein). So, a large part of my reason for feeling that there isn't a great deal of community support for EA funds has to do with the ways in which I'd expect the data on how much support there actually is to be filtered. For example:

  • the method in which Kerry presented his survey data made it look like there was more support than there was
  • the fact that Kerry presented the data in this way suggests it's relatively more likely that Kerry will do so again in the future if given the chance
  • social desirability bias should also make it look like there's more support than there is
  • the fact that it's socially encouraged to praise projects on the EA Forum and that criticism is judged more harshly than praise should make it look like there's more support than there is. Contrast this norm with the one at LW, and notice how it affected how long it took us to get rid of Gleb.
  • we have a social norm of wording criticism in a very mild manner, which might make it seem like critics are less serious than they are.

It also doesn't help that most of the core objections people have brought up have been acknowledged but not addressed. But really, given all of those filters on data relating to how well-supported the EA Funds are, and the fact that the survey data doesn't show anything useful either way, I'm not comfortable with accepting the claim that EA Funds has been particularly well-received.

Comment author: AGB 22 April 2017 10:51:44PM *  6 points [-]

So I probably disagree with some of your bullet points, but unless I'm missing something I don't think they can be the crux of our disagreement here, so for the sake of argument let's suppose I fully agree that there are a variety of strong social norms in place here that make praise more salient, visible and common than criticism.

...I still don't see how to get from here to (for example) 'The community is probably net-neutral to net-negative on the EA funds, but Will's post introducing them is the 4th most upvoted post of all time'. The relative (rather than absolute) nature of that claim is important; even if I think posts and projects on the EA forum generally get more praise, more upvotes, and less criticism than they 'should', why has that boosted the EA funds in particular over the dozens of other projects that have been announced on here over the past however-many years? To pick the most obviously-comparable example that quickly comes to mind, Kerry's post introducing EA Ventures has just 16 upvotes*.

It just seems like the simplest explanation of your observed data is 'the community at large likes the funds, and my personal geographical locus of friends is weird'.

And without meaning to pick on you in particular (because I think this mistake is super-common), in general I want to push strongly towards people recognising that EA consists of a large number of almost-disjoint filter bubbles that often barely talk to each other and in some extreme cases have next-to-nothing in common. Unless you're very different to me, we are both selecting the people we speak to in person such that they will tend to think much like us, and like each other; we live inside one of the many bubbles. So the fact that everyone I've spoken to in person about the EA funds thinks they're a good idea is particularly weak evidence that the community thinks they are good, and so is your opposing observation. I think we should both discount it ~entirely once we have anything else to go on. Relative upvotes are extremely far from perfect as a metric, but I think they are much better than in-person anecdata for this reason alone.

FWIW I'm very open to suggestions on how we could settle this question more definitively. I expect that CEA pushing ahead with the funds, if the community as a whole really is net-negative on them, would indeed be a mistake. I don't have any great ideas at the moment though.

*http://effective-altruism.com/ea/fo/announcing_effective_altruism_ventures/

Comment author: Fluttershy 21 April 2017 11:23:52AM 1 point [-]

A few of you were diligent enough to beat me to saying much of this, but:

Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.

This seems false, based on these replies. The author of this post replied to the majority of those comments, which means he's aware that many people have in fact raised concerns about things other than communication and EA Funds' website. To his credit, a paragraph was added acknowledging that these concerns had been raised elsewhere, in the pages for the EA community fund and the animal welfare fund. Unfortunately, though, these concerns were never mentioned in this post. There are a number of people who would like to hear about any progress that's been made since the discussion on this thread regarding the problems of 1) how to address conflicts of interest, given how many of the fund managers are tied into e.g. OPP, and 2) how centralizing funding allocation (rather than making people who aren't OPP staff into Fund Managers) narrows the amount of new information about which effective opportunities exist that reaches the EA Funds' Fund Managers.

I've spoken with a couple of EAs in person who have mentioned that making the claim that "EA Funds are likely to be at least as good as OPP's last dollar" is harmful. In this post, it's certainly worded in a way that implies very strong belief, which, given how popular consequentialism is around here, would be likely to make certain sorts of people feel bad for not donating to EA Funds instead of whatever else they might donate to counterfactually. This is the same sort of effect people get from looking at this sort of advertising, but more subtle, since it's less obvious on a gut level that this slogan half-implies that the reader is morally bad for not donating. Using this slogan could be net negative even without considering that it might make EAs feel bad about themselves, if, say, individual EAs had information about giving opportunities that were more effective than EA Funds, but donated to EA Funds anyway out of a sense of pressure caused by the "at least as good as OPP" slogan.

More immediately, I have negative feelings about how this post used the Net Promoter Score to evaluate the reception of EA Funds. First, it mentions that EA Funds "received an NPS of +56 (which is generally considered excellent according to the NPS Wikipedia page)." But the first sentence of the Wikipedia page for NPS, which the author presumably read given that he linked to it, states that NPS is "a management tool that can be used to gauge the loyalty of a firm's customer relationships" (emphasis mine). However, EA Funds isn't a firm. My view is that implicitly assuming that, as a nonprofit (or something socially equivalent), your score on a metric intended to judge how satisfied a for-profit company's customers are can be compared side by side with the scores received by for-profit firms (and then neglecting to mention that you've made this assumption) suggests a lack of intent to honestly inform EAs.
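For context, and assuming the standard definition (the post doesn't spell it out): NPS is derived from a 0-10 "how likely are you to recommend this?" question by grouping respondents into promoters (9-10), passives (7-8), and detractors (0-6), and then taking

NPS = (% promoters) - (% detractors)

so scores range from -100 to +100. For example, 60% promoters and 4% detractors would give +56, while equal shares of promoters and detractors would give 0.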

This post has other problems, too; it uses the NPS scoring system to analyze donors' and others' responses to the question:

How likely is it that your donation to EA Funds will do more good in expectation than where you would have donated otherwise?

The NPS scoring system was never intended to be used to evaluate responses to this question, so perhaps that makes it insignificant that an NPS score of 0 for this question just misses the mark of being "felt to be good" in industry. Worse, the post mentions that this result

could merely represent healthy skepticism of a new project or it could indicate that donors are enthusiastic about features other than the impact of donations to EA Funds.

It seems to me that including only positive (or strongly positive-sounding) interpretations of this result is incorrect and misleadingly optimistic. I'd agree that it's a good idea not to "take NPS too seriously", though in this case, I wouldn't say that the benefit of using NPS in the first place outweighed the cost incurred by the resulting incorrect suggestion that there was a respectable amount of quantitative support for the conclusions drawn in this post.

I'm disappointed that I was able to point out so many things I wish the author had done better in this document. If there had only been a couple of errors, it would have been plausibly deniable that anything fishy was going on here. But with as many errors as I've pointed out, which all point in the direction of making EA Funds look better than it is, things don't look good. Things don't look good regarding how well this project has been received, but that's not the larger problem here. The larger problem is that this post decreases how much I am willing to trust communications made on behalf of EA Funds in particular, and communications made by CEA staff more generally.

Writing this made me cry, a little. It's late, and I should have gone to bed hours ago, but instead, here I am being filled with sad determination and horror that it feels like I can't trust anyone I haven't personally vetted to communicate honestly with me. In Effective Altruism, honesty used to mean something, consequentialism used to come with integrity, and we used to be able to work together to do the most good we could.

Some days, I like to quietly smile to myself and wonder if we might be able to take that back.

Comment author: AGB 22 April 2017 12:51:34PM 15 points [-]

Things don't look good regarding how well this project has been received

I know you say that this isn't the main point you're making, but I think it's the hidden assumption behind some of your other points and it was a surprise to read this. Will's post introducing the EA funds is the 4th most upvoted post of all time on this forum. Most of the top rated comments on his post, including at least one which you link to as raising concerns, say that they are positive about the idea. Kerry then presented some survey data in this post. All those measures of support are kind of fuzzy and prone to weird biases, but putting it all together I find it much more likely than not that the community is as-a-whole positive about the funds. An alternative and more concrete angle would be money received into the funds, which was just shy of CEA's target of $1m.

Given all that, what would 'well-received' look like in your view?

If you think the community is generally making a mistake in being supportive of the EA funds, that's fine and obviously you can/should make arguments to that effect. But if you are making the empirical claim that the community is not supportive, I want to know why you think that.
