Comment author: Kerry_Vaughan 23 April 2017 07:01:03PM 2 points [-]

Kerry can confirm or deny, but I think he's referring to the fact that a bunch of people were surprised to see GWWC start recommending the EA Funds and closing down the GWWC trust recently (not sure if there were other cases) when CEA hadn't actually officially given the funds a 'green light' yet.

Correct. We had updated in favor of EA Funds internally but hadn't communicated that fact in public. When we started linking to EA Funds on the GWWC website, people were justifiably confused.

I'm concerned with the framing that you updated towards it being correct for EA Funds to persist past the three-month trial period. If there was support to start out with and you mostly didn't gather more support later on relative to what one would expect, then your prior on whether EA Funds is well received should be stronger, but you shouldn't update in favor of it being well received based on more recent data.

The money moved is the strongest new data point.

It seemed quite plausible to me that we could have the community be largely supportive of the idea of EA Funds without actually using the product. This is more or less what happened with EA Ventures -- lots of people thought it was a good idea, but not many promising projects showed up and not many funders actually donated to the projects we happened to find.

Do you feel that the post as currently written still overhypes the community's perception of the project? If so, what changes would you suggest to bring it more in line with the observable evidence?

Comment author: RyanCarey 25 April 2017 05:19:29PM 0 points [-]

This is more or less what happened with EA Ventures -- lots of people thought it was a good idea, but not many promising projects showed up and not many funders actually donated to the projects we happened to find.

It seems like the character of the EA movement especially needs to be improved somehow (though, as always, there are probably marginal improvements to be made to the implementation too), because arguably if EA could spawn many projects, its impact would increase many-fold.

Comment author: Owen_Cotton-Barratt 23 April 2017 09:10:49AM *  8 points [-]

Ryan, I substantially disagree and actually think all of your suggested alternatives are worse. The original is reporting on a response to the writing, not staking out a claim to an objective assessment of it.

I think that reporting honest responses is one of the best tools we have for dealing with emotional inferential gaps -- particularly if it's made explicit that this is a function of the reader and writing, and not the writing alone.

Comment author: RyanCarey 24 April 2017 09:30:24AM *  7 points [-]

I've discussed this with Owen a bit further. How emotions relate to norms of discourse is a tricky topic, but I personally think many people would agree on the following pointers going forward (not addressed to Fluttershy in particular):

Dos:

  • flag your emotions when they are relevant to the discussion. e.g. "I became sick of redrafting this post so please excuse if it comes across as grumpy", or "These research problems seem hard and I'm unmotivated to try to work more on them".
  • discuss emotional issues relevant to many EAs

Don'ts:

  • use emotion as a rhetorical boost for your arguments (appeal to emotion)
  • mix arguments together with calls for social support
  • mix arguments with personal emotional information that would make an EA (or regular) audience uncomfortable.

Of course, if you want to engage emotionally with specific people, you can use private messages.

Comment author: RyanCarey 23 April 2017 11:08:18PM *  19 points [-]

Some feedback on your feedback (I've only quickly read your post once, so take it with a grain of salt):

  • I think that this is more discursive than it needs to be. AFAICT, you're basically arguing that you think that decision-making and trust in the EA movement are over-concentrated in OpenPhil.
  • If it was a bit shorter, then it would also be easier to run it by someone involved with OpenPhil, which prima facie would be at least worth trying, in order to correct any factual errors.
  • It's hard to do good criticism, but starting out with long explanations of confidence games and Ponzi schemes is not something that makes the criticism likely to be well-received. You assert that these things are not necessarily bad, so why not just zero in on the thing that you think is bad in this case?
  • So maybe this could have been split into two posts?
  • Maybe there are more upsides of having somewhat concentrated decision-making than you let on? Perhaps cause prioritization will be better? Since EA Funds is a movement-wide scheme, perhaps reputational trust is extra important here, and the diversification would come from elsewhere? Perhaps the best decision-makers will naturally come to work on this full-time.

You may still be right, though I would want some more balanced analysis.

Comment author: Fluttershy 21 April 2017 11:23:52AM *  1 point [-]

I appreciate that the post has been improved a couple times since the criticisms below were written.

A few of you were diligent enough to beat me to saying much of this, but:

Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.

This seems false, based on these replies. The author of this post replied to the majority of those comments, which means he's aware that many people have in fact raised concerns about things other than communication and EA Funds' website. To his credit, someone added a paragraph acknowledging that these concerns had been raised elsewhere, on the pages for the EA community fund and the animal welfare fund. Unfortunately, though, these concerns were never mentioned in this post. There are a number of people who would like to hear about any progress that's been made since the discussion that happened on this thread regarding the problems of 1) how to address conflicts of interest given how many of the fund managers are tied into e.g. OPP, and 2) how centralizing funding allocation (rather than making people who aren't OPP staff into Fund Managers) narrows the range of new information about effective opportunities that reaches the EA Funds' Fund Managers.

I've spoken with a couple of EAs in person who have mentioned that making the claim that "EA Funds are likely to be at least as good as OPP's last dollar" is harmful. In this post, it's certainly worded in a way that implies very strong belief, which, given how popular consequentialism is around here, would be likely to make certain sorts of people feel bad for not donating to EA Funds instead of whatever else they might donate to counterfactually. This is the same sort of effect people get from looking at this sort of advertising, but more subtle, since it's less obvious on a gut level that this slogan half-implies that the reader is morally bad for not donating. Using this slogan could be net negative even without considering that it might make EAs feel bad about themselves, if, say, individual EAs had information about giving opportunities that were more effective than EA Funds, but donated to EA Funds anyway out of a sense of pressure caused by the "at least as good as OPP" slogan.

More immediately, I have negative feelings about how this post used the Net Promoter Score to evaluate the reception of EA Funds. First, it mentions that EA Funds "received an NPS of +56 (which is generally considered excellent according to the NPS Wikipedia page)." But the first sentence of the Wikipedia page for NPS, which I'm sure the author read given that he linked to it, states that NPS is "a management tool that can be used to gauge the loyalty of a firm's customer relationships" (emphasis mine). However, EA Funds isn't a firm. My view is that implicitly assuming that, as a nonprofit (or something socially equivalent), your score on a metric intended to judge how satisfied a for-profit company's customers are can be compared side by side with the scores received by for-profit firms (and then neglecting to mention that you've made this assumption) betrays a lack of intent to honestly inform EAs.

This post has other problems, too; it uses the NPS scoring system to analyze donors' and others' responses to the question:

How likely is it that your donation to EA Funds will do more good in expectation than where you would have donated otherwise?

The NPS scoring system was never intended to be used to evaluate responses to this question, so perhaps that makes it insignificant that an NPS score of 0 for this question just misses the mark of being "felt to be good" in industry. Worse, the post mentions that this result

could merely represent healthy skepticism of a new project or it could indicate that donors are enthusiastic about features other than the impact of donations to EA Funds.

It seems to me that including only positive (or strongly positive-sounding) interpretations of this result is incorrect and misleadingly optimistic. I'd agree that it's a good idea to not "take NPS too seriously", though in this case, I wouldn't say that the benefit of using NPS in the first place outweighed the cost of the resulting incorrect suggestion that there was a respectable amount of quantitative support for the conclusions drawn in this post.
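(As a point of reference for readers unfamiliar with the metric: an NPS figure like the +56 or 0 quoted above comes from bucketing 0–10 survey responses into promoters and detractors and taking the difference in their shares. The Python sketch below uses made-up responses purely for illustration; only the 9–10 promoter and 0–6 detractor bands are standard NPS convention.)

    # Minimal sketch of an NPS calculation from 0-10 survey responses.
    # The response list is invented for illustration; only the banding is standard.
    def net_promoter_score(responses):
        """Return NPS as a whole number between -100 and +100."""
        promoters = sum(1 for r in responses if r >= 9)   # 9-10: promoters
        detractors = sum(1 for r in responses if r <= 6)  # 0-6: detractors
        return round(100 * (promoters - detractors) / len(responses))

    example = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10]  # hypothetical survey answers
    print(net_promoter_score(example))  # 5 promoters, 2 detractors -> +30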

I'm disappointed that I was able to point out so many things I wish the author had done better in this document. If there had only been a couple of errors, it would have been plausibly deniable that anything fishy was going on here. But with as many errors as I've pointed out, which all point in the direction of making EA Funds look better than it is, things don't look good. Things don't look good regarding how well this project has been received, but that's not the larger problem here. The larger problem is that things don't look good because this post decreases how much I am willing to trust communications made on behalf of EA Funds in particular, and communications made by CEA staff more generally.

Writing this made me cry, a little. It's late, and I should have gone to bed hours ago, but instead, here I am being filled with sad determination and horror that it feels like I can't trust anyone I haven't personally vetted to communicate honestly with me. In Effective Altruism, honesty used to mean something, consequentialism used to come with integrity, and we used to be able to work together to do the most good we could.

Some days, I like to quietly smile to myself and wonder if we might be able to take that back.

Comment author: RyanCarey 22 April 2017 09:13:00PM 2 points [-]

Writing this made me cry, a little. It's late, and I should have gone to bed hours ago, but instead, here I am being filled with sad determination and horror that it feels like I can't trust anyone I haven't personally vetted to communicate honestly with me.

There are a range of reasons that this is not really an appropriate way to communicate. It's socially inappropriate, it could be interpreted as emotional blackmail, and it could encourage trolling.

It's a shame you've been upset. Still, one can call others' writing upsetting, immoral, mean-spirited, etc etc etc - there is a lot of leeway to make other reasonable conversational moves.

Comment author: MichaelPlant 18 April 2017 09:02:52PM 0 points [-]

I'm not sure I understand your point, but I think you're being a bit harsh. I would have thought floating this on the EA forum as a potential suggestion (rather than a fait accompli) is exactly consulting others to see if it's a good idea. If the EA forum weren't (as far as I can tell) just filled with EAs, I'd agree.

Also, I think it's unhelpful in turn to tell other people they're effectively stupid for floating ideas, as that 1) discourages people from sharing their views, which restricts debate only to the bold, and 2) makes people feel unwelcome.

Comment author: RyanCarey 19 April 2017 02:45:52AM 0 points [-]

You could argue that this particular post was net helpful (though I would disagree). The point I'm making, though, is that in general, people should consult others before posting things that can cause reputational damage on the public internet and that our social convention for such will need to be strong enough to counteract the unilateralist's curse.

Comment author: RyanCarey 18 April 2017 02:01:16AM *  3 points [-]

In general, if you're considering taking some arguably seedy action that carries collective risks, and you see that everyone else has been avoiding the action, you should guess that you've underestimated the magnitude of these risks. It's called the Unilateralist's Curse.

In this case, the reputational risks that you've incurred seem to make this a pretty unhelpful post.

The standard way to ward off the unilateralist's curse is to consult others who bear the risk but who hold different views and assumptions in order to help you to make a less biased assessment.

For this post and in general, people should consult others before writing potentially risky posts.

Comment author: RobBensinger 05 April 2017 05:53:14PM *  0 points [-]

Could you or Will provide an example of a source that explicitly uses "GCR" and "xrisk" in such a way that there are non-GCR xrisks? You say this is the most common operationalization, but I'm only finding examples that treat xrisk as a subset of GCR, as the Bostrom quote above does.

Comment author: RyanCarey 14 April 2017 02:58:47AM *  1 point [-]

You're right; it looks like most written texts, especially more formal ones, give definitions where x-risks are equal to GCRs or a strict subset of them. We should probably just try to roll that out to informal discussions and operationalisations too.

"Definition: Global Catastrophic Risk – risk of events or processes that would lead to the deaths of approximately a tenth of the world’s population, or have a comparable impact." GCR Report

"A global catastrophic risk is a hypothetical future event that has the potential to damage human well-being on a global scale." - Wiki

"Global catastrophic risk (GCR) is the risk of events large enough to significantly harm or even destroy human civilization at the global scale." GCRI

"These represent global catastrophic risks - events that might kill a tenth of the world’s population." - HuffPo

Comment author: Ben_Todd 06 April 2017 08:10:47PM 3 points [-]

Just a quick aside: currently the mean individual income for a US college grad is about $77,000. If you have a kid, that's a bit lower, and these are 2016 figures, which makes them a bit higher. Still, I think upper middle class implies higher earning than the mean college grad.

See footnote 2 here: https://80000hours.org/career-guide/job-satisfaction/

I think of 'upper middle class' as jobs like doctor, finance, corporate management. The means here are quite a bit higher, e.g. the mean income of doctors in the US is over $200k.

Comment author: RyanCarey 07 April 2017 07:28:58AM 6 points [-]

In my experience, what the Brits humbly call "upper middle class" is what Aussies would call upper class.

Comment author: William_MacAskill 04 April 2017 07:00:51PM 0 points [-]

"counts as an xrisk (and therefore as a GCR)"

My understanding: GCR = (something like) risk of major catastrophe that kills 100mn+ people

(I think the GCR book defines it as risk of 10mn+ deaths, but that seemed too low to me).

So, as I was using the term, something being an x-risk does not entail it being a GCR. I'd count 'Humanity's moral progress stagnates or we otherwise end up with the wrong values' as an x-risk but not a GCR.

Interesting (/worrying!) how we're understanding widely-used terms so differently.

Comment author: RyanCarey 04 April 2017 11:45:44PM 0 points [-]

Agree that that's the most common operationalization of a GCR. It's a bit inelegant for GCR not to include all x-risks though, especially given that the two terms are used interchangeably within EA.

It would be odd if the onset of a permanently miserable dictatorship didn't count as a global catastrophe because no lives were lost.

Comment author: Askell 28 March 2017 09:56:44AM *  4 points [-]

There are two different claims here: one is "type x research is not very useful" and the other is "we should be doing more type y research at the margin". In the comment above, you seem to be defending the latter, but your earlier comments support the former. I don't think we necessarily disagree on the latter claim (perhaps on how to divide x from y, and the optimal proportion of x and y, but not on the core claim).

But note that the second claim is somewhat tangential to the original post. If type x research is valuable, then even though we might want more type y research at the margin, this isn't a consideration against a particular instance of type x research. Of course, if type x research is (in general or in this instance) not very useful, then this is of direct relevance to a post that is an instance of type x research. It seems important not to conflate these, or to move from a defense of the former to a defense of the latter.

Above, you acknowledge that type x research can be valuable, so you don't hold the general claim that type x research isn't useful. I think you do hold the view that either this particular instance of research or this subclass of type x research is not useful. I think that's fine, but I think it's important not to frame this as merely a disagreement about what kinds of research should be done at the margin, since this is not the source of the disagreement.

Comment author: RyanCarey 28 March 2017 06:14:42PM 0 points [-]

Of course, if type x research is (in general or in this instance) not very useful, then this is of direct relevance to a post that is an instance of type x research. It seems important not to conflate these, or to move from a defense of the former to a defense of the latter.

You're imposing on my argument a structure that it didn't have. My argument is that prima facie, analysing the concepts of effectiveness is not the most useful work that is presently to be done. If you look at my original post, it's clear that it had a parallel argument structure: i) this post seems mostly not new, and ii) posts of this kind are over-invested. It was well-hedged and made lots of relative claims ("on the margin", "I am generally not very interested", etc.), so it's really weird to be repeatedly told that I was arguing something else.

I think that's fine, but I think it's important not to frame this as merely a disagreement about what kinds of research should be done at the margin, since this is not the source of the disagreement.

The general disagreement about whether philosophical analysis is under-invested is the source of about half of the disagreement. I've talked to Stefan and Ben, and I think that if I were convinced that philosophical analysis was prima facie under-invested atm, then I would view analysis of principles of effectiveness a fair bit more favorably. I could imagine that if they became fully convinced that practical work was much more neglected, then they might want to see more project proposals and literature reviews done too.
