Comment author: arunbharatula 24 May 2017 04:24:04AM 0 points [-]

I try to hyperlink those parts of my writing that are evidenced by a particular source. This avoids the issue that arises in academic writing where it can be unclear which claims a citation relates to. There is a trade-off with the visual appeal of the writing, particularly since my fix for the aforementioned issue is unconventional. However, I believe the gain in precision outweighs the stylistic considerations.

Comment author: RyanCarey 24 May 2017 08:04:38AM 1 point [-]

The greater ambiguity, I think, is in which part of the linked document you're citing. If you want to resolve ambiguity, then use footnotes and quote the relevant parts of the sources.

Comment author: RyanCarey 22 May 2017 07:30:42PM 5 points [-]

Just personally for me, I would find this easier to read if you linked just a word or a couple of words at a time, rather than a whole paragraph.

Comment author: RyanCarey 06 May 2017 06:42:01AM *  3 points [-]

More than half of the time, people who have a psychotic episode will have already had one before. I think the same is true of mania. The incidence of a first episode of psychosis is fairly low, about 0.03% (31.6 per 100,000) per year [1].

[1] "Over the 8-year period May 1995–April 2003, there were 194 cases of any DSM-IV psychotic illness (117 male, 77 female; Table 2). The annual incidence of “all psychoses” was 31.6/100,000 aged >15, this being higher in males (37.2) than in females (25.7; risk ratio [RR] = 1.44 [95% CI 1.08, 1.93], p < .02; Table 3)."

https://academic.oup.com/schizophreniabulletin/article/31/3/624/1894444/Epidemiology-of-First-Episode-Psychosis

Comment author: RyanCarey 05 May 2017 05:40:10AM *  3 points [-]

A clear problem with this model is that AFAICT, it assumes that (i) the size of the research community working on safety when AI is developed is independent of (ii) the degree to which adding a researcher now will change the total number of researchers.

Both (i) and (ii) can vary by orders of magnitude, at least on my model, but they are strongly correlated, because both depend on timelines. This means I get an oddly high chance of averting existential risk. If the questions were combined together into "by what fraction will the community be enlarged by adding an extra person?" then I think my chance of averting existential risk would come out much lower.
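A minimal Monte Carlo sketch of the point, with the timeline distribution and growth laws made up purely for illustration: when (i) and (ii) are driven by the same timeline draw, the expected fraction of the field you add comes out smaller than what you get by multiplying the two expectations as though they were independent.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical years until advanced AI -- the distribution is made up.
    t = rng.lognormal(mean=np.log(25), sigma=0.7, size=1_000_000)

    # (i) size of the safety field when AI arrives, and (ii) extra researchers
    # attributable to adding one person now.  Both grow with the same timeline
    # draw (toy growth laws), so they are strongly correlated.
    field_size = 100 * (t / 25) ** 2
    extra = t / 25

    frac_joint = np.mean(extra / field_size)               # correlation respected
    frac_indep = np.mean(extra) * np.mean(1 / field_size)  # independence assumed

    print(f"joint: {frac_joint:.4f}  independent: {frac_indep:.4f}")
    # On this toy setup, assuming independence overstates the expected fraction
    # of the field you add by roughly a factor of 2-3.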

Informatica: Special Issue on Superintelligence

A special issue on Superintelligence is coming up at the journal Informatica. The call for proposals is given below. We would welcome submissions from a range of perspectives, including philosophical and other fields that effective altruists may work in. […]
Comment author: Owen_Cotton-Barratt 25 April 2017 02:59:15PM 4 points [-]

The fact that sometimes people's estimates of impact are subsequently revised down by several orders of magnitude seems like strong evidence against evidence being normally distributed around the truth. I expect that if anything it is broader than lognormally distributed. I also think that extra pieces of evidence are likely to be somewhat correlated in their error, although it's not obvious how best to model that.

Comment author: RyanCarey 25 April 2017 06:19:18PM 3 points [-]

I expect that if anything it is broader than lognormally distributed.

It might depend on what we're using the model for.

In general, it does seem reasonable that direct (expected) net impact of interventions should be broader than lognormal, as Carl argued in 2011. On the other hand, it seems like the expected net impact all things considered shouldn't be broader than lognormal. For one argument, most charities probably funge against each other by at least 1/10^6. For another, you can imagine that funding global health improves the quality of research a bit, which does a bit of the work that you'd have wanted done by funding a research charity. These kinds of indirect effects are hard to map. Maybe people should think more about them.
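To illustrate the funging point with made-up numbers: if every charity funges against, or indirectly supports, the best charity by at least 1/10^6, then however heavy-tailed the direct estimates are, the all-things-considered spread between the best charity and the rest is capped at about six orders of magnitude. A toy sketch, with all distributions and constants hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Direct expected impacts: a scale mixture of lognormals, one simple way to
    # get a broader-than-lognormal tail.  All numbers are hypothetical.
    sigma = rng.uniform(1, 8, size=n)
    direct = np.exp(sigma * rng.standard_normal(n))

    # All-things-considered impacts: assume every charity funges against, or
    # indirectly supports, the best charity by at least a factor of 1e-6.
    best = direct.max()
    overall = np.maximum(direct, 1e-6 * best)

    print(f"direct:  max/median ratio ~ {best / np.median(direct):.1e}")
    print(f"overall: max/median ratio ~ {best / np.median(overall):.1e}")

With the funging floor in place, the second ratio is bounded at roughly 1e6 no matter how extreme the direct estimates get, which is what I mean by the all-things-considered distribution not being broader than lognormal.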

AFAICT, the basic thing for a post like this one to get right is to compare apples with apples. Tom is trying to evaluate various charities, of which some are evaluators. If he's evaluating the other charities on direct estimates, and is not smoothing the results over by assuming indirect effects, then he should use a broader than lognormal assumption for the evaluators too (and they will be competitive). If he's taking into account that each of the other charities will indirectly support the cause of one another (or at least the best ones will), then he should assume the same for the charity evaluators.

I could be wrong about some of this. A couple of final remarks: it gets more confusing if you think lots of charities have negative value e.g. because of the value of technological progress. Also, all of this makes me think that if you're so convinced that flow-through effects cause many charities to have astronomical benefits, perhaps you ought to be studying these effects intensely and directly, although that admittedly does seem counterintuitive to me, compared with working on problems of known astronomical importance directly.

Comment author: Kerry_Vaughan 23 April 2017 07:01:03PM 2 points [-]

Kerry can confirm or deny but I think he's referring to the fact that a bunch of people were surprised to see (e.g.? Not sure if there were other cases.) GWWC start recommending the EA funds and closing down the GWWC trust recently when CEA hadn't actually officially given the funds a 'green light' yet.

Correct. We had updated in favor of EA Funds internally but hadn't communicated that fact in public. When we started linking to EA Funds on the GWWC website, people were justifiably confused.

I'm concerned with the framing that you updated towards it being correct for EA Funds to persist past the three month trial period. If there was support to start out with and you mostly didn't gather more support later on relative to what one would expect, then your prior on whether EA Funds is well received should be stronger but you shouldn't update in favor of it being well received based on more recent data.

The money moved is the strongest new data point.

It seemed quite plausible to me that we could have the community be largely supportive of the idea of EA Funds without actually using the product. This is more or less what happened with EA Ventures -- lots of people thought it was a good idea, but not many promising projects showed up and not many funders actually donated to the projects we happened to find.

Do you feel that the post as currently written still overhypes the community's perception of the project? If so, what changes would you suggest to bring it more in line with the observable evidence?

Comment author: RyanCarey 25 April 2017 05:19:29PM 0 points [-]

This is more or less what happened with EA Ventures -- lots of people thought it was a good idea, but not many promising projects showed up and not many funders actually donated to the projects we happened to find.

It seems like the character of the EA movement itself needs to be improved somehow (though, as always, there are probably marginal improvements to be made to the implementation too), because arguably, if EA could spawn many projects, its impact would be increased many-fold.

Comment author: Owen_Cotton-Barratt 23 April 2017 09:10:49AM *  8 points [-]

Ryan, I substantially disagree and actually think all of your suggested alternatives are worse. The original is reporting on a response to the writing, not staking out a claim to an objective assessment of it.

I think that reporting honest responses is one of the best tools we have for dealing with emotional inferential gaps -- particularly if it's made explicit that this is a function of the reader and writing, and not the writing alone.

Comment author: RyanCarey 24 April 2017 09:30:24AM *  7 points [-]

I've discussed this with Owen a bit further. How emotions relate to norms of discourse is a tricky topic but I personally think many people would agree on the following pointers going forward (not addressed to Fluttershy in particular):

Dos:

  • flag your emotions when they are relevant to the discussion, e.g. "I became sick of redrafting this post, so please excuse me if it comes across as grumpy", or "These research problems seem hard and I'm unmotivated to try to work more on them".
  • discuss emotional issues relevant to many EAs

Don'ts:

  • use emotion as a rhetorical boost for your arguments (appeal to emotion)
  • mix arguments together with calls for social support
  • mix arguments with personal emotional information that would make an EA (or regular) audience uncomfortable.

Of course, if you want to engage emotionally with specific people, you can use private messages.

Comment author: RyanCarey 23 April 2017 11:08:18PM *  18 points [-]

Some feedback on your feedback (I've only quickly read your post once, so take it with a grain of salt):

  • I think that this is more discursive than it needs to be. AFAICT, you're basically arguing that decision-making and trust in the EA movement are over-concentrated in OpenPhil.
  • If it was a bit shorter, then it would also be easier to run it by someone involved with OpenPhil, which prima facie would be at least worth trying, in order to correct any factual errors.
  • It's hard to do good criticism, but starting out with long explanations of confidence games and Ponzi schemes is not something that makes the criticism likely to be well-received. You assert that these things are not necessarily bad, so why not just zero in on the thing that you think is bad in this case?
  • So maybe this could have been split into two posts?
  • Maybe there are more upsides to having somewhat concentrated decision-making than you let on? Perhaps cause prioritization will be better? Since EA Funds is a movement-wide scheme, perhaps reputational trust is extra important here, and the diversification would come from elsewhere? Perhaps the best decision-makers will naturally come to work on this full-time.

You may still be right, though I would want some more balanced analysis.

Comment author: Fluttershy 21 April 2017 11:23:52AM 1 point [-]

A few of you were diligent enough to beat me to saying much of this, but:

Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.

This seems false, based on these replies. The author of this post replied to the majority of those comments, which means he's aware that many people have in fact raised concerns about things other than communication and EA Funds' website. To his credit, someone added a paragraph acknowledging that these concerns had been raised elsewhere, in the pages for the EA community fund and the animal welfare fund. Unfortunately, though, these concerns were never mentioned in this post. There are a number of people who would like to hear about any progress that's been made since the discussion which happened on this thread regarding the problems of 1) how to address conflicts of interest given how many of the fund managers are tied into e.g. OPP, and 2) how centralizing funding allocation (rather than making people who aren't OPP staff into Fund Managers) narrows the range of new information about effective giving opportunities that reaches the EA Funds' Fund Managers.

I've spoken with a couple EAs in person who have mentioned that making the claim that "EA Funds are likely to be at least as good as OPP’s last dollar" is harmful. In this post, it's certainly worded in a way that implies very strong belief, which, given how popular consequentialism is around here, would be likely to make certain sorts of people feel bad for not donating to EA Funds instead of whatever else they might donate to counterfactually. This is the same sort of effect people get from looking at this sort of advertising, but more subtle, since it's less obvious on a gut level that this slogan half-implies that the reader is morally bad for not donating. Using this slogan could be net negative even without considering that it might make EAs feel bad about themselves, if, say, individual EAs had information about giving opportunities that were more effective than EA Funds, but donated to EA Funds anyways out of a sense of pressure caused by the "at least as good as OPP" slogan.

More immediately, I have negative feelings about how this post used the Net Promoter Score to evaluate the reception of EA Funds. First, it mentions that EA Funds "received an NPS of +56 (which is generally considered excellent according to the NPS Wikipedia page)." But the first sentence of the Wikipedia page for NPS, which I'm sure the author read at least the first line of given that he linked to it, states that NPS is "a management tool that can be used to gauge the loyalty of a firm's customer relationships" (emphasis mine). However, EA Funds isn't a firm. My view is that implicitly assuming that, as a nonprofit (or something socially equivalent), your score on a metric intended to judge how satisfied a for-profit company's customers are can be compared side by side with the scores received by for-profit firms (and then neglecting to mention that you've made this assumption) betrays a lack of intent to honestly inform EAs.
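For reference, NPS is computed from 0-10 responses to a "how likely are you to recommend" question: the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). A minimal sketch, with hypothetical responses:

    def net_promoter_score(ratings):
        """NPS = % promoters (9-10) minus % detractors (0-6), from 0-10 ratings."""
        promoters = sum(r >= 9 for r in ratings)
        detractors = sum(r <= 6 for r in ratings)
        return round(100 * (promoters - detractors) / len(ratings))

    # Hypothetical survey responses, for illustration only.
    print(net_promoter_score([10, 9, 9, 8, 7, 10, 6, 9, 10, 5]))  # -> 40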

This post has other problems, too; it uses the NPS scoring system to analyze donors' and others' responses to the question:

How likely is it that your donation to EA Funds will do more good in expectation than where you would have donated otherwise?

The NPS scoring system was never intended to be used to evaluate responses to this question, so perhaps that makes it insignificant that an NPS score of 0 for this question just misses the mark of being "felt to be good" in industry. Worse, the post mentions that this result

could merely represent healthy skepticism of a new project or it could indicate that donors are enthusiastic about features other than the impact of donations to EA Funds.

It seems to me that including only positive (or strongly positive-sounding) interpretations of this result is incorrect and misleadingly optimistic. I'd agree that it's a good idea to not "take NPS too seriously", though in this case, I wouldn't say that the benefit that came from using NPS in the first place outweighed the cost that was incurred by the resultant incorrect suggestion that we should feel there was a respectable amount of quantitative support for the conclusions drawn in this post.

I'm disappointed that I was able to point out so many things I wish the author had done better in this document. If there had only been a couple of errors, it would have been plausibly deniable that anything fishy was going on here. But with as many errors as I've pointed out, all of which point in the direction of making EA Funds look better than it is, things don't look good. Things don't look good regarding how well this project has been received, but that's not the larger problem here. The larger problem is that this post decreases how much I am willing to trust communications made on behalf of EA Funds in particular, and communications made by CEA staff more generally.

Writing this made me cry, a little. It's late, and I should have gone to bed hours ago, but instead, here I am being filled with sad determination and horror that it feels like I can't trust anyone I haven't personally vetted to communicate honestly with me. In Effective Altruism, honesty used to mean something, consequentialism used to come with integrity, and we used to be able to work together to do the most good we could.

Some days, I like to quietly smile to myself and wonder if we might be able to take that back.

Comment author: RyanCarey 22 April 2017 09:13:00PM 2 points [-]

Writing this made me cry, a little. It's late, and I should have gone to bed hours ago, but instead, here I am being filled with sad determination and horror that it feels like I can't trust anyone I haven't personally vetted to communicate honestly with me.

There are a range of reasons that this is not really an appropriate way to communicate. It's socially inappropriate, it could be interpreted as emotional blackmail, and it could encourage trolling.

It's a shame you've been upset. Still, one can call others' writing upsetting, immoral, mean-spirited, and so on -- there is a lot of leeway to make other reasonable conversational moves.
