Comment author: RyanCarey 03 July 2017 03:56:54AM *  12 points

Thanks Julia!

I would like to add my thanks to Ali Woodman and Rebecca Raible, who did much of the moderation over the last couple of years, as well as Dot Impact, Trike and the rest of the previous moderators. My perspective is that since I've moved toward research and CEA has grown, it no longer makes sense for me to dedicate my time to continuing to manage the forum. So I'm grateful for CEA's takeover. Of course, I'm still happy to consult if you need help understanding how the forum has run, or thinking about its strategy.

Thanks all and long live effective altruism! ;)

Ryan

Comment author: arunbharatula 24 May 2017 04:24:04AM *  0 points

I try to hyperlink those parts of my writing that are evidenced by a particular source. This avoids the issue that arises in academic writing where it can be unclear which claims a citation relates to. There is a trade-off with the visual appeal of the writing, particularly since my fix for the aforementioned issue is unconventional. However, I believe the gain in precision outweighs the stylistic considerations.

Edit: In light of the downvotes and various comments on my pieces recommending I rework my contributions and suggesting they may be misleading, I am taking down my work until such time as I can edit it. Hope this improves things. Thanks for the tips.

Comment author: RyanCarey 24 May 2017 08:04:38AM 1 point

The greater ambiguity, I think, is in which part of the linked document you're citing. If you want to resolve ambiguity, then use footnotes and quote the relevant parts of the sources.

Comment author: RyanCarey 22 May 2017 07:30:42PM 5 points

Just personally for me, I would find this easier to read if you linked just a word or a couple of words at a time, rather than a whole paragraph.

Comment author: RyanCarey 06 May 2017 06:42:01AM *  3 points

More than half of the time, people who have a psychotic episode will have already had one before. I think the same is true of mania. The incidence for a first episode of psychosis is fairly low, about 0.03% per year [1].

[1] "Over the 8-year period May 1995–April 2003, there were 194 cases of any DSM-IV psychotic illness (117 male, 77 female; Table 2). The annual incidence of “all psychoses” was 31.6/100,000 aged >15, this being higher in males (37.2) than in females (25.7; risk ratio [RR] = 1.44 [95% CI 1.08, 1.93], p < .02; Table 3)."

https://academic.oup.com/schizophreniabulletin/article/31/3/624/1894444/Epidemiology-of-First-Episode-Psychosis
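For concreteness, the quoted rate converts directly to the figure above (a quick sanity check, nothing beyond the paper's own number):

```python
# Sanity check of the quoted figure: 31.6 cases per 100,000 person-years.
annual_incidence = 31.6 / 100_000
print(f"{annual_incidence:.2%} per year")  # -> 0.03% per year
```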

Comment author: RyanCarey 05 May 2017 05:40:10AM *  3 points

A clear problem with this model is that AFAICT, it assumes that (i) the size of the research community working on safety when AI is developed is independent of (ii) the degree to which adding a researcher now will change the total number of researchers.

Both (i) and (ii) can vary by orders of magnitude, at least on my model, but they are strongly correlated, because both depend on timelines. This means I get an oddly high chance of averting existential risk. If the questions were combined into one, "by what fraction will the AI community be enlarged by adding an extra person?", then I think my chance of averting existential risk would come out much lower.
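As a toy illustration (a minimal sketch with invented numbers and functional forms, not the model under discussion): when both quantities are driven by the same uncertain timeline, estimating them independently and multiplying inflates the answer relative to asking the combined question within each sampled scenario.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Toy assumption: AI timelines vary over orders of magnitude.
t = rng.lognormal(mean=3.0, sigma=1.0, size=n)  # years until AI (hypothetical)

# Both quantities depend on the timeline, so they are strongly correlated:
size_at_ai = 50.0 * t   # (i) safety-community size when AI arrives
added = 1.0 + 0.5 * t   # (ii) researcher-equivalents one extra hire contributes by then

# Combined question: "by what fraction is the community enlarged?"
combined = np.mean(added / size_at_ai)

# Treating (i) and (ii) as independent, as the model appears to:
independent = np.mean(added) * np.mean(1.0 / size_at_ai)

print(f"combined:    {combined:.4f}")     # ~0.012
print(f"independent: {independent:.4f}")  # ~0.029, inflated by ignoring the correlation
```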


Informatica: Special Issue on Superintelligence

A special issue on Superintelligence is coming up at the journal Informatica. The call for proposals is given below. We would welcome submissions from a range of perspectives, including philosophical and other fields that effective altruists may work in.

Introduction: Since the inception of the field of artificial...
Comment author: Owen_Cotton-Barratt 25 April 2017 02:59:15PM 4 points

The fact that sometimes people's estimates of impact are subsequently revised down by several orders of magnitude seems like strong evidence against evidence being normally distributed around the truth. I expect that if anything it is broader than lognormally distributed. I also think that extra pieces of evidence are likely to be somewhat correlated in their error, although it's not obvious how best to model that.

Comment author: RyanCarey 25 April 2017 06:19:18PM 3 points

"I expect that if anything it is broader than lognormally distributed."

It might depend what we're using the model for.

In general, it does seem reasonable that direct (expected) net impact of interventions should be broader than lognormal, as Carl argued in 2011. On the other hand, it seems like the expected net impact all things considered shouldn't be broader than lognormal. For one argument, most charities probably funge against each other by at least 1/10^6. For another, you can imagine that funding global health improves the quality of research a bit, which does a bit of the work that you'd have wanted done by funding a research charity. These kinds of indirect effects are hard to map. Maybe people should think more about them.
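A minimal sketch of the funging point, with invented numbers: if every charity indirectly captures at least a 1/10^6 share of the best charity's impact, then all-things-considered impact ratios are capped near 10^6, however extreme the direct estimates are, so the distribution of total impacts cannot be broader than that.

```python
# Hypothetical direct impacts spanning many orders of magnitude.
direct = [1e9, 1e3, 1.0, 1e-4]
best = max(direct)
funge = 1e-6  # assumed minimum mutual funging between charities

# All-things-considered impact: direct impact plus the funged share of the best.
total = [d + funge * best for d in direct]

ratios = [best / t for t in total]
print(ratios)  # no ratio exceeds ~1/funge = 1e6, whatever the direct spread
```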

AFAICT, the basic thing for a post like this one to get right is to compare apples with apples. Tom is trying to evaluate various charities, of which some are evaluators. If he's evaluating the other charities on direct estimates, and is not smoothing the results over by assuming indirect effects, then he should use a broader than lognormal assumption for the evaluators too (and they will be competitive). If he's taking into account that each of the other charities will indirectly support the cause of one another (or at least the best ones will), then he should assume the same for the charity evaluators.

I could be wrong about some of this. A couple of final remarks: it gets more confusing if you think lots of charities have negative value, e.g. because of the value of technological progress. Also, all of this makes me think that if you're so convinced that flow-through effects give many charities astronomical benefits, perhaps you ought to be studying those effects intensely and directly, although I admit that seems counterintuitive compared with working directly on problems of known astronomical importance.

Comment author: Kerry_Vaughan 23 April 2017 07:01:03PM 2 points

"Kerry can confirm or deny but I think he's referring to the fact that a bunch of people were surprised to see GWWC start recommending the EA Funds and close down the GWWC Trust recently, when CEA hadn't actually officially given the funds a 'green light' yet. (This is one example; I'm not sure if there were other cases.)"

Correct. We had updated in favor of EA Funds internally but hadn't communicated that fact in public. When we started linking to EA Funds on the GWWC website, people were justifiably confused.

"I'm concerned with the framing that you updated towards it being correct for EA Funds to persist past the three-month trial period. If there was support to start out with, and you mostly didn't gather more support later on relative to what one would expect, then your prior on whether EA Funds is well received should be stronger, but you shouldn't update in favor of it being well received based on more recent data."

The money moved is the strongest new data point.

It seemed quite plausible to me that we could have the community be largely supportive of the idea of EA Funds without actually using the product. This is more or less what happened with EA Ventures -- lots of people thought it was a good idea, but not many promising projects showed up and not many funders actually donated to the projects we happened to find.

Do you feel that the post as currently written still overhypes the community's perception of the project? If so, what changes would you suggest to bring it more in line with the observable evidence?

Comment author: RyanCarey 25 April 2017 05:19:29PM 0 points

"This is more or less what happened with EA Ventures -- lots of people thought it was a good idea, but not many promising projects showed up and not many funders actually donated to the projects we happened to find."

It seems like the character of the EA movement needs to be improved somehow (probably, as always, there are also marginal improvements to be made to the implementation), but especially the character of the movement, because arguably, if EA could spawn many projects, its impact would be increased many-fold.

Comment author: Owen_Cotton-Barratt 23 April 2017 09:10:49AM *  8 points

Ryan, I substantially disagree and actually think all of your suggested alternatives are worse. The original is reporting on a response to the writing, not staking out a claim to an objective assessment of it.

I think that reporting honest responses is one of the best tools we have for dealing with emotional inferential gaps -- particularly if it's made explicit that this is a function of the reader and writing, and not the writing alone.

Comment author: RyanCarey 24 April 2017 09:30:24AM *  7 points

I've discussed this with Owen a bit further. How emotions relate to norms of discourse is a tricky topic, but I personally think many people would agree on the following pointers going forward (not addressed to Fluttershy in particular):

Dos:

  • flag your emotions when they are relevant to the discussion, e.g. "I became sick of redrafting this post, so please excuse me if it comes across as grumpy", or "These research problems seem hard and I'm unmotivated to try to work more on them".
  • discuss emotional issues relevant to many EAs

Don'ts:

  • use emotion as a rhetorical boost for your arguments (appeal to emotion)
  • mix arguments together with calls for social support
  • mix arguments with personal emotional information that would make an EA (or regular) audience uncomfortable.

Of course, if you want to engage emotionally with specific people, you can use private messages.

Comment author: RyanCarey 23 April 2017 11:08:18PM *  19 points

Some feedback on your feedback (I've only quickly read your post once, so take it with a grain of salt):

  • I think that this is more discursive than it needs to be. AFAICT, you're basically arguing that decision-making and trust in the EA movement are over-concentrated in OpenPhil.
  • If it was a bit shorter, then it would also be easier to run it by someone involved with OpenPhil, which prima facie would be at least worth trying, in order to correct any factual errors.
  • It's hard to do good criticism, but starting out with long explanations of confidence games and Ponzi schemes is not something that makes the criticism likely to be well-received. You assert that these things are not necessarily bad, so why not just zero in on the thing that you think is bad in this case?
  • So maybe this could have been split into two posts?
  • Maybe there are more upsides to having somewhat concentrated decision-making than you let on? Perhaps cause prioritization will be better? Since EA Funds is a movement-wide scheme, perhaps reputational trust is extra important here, and the diversification would come from elsewhere? Perhaps the best decision-makers will naturally come to work on this full-time.

You may still be right, though I would want some more balanced analysis.
