Comment author: RyanCarey 10 February 2017 08:11:49PM 13 points

Seems like a great idea!

Re Nick, I trust his analysis of charities, including meta-charities, a lot. But the conflict of interest does seem worth thinking about a bit. He is responsible for all 2-3 of the top EA-org grant-makers. From the point of view of redundancy, diverse criticism, and incentives, this is not so good.

If I were CEA, I'm not sure I would have much incentive to identify new good strategies: a lot of my expected funding in the next decade comes from Nick, and since most of the other funders are less thoughtful, he is really the only one I need to work to convince. And if I were Nick, I'm only one person, so there are limits to how much strategic thinking I can transmit, and to the degree to which I can force CEA to engage with other strategic thinkers. It's also hard to see how, if some of its projects failed, I would actually allow CEA to go unfunded.

How can we build in these incentives and selection pressures, and, on the object level, get better ideas into EA orgs? Diversifying funding would help, but mostly it seems like it would require CEA to care about this problem a lot and put in a lot of effort.

Comment author: Linch 09 February 2017 07:21:00AM * 2 points

I'm not sure how you're operationalizing the difference between unlikely and very unlikely, but I think we should not be able to make sizable updates from this data unless the prior is REALLY big.

(You probably already understand this, but other people might read your comment as suggesting something stronger than you actually intend, and this is a point that I really wanted to clarify anyway because I expect it to be a fairly common mistake.)

Roughly: Unsurprising conclusions from experiments with low sample sizes should not change your mind significantly, regardless of what your prior beliefs are.

This is true (mostly) regardless of the size of your prior. If a null result when you have a high prior wouldn't cause a large update downwards, then a null result on something when you have a low prior shouldn't cause a large shift downwards either.

[Math with made-up numbers below]

As mentioned earlier, the probability of observing 0 pledges from 14 contacts under each hypothesized per-person effect is:

  • 10% effect: 23%
  • 1% effect: 87%
  • 5% effect: 49%
  • 20% effect: 4.4%

Say your prior belief is that there's a 70% chance that talking to new people has no effect (or an effect close enough to zero that it doesn't matter), a 25% chance that it has a 1% effect, and a 5% chance that it has a 10% effect.

Then by Bayes' Theorem, your posterior probabilities should be:

  • 75.3% chance it has no effect
  • 23.4% chance it has a 1% effect
  • 1.24% chance it has a 10% effect

If, on the other hand, you originally believed that there's a 50% chance of it having no effect and a 50% chance of it having a 10% effect, then your posterior should be:

  • 81.3% chance it has no effect
  • 18.7% chance it has a 10% effect

Finally, if your prior is that the effect is at most very small, this study is far too underpowered to support any conclusion at all. For example, if you originally believed that there's a 70% chance of it having no effect and a 30% chance of it having a 0.1% effect, then your posterior should be:

  • 70.3% chance of no effect
  • 29.7% chance of a 0.1% effect
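For concreteness, here is a minimal Python sketch of the three updates above (the priors and effect sizes are the made-up numbers from the examples, not real data):

    # Posterior over hypothesized per-person pledge effects, after
    # observing 0 pledges from 14 contacts, via Bayes' Theorem.

    def posterior(priors, effects, n=14):
        """Update a discrete prior over effect sizes on a zero-pledge result."""
        # Likelihood of 0 pledges in n independent contacts: (1 - p) ** n.
        likelihoods = [(1 - p) ** n for p in effects]
        unnormalized = [pr * lk for pr, lk in zip(priors, likelihoods)]
        total = sum(unnormalized)
        return [u / total for u in unnormalized]

    # 70% no effect, 25% a 1% effect, 5% a 10% effect:
    print(posterior([0.70, 0.25, 0.05], [0.0, 0.01, 0.10]))  # ~[0.75, 0.23, 0.01]

    # 50/50 between no effect and a 10% effect:
    print(posterior([0.50, 0.50], [0.0, 0.10]))  # ~[0.81, 0.19]

    # 70% no effect, 30% a 0.1% effect:
    print(posterior([0.70, 0.30], [0.0, 0.001]))  # ~[0.70, 0.30]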

This is all assuming ideal conditions. Model uncertainty, and uncertainty about the quality of my experiment, should only decrease the size of your update, not increase it.

Do you agree here? If so, do you think I should rephrase the original post to make this clearer?

Comment author: RyanCarey 09 February 2017 07:02:04PM 0 points

I trust that you can explain Bayes' theorem; I'm just adding that we can now be fairly confident that the intervention has less than 10% effectiveness.

Comment author: RyanCarey 09 February 2017 03:12:55AM 0 points

You should not update significantly towards “casual outreach about EA is ineffective” or “outreach has a very low probability of success”, since the study is FAR too underpowered to detect even large effects. For example, if talking about GWWC to likely candidates has a 10% chance of making them take the pledge in the next 15-20 days, and the 14 people who were contacted are exactly representative of the pool of “likely candidates”, then we had a 0.9^14 ≈ 23% chance of getting 0 pledges.
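To make “FAR too underpowered” concrete, here is a quick illustrative calculation (a sketch that assumes independent contacts sharing a single per-person pledge probability):

    # Chance that an n=14 study observes zero pledges, for various
    # assumed per-person effects p: P(0 pledges) = (1 - p) ** n.

    n = 14
    for p in (0.01, 0.05, 0.10, 0.20):
        print(f"effect {p:.0%}: P(0 pledges) = {(1 - p) ** n:.1%}")

    # Smallest effect detectable with 80% power, i.e. an 80% chance
    # of seeing at least one pledge: solve (1 - p) ** n = 0.2.
    p_min = 1 - 0.2 ** (1 / n)
    print(f"minimum detectable effect at 80% power: {p_min:.1%}")  # ~10.9%

Even a 20% per-person effect would produce zero pledges about 4% of the time, and the study only has 80% power against effects of roughly 11% or more.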

Given that it was already unlikely that being put in contact with a GWWC member would have a 10% chance of making them take the pledge, we can now call it very unlikely.

Comment author: RyanCarey 05 February 2017 06:25:36PM 7 points

Yes! Probably when we think of Importance, Neglectedness, and Tractability, we should also consider informativeness!

Comment author: RyanCarey 04 February 2017 06:37:31AM * 2 points

I think that AI safety donors, and all those who seek to spread values with the intention of influencing the values guiding a singleton or technological transformation, should probably hold investments that are positively correlated with U.S. markets.

If you want to correlate with near-term AI development, you would buy GOOG (which is ~1% DeepMind).
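One common rationale for this (my gloss, not spelled out in the comment) is that AI-exposed stocks pay off most in exactly the worlds where a marginal AI-safety donation is plausibly most valuable. A toy sketch with entirely made-up numbers:

    # Toy mission-hedging sketch (illustrative only; all numbers invented).
    # Both portfolios have the same 18.5% expected return, so only the
    # correlation with AI progress differs.

    scenarios = {
        # name: (probability, assumed marginal value of a donated dollar)
        "fast AI progress": (0.3, 3.0),
        "slow AI progress": (0.7, 1.0),
    }

    returns = {
        "AI-correlated": {"fast AI progress": 0.50, "slow AI progress": 0.05},
        "uncorrelated": {"fast AI progress": 0.185, "slow AI progress": 0.185},
    }

    for name, r in returns.items():
        impact = sum(p * (1 + r[s]) * v for s, (p, v) in scenarios.items())
        print(f"{name}: expected donation impact = {impact:.3f}")
    # AI-correlated: 2.085 > uncorrelated: 1.896

The point is only directional: holding expected returns fixed, positive correlation between your wealth and the value of donating raises expected impact.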

Comment author: the_jaded_one 29 January 2017 07:57:56PM 0 points

Can you elaborate?

Comment author: RyanCarey 30 January 2017 04:54:41AM * 1 point

I agree that people should be allowed to give criticism without talking to the critiqued organizations first. Talking to them first does usually improve informativeness and persuasiveness, but if we required every critique to meet extremely high journalistic standards, then we would never get any criticism done, so we have a lower standard.

By this point, though, the thread has generated enough discussion that at least some people at OpenPhil are probably reading it. Still, you're effectively talking about them as though they're not in the room, even though they are. The fix is to email them a link, and to try to give arguments that you think they would appreciate as input into how they could improve their activities.

Comment author: the_jaded_one 29 January 2017 01:36:47PM * 3 points

"One way to resolve our initial skepticism would be to have a trusted expert in the field"

And in what field is Chloe Cockburn a "trusted expert"?

If we go by her Twitter, we might say something like "she is an expert left-wing, highly political, anti-Trump, pro-immigration activist".

Does that seem like a reasonable characterization of Chloe Cockburn's expertise to you?

Characterizing her as "trusted" seems pretty dishonest in this context. Imagine someone who has problems with EA and Cosecha, for example because they were worried about political bias in EA. Now imagine that they got to know her opinions and leanings, e.g. from her Twitter. They wouldn't "trust" her to make accurate calls about the effectiveness of donations to a left-wing, anti-Trump activist cause, because she is a left-wing, anti-Trump activist. She is almost as biased as it is possible to be on this issue: the exact opposite of the kind of person whose opinion you should trust. Of course, she may have good arguments, since those can come from biased people, but she is being touted as a "trusted expert", not as a biased expert with strong arguments, so that is what I am responding to.

Comment author: RyanCarey 29 January 2017 07:32:10PM 1 point

I'm all for criticising organizations without having your post vetted by them. But at some point, if you want your criticism to be useful, it helps to reach out and let them know about it, and it seems like you've now well passed that point.

Comment author: Daniel_Dewey 13 January 2017 03:32:29PM * 5 points

This is a great point -- thanks, Jacob!

I think I tend to expect more from people when they are critical -- i.e. I'm fine with a compliment/agreement that someone spent 2 minutes on, but expect critics to "do their homework", and if a complimenter and a critic were equally underinformed/unthoughtful, I'd judge the critic more harshly. This seems bad!

One response is "poorly thought-through criticism can spread through networks; even if it's responded to in one place, people cache and repeat it other places where it's not responded to, and that's harmful." This applies equally well to poorly thought-through compliments; maybe the unchallenged-compliment problem is even worse, because I have warm feelings about this community and its people and orgs!

Proposed responses (for me, though others could adopt them if they thought they're good ideas):

  • For now, assume that all critics are in good faith. (If we have / end up with a bad-critic problem, these responses need to be revised; I'll assume for now that the asymmetry of critique is a bigger problem.)
  • When responding to critiques, thank the critic in a sincere, non-fake way, especially when I disagree with the critique (e.g. "Though I'm about to respond with how I disagree, I appreciate you taking the critic's risk to help the community. Thank you! [response to critique]")
  • Agree or disagree with critiques in a straightforward way, instead of saying e.g. "you should have thought about this harder".
  • Couch compliments the way I would couch critiques.
  • Try to notice my disagreements with compliments, and comment on them if I disagree.

Thoughts?

Comment author: RyanCarey 13 January 2017 06:25:51PM 0 points

"Though I'm about to respond with how I disagree, I appreciate you taking the critic's risk to help the community. Thank you!"

Not sure how much this helps, because if the criticism is thoughtful and you fail to engage with it, you're still being rude and missing an opportunity, whether or not you say some magic words.

Comment author: jsteinhardt 12 January 2017 07:19:44PM 19 points

I strongly agree with the points Ben Hoffman has been making (mostly in the other threads) about the epistemic problems caused by holding criticism to a higher standard than praise. I also think that we should be fairly mindful that providing public criticism can have a high social cost to the person making the criticism, even though they are providing a public service.

There are definitely ways that Sarah could have improved her post. But that is basically always going to be true of any blog post unless one spends 20+ hours writing it.

I personally have a number of criticisms of EA (despite overall being a strong proponent of the movement) that I am fairly unlikely to share publicly, due to the following dynamic: anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile.

While I'm sympathetic to the fact that there's also a lot of low-quality / lazy criticism of EA, I don't think responses that involve setting a high bar for high-quality criticism are the right way to go.

(Note that I don't think that EA is worse than is typical in terms of accepting criticism, though I do think that there are other groups / organizations that substantially outperform EA, which provides an existence proof that one can do much better.)

Comment author: RyanCarey 12 January 2017 09:54:57PM 8 points

"I strongly agree with the points Ben Hoffman has been making (mostly in the other threads) about the epistemic problems caused by holding criticism to a higher standard than praise. I also think that we should be fairly mindful that providing public criticism can have a high social cost to the person making the criticism, even though they are providing a public service."

This is completely true.

"I personally have a number of criticisms of EA (despite overall being a strong proponent of the movement) that I am fairly unlikely to share publicly, due to the following dynamic: anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile."

There are at least a dozen people for whom this is true.

Comment author: LaurenMcG 10 January 2017 08:36:08PM 2 points

Has anyone calculated a rough estimate for the value of an undergraduate student's hour? Assume they attend a top UK university, currently are unemployed, and plan to pursue earning to give. Thanks in advance for any info or links!

Comment author: RyanCarey 11 January 2017 01:26:16AM 2 points

It's not an estimate, just some relevant argumentation, but see Katja's post here. Maybe $30-$150, but it would depend on a lot of factors, and I haven't thought about it very hard.
