I'm not sure how you're operationalizing the difference between unlikely and *very* unlikely, but I don't think we can make sizable updates from this data unless the hypothesized effect size is really large.

(You probably already understand this, but other people might read your comment as claiming something stronger than you actually mean, and this is a point I wanted to clarify anyway because I expect it to be a fairly common mistake.)

Roughly: Unsurprising conclusions from experiments with low sample sizes should not change your mind significantly, regardless of what your prior beliefs are.

This is true (mostly) regardless of the size of your prior. If a null result wouldn't cause a large downward update when you have a high prior, then it shouldn't cause a large downward shift when you have a low prior either.

[Math with made-up numbers below]

As mentioned earlier, the probability of observing this null result under each hypothesized effect size:

- 10% effect: 23%
- 1% effect: 87%
- 5% effect: 49%
- 20% effect: 4.4%

Say your prior belief is that there's a 70% chance of talking to new people having no effect (or meaningfully close enough to zero that it doesn't matter), a 25% chance that it has a 1% effect, and a 5% chance that it has a 10% effect.

Then by Bayes' Theorem, your posterior probability should be:

- 75.3% chance it has no effect
- 23.4% chance it has a 1% effect
- 1.24% chance it has a 10% effect

If, on the other hand, you originally believed that there's a 50% chance of it having no effect and a 50% chance of it having a 10% effect, then your posterior should be:

- 81.3% chance it has no effect
- 18.7% chance it has a 10% effect

Finally, if your prior is that the effect, if any, is quite small, this study is far too underpowered to support any conclusion at all. For example, if you originally believed that there's a 70% chance of it having no effect and a 30% chance of it having a 0.1% effect, then your posterior should be:

- 70.3% chance of no effect
- 29.7% chance of a 0.1% effect
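The scenarios above can be reproduced with a few lines of Python. One assumption not stated explicitly in my numbers: the no-effect hypothesis predicts this null result with probability ~1, which is what makes the posteriors come out as listed.

```python
def posterior(priors, likelihoods):
    """Bayes' theorem over a discrete set of hypotheses.

    priors[i] and likelihoods[i] belong to hypothesis i; the result is
    P(hypothesis i | observed data), normalized to sum to 1.
    """
    joint = [p * lk for p, lk in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Scenario 1: priors of 70% no effect, 25% a 1% effect, 5% a 10% effect.
# Likelihood of this null result: ~1.0 (no effect), 0.87 (1%), 0.23 (10%).
print(posterior([0.70, 0.25, 0.05], [1.0, 0.87, 0.23]))
# → roughly [0.753, 0.234, 0.012]

# Scenario 2: priors of 50% no effect, 50% a 10% effect.
print(posterior([0.50, 0.50], [1.0, 0.23]))
# → roughly [0.813, 0.187]
```

Note that the small updates fall out of the arithmetic directly: when the likelihoods are close to each other across hypotheses, the posterior stays close to the prior no matter where the prior started.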

This is all assuming ideal conditions. Model uncertainty, and uncertainty about the quality of my experiment, should only shrink the size of your update, not enlarge it.

Do you agree here? If so, do you think I should rephrase the original post to make this clearer?

Seems like a great idea!

Re Nick: I trust his analysis of charities, including meta-charities, a lot. But the conflict does seem worth thinking about. He is responsible for all of the top EA-org grant-makers (there are only 2-3 of them). From the point of view of redundancy, diverse criticism, and incentives, this is not so good.

If I were CEA, I'm not sure I'd have much incentive to identify new good strategies: a lot of my expected funding over the next decade comes from Nick, and since most of the other funders are less thoughtful, he is really the one I need to convince. And if I am Nick, I'm only one person, so there are limits to how much strategic thinking I can transmit, and to the degree to which I can force CEA to engage with other strategic thinkers. It's also hard to see how, if some of its projects failed, I would allow CEA to go unfunded.

How can we build these incentives and selection pressures, and, on the object level, get better ideas into EA orgs? Diversifying funding would help, but mostly it seems like this would require CEA to care about the problem a lot and put in a lot of effort.