One argument in favour of funding deworming, despite uncertain effects, is that 

  1. the expected value is high because the estimated effect size is slightly positive and the direct costs of deworming are low
  2. the quality of evidence is high (RCTs), at least compared to many other initiatives that EA funds

I think this overlooks a third key consideration in prioritising causes: the potential for ongoing evaluation, in terms of both the quality of that evaluation and the timescales on which it can happen.

 

Imagine that we have two interventions, A and B.

Both have the same estimated effect size: +0.05 (95% confidence interval: -0.25 to +0.35). The central estimate is positive, but there is a real chance the true effect is negative.

Both have the same quality of evidence: an effect size estimated from a meta-analysis of five RCTs.

But Intervention A has marginally greater expected value because it is cheaper to implement, while Intervention B would be much cheaper to evaluate through ongoing observational studies.

 

In this case, I think we should fund Intervention B, because of the value of being able to course-correct and update our estimate of its expected value as cheap new evidence comes in.
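To make this concrete, here is a minimal Monte Carlo sketch of the intuition. Only the effect-size prior comes from the numbers above; the costs, evaluation noise, and future budget are made-up illustrative values, and the model (a single normal-normal Bayesian update, then a keep-or-stop funding decision) is just one simple way of formalising "course-correcting":

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared prior over the true effect, from the numbers above:
# central estimate +0.05, 95% CI (-0.25, +0.35) => sd ~ 0.30 / 1.96.
PRIOR_MEAN = 0.05
PRIOR_SD = 0.30 / 1.96

# Illustrative assumptions (not from the post): costs expressed in the
# same "benefit units" as the effect, the noise of an ongoing
# observational evaluation, and the future spending its result could redirect.
IMPL_COST = {"A": 0.90, "B": 1.00}   # A is marginally cheaper to implement
EVAL_COST = {"A": 1.00, "B": 0.10}   # B is much cheaper to evaluate
EVAL_NOISE_SD = 0.10
FUTURE_BUDGET = 20.0

def value_of_information(n_sims: int = 500_000) -> float:
    """Expected gain from funding the future tranche based on the
    posterior mean (after the evaluation) rather than the prior mean."""
    true_effect = rng.normal(PRIOR_MEAN, PRIOR_SD, n_sims)
    observed = true_effect + rng.normal(0.0, EVAL_NOISE_SD, n_sims)
    # Standard normal-normal posterior update.
    post_var = 1.0 / (1.0 / PRIOR_SD**2 + 1.0 / EVAL_NOISE_SD**2)
    post_mean = post_var * (PRIOR_MEAN / PRIOR_SD**2 + observed / EVAL_NOISE_SD**2)
    # With evaluation: fund the future tranche only if the posterior is positive.
    with_eval = FUTURE_BUDGET * np.where(post_mean > 0, true_effect, 0.0).mean()
    # Without evaluation: the prior mean is positive, so fund unconditionally.
    without_eval = FUTURE_BUDGET * PRIOR_MEAN
    return with_eval - without_eval

voi = value_of_information()
print(f"Value of one round of evaluation: {voi:.2f}")
for name in ("A", "B"):
    # The expected direct benefit is identical for A and B, so we can
    # compare costs plus the (optional) net value of evaluating.
    score = -IMPL_COST[name] + max(voi - EVAL_COST[name], 0.0)
    print(f"Intervention {name}: relative expected value {score:.2f}")
```

With these illustrative numbers the evaluation is only worth its cost for B, and the resulting option value outweighs A's implementation-cost advantage; different numbers could of course flip the conclusion.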

 

If EA continues to fund deworming over the long term, but no further testing of effects via RCTs or monitoring of effects via observational studies occurs, there's a risk that millions could be spent sub-optimally.

I think EA should either: 

  1. also fund further evaluation of deworming, or
  2. prioritise over deworming those interventions with better expected value, better quality of evidence, or the potential for cheaper, higher-quality, or faster ongoing evaluation

Comments (3)

Nice. I think we could model this to see how ease/cost of evaluation interacts with other terms when assessing overall choice-worthiness. In your example the intuition sails through because A is only marginally cheaper to implement, while B is much cheaper to evaluate. I'd like to figure out precisely when lower evaluative costs outweigh lower implementation costs, and what that depends on (one simple version of that condition is sketched after this comment).

Your post is also akin to a preference for good feedback loops when evaluating projects, which some orgs value highly. 
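One simple version of that break-even condition, under the toy model sketched earlier in the post (so inheriting all of its illustrative assumptions): prefer the cheaper-to-evaluate Intervention B whenever its net information advantage exceeds its implementation-cost disadvantage,

$$\max(\mathrm{VOI} - C^{\mathrm{eval}}_B,\, 0) \;-\; \max(\mathrm{VOI} - C^{\mathrm{eval}}_A,\, 0) \;>\; C^{\mathrm{impl}}_B - C^{\mathrm{impl}}_A$$

where VOI is the expected value of the information an evaluation produces and the C terms are evaluation and implementation costs. When both evaluations are worth their cost, this reduces to comparing the evaluation-cost gap against the implementation-cost gap; when only B's is, A's implementation advantage has to beat B's net value of information.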

Yep, agree that this is similar to feedback loops, but I feel like people talking about feedback loops focus more on the timescales for evaluation, rather than on timescale, quality, and cost together.

It would be interesting to see work on how precisely we should trade off expected value, quality of evidence, and potential for ongoing evaluation.

I think it might make sense to divide quality of evidence into quality of existing evidence and potential for ongoing evaluation.

Neat idea. I think this is probably true.
