Comment author: Ben_West  (EA Profile) 18 October 2017 01:59:12PM 1 point [-]

Do you know how they measured altruism? It seems like maybe they are using "altruism" as a synonym for the "relationships" questionnaire?

Comment author: Milan_Griffes 18 October 2017 10:18:54PM 1 point [-]

Update: I checked with the study author and he confirmed that "relationships" on p. 5 is the same as "social effects" in Table 5.

Comment author: Milan_Griffes 18 October 2017 02:33:38PM *  0 points [-]

I think the "altruism" measure is an aggregate of some of the "persisting effects questionnaire" questions. (p. 5)

Not sure if it maps directly to the relationships portion of that questionnaire, but I bet it does (all of the other categories on p. 5 cleanly map to results in Table 5, so by elimination "relationships" = "altruistic / positive social effects").

Comment author: Evan_Gaensbauer 03 October 2017 03:10:49AM *  3 points [-]

Presumably it's because they either don't think this sort of drug policy reform is a promising cause, or, more likely, they don't think an announcement for conferences exclusive to what is still only a minor cause in the effective altruism community justifies its own post on the EA Forum.

Based on our investigation so far, US drug policy reform appears to be an impactful and tractable cause area.

Some users might just not visit the Forum often enough to have heard of Enthea's work before, so you could edit the post and add some hyperlinks to your other posts on the EA Forum so everyone will know the context of this post.

Comment author: Milan_Griffes 03 October 2017 03:42:11PM *  1 point [-]

Thanks, that makes sense.

Re: impact, I added a link to our cost-effectiveness research.

Re: tractability – we have good reason to think that a ballot initiative would be tractable, but unfortunately we can't share the details publicly due to our arrangement with a partner.

Comment author: Milan_Griffes 03 October 2017 01:22:58AM 2 points [-]

Minor thing: it'd be helpful if people who downvoted commented with their reason why.

Comment author: MichaelPlant 16 September 2017 01:21:38PM 1 point [-]

Hello Milan!

Your model assumes that everyone in the UK who might benefit from treatment would seek treatment. Your model assumes that everyone who receives treatment would benefit from treatment.

FWIW, in my model I don't assume either of those things. I assume an average counterfactual effect (counter to no rescheduling) of 0.1 HALYs for the 10m in the UK affected by depression or anxiety, not that they all get treatment or everyone benefits from the treatment (to be fair, I specify this in an edit of 14/08/2017 and you might have read it beforehand).

I don't mention replicability, but then I am assuming the rescheduling only brings a slight improvement (in the latter, more optimistic estimate I discuss whether this might be higher than 0.1 HALYs). I also mention the confusing possibility that treating some people with psychedelics might free up health care resources for other treatments.

I don't include costs of treatment, as I'm assuming this is an EA-funded campaign where our job, and what the money goes to, is changing the law and then allowing normal health care distribution to occur in the new scenario (i.e. in the US = insurer pays, in UK = govt pays).

Hence, looking at your model, I'm not sure why you include the costs of treatment, unless you think EA funders are going to be paying for those too. Even if you do think this, we should really have two separate models: one for "cost of changing the law, assuming health practices then change accordingly" and another for "cost-effectiveness to EA funders of providing psychedelic therapy if it's available". As an aside, your model is really thorough, and I'm grateful to you for having put it together; good stuff!

This may also sound picky, but what we want to know is (1) what the most suitable model is for any given intervention, so if we're disagreeing with each other, we want to know why we're disagreeing, not just that we're disagreeing. Hence I was asking where and why you disagreed with my model.

You might reply that your model is separate (campaign lobbying in the UK vs. a ballot initiative and treatment funding in the US(?)), but we also want to know (2) whether some new intervention is more cost-effective than all other current interventions an EA could fund (on one or more moral theories). If it's not more cost-effective then, all things considered, it would be bad to fund it. That's why I also asked if, and why, you think your drug policy reform strategy is more cost-effective than the one I proposed.

As it stands, we are perhaps comparing apples and oranges: you seem to have bundled treatment in with a policy change, and assumed this policy change will almost certainly occur depending on the polling numbers. I've just looked at policy change and estimated how much we could spend on changing public/policy opinion while still being more cost-effective than AMF, assuming AMF is the current most cost-effective intervention. Hence we may need to get on the same page about this first.

Comment author: Milan_Griffes 16 September 2017 05:29:55PM *  0 points [-]

FWIW, in my model I don't assume either of those things. I assume an average counterfactual effect (counter to no rescheduling) of 0.1 HALYs for the 10m in the UK affected by depression or anxiety, not that they all get treatment or everyone benefits from the treatment (to be fair, I specify this in an edit of 14/08/2017 and you might have read it beforehand).

I see, thanks for clarifying. I think an average counterfactual effect of 0.1 HALY is very large: using the assumptions from our model, it implies a 1.20 HALY per-treatment improvement in people who try and respond to the treatment (0.1 average HALY / (0.57 people who seek treatment * 0.44 treatment-seekers who would try psilocybin treatment * 0.33 treatment-takers who respond to treatment)).

With a DALY weight for major depression of 0.65, this implies that 1 psilocybin treatment alleviates major depression for roughly 2 years, which is very optimistic. How are you deriving the 0.1 figure?
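As a rough sketch of that arithmetic, using the best-guess inputs quoted above (every number here is an assumption from the model, not an established figure):

```python
# Back-of-envelope check of the implied per-treatment effect, using the
# best-guess model inputs quoted above (all assumptions, not measured values).
avg_effect_halys = 0.1    # assumed average counterfactual effect per affected person
p_seek_treatment = 0.57   # fraction who seek treatment
p_try_psilocybin = 0.44   # fraction of treatment-seekers who would try psilocybin
p_respond = 0.33          # fraction of treatment-takers who respond

implied_per_responder = avg_effect_halys / (p_seek_treatment * p_try_psilocybin * p_respond)
print(round(implied_per_responder, 2))  # 1.21 -- the ~1.2 HALY per-treatment figure above

daly_weight_major_depression = 0.65
implied_years_remission = implied_per_responder / daly_weight_major_depression
print(round(implied_years_remission, 1))  # 1.9 -- i.e. roughly the "2 years" above
```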

I don't mention replicability, but then I am assuming the rescheduling only brings a slight improvement

As above, I don't think the assumed improvement is slight. We should definitely include a replicability adjustment as these effects are demonstrated in small-N pilot studies.

I'm not sure why you include the costs of treatment, unless you think EA funders are going to be paying for those too

From my comment further up the thread:

"You could think of this analysis as trying to model whether psychedelic treatments for mental health conditions would be cost-effective if they were available today. For example, consider a promising intervention that would entirely cure someone's depression for a year, but costs $10,000,000 per treatment. We probably wouldn't want to run a ballot initiative to increase access to such a intervention, as it wouldn't be cost-effective even if it were easily accessible."

My understanding is that most public health cost-effectiveness modeling includes all costs of treatment, regardless of who's paying.

That's why I also asked if, and why, you think your drug policy reform strategy is more cost-effective than the one I proposed.

I haven't yet thought enough about what strategy makes the most sense. Our model is designed to be largely strategy-agnostic, as most of the costs are costs-of-treatment.

assumed this policy change will almost certainly occur depending on the polling numbers.

Sort of. I think a lot of the tractability question here hinges on what the polling looks like, which is what we're planning to look into next.

Comment author: ThomasSittler 15 September 2017 08:34:52AM 0 points [-]

The well-being improvement estimates seem to come from small pilot studies with no control group, showing very large impacts. I don't have enough background to guess how large these impacts are relative to other known treatments or placebo. The smoking impacts come from Johnson et al. 2017 (N = 15), and the depression impacts come from Carhart-Harris et al. 2016 (N = 12).

Comment author: Milan_Griffes 15 September 2017 02:26:27PM *  0 points [-]

It's true that these effects all come from small-N pilot studies. Each effect size is discounted substantially by a replicability adjustment (best-guess input is an 80% discount).

Most of the studies considered didn't have a control group, though the PTSD study (Mithoefer et al. 2010) did. We included a placebo-effect adjustment for that study.

Interestingly, the depression study participants (Carhart-Harris et al. 2016) had all failed to respond to other depression treatments, so it's plausible that placebo effects were less strong in this population.

Similarly, most (all?) of the smoking study participants (Johnson et al. 2014) were longtime smokers who had made multiple unsuccessful attempts to quit in the past. It's plausible that placebo effects were less strong in this population as well.

Comment author: MichaelPlant 15 September 2017 08:53:03AM 0 points [-]

Hello Milan, and thanks for all this; good to see it's getting discussed.

As you probably saw, I produced a cost-effectiveness model of drug policy campaigning in the final post of my (rather long) series on the subject. In that, I suggest it's plausible that drug policy reform, again just by allowing the use of psychedelics to treat mental health, could be in the range of $166/HALY (happiness-adjusted life years), which would make it some 300 times more cost-effective than you suggest.
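For concreteness, a minimal sketch unpacking those two numbers (this simply takes the "300 times" multiplier at face value; it is not a figure from either model):

```python
# If drug policy reform comes in at ~$166/HALY and that is ~300x more
# cost-effective than the other estimate, the other estimate is implied
# to be on the order of:
cost_per_haly_reform = 166   # $/HALY, figure quoted above
multiplier = 300             # "some 300 times more cost-effective"
print(cost_per_haly_reform * multiplier)  # 49800, i.e. roughly $50,000 per HALY
```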

It would be really helpful if you could say where and why you disagree with my model, given that, if you think drug policy reform is a promising intervention, my analysis suggests it's much more promising than yours does! For simplicity, let's assume HALYs and DALYs are just different types of apples; then I'd want to know why you think the structure of my model is wrong.

Comment author: Milan_Griffes 15 September 2017 02:15:37PM *  2 points [-]

I haven't engaged closely with your model, but here are some differences that immediately stand out:

  • Your analysis models a change that impacts the entire UK, whereas ours models a change that impacts California.
  • Your model assumes that everyone in the UK who might benefit from treatment would seek treatment.
  • Your model assumes that everyone who receives treatment would benefit from treatment.
  • Your model doesn't include a replicability adjustment, to discount effect sizes due to the limited amount of evidence.
  • As far as I can tell, your model doesn't include costs of treatment, only costs of rescheduling.

Comment author: ThomasSittler 15 September 2017 08:36:42AM 1 point [-]

the model is to be read as if the initiative polls well

Have you thought about some cheap ways to get more information on how this is likely to poll (even poor-quality info)?

Comment author: Milan_Griffes 15 September 2017 02:09:27PM 0 points [-]

Yes. Two projects that seem promising here: (1) a systematic review of recent public opinion polls on psychedelics, (2) running Google Surveys on possible ballot initiative texts: https://www.google.com/analytics/surveys/

Comment author: Michael_S 14 September 2017 10:52:08PM *  5 points [-]

Hey; I made some comments on this on the doc, but I thought it was worth bringing them to the main thread and expanding.

First of all, I'm really happy to see other EAs looking at ballot measures. They're a potentially very high-EV method of passing policy/raising funding. They're particularly high value per dollar when spending on advertising is limited or nonexistent, since the increased probability of passage from getting a relatively popular measure on the ballot is far more than the increased probability from spending the same amount advertising for it.

Also, am I correct in interpreting that you assume 100% chance of passage in your model conditional on good polling? Polling can help, but ballot measure polling does have a lot of error (in both directions), so even a popular measure in polling is hardly a guarantee of passage (http://themonkeycage.org/2011/10/when-can-you-trust-polling-about-ballot-measures/).

Finally, in your EV estimates, you seem to focus on the individual treatment cost of the intervention, which overwhelms the cost of the ballot measure. I don't think this is getting at the right question when it comes to running a ballot measure. I believe the gains from the ballot measure should be the estimated sum of the utility gains from people being able to purchase the drugs, multiplied by the probability of passage; the costs should be how much it would cost to run the campaign. On the doc, you made the point that GiveWell doesn't include leverage on other funding in their estimates, but when it comes to ballot measures, leverage is exactly what you're trying to produce, so I think an estimate is important.
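A minimal sketch of that framing, with placeholder numbers (none of these figures come from either model; they are only there to show the shape of the calculation):

```python
# Expected-value framing described above: benefits are the aggregate utility
# gains conditional on passage, scaled by the probability of passage; costs
# are only what the campaign itself spends.
def campaign_cost_per_expected_haly(total_halys_if_passed, p_passage, campaign_cost):
    expected_halys = total_halys_if_passed * p_passage
    return campaign_cost / expected_halys  # dollars per expected HALY

# Example with made-up inputs:
print(campaign_cost_per_expected_haly(
    total_halys_if_passed=100_000,  # hypothetical aggregate gain if the measure passes
    p_passage=0.5,                  # hypothetical probability of passage
    campaign_cost=2_000_000,        # hypothetical cost to qualify and run the measure
))  # 40.0 dollars per expected HALY
```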

Comment author: Milan_Griffes 15 September 2017 01:54:09AM *  1 point [-]

Thanks for the comments!

am I correct in interpreting that you assume 100% chance of passage in your model conditional on good polling?

No, the best-guess input is an 80% chance of passage, conditional on good polling and sufficient funding (see row 81). What "good" means here is a little underspecified – an initiative that polls at 70% favorability would have a much higher probability of passing than one that polls at 56%.

you seem to focus on the individual treatment cost of the intervention, which overwhelms the cost of the ballot measure.

Right. You could think of this analysis as trying to model whether psychedelic treatments for mental health conditions would be cost-effective if they were available today. For example, consider a promising intervention that would entirely cure someone's depression for a year, but costs $10,000,000 per treatment. We probably wouldn't want to run a ballot initiative to increase access to such an intervention, as it wouldn't be cost-effective even if it were easily accessible.
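As a rough sketch of why the treatment cost dominates in that thought experiment (the 0.65 DALY weight is the figure used elsewhere in this thread; everything else comes from the example above):

```python
# Toy version of the thought experiment: a treatment that fully cures
# depression for one year but costs $10,000,000 per course.
cost_per_treatment = 10_000_000      # $ per treatment, from the thought experiment
daly_weight_major_depression = 0.65  # weight used elsewhere in this thread
halys_gained = 1 * daly_weight_major_depression  # one depression-free year

print(f"${cost_per_treatment / halys_gained:,.0f} per HALY")  # $15,384,615 per HALY
```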

In response to Introducing Enthea
Comment author: JanBrauner 09 August 2017 09:09:13AM 1 point [-]

Seems interesting, how can one stay updated?

Comment author: Milan_Griffes 10 September 2017 05:50:19AM 0 points [-]

There's an atom feed on the site now, by the way: https://enthea.net/feeds/all.atom.xml
