Comment author: Gregory_Lewis 31 October 2017 06:35:12PM *  0 points [-]

I prefer to keep discussion on the object level, rather than offering adverse impressions of one another's behaviour (e.g. uncharitable, aggressive, censorious, etc.)[1] with speculative diagnoses as to the root cause of these ("perhaps some poor experience with a cryonics enthusiast").

To recall the dialectical context: the implication upthread was a worry that the EA community (or EA leadership) is improperly neglecting the mental health cause area, perhaps due to (in practice) some anti-weirdness bias. To which my counter-suggestion was that maybe EA generally, and its leaders in particular, have instead made their best guess that this area isn't more promising than the cause areas they already attend to.

I accept that conditional on some recondite moral and empirical matters, mental health interventions look promising. Yet that does not distinguish mental health beyond many other candidate cause areas, e.g.:

  • Life extension/cryonics
  • Pro-life advocacy/natural embryo loss mitigation
  • Immigration reform
  • Improving scientific norms etc.

All generally have potentially large scale, are sometimes neglected, but have less persuasive tractability. In terms of some hypothetical disaggregated EA resource (e.g. people, money), I'd prefer it to go into one of the 'big three' than any of these other areas, as my impression is that the marginal returns for any of the three are greater than for any of the others. In other senses there may not be such zero-sum dynamics (i.e. conditional on Alice only wanting to work in mental health, better that she work in EA-style mental health), yet I aver this doesn't really apply to which topics the movement gives relative prominence to (after all, one might hope that people switch from lower- to higher-impact cause areas, as I have attempted to do).

Of course, there remains value in exploration: if in fact EA writ large is undervaluing mental health, it would want to know about it and change tack. What I hope would happen, if I am wrong in my assessment of mental health, is that public discussion of the merits would persuade more and more people of this approach (perhaps I'm incorrigible; hopefully third parties are not), so that it gains momentum from a large enough crowd of interested people to become its own thing, with similar size and esteem to areas 'within the movement'. Inferring from the fact that this has not yet happened that the EA community is not giving it a fair hearing is not necessarily wise.

[1]: I take particular exception to the accusations of censoriousness (from Plant) and wanting to 'shut down discussion' [from Plant and yourself]. In what possible world is arguing publicly on the internet a censorious act? I don't plot to 'run the mental health guys out of the EA movement', I don't work behind the scenes to talk to moderators to get rid of your contributions, I don't downvote remarks or posts on mental health, and so on and so forth for any remotely plausible 'shutting down discussion' behaviour. I leave adverse remarks I could make to this apophasis.

Comment author: MikeJohnson 04 November 2017 08:22:13PM *  1 point [-]

I prefer to keep discussion on the object level

I'm not seeing object-level arguments against mental health as an EA cause area. We have made some object-level arguments for, and I'm working on a longer-form description of what QRI plans in this space. Look for more object-level work and meta-level organizing over the coming months.

I'd welcome object-level feedback on our approaches. It didn't seem like your comments above were feedback-focused, but rather they seemed motivated by a belief that this was not "a good direction for EA energy to go relative to the other major ones." I can't rule that out at this point. But I don't like seeing a community member just dishing out relatively content-free dismissiveness on people at a relatively early stage in trying to build something new. If you don't see any good interventions here, and don't think we'll figure out any good interventions, it seems much better to just let us fail, rather than actively try to pour cold water on us. If we're on the verge of using lots of community resources on something that you know to be unworkable, please pour the cold water. But if your argument boils down to "this seems like a bad idea, but I can't give any object-level reasons, but I really want people to know I think this is a bad idea" then I'm not sure what value this interaction can produce.

But, that said, I'd also like to apologize if I've come on too strong in this back-and-forth, or if you feel I've maligned your motives. I think you seem smart, honest, invested in doing good as you see it, and are obviously willing to speak your mind. I would love to channel this into making our ideas better! In trying to do something new, there's approximately a 100% chance we'll make a lot of mistakes. I'd like to enlist your help in figuring out where the mistakes are and better alternatives. Or, if you'd rather preemptively write off mental health as a cause area, that's your prerogative. But we're in this tent together, and although all the evidence I have suggests we have significantly different (perhaps downright dissonant) cognitive styles, perhaps we can still find some moral trade.

Best wishes, Mike

Comment author: MikeJohnson 02 November 2017 09:42:38PM 2 points [-]

This is a very clear description of some cool ideas. Thanks to you and Caspar for doing this!

Comment author: Gregory_Lewis 28 October 2017 09:25:08AM 0 points [-]

Respectfully, I take 'challenging P' to require offering considerations for ¬P. Remarks like "I worry you're using a fully-general argument" (without describing what it is or how my remarks produce it), "I don't think your analogy is very solid" (without offering dis-analogies) don't have much more information than simply "I disagree".

1) I'd suggest astronomical stakes considerations imply that at least one of the 'big three' does have extremely large marginal returns. If one prefers something much more concrete, I'd point to the humane reforms improving quality of life for millions of animals.

2) I don't think the primacy of the big three depends in any important way on recondite issues of disability weights or population ethics. Conditional on a strict person affecting view (which denies the badness of death) I would still think the current margin of global health interventions should offer better yields. I think this based on current best estimates of disability weights in things like the GCPP, and the lack of robust evidence for something better in mental health (we should expect, for example, Enthea's results to regress significantly, perhaps all the way back to the null).

On the general point: I am dismissive of mental health as a cause area insofar as I don't believe it to be a good direction for EA energy to go relative to the other major ones (and especially my own 'best bet' of xrisk). I don't want it to be a cause area as it will plausibly compete for time/attention/etc. with other things I deem more important. I'm no EA leader, but I don't think we need to impute some 'anti-weirdness bias' (which I think is facially implausible given the early embrace of AI stuff etc) to explain why they might think the same.

Naturally, I may be wrong in this determination, and if I am wrong, I want to know about it. Thus having enthusiasts go into more speculative things outside the currently recognised cause areas improves likelihood of the movement self-correcting and realising mental health should be on a par with (e.g.) animal welfare as a valuable use of EA energy.

Yet anointing mental health as a cause area before this case has been persuasively made would be a bad approach. There are many other candidates for 'cause area No. n+1' which (as I suggested above) have about the same plausibility as mental health. Making them all recognised 'cause areas' seems the wrong approach. Thus the threshold should be higher.

Comment author: MikeJohnson 28 October 2017 03:02:30PM 1 point [-]

Hi Gregory,

We have never interacted before this, at least to my knowledge, and I worry that you may be bringing some external baggage into this interaction (perhaps some poor experience with some cryonics enthusiast...). I find your "let's shut this down before it competes for resources" attitude very puzzling and aggressive, especially since you show zero evidence that you understand what I'm actually attempting to do or gather support for on the object-level. Very possibly we'd disagree on that too, which is fine, but I'm reading your responses as preemptively closed and uncharitable (perhaps veering toward 'aggressively hostile') toward anything that might 'rock the EA boat' as you see it.

I don't think this is good for EA, and I don't think it's working off a reasonable model of the expected value of a new cause area. I.e., you seem to be implying the expected value of a new cause area would be at best zero, but more probably negative, due to zero-sum dynamics. On the other hand, I think a successful new cause area would more realistically draw in or internally generate at least as many resources as it would consume, and probably much more -- my intuition is that at the upper bound we may be looking at something as synergistic as a factorial relationship (with three causes, the total 'EA pie' might be 3*2*1 = 6; with four causes the total 'EA pie' might be 4*3*2*1 = 24). More realistically, perhaps 4+3+2+1 instead of 3+2+1. This could be, and probably is, very wrong -- but at the same time I think it's more accurate than a zero-sum model.
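The three models being compared (zero-sum, additive, and upper-bound factorial growth of the 'EA pie') can be made concrete with a toy calculation. This is purely illustrative; the functional forms are the assumptions named in the paragraph above, not empirical claims:

```python
# Toy comparison of how total EA resources might scale with the number of
# cause areas under three illustrative models: zero-sum, additive synergy
# (4+3+2+1 style), and factorial synergy (the stated upper bound).
from math import factorial

def zero_sum(n_causes, fixed_pie=6):
    # Zero-sum: the pie is fixed no matter how many causes divide it.
    return fixed_pie

def additive(n_causes):
    # Additive synergy: each new cause brings in its own resources,
    # e.g. 3 causes -> 3+2+1 = 6, 4 causes -> 4+3+2+1 = 10.
    return sum(range(1, n_causes + 1))

def multiplicative(n_causes):
    # Factorial upper bound: 3 causes -> 3*2*1 = 6, 4 causes -> 4*3*2*1 = 24.
    return factorial(n_causes)

for n in (3, 4):
    print(n, zero_sum(n), additive(n), multiplicative(n))
```

The point of the comparison: under the zero-sum model a fourth cause adds nothing (the pie stays at 6), under the additive model it grows the pie from 6 to 10, and under the factorial upper bound from 6 to 24.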

At any rate, I'm skeptical that we can turn this discussion into something that will generate value to either of us or to EA, so unless you have any specific things you'd like to discuss or clarify, I'm going to leave things here. Feel free to PM me questions.

Comment author: Gregory_Lewis 27 October 2017 07:32:52PM 0 points [-]

1-3 look general, and can in essence be claimed to apply to any putative cause area not currently thought to be a good candidate. E.g.

1) Current anti-aging interventions are pretty bad on average. 2) There could be low hanging fruit behind things that look 'too weird to try'. 3) EA may be in position to signal boost weird things that have plausible chance of working.

Mutatis mutandis criminal justice reform, improving empathy, human enhancement, and so on. One could adjudicate these competing areas by evidence that some really do have these low-hanging fruit. Yet it remains unclear that (for example) things like psilocybin data give more of a boost than (say) cryonics. Naturally I don't mind if enthusiasts pick some area and give it a go, but appeals to make it a 'new cause area' based on these speculative bets look premature by my lights: better to pick winners based on which of the disparate fields shows the greatest progress, such that one forecasts similar marginal returns to the 'big three'.

(Given GCR/x-risks, I think the 'opportunities' for saving quite a lot of lives - everyone's - are increasing. I agree that ignoring that - which one shouldn't - it seems likely status quo progress should exhaust preventable mortality faster than preventable ill-health. Yet I don't think we are there yet.)

Comment author: MikeJohnson 27 October 2017 11:03:22PM 1 point [-]

I worry that you're also using a fully-general argument here, one that would also apply to established EA cause areas.

This stands out at me in particular:

Naturally I don't mind if enthusiasts pick some area and give it a go, but appeals to make it a 'new cause area' based on these speculative bets look premature by my lights: better to pick winners based on which of the disparate fields shows the greatest progress, such that one forecasts similar marginal returns to the 'big three'.

There's a lot here that I'd challenge. E.g., (1) I think you're implicitly overstating how good the marginal returns on the 'big three' actually are, (2) you seem to be doubling down on the notion that "saving lives is better than improving lives" or that "the current calculus of EA does and should lean toward reduction of mortality, not improving well-being", which I challenged above, (3) I don't think your analogy between cryonics (which, for the record, I'm skeptical on as an EA cause area) and e.g., Enthea's collation of research on psilocybin seems very solid.

I would also push back on how dismissive "Naturally I don't mind if enthusiasts pick some area and give it a go, but appeals to make it a 'new cause area' based on these speculative bets look premature by my lights" sounds. Enthusiasts are the ones that create new cause areas. We wouldn't have any cause areas, save for those 'silly enthusiasts'. Perhaps I'm misreading your intended tone, however.

Comment author: Gregory_Lewis 27 October 2017 12:19:46AM 0 points [-]

I don't think mental health has comparably good interventions to either of these, even given the caveats you note. Cost per QALY or similar for treatment looks to have central estimates much higher than these, and we should probably guess mental health interventions in poor countries have more regression to the mean to go.

Some hypothetical future intervention could be much better, but looking for these isn't that neglected, and such progress looks intractable given we understand the biology of a given common mental illness much more poorly than a typical NTD.

Comment author: MikeJohnson 27 October 2017 06:33:26PM 1 point [-]

I don't think mental health has comparably good ... [c]ost per QALY or similar.

Some hypothetical future intervention could be much better, but looking for these isn't that neglected, and such progress looks intractable given we understand the biology of a given common mental illness much more poorly than a typical NTD.

I think the core argument for mental health as a new cause area is that (1) yes, current mental health interventions are pretty bad on average, but (2) there could be low-hanging fruit locked away behind things that look 'too weird to try', and (3) EA may be in a position to signal-boost the weird things ('pull the ropes sideways') that have a plausible chance of working.

Using psilocybin as an adjunct to therapy seems like a reasonable example of some low-hanging fruit that's effective, yet hasn't been Really Tried, since it is weird. And this definitely does not exhaust the set of weird & plausible interventions.

I'd also like to signal-boost @MichaelPlant's notion that "A more general worry is that effective altruists focus too much on saving lives rather than improving lives." At some point, we'll get to hard diminishing returns on how many lives we can 'save' (delay the passing of) at reasonable cost or without significant negative externalities. We may be at that point now. If we're serious about 'doing the most good we can do' I think it's reasonable to explore a pivot to improving lives -- and mental health is a pretty key component of this.

Comment author: MikeJohnson 25 October 2017 03:53:27AM *  4 points [-]

Hi Kevin,

I think it may be useful to frame your critiques in terms of causal stories -- e.g., how strategy or structural condition X fails to achieve goal Y that organization Z has explicitly endorsed. Offering a gears-level model of what you think is happening, and why that's bad, is probably the best way to (1) change people's minds, if they're wrong, and (2) allow other people to change your mind, if you're wrong.

A few more specific things that I think are worth clarifying or pushing back on:

Welfare vs exploitation framing: You note the distinction between the pro-welfare vs anti-exploitation wings of animal advocacy, and suggest that the dominance of the pro-welfare wing has created some discontent in people with alternative value systems. I think that's a fair comment, but I'd also suggest (as an observer who is not associated with the organizations you mentioned) that the welfare-centric approach may have good reasons for popularity in the marketplace of ideas. Personally, as a valence realist, I believe that caring about animal welfare is much more philosophically defensible than caring about animal exploitation, because I think welfare is more 'real' (better definable; less of a leaky reification; hews closer to what actually has value) than exploitation/justice. I certainly could be wrong and it could be there are solid reasons why I should care more about alternative framings, but I'd need to see good philosophical arguments for this.

Democratisation / accountability at ACE: I should note that I'm not affiliated with ACE whatsoever, but I have been following them as an organization. I too have some qualms about some things they've written, but it seems my qualms run in the opposite direction of yours. :) I.e., I think equity, inclusion, and diversity can be good things, but I also believe organizations have a limited 'complexity budget', and by requiring of themselves an explicit focus on these things, ACE may be watering down their core goal of helping animals. However, I would also add (1) I'm glad ACE exists, (2) my impression is they’re doing a fine job, and (3) I don't see myself as having much standing (‘skin in the game’) to critique ACE.

This is not to say your concerns are baseless, but it is to note there are people who seem to share your goals (‘being good to animals’ is a non-trivial reason why I’m doing the work I’m doing, and I assume you feel the same), yet would pull in exactly the opposite direction you would.

Probably the most effective moral trade here is that we should just let ACE be ACE.

It could be that this isn’t the best approach, and that EAA orgs should ‘pay more attention to other perspectives’. But I think the burden of proof is on those who would make this assertion to be very clear about (1) what exactly their perspective is, (2) what exactly their perspective entails, practically and philosophically, (3) whether they have any ‘skin in the game’ in relevant ways, (4) what’s uniquely ethical or effective about these perspectives, among the countless perspectives out there, and by implication (5) why EAAs (such as ACE) should change their methods and/or goals to accommodate them.

Comment author: e19brendan 23 October 2017 04:34:25PM 5 points [-]

Can you say more about the "revealed constraints" here? What would be the appropriate preconditions for "starting the party?" I think it can and should be done - we've embraced frontline cost-effectiveness in doing good today, and we've embraced initiatives oriented towards good in the far future even in the absence of clear interventions; even so, global mental health hasn't quite fit into either of those EA approaches, despite being a high-burden problem that is extremely neglected and arguably tractable.

Mental health interventions will become more cost-effective on an absolute scale, as we advance knowledge and implement better, and on a relative scale, as we largely overcome the burden of communicable diseases like malaria. The EA community should rally around the possibility of accelerating the development and dissemination of mental health interventions. It is quite exciting to see the work of some EAs in this space, and I think EAs could bring real value to the academic and nonprofit leaders involved in global mental health. It may be that working on this issue is more valuable at this point than merely funding this cause, but that's a broader strategy discussion. I'm excited to join other EAs in building the case for the relevance of mental health to the movement.

Comment author: MikeJohnson 24 October 2017 08:38:32PM *  4 points [-]

Can you say more about the "revealed constraints" here? What would be the appropriate preconditions for "starting the party?" I think it can and should be done - we've embraced frontline cost-effectiveness in doing good today, and we've embraced initiatives oriented towards good in the far future even in the absence of clear interventions; even so, global mental health hasn't quite fit into either of those EA approaches, despite being a high-burden problem that is extremely neglected and arguably tractable.

Right, I think an obvious case can be made that mental health is Important; making the case that it's also Tractable and Neglected requires more nuance but I think this can be done. E.g., few non-EA organizations are 'pulling the ropes sideways', have the institutional freedom to think about this as an actual optimization target, or are in a position to work with ideas or interventions that are actually upstream of the problem. My intuition is that mental health is hugely aligned with what EAs actually care about, and is much much more tractable and neglected than the naive view suggests. To me, it's a natural fit for a top-level cause area.

The problem I foresee is that EA hasn't actually added a new Official Top-Level Cause Area since... maybe EA was founded? And so I don't expect to see much of a push from the EA leadership to add mental health as a cause area -- not because they don't want it to happen, but because (1) there's no playbook for how to make it happen, and (2) there may be local incentives that hinder doing this.

More specifically: mental health interventions that actually work are likely to be weird -- e.g., Michael D. Plant's ideas about drug legalization are a little weird; Enthea's ideas about psilocybin are more weird; QRI's valence research is very weird. Now, at EAG there was a talk suggesting that we 'Keep EA Weird'. But I worry that's a retcon, that weird things have been grandfathered into EA but institutional EA is not actually very weird, and despite lots of funding, it has very little funding for Actually Weird Things. Looking at what gets funded ('revealed preferences') I see support for lots of conventionally-worthy things and some appetite for moderately weird things, but almost none for things that are sufficiently weird that they could seed a new '10x+' cause area ("zero-to-one weird").

*Note to all EA leadership reading this: I would LOVE LOVE LOVE to be proven wrong here!

So, my intuition is that EAs who want this to happen will need to organize, make some noise, 'start the party', and in general nurture this mental-health-as-cause-area thing until it's mature enough that 'core EA' orgs won't need to take a status hit to fund it. I.e., if we want EA to rally around mental health, it's literally up to people like us to make that happen.

I think if we can figure out good answers to these questions we'd have a good shot:

  • Why do you think mental health is Neglected and Tractable?

  • Why us, why now, why hasn't it already been done?

  • Which threads & people in EA do you think could be rallied under the banner of mental health?

  • Which people in 'core EA' could we convince to be a champion of mental health as an EA cause area?

  • Who could tell us What It Would Actually Take to make mental health a cause area?

  • What EA, and non-EA, organizations could we partner with here? Do we have anyone with solid connections to these organizations?

(Anyone with answers to these questions, please chime in!)

Comment author: MikeJohnson 18 October 2017 11:38:59PM *  2 points [-]

QRI is very interested and working hard on the mental health cause area; notable documents are Quantifying Bliss and Wireheading Done Right - more to come soon. There are also other good things in this space from e.g. Michael D. Plant, Spencer Greenberg, and perhaps Enthea.

On the tribalism point, I agree tribalism causes a lot of problems; I'd also agree with what I take you to be saying that in some senses it may be a load-bearing part of an Evolutionary Stable Strategy (ESS) which may prove troublesome to tinker with. Finally, I agree that mental health is a potentially upstream factor in some of the negative-sum presentations of tribalism.

I would say it's unclear at this point whether the EA movement has the plasticity required to make mental health an Official Cause Area -- I believe the leadership is interested but "revealed constraints" seem to tell a mixed story. I'm certainly hoping it'll happen if enough people get together to 'start the party'.

(Personal opinions; not necessarily shared by others at QRI)

Comment author: MikeJohnson 14 August 2017 02:53:09PM 2 points [-]

Hi Michael,

This is fantastic work, thanks for all the effort and thought that went into these posts. Your overall case seems solid to me-- or at minimum, I think yours is 'the argument to beat'.

One thought that I had while reading:

Drug policy reform may also allow us to better understand current pain medications and develop new treatments and uses. Your focus here is on decriminalizing existing drugs such as psilocybin, opioids, and MDMA, because you believe (with substantial evidence) that these drugs have nontrivial therapeutic potential, despite their sometimes substantial drawbacks. This seems reasonable, especially in the case of drugs with fairly benign risk profiles (e.g. psilocybin).

I do worry about some of the long-term side-effects associated with certain drugs, however, and it seems to me an interesting 'unknown unknown' here is if it's possible to develop new substances, or novel brain stimulation modalities, that allow us access to the upsides of such drugs, without suffering from the downsides.

E.g., in the case of MDMA, the not-uncommon long-term effects of chronic use include heightened anxiety & cognitive impairment, which seem very serious. But at the same time, there doesn't seem to be any 'law of the universe' mandating that the pleasant feelings of love & trust elicited by MDMA that are so therapeutically useful for PTSD must be unavoidably linked to brain damage.

I'm not completely sure how this observation interacts with your arguments, but I suspect it generally supports your case, since decriminalization could lower barriers for research into even better & safer options. Quite possibly, this could be one of the major reasons why decriminalization could lead to a better future.

On the other hand, the sword of innovation cuts both ways, as there seem to be a lot of very dangerous, toxic variants of drugs coming from overseas labs that are even less safe than current options (Fentanyl, Captagon, etc). Perhaps this is a case of "Banning dangerous substances as a precautionary principle can have perverse effects if it causes people to take more dangerous drugs instead," and decriminalization would help mitigate this phenomenon. But I must admit to some uncertainty & worry here as to second-order effects.

Anyway, I think this is worth pursuing further. OpenPhil might be interested? I think probably Nick Beckstead might be a good contact there.

In response to Introducing Enthea
Comment author: MikeJohnson 10 August 2017 11:23:49PM *  5 points [-]

Hi Milan,

I'm glad to see this sort of project. You may enjoy my colleague Andres's summary of the Psychedelic Science 2017 conference. He notes that:

It should not come as a surprise to anyone who has been paying attention that there is a psychedelic renaissance underway. Barring extreme world-wide counter-measures against it, in so far as psychedelic and empathogenic compounds meet the required evidentiary standards of mainstream psychopharmacology as safe and effective treatments for mental illness (and they do), they will be a staple of tomorrow’s tools for mental health. It’s not a difficult gamble: the current studies being made around the world are merely providing the scientific backing of what was already known in the 60s (for psychedelics) and 80s (for MDMA). I.e. That psychedelic medicine (people love to call it that way) in the right set and setting produces outstanding clinically-relevant effect sizes.

In short, it does seem increasingly like psychedelics aren't just for edgy recreational use, but could be part of some useful medical tradition that can measurably and reliably help people. But it does seem like it would be helpful to have answers to the following questions:

  • How do these things work? If we think they do good things, then what's a gears-level account of how they do good?
  • Are there tradeoffs, and what are they? Are there ways of getting the good without the bad?

Anyway, thanks for doing this!
