Comment author: MikeJohnson 27 October 2017 11:03:22PM 1 point [-]

I worry that you're using a fully-general argument here, one that would apply just as well to established EA cause areas.

This stands out to me in particular:

Naturally I don't mind if enthusiasts pick some area and give it a go, but appeals to make it a 'new cause area' based on these speculative bets look premature by my lights: better to pick winners based on which of the disparate fields shows the greatest progress, such that one forecasts similar marginal returns to the 'big three'.

There's a lot here that I'd challenge. E.g.: (1) I think you're implicitly overstating how good the marginal returns on the 'big three' actually are; (2) you seem to be doubling down on the notion that "saving lives is better than improving lives" or that "the current calculus of EA does and should lean toward reduction of mortality, not improving well-being", which I challenged above; (3) I don't think your analogy between cryonics (which, for the record, I'm skeptical of as an EA cause area) and, e.g., Enthea's collation of research on psilocybin is very solid.

I would also push back on how dismissive "Naturally I don't mind if enthusiasts pick some area and give it a go, but appeals to make it a 'new cause area' based on these speculative bets look premature by my lights" sounds. Enthusiasts are the ones who create new cause areas; we wouldn't have any cause areas at all, save for those 'silly enthusiasts'. Perhaps I'm misreading your intended tone, however.

Comment author: Gregory_Lewis 28 October 2017 09:25:08AM 1 point [-]

Respectfully, I take 'challenging P' to require offering considerations for ¬P. Remarks like "I worry you're using a fully-general argument" (without describing what it is or how my remarks produce it) or "I don't think your analogy is very solid" (without offering dis-analogies) don't convey much more information than simply "I disagree".

1) I'd suggest astronomical-stakes considerations imply that at least one of the 'big three' does have extremely large marginal returns. If one prefers something much more concrete, I'd point to the humane reforms improving quality of life for millions of animals.

2) I don't think the primacy of the big three depends in any important way on recondite issues of disability weights or population ethics. Conditional on a strict person-affecting view (which denies the badness of death), I would still think the current margin of global health interventions offers better yields. I think this based on current best estimates of disability weights in things like the GCPP, and on the lack of robust evidence for something better in mental health (we should expect, for example, Enthea's results to regress significantly, perhaps all the way back to the null).

On the general point: I am dismissive of mental health as a cause area insofar as I don't believe it to be a good direction for EA energy relative to the other major ones (and especially my own 'best bet' of x-risk). I don't want it to be a cause area because it will plausibly compete for time/attention/etc. with other things I deem more important. I'm no EA leader, but I don't think we need to impute some 'anti-weirdness bias' (which I think is facially implausible given the early embrace of AI stuff, etc.) to explain why they might think the same.

Naturally, I may be wrong in this determination, and if I am wrong, I want to know about it. Thus having enthusiasts go into more speculative things outside the currently recognised cause areas improves the likelihood of the movement self-correcting and realising mental health should be on a par with (e.g.) animal welfare as a valuable use of EA energy.

Yet anointing mental health as a cause area before this case has been persuasively made would be a mistake. There are many other candidates for 'cause area No. n+1' which (as I suggested above) have about the same plausibility as mental health, and making them all recognised 'cause areas' seems the wrong approach. Thus the threshold should be higher.

Comment author: Buck 28 October 2017 12:35:03AM *  5 points [-]

I am disinclined to be sympathetic when someone's problem is that they posted so many bad arguments all at once that they're finding it hard to respond to all the objections.

Comment author: Gregory_Lewis 28 October 2017 02:03:33AM *  9 points [-]

Regarding the terrible incentive gradients mentioned by Claire above, I think discussion is more irenic if people resist, insofar as possible, imputing bad epistemic practices to particular people, and even try to avoid identifying an individual with the view or practice one takes to be mistaken, even though they do in fact advocate it.

As a concrete example (far from alone, and selected not because it is 'particularly bad', but rather because it comes from a particularly virtuous discussant), the passage up-thread seems to include object-level claims on the epistemic merits of a certain practice, but also implies an adverse judgement about the epistemic virtue of the person it is replying to:

As a side note, I find the way you're using social science quite frustrating. You keep claiming that social science supports many of your particular beliefs, and then other people keep digging into the evidence and pointing out the specific reason that the evidence you've presented isn't very convincing. But it takes a lot of time to rebut all of your evidence that way, much more time than it takes for you to link to another bad study. [my emphasis]

The 'you-locutions' do the work of imputing, and so invite subsequent discussion about the epistemic virtue of the person being replied to (e.g. "Give them a break, this mistake is understandable given some other factors"/ "No, this is a black mark against them as a thinker, and the other factors are not adequate excuse").

Although working out the epistemic virtue of others can have important practical applications (but see the discussion by Askell and others above about 'buzz talk'), the midst of a generally acrimonious discussion on a contentious topic is not the best venue for it. I think a better approach is a rewording that avoids the additional implications:

I think there's a pattern of using social science data which is better avoided. Suppose one initially takes a set of studies to support P. Others suggest studies X, Y and Z (members of this set) do not support P after all. If one agrees with this, it seems better to clearly report a correction along the lines of "I took these 5 studies to support P, but I now understand 3 of these 5 do not support P", rather than offering additions to the set of studies that support P.

The former allows us to forecast how persuasive additional studies are (i.e. if all of the studies initially taken to support P do not in fact support P on further investigation, we may expect similar investigation to reveal the same about the new studies offered). Rhetorically, it may be more persuasive to sceptics of P, as it may allay worries that sympathy to P is tilting the scales in favour of reporting studies that prima facie support P.

The rewording can take longer (though note I am not rewording my own words here, but those of a better writer), but even if so I expect the other benefits to outweigh the cost.

Comment author: MikeJohnson 27 October 2017 06:33:26PM 1 point [-]

I don't think mental health has comparably good ... [c]ost per QALY or similar.

Some hypothetical future intervention could be much better, but looking for these isn't that neglected, and such progress looks intractable given we understand the biology of a given common mental illness much more poorly than a typical NTD.

I think the core argument for mental health as a new cause area is that (1) yes, current mental health interventions are pretty bad on average, but (2) there could be low-hanging fruit locked away behind things that look 'too weird to try', and (3) EA may be in a position to signal-boost the weird things ('pull the ropes sideways') that have a plausible chance of working.

Using psilocybin as an adjunct to therapy seems like a reasonable example of some low-hanging fruit that's effective, yet hasn't been Really Tried, since it is weird. And this definitely does not exhaust the set of weird & plausible interventions.

I'd also like to signal-boost @MichaelPlant's notion that "A more general worry is that effective altruists focus too much on saving lives rather than improving lives." At some point, we'll hit hard diminishing returns on how many lives we can 'save' (delay the passing of) at reasonable cost or without significant negative externalities. We may be at that point now. If we're serious about 'doing the most good we can do', I think it's reasonable to explore a pivot to improving lives -- and mental health is a pretty key component of this.

Comment author: Gregory_Lewis 27 October 2017 07:32:52PM 0 points [-]

1-3 look general, and can in essence be claimed to apply to any putative cause area not currently thought to be a good candidate. E.g.:

1) Current anti-aging interventions are pretty bad on average. 2) There could be low-hanging fruit behind things that look 'too weird to try'. 3) EA may be in a position to signal-boost weird things that have a plausible chance of working.

Mutatis mutandis criminal justice reform, improving empathy, human enhancement, and so on. One could adjudicate between these competing areas by evidence that some really do have this low-hanging fruit. Yet it remains unclear that (for example) the psilocybin data gives more of a boost than (say) cryonics. Naturally I don't mind if enthusiasts pick some area and give it a go, but appeals to make it a 'new cause area' based on these speculative bets look premature by my lights: better to pick winners based on which of the disparate fields shows the greatest progress, such that one forecasts similar marginal returns to the 'big three'.

(Given GCR/x-risks, I think the 'opportunities' for saving quite a lot of lives - everyone's - are increasing. I agree that, setting that aside (which one shouldn't), it seems likely status quo progress will exhaust preventable mortality faster than preventable ill-health. Yet I don't think we are there yet.)

Comment author: thebestwecan 27 October 2017 01:20:53PM *  0 points [-]

I wouldn't concern yourself much with downvotes on this forum. People use downvotes for a lot more than the useful/not useful distinction they're designed for (the most common other reason is simply to signal against views they disagree with when they see an opening). I was recently talking to someone about the big improvements I'd like to see in the EA community's online discussion norms, and honestly, if I could either remove bad comment behavior or remove bad liking/voting behavior, it'd actually be the latter.

To put it another way, though I'm still not sure exactly how to explain this, I think no downvotes and one thoughtful comment explaining why your comment is wrong (and no upvotes on that comment) should do more to change your mind than a large number of downvotes on your comment.

I'm really still in favor of just removing downvotes from this forum, since this issue has been so persistent over the years. I think there would be downsides, but the hostile/groupthink/dogpiling environment that the downvoting behavior facilitates is just really really terrible.

Comment author: Gregory_Lewis 27 October 2017 05:26:35PM 2 points [-]

I previously defended keeping downvotes; I confess I'm not so sure now.

A fairly common trait is that people conflate some viewpoint-independent metric of 'quality' with 'whether I like this person or the view they espouse'. I'm sure most users have voting patterns that line up with these predictors pretty strongly, although there is some residual signal from quality: I imagine a pattern where one has a pretty low threshold for upvoting material sympathetic to one's view and a very high one for upvoting the non-sympathetic, and vice versa for downvotes.

I'm not sure how the dynamic changes if you get rid of downvotes, though. Assuredly there's a similar effect where people just refrain from upvoting your stuff and slavishly upvote your opponents'. There probably is some value in 'nuking' really low-quality remarks to save everyone time. Unsure.

Comment author: MichaelPlant 25 October 2017 07:13:51PM *  2 points [-]

FWIW, my impression of EA leadership is that they (correctly) find that mental health isn't the best target for currently existing people due to other things in global health

Can you say what you think is more valuable? If I'm looking at GW's top charities, the options are AMF or SCI. AMF is about saving lives rather than improving lives, so there's a moral question as to how you trade those off. I'm not really sure how to think about the happiness impact of SCI. GW seem to argue it's worthwhile because it increases income for the recipient, but I'm pretty sceptical that increases in income, even at low levels, improve aggregate happiness (see this paper on GiveDirectly, which found it didn't increase overall happiness).

Comment author: Gregory_Lewis 27 October 2017 12:19:46AM 1 point [-]

I don't think mental health has comparably good interventions to either of these, even given the caveats you note. Cost per QALY or similar for treatment looks to have central estimates much higher than theirs, and we should probably guess that mental health interventions in poor countries have more regression to the mean in store.

Some hypothetical future intervention could be much better, but looking for these isn't that neglected, and such progress looks intractable given we understand the biology of a given common mental illness much more poorly than a typical NTD.

Comment author: MikeJohnson 24 October 2017 08:38:32PM *  4 points [-]

Can you say more about the "revealed constraints" here? What would be the appropriate preconditions for "starting the party"? I think it can and should be done: we've embraced frontline cost-effectiveness in doing good today, and we've embraced initiatives oriented towards good in the far future even in the absence of clear interventions; even so, global mental health hasn't quite fit into either of those EA approaches, despite being a high-burden problem that is extremely neglected and arguably tractable.

Right, I think an obvious case can be made that mental health is Important; making the case that it's also Tractable and Neglected requires more nuance but I think this can be done. E.g., few non-EA organizations are 'pulling the ropes sideways', have the institutional freedom to think about this as an actual optimization target, or are in a position to work with ideas or interventions that are actually upstream of the problem. My intuition is that mental health is hugely aligned with what EAs actually care about, and is much much more tractable and neglected than the naive view suggests. To me, it's a natural fit for a top-level cause area.

The problem I foresee is that EA hasn't actually added a new Official Top-Level Cause Area since... maybe EA was founded? And so I don't expect to see much of a push from the EA leadership to add mental health as a cause area -- not because they don't want it to happen, but because (1) there's no playbook for how to make it happen, and (2) there may be local incentives that hinder doing this.

More specifically: mental health interventions that actually work are likely to be weird. E.g., Michael D. Plant's ideas about drug legalization are a little weird; Enthea's ideas about psilocybin are more weird; QRI's valence research is very weird. Now, at EAG there was a talk suggesting that we 'Keep EA Weird'. But I worry that's a retcon: weird things have been grandfathered into EA, but institutional EA is not actually very weird, and despite lots of funding, it has very little funding for Actually Weird Things. Looking at what gets funded ('revealed preferences'), I see support for lots of conventionally-worthy things and some appetite for moderately weird things, but almost none for things that are sufficiently weird that they could seed a new '10x+' cause area ("zero-to-one weird").

Note to all EA leadership reading this: I would LOVE LOVE LOVE to be proven wrong here!

So, my intuition is that EAs who want this to happen will need to organize, make some noise, 'start the party', and in general nurture this mental-health-as-cause-area thing until it's mature enough that 'core EA' orgs won't need to take a status hit to fund it. I.e., if we want EA to rally around mental health, it's literally up to people like us to make that happen.


I think if we can figure out good answers to these questions we'd have a good shot:

  • Why do you think mental health is Neglected and Tractable?

  • Why us, why now, why hasn't it already been done?

  • Which threads & people in EA do you think could be rallied under the banner of mental health?

  • Which people in 'core EA' could we convince to be a champion of mental health as an EA cause area?

  • Who could tell us What It Would Actually Take to make mental health a cause area?

  • What EA, and non-EA, organizations could we partner with here? Do we have anyone with solid connections to these organizations?

(Anyone with answers to these questions, please chime in!)

Comment author: Gregory_Lewis 25 October 2017 06:08:44PM 3 points [-]

FWIW, my impression of EA leadership is that they (correctly) find that mental health isn't the best target for currently existing people due to other things in global health, and that it isn't the best thing for future people due to the dominance of x-risk etc. I don't see a huge 'gap in the market' where marginal efforts on global mental health would have really outsized impact.

Openphil funds a variety of things outside the 'big cause areas' (criminal justice, open science, education, etc.), so there doesn't seem a huge barrier to this cause area getting traction.

Funding weird stuff is a bit tricky, as only a tiny minority of weird things are worthwhile, even ex ante: most are meritless. I guess you want to select from a propitious reference class, and to look for some clear forecast indicators that would allow a project to be promptly dropped from the portfolio. It doesn't strike me as crazy that there's no current weird project candidate that clears the bar of being worth speculative investment.

In response to S-risk FAQ
Comment author: aspencer 26 September 2017 03:00:33PM 1 point [-]

This sentence in your post caught my attention: " Even if the fraction of suffering decreases, it's not clear whether the absolute amount will be higher or lower."

To me, it seems like suffering should be measured as suffering / population, rather than as the total amount of suffering. The total amount of suffering will grow naturally with the population, and suffering / population seems to give a better indication of the severity of the suffering (a small group suffering a large amount is weighted more heavily than a large group suffering a small amount, which intuitively seems correct to me).

My primary concern with this (simplistic) method of measuring the severity of suffering is that it ignores the distribution of suffering within a population (i.e. there could be a sub-population with a large amount of suffering). However, I don't think that's a compelling enough reason to discount working to minimize the fraction of suffering rather than absolute suffering.

Are there compelling arguments for why we should seek to minimize total suffering?

In response to comment by aspencer on S-risk FAQ
Comment author: Gregory_Lewis 28 September 2017 06:08:11AM 1 point [-]

If I understand right, the view you're proposing is sort of like the 'average view' of utilitarianism. The objective is to minimize the average level of suffering across a population.

A common challenge to this view (shared with average util) is that it seems you can make a world better by adding lives which suffer, but suffer less than the average. In some hypothetical hellscape where everyone is getting tortured, adding further lives where people get tortured slightly less severely should make the world even worse, not better.
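To make the challenge concrete, here's a quick sketch with made-up suffering scores (higher = more suffering); the numbers are mine, chosen only for illustration:

    # Made-up suffering scores, higher = more suffering.
    hellscape = [10, 10, 10]                 # everyone horrendously tortured
    expanded  = hellscape + [9] * 5          # the same people, plus five lives tortured slightly less

    average = lambda xs: sum(xs) / len(xs)

    print(average(hellscape))                # 10.0
    print(average(expanded))                 # 9.375 -- lower, so the 'average view' counts this as an improvement
    print(sum(expanded) > sum(hellscape))    # True -- even though total suffering has clearly increased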

Pace the formidable challenges of infinitarian ethics, I generally lean towards total views. I think the intuition you point to (which I think is widely shared), that larger degrees of suffering should 'matter more', is perhaps better accommodated in something like prioritarianism, whereby improving the well-being of the least well off is given extra moral weight beyond its utilitarian 'face value'. (FWIW, I generally lean towards pretty flat-footed utilitarianism, as there are some technical challenges with prioritarianism, and it seems hard to distinguish the empirical from the moral matters: there are evolutionary motivations (H/T Carl Shulman) for why there should be extremely severe pain, so maybe a proper utilitarian accounting makes relieving these extremes worth very large amounts of more minor suffering.)

Aside: in population ethics there's a well-worn problem of aggregation, suggested by the repugnant conclusion: lots and lots of tiny numbers, put together, can outweigh a big number, so total views face challenges such as: "Imagine A, where 7 billion people live lives of perfect bliss, versus B, where these people suffer horrendous torture but there are also TREE(4) people with lives that are only just barely worth living." On the total view B is far better than A, yet it seems repulsive. (The usual total-view move is to appeal to scope insensitivity and say that our intuitions here are ill-suited to tracking vast numbers. I don't think the perhaps-more-natural replies (e.g. 'discount positive wellbeing that is above zero but below some threshold close to it') come out in the wash.)

Unfortunately, the 'suffering only' view suggested as a potential candidate in the FAQ (i.e. discount 'positive experiences' and work only to reduce suffering) seems to compound these problems, as in essence one can concatenate the problems of population ethics with the counter-intuitiveness of discounting positive experience (virtually everyone's expressed and implied preferences indicate positive experiences have free-standing value, as people are willing to trade off between negative and positive).

The aggregation challenge akin to the repugnant conclusion (which I think I owe to Carl Shulman) goes like this. Consider A: 7 billion people suffering horrendous torture. Now consider B: TREE(4) people enjoying lifelong eudaimonic bliss, except that each suffers a single pinprick. On a total-suffering view A >>> B, yet this seems common-sensically crazy.

The view seems to violate two intuitions: first, the aggregation issue (i.e. TREE(4) pinpricks is counted as more morally important than 7 billion cases of torture); second, the discounting of positive experience, since the 'only suffering counts' view is indifferent to the difference of TREE(4) instances of lifelong eudaimonic bliss between the scenarios. If we imagine a world C where no one exists, a total utilitarian view gets the intuitively 'right' ordering (i.e. B > C > A), whilst the suffering view gets most of the pairwise comparisons intuitively wrong (i.e. C > A > B).
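The same orderings fall out of a toy calculation, with stand-in magnitudes (a merely large N in place of TREE(4), and arbitrary welfare numbers of my own):

    # Stand-in magnitudes; in the thought experiment N would be TREE(4), vastly larger still.
    N = 10**20
    SEVEN_BILLION = 7 * 10**9

    torture = -1000.0        # welfare of a life of horrendous torture
    pinprick = -0.001        # suffering from a single pinprick
    bliss = 1000.0           # welfare of lifelong eudaimonic bliss

    # Total view: sum all welfare, positive and negative (higher is better).
    total = {"A": SEVEN_BILLION * torture, "B": N * (bliss + pinprick), "C": 0.0}

    # Suffering-only view: count only the negative components (closer to zero is better).
    suffering = {"A": SEVEN_BILLION * torture, "B": N * pinprick, "C": 0.0}

    print(sorted(total, key=total.get, reverse=True))          # ['B', 'C', 'A'] -> B > C > A
    print(sorted(suffering, key=suffering.get, reverse=True))  # ['C', 'A', 'B'] -> C > A > B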

Comment author: SoerenMind  (EA Profile) 21 July 2017 11:55:26AM 2 points [-]

Trivial objection, but the y=0 axis also gets transformed so the symmetries are preserved. In maths, symmetries aren't usually thought of as depending on some specific axis. E.g. the symmetry group of a cube is the same as the symmetry group of a rotated version of the cube.

Comment author: Gregory_Lewis 21 July 2017 12:28:45PM *  0 points [-]

Mea culpa. I was naively thinking of super-imposing the 'previous' axes. I hope the underlying worry still stands given the arbitrarily many sets of mathematical objects which could be reversibly mapped onto phenomenological states, but perhaps this betrays a deeper misunderstanding.

Comment author: Gregory_Lewis 21 July 2017 08:50:04AM *  0 points [-]

Aside:

Essentially, the STV is an argument that much of the apparent complexity of emotional valence is evolutionarily contingent, and if we consider a mathematical object isomorphic to a phenomenological experience, the mathematical property which corresponds to how pleasant it is to be that experience is the object’s symmetry.

I don't see how this can work given (I think) isomorphism is transitive and there are lots of isomorphisms between sets of mathematical objects which will not preserve symmetry.

Toy example. Say we can map the set of all phenomenological states (P) onto 2D shapes (S), and we hypothesize their valence corresponds to their symmetry about the y=0 axis. Now suppose an arbitrary shear transformation is applied to every member of S, giving S!. P (we grant) is isomorphic to S. Yet S! is isomorphic to S, and therefore also isomorphic to P; and the members of S and S! which are symmetrical differ. So which set of shapes should we use?
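To see the worry in miniature, here's a toy sketch of my own (nothing from the STV itself): an invertible shear is a bijection between shape-sets, yet it destroys the mirror symmetry the hypothesis would key valence to.

    import numpy as np

    # A shape as a set of 2D points; this one is mirror-symmetric about the y=0 axis.
    shape = np.array([[1.0, 1.0], [1.0, -1.0], [2.0, 0.5], [2.0, -0.5]])

    # An arbitrary shear. It is invertible (det != 0), so applying it to every member
    # of S gives a set S! that is in bijection with S -- yet symmetry is not preserved.
    shear = np.array([[1.0, 0.7],
                      [0.0, 1.0]])
    sheared = shape @ shear.T

    def mirror_symmetric(points):
        """True if the point set is invariant under the reflection y -> -y."""
        as_set = lambda pts: {tuple(np.round(p, 9)) for p in pts}
        return as_set(points) == as_set(points * np.array([1.0, -1.0]))

    print(mirror_symmetric(shape))      # True:  the original shape is symmetric about y = 0
    print(mirror_symmetric(sheared))    # False: its image under the (invertible) shear is not
    print(np.linalg.det(shear) != 0.0)  # True:  the map from S to S! is a bijection all the same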

Comment author: Fluttershy 08 March 2017 01:44:16PM *  2 points [-]

I'd like to respond to your description of what some people's worries about your previous proposal were, and highlight how some of those worries could be addressed, hopefully without reducing how helpfully ambitious your initial proposal was. Here goes:

the risk of losing flexibility by enforcing what is an “EA view” or not

It seems to me like the primary goal of the panel in the original proposal was to address instances of people lowering the standard of trustworthiness within EA and imposing unreasonable costs (including unreasonable time costs) on individual EAs. I suspect that enumerating what sorts of things "count" as EA endeavors isn't a strictly necessary prerequisite for forming such a panel.

I can see why some people held this concern, partly because "defining what does and doesn't count as an EA endeavor" clusters in thing-space with "keeping an eye out for people acting in untrustworthy and non-cooperative ways towards EAs", but these two things don't have to go hand in hand.

the risk of consolidating too much influence over EA in any one organisation or panel

Fair enough. As with the last point, the panel would likely consolidate less unwanted influence over EA if it focused solely on calling out sufficiently dishonestly harmful behavior by anyone who self-identified as an EA, and made no claims as to whether any individuals or organizations "counted" as EAs.

the risk of it being impossible to get agreement, leading to an increase in politicisation and squabbling

This concern seems like a good one, in that it's a bit harder for me to address satisfactorily. Hopefully, though, there would be some clear-cut cases the panel could choose to consider; the case of Intentional Insights' poor behavior was eventually quite clear, for one. I would guess that the less clear cases would tend to be the ones where a clear resolution would be less impactful.

In response, we toned back the ambitions of the proposed ideas.

I'd have likely done the same. But that's the wrong thing to do.

In this case, the counterfactual to having some sort of panel to call out behavior which causes unreasonable amounts of harm to EAs is relying on the initiative of individuals to call out such behavior. This is not a sustainable solution. Your summary of your previous post puts it well:

There’s very little to deal with people representing EA in ways that seem to be harmful; this means that the only response is community action, which is slow, unpleasant for all involved, and risks unfairness through lack of good process.

Community action is all that we had before the Intentional Insights fiasco, and community action is all that we're back to having now.

I didn't get to watch the formation of the panel you discuss, but it seems like a nontrivial amount of momentum, riled up by the harm Intentional Insights caused EA, went into its creation. To the extent that that momentum is no longer available because some of it was channeled into the creation of this panel, we've lost a chance at building a tool to protect ourselves against agents and organizations who would impose costs on, and harm, EAs and EA overall. Pending further developments, I have lowered my opinion of everyone directly involved accordingly.

Comment author: Gregory_Lewis 09 March 2017 01:36:54PM 5 points [-]

FWIW, as someone who contributed to the InIn document, I approve of (and recommended during discussion) the less ambitious project this represents.
