Comment author: MichaelPlant 11 December 2017 10:39:47AM 1 point [-]

Good that you set this up. Is the plan to gather the postings from both the 80k board and the Facebook group so that everything is on the EA Work Club?

Comment author: [deleted] 02 December 2017 03:13:18PM *  4 points [-]

Thanks for the post! There's actually a lot of existing literature on these topics.

Regarding the effect of cash on happiness, Haushofer, Reisinger & Shapiro (2015) have a paper called "Your Gain Is My Pain: Negative Psychological Externalities of Cash Transfers".

If you are in fact skeptical of the meaningfulness of your income as a metric, you should be similarly skeptical of the meaningfulness of variations in income of people in poor countries.

Maybe I misunderstand you, but the whole point of cash transfers is that money matters more for quality of life when you are poor. See 80,000 Hours' review.

The (lack of) productivity effects of (primary) education in developing countries have also been studied, although I'm less familiar with that literature. However, Haushofer & Shapiro (2016) find that as a result of the cash transfers, "education expenditures increase by USD 1 PPP", while overall expenditure on non-durable goods increases by 36 USD PPP. (Table V).

Comment author: MichaelPlant 07 December 2017 03:06:59PM 1 point [-]

Like Tom, I'm a bit uncertain as to the target or upshot of your argument. Are you claiming that GD's wealth transfers go to status goods and therefore won't increase happiness? If so, then Tom's point that not very much money goes on education would seem to undermine that, unless you think the rest of the expenditure is status goods too.

Another argument you could make to undermine cash transfers is that the non-comparative part of their effect (presuming it exists, which it probably does) is just quite small or short-lived. In the 'Your Gain Is My Pain' paper, on p. 32, they show the effect only lasts a few months. I discuss this in my EAG London talk, which I'm hoping will go up soon. Basically, I don't think cash transfers do nearly as well in terms of life satisfaction as mental health interventions. So we should just fund MH interventions instead, in as much as we're concerned about the happiness of recipients.

Comment author: Elizabeth 03 December 2017 04:53:14PM 0 points [-]

You're the second person to argue for this (the other was on my personal blog), and I hear the argument. I think there's a slippery slope of what to control for here: if I include sleep, I'd also want to look at how happy people were when meditating relative to the activity it displaced.

Comment author: MichaelPlant 04 December 2017 01:22:11AM 0 points [-]

FYI, I note this is from my comment above:

Presumably the appropriate counterfactual is how pleasant meditation is vs whatever they would have been doing instead with that time (e.g. watching TV?).

If 1 hour of TV is as fun as 1 hour of mindfulness, you should just ignore the effect of mindfulness for that hour of the person's day and look at its effects on the rest of the person's life, where the person is probably somewhat happier.

Comment author: MichaelPlant 04 December 2017 12:37:17AM *  2 points [-]

Hello Elizabeth, thanks for writing this up. I think this is important work, so please take all my points below as friendly suggestions for improving the methodology so we can get a better answer, or just as clarificatory questions where I don't know what something is (I always find it quite hard to understand other people's CEA models).

Saying MBSR will have an effect for 38 years after treatment seems extremely generous. Do you have any data either way on how long the benefits of mindfulness last for? I've seen stuff saying CBT works for 5 years on depression(/anxiety) without much of a drop, but 38 years is very long.

What is 'time cost of initial work'?

What does 'negative years of life after treatment' refer to?

The effects of MBSR on depression/anxiety (or is it just anxiety? I haven't checked the studies yet) you report are much weaker than I expected. The 7% number suggests that just 7% of the 'DALY-badness' of anxiety has been removed, suggesting it makes a dent in, rather than 'cures', the condition. Do you know what's going on here? Is MBSR partially effective (and how does this compare to CBT)? Are these perhaps studies on low-anxiety people who got completely cured? Something else? I would think the effect, while it lasts, would have a much higher impact.

The time cost for continued practice seems odd to me. First, it's pretty implausible that every person who went on an MBSR course would do 1 hour's practice each day (and very implausible if you assume they will do this for the next 38 years!). Second, you seem to be assuming the DALY weight of each hour meditating is 0.5, which is roughly as bad as being anxious anyway, no? Surely time meditating can't be that painful. Unless you think meditation is actually unpleasant, something people suffer through to get less stressed when not meditating, I'd remove that part of the CEA. Meditation seems neutral/good IME. The appropriate counterfactual is how pleasant meditation is vs whatever they would have been doing instead with that time (e.g. watching TV?).
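To make this concrete, here's a rough sketch of how the practice-time term behaves under different counterfactuals (the function and all numbers are my own illustration, not from Elizabeth's CEA):

```python
# Illustrative back-of-the-envelope: the DALY cost of daily practice time
# depends entirely on the counterfactual activity. All numbers are made up.

HOURS_PER_YEAR = 365  # one hour of practice per day


def practice_time_dalys(years, weight_meditating, weight_counterfactual):
    """DALY cost of practice time relative to the displaced activity.

    Weights are per-hour well-being penalties (0 = neutral). Only the
    *difference* between meditating and whatever it displaces (e.g.
    watching TV) should enter the model.
    """
    hours = years * HOURS_PER_YEAR
    return hours * (weight_meditating - weight_counterfactual) / (24 * 365)


# Treating practice time as weight 0.5 vs a neutral counterfactual:
# roughly 0.8 DALYs over 38 years, a large apparent cost.
print(practice_time_dalys(38, 0.5, 0.0))

# Treating it as no worse than TV (both roughly neutral): the cost vanishes.
print(practice_time_dalys(38, 0.0, 0.0))
```

The point is that the model's practice-time penalty is driven entirely by the assumed weight gap, not by the hours themselves.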

Your model also seems to assume that, if not for the treatment, the person wouldn't have had MBSR at all. Given the spread of mindfulness practice worldwide, I think this is better thought of as "if we fund this intervention, how much earlier will it cause 1 average person to start practising mindfulness than they otherwise would have?" If the person would have been an avid mindfulness-er 5 years later anyway, the effect is just 5 years. There's also the possible counterfactual that teaching this one person sped up the spread of mindfulness because they pass it on to their friends. And there's the possibility they would have used something else, such as CBT, to treat their depression/anxiety rather than leaving it untreated. Or that their depression/anxiety would have ended naturally. I'm unsure how to work through these counterfactuals, but they ought to be flagged even if you ultimately say "I'm just going to leave aside these counterfactual effects as too complicated".
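One way to frame just the timing counterfactual (my framing, purely illustrative, not from the model): the attributable benefit window is only the gap between funded uptake and when the person would have started practising anyway.

```python
# Illustrative: benefit accrues only until the person would have
# adopted mindfulness on their own. All numbers are hypothetical.

def counterfactual_benefit_years(modelled_duration, years_until_adoption_anyway):
    """Years of benefit attributable to funding the course now."""
    return min(modelled_duration, years_until_adoption_anyway)


# If the model assumes 38 years of effect but the person would have
# started on their own after 5 years, only 5 years are attributable.
print(counterfactual_benefit_years(38, 5))   # 5

# If they would never have started otherwise, the full duration counts.
print(counterfactual_benefit_years(38, 50))  # 38
```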

Another worry comes when I ask myself "would this be a good thing for EAs to fund?" It seems anyone with access to the internet could self-teach mindfulness if they really wanted to. Hence the relevant obstacles are that people don't want to do it or aren't aware of it. I doubt there are hordes of people who know about MBSR and would do it but lack the funds to pay for the course themselves. In the developed world, people could probably cough up $300 themselves. It seems a bit weird for EAs to be paying for the medical treatments of other people in the developed world. Suppose, instead, this is a medical treatment to be offered to the depressed/anxious in the developing world. Then my concern is one of cultural barriers and that take-up of mindfulness would be quite low (intuitively, this seems like a bigger problem for mindfulness than for CBT).

If the true obstacles aren't money but awareness or motivation, that suggests the better thing for EAs to do might be paying for public campaigns that advertise mindfulness, e.g. via Development Media International. My concern is then neglectedness: there are(/will be) companies trying to market mindfulness to people for a profit. If this is true, EAs might want to leave this to the market and do something else. I'm not quite sure how to think about this either.

In response to What consequences?
Comment author: MichaelPlant 28 November 2017 09:12:53PM 4 points [-]

A potential objection here is that the Austrian physician could in no way have foreseen that the infant they were called to tend to would later become a terrible dictator, so the physician should have done what seemed best given the information they could uncover. But this objection only highlights the difficulty presented by cluelessness. In a very literal sense, a physician in this position is clueless about what action would be best. Assessing only proximate consequences would provide some guidance about what action to take, but this guidance would not necessarily point to the action with the best consequences in the long run.

I think this example undermines, rather than supports, your point. Of course it's possible the baby would have grown up to be Hitler. It's also possible the baby would have grown up to be a great scientist. Hence, from the perspective of the doctor, who is presumably working on expected value and has no reason to think one special case is more likely than the other, these presumably just do cancel out. Hence the doctor looks to the obvious consequences. This seems like a case of what Greaves calls simple cluelessness.

A couple of general comments. There is already an academic literature on cluelessness, and it's known to some EAs. It would be helpful, therefore, if you made it clear what you're doing that's novel. I don't mean this in a disparaging way. I simply can't tell whether you're disagreeing with Greaves et al. or not. If you are, that's potentially very interesting, and I want to know exactly what the disagreement is so I can assess it and see if I want to take your side. If you're not presenting a new line of thought, but just summarising or restating what others have said (perhaps in an effort to bring this information to new audiences, or just for your own benefit), you should say that instead so that people can better decide how closely to read it.

Additionally, I think it's unhelpful to (re)invent new terminology without a good reason. I can't tell the clear difference between proximate, indirect and long-run consequences. I would much have preferred it if you'd explained cluelessness using Greaves' set-up and then progressed from there as appropriate.

Comment author: MichaelPlant 26 November 2017 10:53:12PM 1 point [-]

Thanks very much for this. I just want to add a twist to this:

Counterintuitively, this suggests that you should stay away from new technologies: it is very likely that someone will try “machine learning for X” relatively soon, so it is unlikely to be neglected.

EAs don't have to stay away from new tech. You could plan to have an impact by getting rich via being the first to build cutting-edge tech and then giving your money away; basically doing a variant of 'earning to give'. In this case your company wouldn't have done much good directly, because what you call the 'time advantage' would be so tiny, and the value would come from your donations. This presumes the owners of the company you beat wouldn't have given their money away.

Comment author: Halstead 29 October 2017 11:51:52PM *  6 points [-]

Hi Greg, thanks for this post, it was very good. I thought it would help future discussion to separate these claims, which leave your argument ambiguous:

  1. You should give equal weight to your own credences and those of epistemic peers on all propositions for which you and they are epistemic peers.
  2. Claims about the nature of the community of epistemic peers and our ability to reliably identify them.

In places, you seem to identify modesty with 1; in others, with the conjunction of 1 and a subset of the claims in 2. 1 doesn't seem sufficient on its own for modesty, for if 1 is true but I have no epistemic peers or can't reliably identify them, then I should pay lots of attention to my own inside view of an issue. Similarly, if EAs have no epistemic peers or superiors, then they should ignore everyone else. This is compatible with conciliationism but seems immodest. The relevant claim in 2 seems to be that for most people, including EAs, with beliefs about practically important propositions, there are epistemic peers and superiors who can be reliably identified.

This noted, I wonder how different the conjunction of 1 and 2 is to epistemic chauvinism. It seems to me that I could accept 1 and 2 but demote people from my epistemic peer group with respect to a proposition p if they disagree with me about p. If I have read all of the object-level arguments on p and someone else has as well, and we disagree on p, then demotion seems appropriate at least in some cases. To give an example, I've read and thought about vagueness less than lots of much cleverer philosophers who hold a view called supervaluationism, which I believe to be extremely implausible. I believe I can explain why they are wrong with the object-level arguments about vagueness. I received the evidence that they disagree. Very good, I reply: they are not my epistemic peers with respect to this question, for object-level reasons x, y, and z. (Note that my reasons for demoting them are the object-level reasons; they are not that I believe that supervaluationism is false. Generally, the fact that I believe p is not my reason to believe that p.) This is entirely compatible with the view that I should be modest with respect to my epistemic peers.

In this spirit, I find Scott Sumner's quote deeply strange. If he thinks that "there is no objective reason to favor my view over Krugman's", then he shouldn't believe his view over Krugman's (even though he (Sumner) does). If I were in Sumner's shoes after reasoning about p and reading the object-level reasons about p, then I would either become agnostic or demote Krugman from my epistemic peer group.

Comment author: MichaelPlant 30 October 2017 04:20:52PM 2 points [-]

Gregory, thanks for writing this up. Your writing style is charming and I really enjoy reading the many deft turns of phrase.

Moving on to the substance, I think I share JH's worries. What seems missing from your account is why people have the credences they have. Wouldn't it be easiest just to go and assess the object-level reasons people have for their credences? For instance, with your Beatrice and Adam example, one (better?) way to make progress on finding out whether it's an oak or not is to ask them for their reasons, rather than ask them to state their credences and take those on trust. If Beatrice says "I am a tree expert but I've left my glasses at home so can't see the leaves" (or something) whereas Adam gives a terrible explanation ("I decided every fifth tree I see must be an oak tree"), that would tell us quite a lot.

Perhaps we should defer to others either when we don't know what their reasons are but need to make a decision quickly, or when we think they have the same access to object-level reasons as we do (potential example: two philosophers who've read everything but still disagree).

Comment author: Gregory_Lewis 28 October 2017 09:25:08AM 1 point [-]

Respectfully, I take 'challenging P' to require offering considerations for ¬P. Remarks like "I worry you're using a fully-general argument" (without describing what it is or how my remarks produce it), "I don't think your analogy is very solid" (without offering dis-analogies) don't have much more information than simply "I disagree".

1) I'd suggest astronomical stakes considerations imply that at least one of the 'big three' does have extremely large marginal returns. If one prefers something much more concrete, I'd point to the humane reforms improving quality of life for millions of animals.

2) I don't think the primacy of the big three depends in any important way on recondite issues of disability weights or population ethics. Conditional on a strict person affecting view (which denies the badness of death) I would still think the current margin of global health interventions should offer better yields. I think this based on current best estimates of disability weights in things like the GCPP, and the lack of robust evidence for something better in mental health (we should expect, for example, Enthea's results to regress significantly, perhaps all the way back to the null).

On the general point: I am dismissive of mental health as a cause area insofar as I don't believe it to be a good direction for EA energy to go relative to the other major ones (and especially my own 'best bet' of xrisk). I don't want it to be a cause area as it will plausibly compete for time/attention/etc. with other things I deem more important. I'm no EA leader, but I don't think we need to impute some 'anti-weirdness bias' (which I think is facially implausible given the early embrace of AI stuff etc) to explain why they might think the same.

Naturally, I may be wrong in this determination, and if I am wrong, I want to know about it. Thus having enthusiasts go into more speculative things outside the currently recognised cause areas improves likelihood of the movement self-correcting and realising mental health should be on a par with (e.g.) animal welfare as a valuable use of EA energy.

Yet anointing mental health as a cause area before this case has been persuasively made would be a bad approach. There are many other candidates for 'cause area No. n+1' which (as I suggested above) have about the same plausibility as mental health. Making them all recognised 'cause areas' seems the wrong approach. Thus the threshold should be higher.

Comment author: MichaelPlant 28 October 2017 10:59:37AM *  2 points [-]

Just to chip in.

I agree that, if you care about the far future, mental health (along with poverty, physical health and pretty much anything apart from X-risk-focused interventions) will at least look like a waste of time. Further analysis may reveal this to be a bit more complicated, but this isn't the time for such complicated further analysis.

I don't want it to be a cause area as it will plausibly compete for time/attention/etc

I think this probably isn't true, just because those interested in current-human vs far-future stuff are two different audiences. It's more a question of whether, in as much as people are going to focus on current stuff, they would do more good if they focused on mental health over poverty. There's a comment about moral trade to be made here.

I also find the apparent underlying attitude here unsettling. It's sort of an 'I think your views are stupid and I'm confident I know best, so I just want to shut them out of the conversation rather than let others make up their own minds' approach. On a personal level, I find this thinking (which, unless I'm paranoid, I've encountered in the EA world before) really annoying. I say some stuff in this area in this post on moral inclusivity.

I also think both of you are being too hypothetical about mental health. Halstead and Snowden have a new report where they reckon Strong Minds is $225/DALY, which is comparable to AMF if you think AMF's life-saving is equivalent to 40 years of life-improving treatments.
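To spell out that comparison (the AMF cost-per-life figure below is an assumption of mine for illustration; only the $225/DALY for Strong Minds comes from the report):

```python
# If a life saved is treated as equivalent to 40 years of life-improving
# treatment, AMF's implied cost per DALY-equivalent is simply
# cost-per-life divided by 40.

strong_minds_per_daly = 225   # USD/DALY, from the Halstead and Snowden report
amf_cost_per_life = 7500      # USD per life saved (illustrative assumption)
years_per_life_saved = 40     # the equivalence suggested above

amf_per_daly = amf_cost_per_life / years_per_life_saved
print(amf_per_daly)  # 187.5, the same ballpark as 225
```

On an assumed cost per life in the low thousands of dollars the comparison shifts in AMF's favour, so the "comparable" claim is sensitive to that input.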

Drug policy reform I consider to be less at the 'this might be a good idea but we have no reason to think so' stage and more at the 'oh wow, if this is true it's really promising and we should look into whether it is true' stage. I'm unclear what the bar is to be anointed an 'official cause', or who we should allow to be in charge of such censorious judgements.

Comment author: MichaelPlant 27 October 2017 10:29:09AM 2 points [-]

I'm really not sure why my comment was so heavily downvoted without explanation. I'm assuming people think discussion of inclusion issues is a terrible idea. Assuming that is what I've been downvoted for, that makes me feel disappointed in the online EA community and increases my belief this is a problem.

I tried to avoid things that have already been discussed heavily and publicly in the community

I think this may be part of the problem in this context. Some EAs seem to take the attitude (I'm exaggerating a bit for effect) that if there was a post on the internet about it once, it's been discussed. This itself is pretty unwelcoming and exclusive, and it penalises people who haven't been in the community for multiple years or haven't spent many hours reading around internet posts. My subjective view is that this topic is under-discussed relative to how much I feel it should be discussed.

Comment author: MichaelPlant 28 October 2017 12:54:40AM *  6 points [-]

So many different boxes to reply to! I'll do one reply for everything here.

My main reflection is that either 1. I really haven't personally had much discussion of inclusivity in my time in the EA movement (and this may just be an outlier/coincidence) or 2. I'm just much more receptive to this sort of chat than the average EA. I live among Oxford students and this probably gives me a different reference point (e.g. people do sometimes introduce themselves with their pronouns here). I forget how disconcertingly social justice-y I found the University when I first moved here.

Either way, the effect is I really haven't felt like I've had too many discussions in EA about diversity. It's not like it's my favourite topic or anything.

Comment author: vipulnaik 27 October 2017 02:02:08PM 7 points [-]

I'm not sure why you brought up the downvoting in your reply to my reply to your comment, rather than replying directly to the downvoted comment. To be clear, though, I did not downvote the comment, ask others to downvote the comment, or hear from others saying they had downvoted the comment.

Also, I could (and should) have been clearer that I was focusing only on points that I didn't see covered in the post, rather than providing an exhaustive list of points. I generally try to comment with marginal value-add rather than reiterating things already mentioned in the post, which I think is sound, but for others who don't know I'm doing that, it can be misleading. Thank you for making me notice that.


I think this may be part of the problem in this context. Some EAs seem to take the attitude (I'm exaggerating a bit for effect) that if there was a post on the internet about it once, it's been discussed.

In my case, I was basing it on stuff explicitly, directly mentioned in the post on which I am commenting, and a prominently linked post. This isn't "there was a post on the internet about it once" this is more like "it is mentioned right here, in this post". So I don't think my comment is an example of this problem you highlight.

Speaking to the general problem you claim happens, I think it is a reasonable concern. I don't generally endorse expecting people to have intricate knowledge of years' worth of community material. People who cite previous discussions should generally try to link as specifically as possible to them, so that others can easily know what they're talking about without having had a full map of past discussions.

But imo it's also bad to bring up points as if they are brand new, when they have already been discussed before, and especially when others in the discussion have already explicitly linked to past discussions of those points.

Comment author: MichaelPlant 28 October 2017 12:51:31AM 1 point [-]

I'm not sure why you brought up the downvoting in your reply to my reply to your comment

Sorry. That was a user error.
