Comment author: Lee_Sharkey 07 February 2017 09:27:23AM *  2 points [-]

Hi Tom,

Great to hear that it's been suggested. By the looks of it, it may be an area better suited to an Open Philanthropy Project-style approach, being primarily a question of policy, with a sparser evidence base and difficulties in defining impact. I styled my analysis around OPP's approach (with some obvious shortcomings on my part).

I could have done better in the analysis to distinguish between the various types of pain. As you say, they are not trivial distinctions, especially when it comes to treatment with opioids.

I'd be interested to hear your take on the impact of pain control on the nature of medicine and the doctor-patient dynamic. What trends are you concerned about hastening exactly?

Comment author: Elizabeth 08 February 2017 04:04:55AM 0 points [-]

I'm concerned in almost the opposite direction: that having the doctor act as gatekeeper to something the patient legitimately needs, with the threat of taking it away if the patient doesn't look sick enough, corrupts the doctor-patient relationship and the healing process.

Comment author: Elizabeth 02 February 2017 09:34:19PM *  3 points [-]

I'm super happy to see people taking this seriously. Why the emphasis on opioids? My understanding is that they're bad for chronic pain because you acclimate so quickly, and they usually don't affect pain that is purely neurological. Cannabidiol works better for many people, is overwhelmingly safer (http://www.nytimes.com/roomfordebate/2016/04/26/is-marijuana-a-gateway-drug/overdoses-fell-with-medical-marijuana-legalization), and can be grown at home. Kratom has a reputation for being good, although I know less about it.

Comment author: Elizabeth 16 January 2017 05:53:25PM 5 points [-]

A list of ethical and practical concerns the EA movement has with Intentional Insights: http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/ .

Gleb Tsipursky has also repeatedly said he will leave the EA movement.

Comment author: Elizabeth 22 December 2016 05:03:27PM 0 points [-]

Hypothesis: there's lots of good, informal meta work to be done, like convincing your aunt to donate to GiveWell rather than Heifer International, or your company to do a cash fundraiser rather than a canned food drive. But the marginal returns diminish really quickly: once you've convinced all the relatives who are amenable, it is really hard to convince the holdouts or find new relatives. And the remaining work isn't just lower expected value; it has much slower, more ambiguous feedback loops, so it's easy to miss the transition.

Object-level work is hard, and there are few opportunities to do it part time. Part-time meta work is easy to find and sometimes very high value. My hypothesis is that when people think about doing direct work full time, these facts conspire to make meta work the default choice. In fact, full-time meta work is the most difficult thing, because of poor feedback loops, and the easiest to be actively harmful with, because you risk damaging the reputation of EA or of charity as a whole.

I think we need to flip the default so that people look to object-level work, not meta work, when they have exhausted their personal low-hanging fruit.

Comment author: Robert_Wiblin 19 December 2016 10:11:29PM 2 points [-]

The question is under what conditions you can break a pledge, as it's ambiguous.

I think 'this pledge no longer accomplishes the underlying goal which motivated my past self to take it' is a generally acceptable reason, and rightly so. Your past self would have wanted to write in such an exit clause if they had anticipated it (or had the flexibility), so there's no breakdown in cooperation.

Comment author: Elizabeth 19 December 2016 11:41:17PM 2 points [-]

I think I have a model where this makes sense: if you made a promise to another person, that's essentially an asset they have, and you could trade something they wanted more in exchange for being released from the promise. You view the GWWC pledge as making a promise to your past self and/or the world at large, so if something comes along that is a better trade for the world, you feel free to take it.

Does that sound right?

Comment author: Robert_Wiblin 09 December 2016 10:47:22PM 2 points [-]

Firstly: I think we should use the interpretation of the pledge that produces the best outcome. The usage GWWC and I apply is a completely mainstream use of the term 'pledge' (e.g. you 'pledge' to stay with the person you marry, but people nonetheless get divorced if they think the marriage is too harmful to continue).

A looser interpretation is better because more people will be willing to participate, and each person gains from a smaller and more reasonable push towards moral behaviour. We certainly don't want people to be compelled to do things they think are morally wrong - that doesn't achieve any EA goal and would itself be bad. Indeed, it's the original complaint here.

Secondly: An "evil future you" who didn't care about the good you can do through donations probably wouldn't care much about keeping promises made by a different kind of person in the past either.

Thirdly: The coordination thing doesn't really matter here because you are only 'cooperating' with your future self, who can't really reject you because they don't exist yet (unlike another person who is deciding whether to help you).

One thing I suspect is going on here is that people on the autism spectrum interpret all kinds of promises as more binding than neurotypical people do (e.g. https://www.reddit.com/r/aspergers/comments/46zo2s/promises/). I don't know if that applies to any individual here specifically, but I think it explains why some of us have very different intuitions. Still, I expect we will be able to do more good if we apply the neurotypical intuitions that most people share.

Of course if you want to make it fully binding for yourself, then nobody can really stop you.

Comment author: Elizabeth 16 December 2016 02:22:39AM 1 point [-]

" I think we should use the interpretation of the pledge that produces the best outcome. "

Why not write the pledge that has the best outcome? If pledging the behavior for life produces better outcomes, I think it's worth thinking about why.

Comment author: Eric_Bruylant 06 December 2016 11:01:40PM 0 points [-]

I like this! Especially if combined with a Schelling day for doing the thinking (possibly one in winter and one in summer?).

Comment author: Elizabeth 09 December 2016 10:01:42PM 0 points [-]

Comment author: Robert_Wiblin 04 December 2016 11:47:32PM *  6 points [-]

I've taken the pledge because I think it's a morally good thing to do and it's useful to have commitment strategies to help you live up to what you think is right. I expect to follow through, because I expect to believe that keeping the pledge is the right thing for me to do.

If it turns out to be bad, I will no longer do it, because there's no point having a commitment device to prompt you to follow through on something you don't think you should do. That's the only sensible way to act.

Comment author: Elizabeth 05 December 2016 04:08:50PM 8 points [-]

What is the situation where:

  1. Giving is the correct thing to do
  2. You wouldn't give (or would give less) if you hadn't signed the pledge
  3. You would give (more) because you have signed the pledge.

I think a disconnect here is that for many people, including myself, saying "I will do this for life" literally means "I will do this for life", with the compromise position being "I will do this unless it will end my life." It's not a commitment device, it's a commitment, and if you take it, giving less than 10% becomes morally wrong, even if, absent the pledge, giving 10% would be a bad idea.

Comment author: Kit 04 December 2016 09:27:36PM 5 points [-]

Does anyone have specific proposals for what kind of public pledge you would prefer to make, or ask people to make? Including guesses as to who would take or not take such a pledge would be helpful for assessing whether a change would be net positive.

I don't expect CEA to implement changes to the Giving What We Can Pledge any time soon, due to the substantial momentum cost, but I think we should focus on actionable statements to best understand what's going on here.

Comment author: Elizabeth 05 December 2016 04:01:16PM 4 points [-]

"I pledge to spend N hours/year evaluating how I could do the most good in the world and what the personal cost to me would be, and publish my results."

The N hours is still a cost rather than a result, which I dislike. I think the ultimate goal would be a moral aesthetic sense of when you've researched "enough", and a pledge to satisfy that. But this gets you one of the main advantages of the GWWC pledge, that it prompts you to donate and to think about your donation, without the cost of locking you in to a number. Yes, the pledge is fine with you donating more, but there is no mechanism for deciding when you should do so.
