Comment author: Elizabeth 24 June 2017 05:23:01PM 0 points

I think costly signaling is the wrong phrase here. Costly signaling is about gain for the signaler. This seems better modeled as people trying to indirectly purchase the good "rich people donate lots to charity". It's similar to people who are unwilling to donate money to the government (so they don't think the government is better at spending money than they are) but do advocate for higher taxes (meaning they think the government is better at spending money than other people are): they're trying to purchase the good "higher taxes for everyone".

Comment author: Peter_Hurford 27 April 2017 02:03:01AM 2 points

It's worth noting that it's all pretty fungible anyway. GiveWell could have just as easily claimed the money was going toward an incubation grant and then put more incubation grant money toward AMF.

Comment author: Elizabeth 27 April 2017 02:48:52AM 4 points

This seems like an excellent reason to have someone uninvolved with an existing large organization administer the fund.

Comment author: BenHoffman 27 April 2017 01:21:08AM * 2 points

On the other hand, it does seem worthwhile to funnel money through different intermediaries sometimes if only to independently confirm that the obvious things are obvious, and we probably don't want to advocate contrarianism for contrarianism's sake. If Elie had given the money elsewhere, that would have been strong evidence that the other thing was valuable and underfunded relative to GW top charities (and also worrying evidence about GiveWell's ability to implement its founders' values). Since he didn't, that's at least weak evidence that AMF is the best global poverty funding opportunity we know about.

Overall I think it's good that Elie didn't feel the need to justify his participation by doing a bunch of makework. This is still evidence that channeling money through Elie probably gives a false impression of additional optimizing power, but I think that should have been our strong prior anyhow.

Comment author: Elizabeth 27 April 2017 01:37:08AM 4 points

If Elie had given the money elsewhere, that would have been strong evidence that the other thing was valuable and underfunded relative to GW top charities.

Only if GiveWell and the EA Fund are both supposed to be perfect expressions of Elie's values. GiveWell has a fairly specific mission which includes not just high expected value but high certainty (compared to the rest of the field, which is a low bar). EA Funds was explicitly supposed to be more experimental. Like you say below, if organizers don't think you can beat GiveWell, encourage donating to GiveWell.

Comment author: Elizabeth 26 April 2017 11:48:48PM 9 points

I'm shocked that no one has commented on Elie Hassenfeld distributing 100% of the money to GiveWell's top charity. Even if he didn't run GiveWell, this just adds an extra step to giving to GiveWell. But given that one of the main arguments for the funds was to let smaller projects get funded quickly and with less overhead, giving 100% to one enormous charity with many large donors clearly fails at that goal.

I would guess that $300k simply isn't worth Elie's time to distribute in small grants, given the enormous funds available via Good Ventures and even GiveWell's direct and directed donations. It seems to me the obvious thing to do is have the fund managed by someone who has the time to do so, rather than create another way to give money to GiveWell.

Comment author: Jeff_Kaufman 26 April 2017 08:46:19PM 2 points

it now looks like you're criticizing a reasonable post, at least on mobile

When I look at this on mobile I see:

This doesn't look confusing to me, but does it to you? Or do you see something else?

(If the layout makes it look like replies to deleted comments are replies to the post, that's a problem we should and can fix.)

Comment author: Elizabeth 26 April 2017 11:40:14PM 1 point

I can see it clearly now; not sure if I was inattentive or something went wrong the first time I loaded the page.

Comment author: BenHoffman 24 April 2017 04:06:11AM 9 points

On (1) I agree that GiveWell has done a huge public service by making many parts of its decision-making process public, letting us track down what their sources are, etc. But making it really easy for an outsider to audit GiveWell's work, while an admirable behavior, does not imply that GiveWell has done a satisfactory audit of its own work. It seems to me like a lot of people are inferring the latter from the former, and I hope by now it's clear what reasons there are to be skeptical of this.

On (3), here's why I'm worried about increasing overt reliance on the argument from "believe me":

The difference between making a direct argument for X, and arguing for "trust me" and then doing X, is that in the direct case, you're making it easy for people to evaluate your assumptions about X and disagree with you on the object level. In the "trust me" case, you're making it about who you are rather than what is to be done. I can seriously consider someone's arguments without trusting them so much that I'd like to give them my money with no strings attached.

"Most effective way to donate" is vanishingly unlikely to be generically true for all donors, and the aggressive pitching of these funds turns the supposed test of whether there's underlying demand for EA Funds into a test of whether people believe CEA's assurances that EA Funds is the right way to give.

Comment author: Elizabeth 24 April 2017 01:49:09PM 2 points

Do you think "trust me" arguments are inherently invalid, or that in this case sufficient evidence hasn't been presented?

In response to comment by [deleted] on Effective altruism is self-recommending
Comment author: Benito 23 April 2017 06:36:42PM 0 points

This comment is confusing. The content is entirely common knowledge around here, it doesn't respond to any of the main claims of the post it replies to, and it's very long. Why did you post it?

Comment author: Elizabeth 24 April 2017 01:46:10PM * 3 points

FYI, the original comment was deleted, and it now looks like you're criticizing a reasonable post, at least on mobile.

Comment author: Lee_Sharkey 07 February 2017 09:27:23AM * 2 points

Hi Tom,

Great to hear that it's been suggested. By the looks of it, it may be an area better suited to an Open Philanthropy Project-style approach, being primarily a question of policy and having a sparser evidence base and impact definition difficulties. I styled my analysis around OPP's approach (with some obvious shortcomings on my part).

I could have done better in the analysis to distinguish between the various types of pain. As you say, they are not trivial distinctions, especially when it comes to treatment with opioids.

I'd be interested to hear your take on the impact of pain control on the nature of medicine and the doctor-patient dynamic. What trends are you concerned about hastening exactly?

Comment author: Elizabeth 08 February 2017 04:04:55AM 0 points

I'm concerned in almost the opposite direction: that having the doctor act as gatekeeper to something the patient legitimately needs, with the threat of taking it away if the patient doesn't look sick enough, corrupts the doctor-patient relationship and the healing process.

Comment author: Elizabeth 02 February 2017 09:34:19PM * 4 points

I'm super happy to see people taking this seriously. Why the emphasis on opioids? My understanding is that they're bad for chronic pain because you acclimate so quickly, and they usually don't affect pain that is purely neurological. Cannabidiol works better for many people, is overwhelmingly safer, and can be grown at home. Kratom has a reputation for being good, although I know less about it.

Comment author: Elizabeth 16 January 2017 05:53:25PM 5 points

A list of ethical and practical concerns the EA movement has with Intentional Insights: .

Gleb Tsipursky has also repeatedly said he will leave the EA movement.
