Comment author: vipulnaik 17 January 2017 09:42:38PM 1 point [-]

"So if I could be expected to work 4380 hours over 2016-2019, earn $660K (95%: $580K to $860K) and donate $160K, that’s an expected earnings of $150.68 per hour worked. [...] I consider my entire earnings to be the altruistic value of this project."

What about taxes?

Comment author: Peter_Hurford  (EA Profile) 18 January 2017 04:23:53AM 0 points [-]

Yeah, that's a good point, since it scales with my income. I should include that in the model.
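As a rough illustration of how taxes change the hourly figure, here is a minimal sketch. The hours and earnings come from the quoted post; the effective tax rate is purely an assumption for illustration, not a figure from the model.

```python
# Hourly-value estimate from the quoted post, with a hypothetical
# effective tax rate applied to earnings.
hours = 4380                # expected hours worked, 2016-2019
earnings = 660_000          # expected earnings (USD)
effective_tax_rate = 0.30   # assumption for illustration only

pre_tax_per_hour = earnings / hours
post_tax_per_hour = earnings * (1 - effective_tax_rate) / hours

print(round(pre_tax_per_hour, 2))   # 150.68, matching the quoted figure
print(round(post_tax_per_hour, 2))  # 105.48 under the assumed rate
```

The point of the sketch is just that any tax adjustment scales the per-hour value linearly, so it is straightforward to add to the model.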

Comment author: Ben_Todd 13 January 2017 04:43:01PM 2 points [-]

Hey Peter,

Quick comments on the value of a vote stuff.

First, the figures in our post should not be taken as "estimates of the value of a vote". Rather, we point to various ways you could make such an estimate, and show that with plausible assumptions, you get very high figures. We're not saying these are the figures we believe.

Second, the figures were in terms of "US social value", which can be understood as something like "the value of making a random American $1 wealthier."

You seem to be measuring the value of your time in "GiveWell dollars" i.e. the value of donations to top recommended GiveWell charities.

To convert between the two is tricky, but it's something like:

  • How much better is it to make the global poor wealthier vs. Americans? (suppose 30x)
  • How much better is SCI than cash transfers? (suppose 5x)

In total that gives you a 150x difference.

So $1m of US social value ~ $6700 GiveWell dollars.
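Ben's conversion can be written out explicitly. Both multipliers are the illustrative assumptions from his bullet points, not settled estimates:

```python
# Converting "US social value" into "GiveWell dollars" using the
# illustrative multipliers above (both are assumptions).
poor_vs_us = 30    # making the global poor wealthier vs. Americans
sci_vs_cash = 5    # SCI vs. direct cash transfers

multiplier = poor_vs_us * sci_vs_cash        # 150x total
givewell_dollars = 1_000_000 / multiplier    # ~6667, i.e. roughly $6,700

print(multiplier, round(givewell_dollars))
```

Changing either multiplier changes the result proportionally, which is why the figure should be read as an order-of-magnitude conversion rather than a precise rate.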

Comment author: Peter_Hurford  (EA Profile) 16 January 2017 06:40:32PM 0 points [-]

Thanks Ben, I revised my estimate in light of your comment! Hopefully I also phrased 80K's conclusion more correctly.

Comment author: Peter_Hurford  (EA Profile) 16 January 2017 04:19:14PM 4 points [-]

Cool, I always love work surfacing an otherwise unknown donation opportunity! I also find your initial framework compelling and think it motivates some of my donations, for example with SHIC.

Under "Reservations about the donation", I think it's worth mentioning the possibility that the threat is misperceived and the Trump administration turns out to not pose any significant risk to the integrity or existence of those datasets.

Comment author: JBeshir 12 January 2017 02:01:50PM 10 points [-]

One very object-level thing which could be done to make longform, persistent, not hit-and-run discussion in this particular venue easier: Email notifications of comments to articles you've commented in.

There doesn't seem to be a preference setting for that, and it doesn't seem to be default, so it's only because I remember to come check here repeatedly that I can reply to things. Nothing is going to be as good at reaching me as Facebook/other app notifications on my phone, but email would do something.

Comment author: Peter_Hurford  (EA Profile) 15 January 2017 06:01:21AM 2 points [-]
Comment author: Brian_Tomasik 14 January 2017 05:45:08PM 3 points [-]

I'm surprised to hear that people see criticizing EA as incurring social costs. My impression was that many past criticisms of EA have been met with significant praise (e.g., Ben Kuhn's). One approach for dealing with this could be to provide a forum for anonymous posts + comments.

Comment author: Peter_Hurford  (EA Profile) 14 January 2017 09:25:30PM 4 points [-]

I think it really depends on who you criticize. I perceive criticizing particular people or organizations as having significant social costs (though I'm not saying whether those costs are merited or not).

Comment author: JBeshir 13 January 2017 10:08:12AM *  4 points [-]

This definitely isn't the kind of deliberate where there's an overarching plot, but it's not distinguishable from the kind of deliberate where a person sees a thing they should do, or a reason not to write what they're writing, and knowingly ignores it; though I'd agree that I think it's more likely they flinched away unconsciously.

It's worth noting that while Vegan Outreach is not listed as a top charity, it is listed as a standout charity.

I don't think it is good to laud positive evidence but refer to negative evidence only by saying "there is a lack of evidence", which is what the disclaimers do; in particular there's no mention of the evidence against there being any effect at all. Nor is it good to refer to studies which are clearly entirely invalid as merely "poor" while still relying on their data. It shouldn't be "there is good evidence" when there's evidence for, and "the evidence is still under debate" when there's evidence against, and there shouldn't be a "gushing praise upfront, provisos later" approach unless you feel the praise is still justified after the provisos. And "have reservations" is pretty weak. These are not good acts from a supposedly neutral evaluator.

As an example of this, until the revision in November 2016 the VO page opened with: "Vegan Outreach (VO) engages almost exclusively in a single intervention, leafleting on behalf of farmed animals, which we consider to be among the most effective ways to help animals." Even now I don't think it represents the state of affairs well.

If, in trying to resolve the matter of whether it has high expected impact or not, you went to the main review on leafleting, you'd find it began with "The existing evidence on the impact of leafleting is among the strongest bodies of evidence bearing on animal advocacy methods."

This is a very central Not Technically a Lie; the example of a not-technically-a-lie in that post is using the phrase "The strongest painkiller I have." to refer to something with no painkilling properties when you have no painkillers. I feel this isn't something that should be taken lightly:

"NTL, by contrast, may be too cheap. If I lie about something, I realize that I'm lying and I feel bad that I have to. I may change my behaviour in the future to avoid that. I may realize that it reflects poorly on me as a person. But if I don't technically lie, well, hey! I'm still an honest, upright person and I can thus justify viciously misleading people because at least I'm not technically dishonest."

The disclaimer added now helps things, but good judgement should have resulted in an update and correction being transparently issued well before now.

The part which strikes me as most egregious was in the deprioritising of updating a review on what was described in a bunch of places as the most cost effective (and therefore most effective) intervention. I can't see any reason for that, other than that the update would have been negative.

There may not have been conscious intent behind this (I could assume it was the result of poor judgement rather than design), but it did mislead the discourse on effectiveness. That already happened, and not as a result of people doing the best thing given the information available to them, but as a result of poor decisions given that information. Whether it got more donations or not is unclear; it might have tempted more people into offsetting, but on the other hand each person who did offset would have paid less, because they wouldn't have actually offset themselves.

However something like this is handled is also how a bad actor would be handled, because a bad actor would be indistinguishable from this; if we let this by without criticism and reform, then bad actors would also be let by without criticism and reform.

I think when it comes to responding to some pretty severe stuff of this sort, even if you assume the people made them in good faith and just made some rationality failings, more needs to be said than "mistakes were made, we'll assume you're doing the best you can to not make them again". I don't have a grand theory of how people should react here, but it needs to be more than that.

My inclination is to at the least frankly express how severe I think it is- even if it's not the nicest thing I could say.

Comment author: Peter_Hurford  (EA Profile) 13 January 2017 08:04:42PM 2 points [-]

in particular there's no mention of the evidence against there being any effect at all.

To be clear, it's inaccurate to describe the studies as showing evidence of no effect. All of the studies are consistent with a range of possible outcomes that includes no effect (and even a negative effect!), but they're also consistent with a positive effect.

That isn't to say that there is a positive effect.

But it isn't to say there's a negative effect either.

I think it is best to describe this as a "lack of evidence" one way or another.


I don't think it is good to laud positive evidence but refer to negative evidence only via saying "there is a lack of evidence", which is what the disclaimers do

I don't think there's good evidence that anything works in animal rights and if ACE suggests anything anywhere to the contrary I'd like to push against it.

Comment author: JBeshir 11 January 2017 07:47:21PM 3 points [-]

Copying my post from the Facebook thread:

Some of the stuff in the original post I disagree on, but the ACE stuff was pretty awful. Animal advocacy in general has had severe problems with falling prey to the temptation to exaggerate or outright lie for a quick win today, especially about health, and it's disturbing that apparently the main evaluator for the animal rights wing of the EA movement has already decided to join it and throw out actually having discourse on effectiveness in favour of plundering their reputation for more donations today. A mistake is a typo, or leaving something up accidentally, or publishing something early by accident, and it only counts as mitigation if corrective action was taken once detected. This was at minimum negligence, but given that it's been there for years without the trivial effort to fix it, it should probably be regarded as just a lie. ACE needs replacing with a better and actually honest evaluator.

One of the ways this negatively impacted the effectiveness discourse: During late 2015 there was an article written arguing for ethical offsetting of meat eating, but it used ACE's figures, and so understated the amounts people needed to donate by possibly multiple orders of magnitude.

More concerning is the extent to which the (EDIT: Facebook) comments on this post and the previously cited ones go ahead and justify even deliberate lying, "Yes, but hypothetically lying might be okay under some circumstances, like to save the world, and I can't absolutely prove it's not justified here, so I'm not going to judge anyone badly for lying", as with Bryd's original post as well. The article sets out a pretty weak case for "EA needs stronger norms against lying" aside from the animal rights wing, but the comments basically confirm it.

I know that answering "How can we build a movement that matches religious movements in output, how can we grow and build effectiveness, how can we coordinate like the best, how can we overcome that people think that charity is a scam?" with "Have we considered /becoming pathological liars/? I've not proven it can't work, so let's assume it does and debate from there" is fun and edgy, but it's also terrible.

I can think of circumstances where I'd void my GWWC pledge; if they ever pulled any of this "lying to get more donations" stuff, I'd stick with TLYCS and a personal commitment but leave their website.

Comment author: Peter_Hurford  (EA Profile) 11 January 2017 08:03:33PM 9 points [-]

I'm involved with ACE as a board member and independent volunteer researcher, but I speak for myself. I agree with you that the leafleting complaints are legitimate -- I've been advocating more skepticism toward the leafleting numbers for years. But I feel like it's pretty harsh to think ACE needs to be entirely replaced.

I don't know if it's helpful, but I can promise you that there's no intentional PR campaign on behalf of ACE to over-exaggerate in order to grow the movement. All I see is an overworked org with insufficient resources to double check all the content on their site.

Judging the character of the ACE staff through my interactions with them, I don't think there was any intent to mislead on leaflets. I'd put it more as negligence arising from over-excitement from the initial studies (despite lots of methodological flaws), insufficient skepticism, and not fully thinking through how things would be interpreted (the claim that leafleting evidence is the strongest among AR is technically true). The one particular sentence, among the thousands on the site, went pretty much unnoticed until Harrison brought it up.

Comment author: LaurenMcG  (EA Profile) 10 January 2017 08:36:08PM 2 points [-]

Has anyone calculated a rough estimate for the value of an undergraduate student's hour? Assume they attend a top UK university, currently are unemployed, and plan to pursue earning to give. Thanks in advance for any info or links!

Comment author: Peter_Hurford  (EA Profile) 11 January 2017 05:54:46PM 2 points [-]

I think as an undergraduate you have to be very sensitive about marginal time, since that can vary drastically. When you're young, you're at the height of being able to invest in yourself, so I'd make that the number one priority as long as you are able to afford it.

Comment author: cafelow  (EA Profile) 09 January 2017 07:26:32PM 0 points [-]

I guess we'll find out :)

Comment author: Peter_Hurford  (EA Profile) 09 January 2017 10:35:44PM 1 point [-]

Not if Linch can't get more volunteers!

Comment author: cafelow  (EA Profile) 09 January 2017 02:14:39AM 1 point [-]

Sounds great Linch. My only thought is... if this is supposed to inform the usual rank-and-file GWWC members about whether and how to approach talking to others, you should try to get a fairly normal distribution of GWWC members to be the experimentees. My guess is that Peter Hurford may well be more convincing to his friends than the average GWWC member.

Comment author: Peter_Hurford  (EA Profile) 09 January 2017 06:18:02PM 1 point [-]

My guess is that Peter Hurford may well be more convincing to his friends than the average GWWC member.

I don't know about that -- my friends are not very EA inclined and I'm not really any less awkward about talking about GWWC than anyone else I know.
