Comment author: TruePath 11 May 2017 07:59:50AM -2 points [-]

Before I start, I should admit my bias here. I have a pet peeve about posts about mental illness like this one. When I suffered from depression, and my friend killed himself over it, there was nothing that pissed me off more than people passing on the same useless facts and advice to get help (as if that magically made it better) with the self-congratulatory attitude that they had done something about the problem and could move on. So what follows may be a result of unjust irritation/anger, but I really do believe that it causes harm when we pass on truisms like this and think of ourselves as helping, either by making those suffering feel like failures/hopeless/misunderstood (just get help and it's all good) or by causing us to believe we've done our part. Maybe this is just irrational bias; I don't know.


While I like the motivation, I worry that this article does more to make us feel that 'something is being done' than it does anything for EA community members with these problems. Indeed, I worry that sharing what amounts to fairly obvious truisms that any Google search would reveal actually saps our limited moral energy/consideration for those with mental illness (oh good, we've done our part).

Now I'm sure the poster would defend this piece by saying that while most EA people with these afflictions won't get any new information from it, some might, and it's good to inform them. Yes, if informing them were cost-free, it would be. However, there is still a cost in terms of attention, time, and pushing readers away from other issues. Indeed, unless you honestly believe that information about every mental illness ought to be posted on every blog in the world, it seems we ought to analyze how likely this content on this site is to be useful. I doubt EA members suffer these diseases at a much greater rate than the general population, while I suspect they are informed about these issues at a much greater rate, making this perhaps the least effective place to advertise this information.

I don't mean to downplay these diseases. They are serious problems, and to the extent there is something we can do with a high benefit/cost ratio, we should do it. So maybe a post identifying media that are particularly likely to serve afflicted individuals who would benefit from this information, and urging readers to submit it there, would be helpful.

Comment author: TruePath 11 May 2017 07:37:31AM 0 points [-]

This feels like nitpicking that gives the impression of undermining Singer's original claim when in reality the figures support it. I have no reason to believe Singer was claiming that, of all possible charitable donations, trachoma surgery is the most effective; rather, he was giving the most stunningly large difference in cost-effectiveness between charitable donations used for comparable ends (both concern blindness, so there are no hard comparisons across kinds of suffering/disability).

I agree that within the EA community, and when presenting EA analyses of cost-effectiveness, it is important to be upfront about the full complexity of the figures. However, <b>Singer's purpose at TED isn't to carefully pick the most cost-effective donations but to force people to confront the fact that cost-effectiveness matters.</b> While those of us already in EA might find a statement like "We prevent 1 year of blindness for every 3 surgeries done, which on average cost..." perfectly compelling, audience members who aren't yet persuaded simply tune out. After all, it's just more math talk, and they are interested in emotional impact. The only way to convince them is to stop worrying about getting the numbers perfectly right and focus on the emotional impact of choosing to help one blind person in the US get a dog rather than helping many people in poor countries avoid blindness.

Now, it's important that we don't simplify in misleading ways, but even with the qualifications given here it is obvious that it still costs orders of magnitude more to train a dog than to prevent blindness via this surgery. Moreover, once one factors in considerations like pain, the imperfect replacement for eyes provided by a dog, etc., the original numbers are probably too favorable to dog training as far as relative cost-effectiveness goes.

This isn't to say that your point here isn't important for people inside EA making estimates, or for GiveWell analyses and the like. I'm just pointing out that it's important to distinguish the kind of thing being done in a TED talk like this from what is being done by GiveWell. So long as people who leave the TED talk and do their own research find the big picture intact (dogs >>>> trachoma surgery), it's a victory.

Comment author: TruePath 12 January 2017 12:46:14PM 1 point [-]

As for the issue of acquiring power/money/influence and then using it to do good, it is important to be precise here and distinguish several questions:

1) Would it be a good thing to amass power/wealth/etc. (perhaps deceptively) and then use those to do good?

2) Is it a good thing to PLAN to amass power/wealth/etc. with the intention of "using it to do X", where X is a good thing?

2') Is it a good thing to PLAN to amass power/wealth/etc. with the intention of "using it to do good"?

3) Is it a good idea to support (or not object to) others who profess to be amassing wealth/power/etc. to do good?

Once broken down this way, it is clear that while 1 is obviously true, 2 and 3 aren't. Lacking the ability to perfectly bind one's future self means there is always the risk that you will instead use your influence/power for bad ends. 2' raises the further concern of whether what you believe to be good ends really are good ends. This risk is compounded in 3 by the possibility that the people involved are simply lying about their good ends.

Once we are precise in this way, it is clear that it isn't the in-principle approval of amassing power to do good that is at fault, but rather the trustworthiness/accuracy of those who undertake such schemes.

Having said this, some degree of amassing power/influence as a precursor to doing good is probably required. The risks simply must be weighed against the benefits.

Comment author: Denkenberger 17 December 2016 01:54:04PM 1 point [-]

I agree that QALYs are more robust, and I believe it was in an earlier version of the paper that we noted that using QALYs would likely produce a similar cost-effectiveness comparison to global poverty interventions. But we wanted to keep this analysis simple, and most people (though perhaps not most EAs) think in terms of saving lives. Also, two definitions of a global catastrophic risk are based on the number of lives lost (I believe 10 million according to the book Global Catastrophic Risks, and 10% of the human population according to Open Philanthropy).

Comment author: TruePath 12 January 2017 12:00:29PM 0 points [-]

That is good to know and I understand the motivation to keep the analysis simple.

As far as the definition goes, that is a reasonable definition of the term (our notion of catastrophe doesn't include an accumulation of many small utility losses), so it is a good criterion for classifying the charity's objective. I only meant to comment on QALYs as a means to measure effectiveness.

WTF is with the downvote? I nicely and briefly suggested that another metric might be more compelling (though the author's point about mass appeal is a convincing rebuttal). Did the comment come off as simply bitching rather than as a suggestion/observation?

Comment author: vipulnaik 12 January 2017 06:24:38AM 14 points [-]

The post does raise some valid concerns, though I don't agree with a lot of the framing. I don't think of it in terms of lying. I do, however, see that the existing incentive structure is significantly at odds with epistemic virtue and truth-seeking. It's remarkable that many EA orgs have held themselves to reasonably high standards despite not having strong incentives to do so.

In brief:

  • EA orgs' and communities' growth metrics are centered around numbers of people and quantity of money moved. These don't correlate much with epistemic virtue.
  • (more speculative) EA orgs' donors/supporters don't demand much epistemic virtue. The orgs tend to hold themselves to higher standards than their current donors.
  • (even more speculative; not much argument offered) Even long-run growth metrics don't correlate too well with epistemic virtue.
  • Quantifying (some aspects of) quality and virtue into metrics seems to me to have the best shot at changing the incentive structure here.

The incentive structure of the majority of EA-affiliated orgs has centered around growth metrics related to number of people (new pledge signups, number of donors, number of members), and money moved (both for charity evaluators and for movement-building orgs). These are the headline numbers they highlight in their self-evaluations and reports, and these are the numbers that people giving elevator pitches about the orgs use ("GiveWell moved more than $100 million in 2015" or "GWWC has (some number of hundreds of millions) in pledged money"). Some orgs have slightly different metrics, but still essentially ones that rely on changing the minds of large numbers of people: 80,000 Hours counts Impact-Adjusted Significant Plan Changes, and many animal welfare orgs count numbers of converts to veganism (or recruits to animal rights activism) through leafleting.

These incentives don't directly align with improved epistemic virtue! In many cases, they are close to orthogonal. In some cases, they are correlated but not as much as you might think (or hope!).

I believe the incentive alignment is strongest in cases where you are talking about moving moderate to large sums of money per donor in the present, for a reasonable number of donors (e.g., a few dozen donors giving hundreds of thousands of dollars). Donors who are donating those large sums of money are selected for being less naive (just by virtue of having made that much money) and the scale of donation makes it worth their while to demand high standards. I think this is related to GiveWell having relatively high epistemic standards (though causality is hard to judge).

With that said, the organizations I am aware of in the EA community hold themselves to much higher standards than (as far as I can make out) their donor and supporter base seems to demand of them. My guess is that GiveWell could have been a LOT more sloppy with their reviews and still moved pretty similar amounts of money, as long as they produced reviews that pattern-matched a well-researched review. (I've personally found their review quality improved very little from 2014 to 2015 and much more from 2015 to 2016; and yet I expect that the money-moved jump from 2015 to 2016 will be less, or possibly even negative.) I believe (with weaker confidence) that similar things are true for Animal Charity Evaluators in both directions (significantly increasing or decreasing review quality won't affect donations that much), and also for Giving What We Can: the amount of pledged money doesn't correlate that well with the quality or state of their in-house research.

The story I want to believe, and that I think others also want to believe, is some version of a just-world story: in the long run epistemic virtue ~ success. Something like "Sure, in the short run, taking epistemic shortcuts and bending the truth leads to more growth, but in the long run it comes back to bite you." I think there's some truth to this story: epistemic virtue and long-run growth metrics probably correlate better than epistemic virtue and short-run growth metrics. But the correlation is still far from perfect.

My best guess is that unless we can get a better handle on epistemic virtue and quantify quality in some meaningful way, the incentive structure problem will remain.

Comment author: TruePath 12 January 2017 11:43:28AM *  2 points [-]

The idea that EA charities should somehow court epistemic virtue among their donors seems to me to be over-asking in a way that will drastically reduce their effectiveness.

No human behaves like some kind of Spock stereotype, making all their decisions merely by weighing the evidence. We all respond to cheerleading and upbeat pronouncements, and make spontaneous choices based on what we happen to see first. We are all more likely to give when asked in ways that make us feel bad/guilty for saying no, or when we forget that we are even doing it (annual credit card billing).

If EA charities insist on cultivating donations only in circumstances where donors are best equipped to make a careful judgement, e.g., eschewing 'Give Now' impulse donations and fundraising parties with liquor and peer pressure, and insisting on reminding us each time another donation is about to be deducted from our accounts, they will lose out on a huge amount of donations. Worse, because of the role of overhead in charity work, the lack of sufficient donations will actually make such charities bad choices.

Moreover, there is nothing morally wrong with putting your organization's best foot forward or using standard charity/advertising tactics. Despite the joke, it's not morally wrong to make a good first impression. If there is a trade-off between reducing suffering and improving epistemic virtue, there is no question which is more important, and if that requires implying they are highly effective, so be it.

I mean, it's important that charities are incentivized to be effective, but imagine if the law required every charitable solicitation to disclose the fraction of donations that goes to fundraising and overhead. It's unlikely that the resulting increase in effectiveness would make up for the huge losses caused by forcing people to face the unpleasant fact that even the best charities can only direct a fraction of each donation to the intended beneficiaries.

What EA charities should do, however, is pursue a market-segmentation strategy. Avoid any falsehoods (as well as annoying behavior likely to result in substantial criticism) when putting a good face on their situation/effectiveness, and make sure detailed, truthful, and complete data and analysis are available for those who put in the work to look for them.

Everyone is better off this way. No one is lied to. The charities get more money and can do more with it. The people who give for impulsive or otherwise less-than-rational reasons can feel good about themselves rather than feeling guilty that they didn't put more time into their charitable decisions. The people who care about choosing the most effective, evidence-backed charitable efforts can access the data and feel good about themselves for looking past the surface. Finally, by having the same institution chase both the smart and the dumb money, the system works to funnel the dumb money toward smart outcomes (charities that lose all their smart money will tend to wither, or at least change practices).

Comment author: TruePath 12 January 2017 11:15:59AM 0 points [-]

It seems to me that a great deal of this supposed 'problem' is simply the unsurprising and totally human response to feeling that an organization you have invested in (monetarily, emotionally, or temporally) is under attack and that the good work it does is in danger of being undermined. EVERYONE on Facebook engages in crazy justificatory dances when their people are threatened.

It's a nice ideal that we should all nod and say 'yes, that's a valid criticism' when our baby is attacked, but it's not going to happen. There is nothing we can do about this aspect, so let's instead simply focus on avoiding the kinds of unjustified claims that generated the trouble.

Of course, it is entirely possible that some level of deception is necessary to run a successful charity. I'm sure a degree of at least moral coercion is, e.g., asking people for money in circumstances where it would look bad not to give. However, I'm confident this can be done the same way traditional companies deceive, i.e., by merely creating positive associations and downplaying negative ones rather than outright lying.

Comment author: TruePath 17 December 2016 02:03:21AM 0 points [-]

Lives saved is a very weird and mostly useless metric. At the very least, try to give an estimate in QALYs (quality-adjusted life years), since very few people actually value saving a life per se (e.g., stopping someone who is about to die of cancer from dying a few minutes earlier).

Given that many non-deaths from food scarcity are probably pretty damn unpleasant, this would probably be a more compelling figure.
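To make the point concrete, here is a minimal sketch of why the two metrics can rank interventions differently. All numbers are made up for illustration; the standard QALY definition (extra life-years weighted by a 0-to-1 quality factor) is the only thing assumed.

```python
# Illustrative only: hypothetical numbers, not real cost-effectiveness data.

def qalys(lives_saved, years_gained_per_life, quality_weight):
    """QALYs = lives saved * extra life-years per life * quality (0-1)."""
    return lives_saved * years_gained_per_life * quality_weight

# Intervention A: saves many people, each near the end of life anyway.
a = qalys(lives_saved=100, years_gained_per_life=0.5, quality_weight=0.4)

# Intervention B: saves fewer people, each gaining decades of healthy life.
b = qalys(lives_saved=20, years_gained_per_life=40, quality_weight=0.9)

print(a)  # 20.0 QALYs despite 100 "lives saved"
print(b)  # 720.0 QALYs from only 20 lives saved
```

By the lives-saved metric A looks five times better than B; by QALYs, B dominates by a wide margin.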

Comment author: TruePath 02 December 2016 06:21:05PM 0 points [-]

This doesn't actually provide anything like a framework to evaluate Cause X candidates. Indeed, I would argue it doesn't even provide a decent guide to finding plausible Cause X candidates.

Only the first methodology (expanding the moral sphere) identifies a type of moral claim that we have historically looked back on and found compelling. The second and third methods just list typical ways people in the EA community claim to have found Cause X. Moreover, there is good reason to think that successfully finding something that qualifies as Cause X will require coming up with something that isn't an obvious candidate.

Comment author: TruePath 02 December 2016 04:19:21PM 2 points [-]

I think this post is confused on a number of levels.

First, as far as ideal behavior is concerned, integrity isn't a relevant concept. The ideal utilitarian agent will simply always behave in the manner that optimizes expected future utility, factoring in the effect that breaking one's word or other actions will have on the perceptions (and thus future actions) of other people.

Now, the post rightly notes that as limited human agents we aren't truly able to engage in this kind of analysis. Both because of our computational limitations and because of our inability to perfectly deceive, it is beneficial to adopt heuristics about not lying, not stabbing people in the back, etc. (which we may judge to be worth abandoning in exceptional situations).

However, the post gives us no reason to believe its particular interpretation of integrity, "being straightforward", is the best such heuristic. It merely asserts the author's belief that this somehow works out to be the best.

This brings us to the second major point. Even though the post acknowledges that the very reason for considering integrity is that "I find the ideal of integrity very viscerally compelling, significantly moreso than other abstract beliefs or principles that I often act on", it proceeds as if it were considering what kind of integrity-like notion would be appropriate to build into (or socially construct in) some alternative society of purely rational agents.

Obviously, the way we should act depends hugely on the way in which others will interpret our actions and respond to them. In the actual world, WE WILL BE TRUSTED TO THE EXTENT WE RESPECT THE STANDARD SOCIETAL NOTIONS OF INTEGRITY AND TRUST. It doesn't matter if some other notion of integrity might have been better to have; if we don't show integrity in the traditional manner, we will be punished.

In particular, "being straightforward" will often needlessly imperil people's estimation of our integrity. For example, consider the usual kinds of assurances we give to friends and family: that we "will be there for them no matter what" and that we "wouldn't ever abandon them." In truth, pretty much everyone, if presented with sufficient data showing their friend or family member to be a horrific serial killer with every intention of continuing to torture and kill people, would turn them in, even in the face of protestations of innocence. Does that mean that instead of saying "I'll be there for you whatever happens" we should say "I'll be there for you as long as the balance of probability doesn't suggest that supporting you will cost more than 5 QALYs" (quality-adjusted life years)?

No, because being straightforward in that sense causes most people to judge us as weird and abnormal and thereby trust us less. Even though everyone understands at some level that these kinds of assurances are only true ceteris paribus, actually being straightforward about that fact is unusual enough that it causes other people to suspect that they don't understand our emotions/motivations, and thus to trust us less.

In short: yes, the obvious point that we should adopt some kind of heuristic of keeping our word and otherwise modeling integrity is true. However, the suggestion that this particular simple heuristic is somehow the best one is completely unjustified.