Comment author: TruePath 29 August 2017 06:59:00AM *  0 points

I'm a huge supporter of drug policy reform and advocate for it as much as I can in my personal life. Originally, I was going to post here suggesting we need a better breakdown of the issues most ripe for policy reform (say, reforming how drug weights are calculated) and of the relative effectiveness of various interventions (lobbying, ads, lectures, etc.).

However, on reflection I think there might be good reasons not to get involved in this project.

Probably the biggest problem for both EA and drug policy reform is the perception that the people involved are just a bunch of weirdos (we're emotionally stunted nerds and they are a bunch of stoners). This perception reduces donations to EA causes (you don't get the same status boost if it's weird) and stops people from listening to the arguments of people in DPR.

If EA is seen as a big supporter of DPR efforts, this risks making the situation worse for both groups. I can just imagine an average LessWrong contributor being interviewed on TV about why he supports DPR; when the reporter asks how it affects him personally, he starts enthusiastically explaining his use of nootropics, and the public dismisses the whole thing as weird druggies trying to make it legal to get high. This doesn't mean those of us who believe in EA can't quietly donate to DPR organizations, but it probably does prevent us from doing what EA does best: determining, at a fine-grained level, which particular interventions work best and doing those.

This makes me skeptical that this is a particularly good place to intervene. If we are going to work on policy change at all, we should pick an area where we can push for specific, effective reforms without the risk of backlash (to both us and DPR organizations).

Comment author: TruePath 29 July 2017 05:41:55AM *  3 points

This is, IMO, a pretty unpersuasive argument, at least if you are willing, like me, to bite the bullet that SUFFICIENTLY many small gains in utility could make up for a few large ones. I don't even find this particularly difficult to swallow. Indeed, I can explain away our feeling that it somehow shouldn't be true by appealing to our inclination (as a matter of practical life navigation) to round sufficiently small hurts down to zero.

Also, I would suggest that many of the examples that seem problematic are deliberately rigged so that the overt description (a world with many people, each with a small amount of positive utility) presents the situation one way, while the flavor text is phrased to trigger our empathetic/what-it's-like response as if the world didn't satisfy the overt description. For instance, if we remove the flavor about it being a very highly overpopulated world and simply say, "consider a universe with many, many beings, each with a small amount of utility," then finding that superior no longer seems particularly troubling; it just states the principle allowing addition of utilities in the abstract. However, sneak in the flavor text that the world is very overcrowded, and the temptation is to imagine a world which is ACTIVELY UNPLEASANT to be in, i.e., one in which people have negative utility.

More generally, I find these kinds of considerations far more compelling as evidence that I have very poor intuitions for comparing the relative goodness/badness of some kinds of situations, and that I had better eschew any attempt to rely MORE on those intuitions and instead dive into the math. In particular, the worst response I can imagine is to say: "Huh, wow, I guess I'm really bad at deciding which situations are better or worse in many circumstances. Indeed, one can find cases where, considered pairwise, A seems better than B, B better than C, and C better than A. Guess I'll throw over this helpful formalism and just use my intuition directly to evaluate which states of affairs are preferable."

In response to Open Thread #37
Comment author: TruePath 23 July 2017 12:06:33PM 0 points

I'm wondering whether it is technically possible to stop pyroclastic flows from volcanoes (particularly ones near population centers, like Vesuvius) by building barriers, and if so, whether it's an efficient use of resources. Not quite world-changing, but it is still a low-probability, high-impact issue, and there are US cities near volcanoes.

I'm sure someone has thought of this before and done some analysis.

Comment author: TruePath 26 June 2017 07:33:29AM 4 points

I simply don't believe that anyone is really (when it comes down to it) a presentist or a necessitist.

I don't think anyone is actually willing to endorse choices which eliminate the headache of an existing person at the cost of bringing into the world an infant who will be tortured extensively for all time (provided no one currently existing will see it and be made sad).

More generally, these views have more basic problems than anything considered here. Consider, for instance, the problem of personal identity. For either presentism or necessitism to be true, there has to be a PRINCIPLED fact of the matter about when I become a new person if you slowly modify my brain structure until it matches that of some other possible (but not currently actual) person. The right answer to these Ship of Theseus-style worries is to shrug and say there isn't any fact of the matter, but the presentist can't take that line, because for them there are huge moral implications to where we draw it.

Moreover, both of these views face serious puzzles about when an individual counts as existing. Is it when they actually generate qualia (if not, you risk saying that the fact that they will exist in the future means they exist now)? And how do we even know when that happens?

Comment author: TruePath 11 May 2017 07:59:50AM 1 point

Before I start, I should admit my bias here. I have a pet peeve about posts like this about mental illness. When I suffered from depression and my friend killed himself over it, nothing pissed me off more than people passing on the same useless facts and the advice to get help (as if that magically made it better) with the self-congratulatory attitude that they had done something about the problem and could move on. So what follows may be a result of unjust irritation/anger, but I really do believe that it causes harm when we pass on truisms like that and think of ourselves as helping: either by making those suffering feel like failures/hopeless/misunderstood (just get help and it's all good) or by causing us to believe we've done our part. Maybe this is just irrational bias; I don't know.

--

While I like the motivation, I worry that this article does more to make us feel better that 'something is being done' than it does for EA community members with these problems. Indeed, I worry that sharing what amounts to fairly obvious truisms that any Google search would reveal actually saps our limited moral energy/consideration for those with mental illness (oh good, we've done our part).

Now, I'm sure the poster would defend this piece by saying that while most EA people with these afflictions won't get any new information from it, some might, and it's good to inform them. Yes, if informing them were cost-free, it would be. However, there is still a cost in terms of attention, time, and pushing readers away from other issues. Indeed, unless you honestly believe that information about every mental illness ought to be posted on every blog around the world, it seems we ought to analyze how likely this content on this site is to be useful. I doubt EA members suffer these diseases at a much greater rate than the population in general, while I suspect they are informed about these issues at a much greater rate, making this perhaps the least effective place to advertise this information.

I don't mean to downplay these diseases. They are serious problems, and to the extent there is something we can do with a high benefit/cost ratio, we should do it. So maybe a post identifying the media outlets most likely to reach afflicted individuals who would benefit from this information, and urging readers to submit it there, would be helpful.


Comment author: TruePath 11 May 2017 07:37:31AM 0 points

This feels like nitpicking that gives the impression of undermining Singer's original claim when in reality the figures support it. I have no reason to believe Singer was claiming that, of all possible charitable donations, trachoma surgery is the most effective; he was merely giving the most stunningly large difference in cost-effectiveness between charitable donations used for comparable ends (both about blindness, so no hard comparisons across kinds of suffering/disability).

I agree that within the EA community, and when presenting EA analyses of cost-effectiveness, it is important to be upfront about the full complexity of the figures. However, Singer's purpose at TED isn't to carefully pick the most cost-effective donations but to force people to confront the fact that cost-effectiveness matters. While those of us already in EA might find a statement like "We prevent 1 year of blindness for every 3 surgeries done, which on average cost..." perfectly compelling, audience members who aren't yet persuaded simply tune out. After all, it's just more math talk, and they are interested in emotional impact. The only way to convince them is to stop worrying about getting the numbers perfectly right and focus on the emotional impact of choosing to help a blind person in the US get a dog rather than helping many people in poor countries avoid blindness.

Now, it's important that we don't simplify in misleading ways, but even with the qualifications here, it is obvious that it still costs orders of magnitude more to train a dog than to prevent blindness via this surgery. Moreover, once one factors in considerations like pain, the imperfect replacement for eyes provided by a dog, etc., the original numbers are probably too favorable to dog training as far as relative cost-effectiveness goes.
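To make the "orders of magnitude" point concrete, here is a minimal back-of-the-envelope sketch in Python. The dollar figures are illustrative assumptions, not numbers taken from this thread or from the post being discussed: the guide-dog cost is roughly the figure commonly cited around Singer's example, and the two surgery figures are simply an optimistic and a deliberately pessimistic guess at the cost per case of blindness averted.

    # Back-of-the-envelope comparison: guide-dog training vs. trachoma surgery.
    # All figures are illustrative placeholders, not data from this discussion.

    GUIDE_DOG_COST = 40_000  # rough, commonly cited cost to train one guide dog (USD)

    # Hypothetical cost per case of blindness averted by trachoma surgery,
    # under an optimistic and a pessimistic assumption.
    surgery_cost_per_case = {"optimistic": 100, "pessimistic": 1_000}

    for scenario, cost in surgery_cost_per_case.items():
        ratio = GUIDE_DOG_COST / cost
        print(f"{scenario}: one guide dog ~= {ratio:.0f} cases of blindness averted")

Even under the pessimistic assumption the ratio stays far above 10x, which is the only fact the TED-talk framing needs.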

This isn't to say that your point here isn't important for people inside EA making estimates, or for GiveWell analyses, or the like. I'm just pointing out that it's important to distinguish the kind of thing being done at a TED talk like this from what GiveWell does. So long as the research people do after leaving the TED talk leaves the big picture in place (dogs >>>> trachoma surgery), it's a victory.

Comment author: TruePath 12 January 2017 12:46:14PM 1 point

As for the issue of acquiring power/money/influence and then using it to do good, it is important to be precise here and distinguish several questions:

1) Would it be a good thing to amass power/wealth/etc. (perhaps deceptively) and then use it to do good?

2) Is it a good thing to PLAN to amass power/wealth/etc. with the intention of "using it to do X," where X is a good thing?

2') Is it a good thing to PLAN to amass power/wealth/etc. with the intention of "using it to do good"?

3) Is it a good idea to support (or not object to) others who profess to be amassing wealth/power/etc. to do good?

Once broken down this way, it is clear that while the answer to 1 is obviously yes, the answers to 2 and 3 aren't. Lacking the ability to perfectly bind one's future self means there is always the risk that you will instead use your influence/power for bad ends. 2' raises the further concern of whether what you believe to be good ends really are good ends. This risk is compounded in 3 by the possibility that the people are simply lying about the good ends.

Once we are precise in this way, it is clear that it isn't the in-principle approval of amassing power to do good that is at fault, but rather the trustworthiness/accuracy of those who undertake such schemes.


Having said this, some degree of amassing power/influence as a precursor to doing good is probably required. The risks simply must be weighed against the benefits.

Comment author: Denkenberger 17 December 2016 01:54:04PM 1 point

I agree that QALYs are more robust, and I guess it was an earlier version of the paper where we noted that using QALYs would likely produce a similar cost-effectiveness comparison to global poverty interventions. But we wanted to keep this analysis simple, and most people (though perhaps not most EAs) think in terms of saving lives. Also, two definitions of a global catastrophic risk are based on the number of lives lost (I believe 10 million according to the book Global Catastrophic Risks, and 10% of the human population according to Open Philanthropy).

Comment author: TruePath 12 January 2017 12:00:29PM 0 points

That is good to know and I understand the motivation to keep the analysis simple.

As far as the definition goes, that is a reasonable definition of the term (our notion of catastrophe doesn't include an accumulation of many small utility losses), so it is a good criterion for classifying the charity's objective. I only meant to comment on QALYs as a means of measuring effectiveness.


WTF is with the downvote? I nicely and briefly suggested that another metric might be more compelling (though the author's point about mass appeal is a convincing rebuttal). Did the comment come off as simply bitching rather than as a suggestion/observation?

Comment author: vipulnaik 12 January 2017 06:24:38AM 14 points

The post does raise some valid concerns, though I don't agree with a lot of the framing. I don't think of it in terms of lying. I do, however, see that the existing incentive structure is significantly at odds with epistemic virtue and truth-seeking. It's remarkable that many EA orgs have held themselves to reasonably high standards despite not having strong incentives to do so.

In brief:

  • EA orgs' and communities' growth metrics are centered around numbers of people and quantity of money moved. These don't correlate much with epistemic virtue.
  • (more speculative) EA orgs' donors/supporters don't demand much epistemic virtue. The orgs tend to hold themselves to higher standards than their current donors.
  • (even more speculative; not much argument offered) Even long-run growth metrics don't correlate too well with epistemic virtue.
  • Quantifying (some aspects of) quality and virtue into metrics seems to me to have the best shot at changing the incentive structure here.

The incentive structure of the majority of EA-affiliated orgs has centered around growth metrics related to number of people (new pledge signups, number of donors, number of members), and money moved (both for charity evaluators and for movement-building orgs). These are the headline numbers they highlight in their self-evaluations and reports, and these are the numbers that people giving elevator pitches about the orgs use ("GiveWell moved more than $100 million in 2015" or "GWWC has (some number of hundreds of millions) in pledged money"). Some orgs have slightly different metrics, but still essentially ones that rely on changing the minds of large numbers of people: 80,000 Hours counts Impact-Adjusted Significant Plan Changes, and many animal welfare orgs count numbers of converts to veganism (or recruits to animal rights activism) through leafleting.

These incentives don't directly align with improved epistemic virtue! In many cases, they are close to orthogonal. In some cases, they are correlated but not as much as you might think (or hope!).

I believe the incentive alignment is strongest in cases where you are talking about moving moderate to large sums of money per donor in the present, for a reasonable number of donors (e.g., a few dozen donors giving hundreds of thousands of dollars). Donors who are donating those large sums of money are selected for being less naive (just by virtue of having made that much money) and the scale of donation makes it worth their while to demand high standards. I think this is related to GiveWell having relatively high epistemic standards (though causality is hard to judge).

With that said, the organizations I am aware of in the EA community hold themselves to much higher standards than (as far as I can make out) their donor and supporter base seems to demand of them. My guess is that GiveWell could have been a LOT more sloppy with their reviews and still moved pretty similar amounts of money as long as they produced reviews that pattern-matched a well-researched review. (I've personally found their review quality improved very little from 2014 to 2015 and much more from 2015 to 2016; and yet I expect that the jump in money moved from 2015 to 2016 will be smaller, or possibly even negative.) I believe (with weaker confidence) that similar stuff is true for Animal Charity Evaluators in both directions (significantly increasing or decreasing review quality won't affect donations that much). And also for Giving What We Can: the amount of pledged money doesn't correlate that well with the quality or state of their in-house research.

The story I want to believe, and that I think others also want to believe, is some version of a just-world story: in the long run epistemic virtue ~ success. Something like "Sure, in the short run, taking epistemic shortcuts and bending the truth leads to more growth, but in the long run it comes back to bite you." I think there's some truth to this story: epistemic virtue and long-run growth metrics probably correlate better than epistemic virtue and short-run growth metrics. But the correlation is still far from perfect.

My best guess is that unless we can get a better handle on epistemic virtue and quantify quality in some meaningful way, the incentive structure problem will remain.

Comment author: TruePath 12 January 2017 11:43:28AM *  2 points

The idea that EA charities should somehow court epistemic virtue among their donors seems to me to be over-asking in a way that will drastically reduce their effectiveness.

No human behaves like some kind of Spock stereotype, making all their decisions merely by weighing the evidence. We all respond to cheerleading and upbeat pronouncements, and we make spontaneous choices based on what we happen to see first. We are all more likely to give when asked in ways that make us feel bad/guilty for saying no, or when we forget that we are even doing it (annual credit card billing).

If EA charities insist on cultivating donations only in circumstances where donors are best equipped to make a careful judgement (e.g., eschewing 'Give Now' impulse donations and fundraising parties with liquor and peer pressure, and insisting on reminding us each time another donation is about to be deducted from our account), they will lose out on a huge amount of donations. Worse, because of the role of overhead in charity work, the lack of sufficient donations will actually make such charities bad choices.

Moreover, there is nothing morally wrong with putting your organization's best foot forward or using standard charity/advertising tactics. Despite the joke, it's not morally wrong to make a good first impression. If there is a trade-off between reducing suffering and improving epistemic virtue, there is no question which is more important, and if that requires implying that they are highly effective, so be it.

I mean, it's important that charities are incentivized to be effective, but imagine if the law required every charitable solicitation to disclose the fraction of donations that goes to fundraising and overhead. It's unlikely that the resulting increase in effectiveness would make up for the huge losses caused by forcing people to face the unpleasant fact that even the best charities can only send a fraction of each donation to the intended beneficiaries.


What EA charities should do, however, is pursue a market-segmentation strategy: avoid any falsehoods (as well as annoying behavior likely to result in substantial criticism) when putting a good face on their situation/effectiveness, and make sure detailed, truthful, and complete data and analysis are available for those who put in the work to look for them.

Everyone is better off this way. No one is lied to. The charities get more money and can do more with it. The people who decide to give for impulsive or other less-than-rational reasons can feel good about themselves rather than feeling guilty that they didn't put more time into their charitable decisions. The people who care about choosing the most effective, evidence-backed charitable efforts can access that data and feel good about themselves for looking past the surface. Finally, by having the same institution chase both the smart and the dumb money, the system works to funnel the dumb money toward smart outcomes (charities which lose all their smart money will tend to wither, or at least change their practices).

Comment author: TruePath 12 January 2017 11:15:59AM 0 points

It seems to me that a great deal of this supposed 'problem' is simply the unsurprising and totally human response to feeling that an organization you have invested in (monetarily, emotionally, or temporally) is under attack and that the good work it does is in danger of being undermined. EVERYONE on Facebook engages in crazy justificatory dances when their people are threatened.

It's a nice ideal that we should all nod and say 'yes, that's a valid criticism' when our baby is attacked, but it's not going to happen. There is nothing we can do about this aspect, so let's instead simply focus on avoiding the kinds of unjustified claims that generated the trouble.

Of course, it is entirely possible that some level of deception is necessary to run a successful charity. I'm sure a degree of at least moral coercion is, e.g., asking people for money in circumstances where it would look bad not to give. However, I'm confident this can be done in the same way traditional companies deceive, i.e., by merely creating positive associations and downplaying negative ones rather than by outright lying.
