Comment author: MichaelPlant 11 March 2017 01:38:21PM 2 points [-]

Lovisa, have you looked into Basic Needs? http://www.basicneeds.org/

When I spoke to Eric Gastfriend about the Harvard EA report a while ago, I asked why Strong Minds and Basic Needs weren't on the list. As far as I recall, they simply hadn't looked at them, rather than having looked at them and decided they were bad options.

I'd also be really curious to have someone do a cost-effectiveness comparison for Action for Happiness. http://www.actionforhappiness.org/ The thought is that it might be more effective, if happiness is your goal, to fund broad but shallow happiness education programmes for the general public, rather than funding deep mental health interventions for a few people.

I have no idea how the numbers would come out and would probably be biased (disclaimer: I know some of the people at both orgs and might do some work for Action for Happiness at some point). Hence it would be great to get some fresh eyes on the topic.

Comment author: JamesSnowden 11 March 2017 02:56:24PM 4 points [-]

I would deprioritise looking at BasicNeeds (in favour of StrongMinds). They use a franchised model and aren't able to provide financials for all their franchisees. This makes it very difficult to estimate cost-effectiveness for the organisation as a whole.

The GWWC research page is out of date (it was written before StrongMinds' internal RCT was released), and I would now recommend StrongMinds above BasicNeeds on the basis of their greater transparency and focus on cost-effectiveness.

Comment author: JamesSnowden 23 February 2017 09:34:17PM *  10 points [-]

Thanks Holden. This seems reasonable.

A high impact foundation recently (and helpfully) sent me their grant writeups, which are a treasure trove of useful information. I asked them if I could post them here and was (perhaps naively) surprised that they declined.

They made many of the same points as you re: the limited usefulness of broad feedback, potential reputation damage, and (given their small staff size) cost of responding. Instead, they share their writeups with a select group of likeminded foundations.

I still think it would be much better if they made their writeups public, but almost entirely because it would be useful for the reader.

It's a shame that the expectation of responding to criticism can disincentivise communication in the first place.

(Views my own, not my employer's)

Comment author: BenHoffman 02 February 2017 10:58:01PM 3 points [-]

This seems like it describes the most relevant considerations when thinking about how the pledge directly affects your own future actions. I think there's another angle worth considering, and that's the pledge as a report about your likely future actions.

You might be reluctant to take the pledge, not just out of a worry that you'll bind your future self to a wrong decision, but out of a worry that if your future self acts differently, people in the meantime will have made decisions based on your assurance. It's quite difficult to model others well enough to figure out whether they'll take the optimal action as you see it, but potentially easy to decide whether to believe their promises. This makes coordination easier.

A couple of years ago, a friend was considering a relocation that improved their expected lifetime impact substantially. Moving would potentially have put their personal finances under strain, so I offered to lend them a few thousand dollars if money should happen to be tight for a while. They found this offer sufficiently reassuring that they were happy to go ahead with the move without delay. I felt that the offer was morally binding on me barring severe unforeseen circumstances, but the point of my promise was neither to coercively bind my future self, nor simply to determine my future self's course of action by establishing some momentum. The point was to accurately report my future willingness to lend to my friend, with high confidence.

If it had turned out to be somewhat harder than anticipated to lend my friend the money, I would have considered myself obliged to work hard to figure out a solution. I don't think this was especially related to the fact that the behavior I was making a promise about was mine. If I ever make an assurance to someone, and they end up harming themselves because it turned out to be a false assurance, I consider myself at least somewhat obliged to try to make them whole.

Giving What We Can, for example, uses the number of people who have taken the pledge as a measurement of their impact. Giving What We Can itself and potential GWWC donors make decisions about whether promoting the pledge is a good use of resources, based on both the observed behavior of pledgers, and some beliefs about how serious pledgers' intent is. When considering publicly pledging, you should consider not just its effect on you, but that you're either providing accurate information or misinformation to those who are paying attention.

For this reason, I think that serious pledges should not be entered into unless the pledge is either easy (i.e. you predict with high confidence that you would do the pledged behavior anyway) or very important (you predict that taking the pledge gives you options much more valuable than the ones that might otherwise be available). An example of an easy pledge would be assuring a future houseguest that coffee will be available (if you regularly stock coffee). An example of a very important pledge might be marriage, in which by promising to stick with someone, you get them to promise the same to you - though many people delay getting married until the promise feels easy as well.

Comment author: JamesSnowden 03 February 2017 02:39:58PM 1 point [-]

I agree this seems relevant.

One slight complication is that donors to GWWC might expect a small proportion of people to renege on the pledge.


The Giving What We Can Pledge: self-determination vs. self-binding

These are personal reflections and don’t reflect any official stance of CEA or Giving What We Can (I work for CEA). I’ve talked to some of my colleagues. Some of them have had similar thoughts, others haven’t. When I was 21 (I’m now 27), I took the Giving What We...
Comment author: Jeff_Kaufman 21 December 2016 05:30:25PM 1 point [-]

(My method gives an estimate of 0.0022 per dollar to GiveDirectly, so if GiveWell is estimating 0.0049 then my bottom line numbers are roughly 2x too high.)

Comment author: JamesSnowden 23 December 2016 11:10:51AM 1 point [-]

> It seems like you're assuming that the GiveDirectly money would have gone only to the M-Pesa-access side of the (natural) experiment, but they categorized areas based on whether they had M-Pesa access in 2008-2010, not 2012-2014 when access was much higher.

Ah yes - that kind of invalidates what I was trying to do here.

> I didn't notice that GiveWell had an estimate for this, and checking now I still don't see it. Where's this estimate from?

It came from the old GiveWell cost-effectiveness analysis Excel sheet (2015), "Medians - cell V14". Actually, looking at the new one, the equivalent figure seems to be 0.26%, so you're right! (Although this is the present value of total increases in current and future consumption.)

Comment author: JamesSnowden 21 December 2016 04:16:03PM *  2 points [-]

Thanks for this Jeff - a very informative post.

The study doesn't appear to control for cash transfers received through access to M-Pesa. I was thinking about how much of the 0.012 increase in ln(consumption) was due to GiveDirectly cash transfers.

Back of the envelope:

  • M-Pesa access raises ln(consumption) by 0.012 for 45% of population (c.20m people).
  • 0.012 * 20m = 234,000 unit increases in ln(consumption)

  • GiveDirectly gave c.$9.5m in cash transfers between 2012-14 to people with access to M-Pesa. [1]

  • GiveWell estimates each $ to GiveDirectly raises ln(consumption) by 0.0049
  • 9.5m * 0.0049 = 46,000 unit increases in ln(consumption)

So GiveDirectly accounted for (very roughly) a fifth of the 0.012 increase in ln(consumption) due to M-Pesa.

[1] this is an overestimate as it assumes all transfers went to Kenya and none to Uganda

(Done in haste - may have got my sums / methodology wrong)
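
A minimal sketch of the arithmetic above, for anyone who wants to check it. The inputs are the rounded figures from the comment; the population of 19.5m is my own assumption, chosen to reproduce the stated 234,000 (the comment rounds it to "c.20m people"):

```python
# Rough reproduction of the back-of-the-envelope calculation above.
mpesa_population = 19_500_000      # assumption: ~45% of Kenya's population ("c.20m")
mpesa_effect = 0.012               # increase in ln(consumption) from M-Pesa access
gd_transfers = 9_500_000           # $ given by GiveDirectly, 2012-14 (overestimate, per footnote)
gd_effect_per_dollar = 0.0049      # GiveWell estimate: ln(consumption) units per $

total_mpesa_units = mpesa_effect * mpesa_population    # ~234,000
total_gd_units = gd_effect_per_dollar * gd_transfers   # ~46,550

share = total_gd_units / total_mpesa_units
print(f"GiveDirectly share of the M-Pesa effect: {share:.0%}")  # roughly 20%, i.e. about a fifth
```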

Comment author: Denkenberger 17 September 2016 01:20:54PM *  1 point [-]

By this argument, someone who is risk-averse should buy insurance, even though they lose money in expectation. Most of the time, this money is wasted. Interestingly, x-risk research is like buying insurance for humanity as a whole. It might very well be wasted, but the downside of not having such insurance is so much worse than the cost of insurance that it makes sense (if you are risk-neutral, and especially if you are risk-averse).

Edit: And actually some forms of global catastrophic risks are surprisingly likely, for instance a 10% global agricultural shortfall has about an 80% probability this century. So preparation for this would most likely not be wasted.

Comment author: JamesSnowden 19 September 2016 08:29:02AM 0 points [-]

I agree, although some forms of personal insurance are also rational - e.g. health insurance in the US, because the downside of not having it is so bad. But don't insure your toaster.
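
To make the insurance point concrete, here is a minimal sketch with illustrative numbers (not from the thread): a 1% chance of a $50k loss, insured for a $600 premium, which is worse than the $500 expected loss, yet still preferable under a concave (here, log) utility function.

```python
import math

# Illustrative numbers only: wealth, potential loss, probability of loss, premium.
wealth, loss, p_loss, premium = 100_000, 50_000, 0.01, 600

# Expected utility without insurance vs. the sure utility with it.
eu_uninsured = (1 - p_loss) * math.log(wealth) + p_loss * math.log(wealth - loss)
eu_insured = math.log(wealth - premium)

print(p_loss * loss)             # expected loss: 500 < 600 premium, so insurance loses money on average
print(eu_uninsured, eu_insured)  # but the insured expected utility is higher
```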

Comment author: Rick 16 September 2016 07:02:37PM 1 point [-]

I just want to push back against your statement that "economists believe that risk aversion is irrational". In development economics in particular, risk aversion is often seen as a perfectly rational approach to life, especially in cases where the risk is irreversible.

To explain this, I just want to quickly point out that, from an economic standpoint, there's no correct formal way of measuring risk aversion among utils. Utility is an ordinal, not cardinal, measure. Risk aversion is something that is applied to real measures, like crop yields, in order to better estimate people's revealed preferences - in essence, risk aversion is a way of taking utility into account when measuring non-utility values.

So, to put this in context, let's say you are a subsistence farmer with an expected yield of X from growing sorghum or a tuber, and you know that you'll always get roughly yield X (since sorghum and many tubers are crazily resilient). Now someone offers you an 'improved maize' growth package that will get you an expected yield of 2X, but with a 10% chance that your crops fail completely. A rational person at the poverty line should always choose the sorghum/tuber. This is because that 10% chance of a failed crop is much, much worse than could be revealed by expected yield - you could starve, have to sell productive assets, etc. Risk aversion is a way of formalizing the thought process behind this perfectly rational decision. If we could measure expected utility in a cardinal way, we would just do that, and get the correct answer without using risk aversion - but because we can't measure it cardinally, we have to use risk aversion to account for things like this.
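
A rough numerical sketch of that decision, with made-up numbers: yields are in multiples of the sorghum yield X, the crop-failure fallback (selling assets, food aid) is set at 0.1X, and a CRRA utility with coefficient 3 stands in for "being near subsistence". None of these parameters come from the comment; they just illustrate how the maize gamble can lose despite its higher expected yield.

```python
# Constant relative risk aversion utility; sigma = 3 is an arbitrary
# "highly risk-averse" choice to represent a farmer near subsistence.
def crra(c, sigma=3):
    return c ** (1 - sigma) / (1 - sigma)

sorghum_yield = 1.0
maize_good, maize_fail, p_fail = 2.0 / 0.9, 0.1, 0.1   # expected maize yield is roughly 2X

eu_sorghum = crra(sorghum_yield)
eu_maize = (1 - p_fail) * crra(maize_good) + p_fail * crra(maize_fail)

print(eu_sorghum, eu_maize)  # sorghum wins despite maize's higher expected yield
```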

As a last fun point, risk aversion can also be used to formalize the idea of diminishing marginal utility without using cardinal utility functions, which is one of the many ways that we're able to 'prove' that diminishing marginal utility exists, even if we can't measure it directly.

Comment author: JamesSnowden 19 September 2016 08:27:13AM *  0 points [-]

I agree that diminishing marginal utility (dmu) over crop yields is perfectly rational. I mean a slightly different thing: risk aversion over utilities, which is why people fail the Allais paradox. Rational choice theory is dominated by expected utility theory (exceptions: Buchak, McClennen), which suggests risk aversion over utilities is irrational. Risk aversion over utilities seems pertinent here because most moral views don't have dmu of people's lives.
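
For readers who don't know the Allais paradox, here is the standard textbook presentation (the payoffs are the usual textbook ones, not from the comment). Any expected utility maximiser must rank A vs B the same way as C vs D, because both comparisons reduce to the same inequality - yet most people choose A and D.

```python
def eu(lottery, u):
    # lottery: list of (probability, payoff) pairs
    return sum(p * u(x) for p, x in lottery)

A = [(1.00, 1_000_000)]
B = [(0.10, 5_000_000), (0.89, 1_000_000), (0.01, 0)]
C = [(0.11, 1_000_000), (0.89, 0)]
D = [(0.10, 5_000_000), (0.90, 0)]

u = lambda x: x ** 0.5  # any increasing utility function gives the same conclusion

# These two differences always have the same sign, so choosing A and D
# (as most people do) is inconsistent with maximising expected utility.
print(eu(A, u) - eu(B, u))
print(eu(C, u) - eu(D, u))
```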

Comment author: JamesSnowden 16 September 2016 05:19:02PM 2 points [-]

In normative decision theory, risk aversion means a very specific thing. It means using a different aggregating function from expected utility maximisation to combine the value of disjunctive states.

Rather than multiplying the realised utility in each state by the probability of that state occurring, these models apply a non-linear weighting to each of the states which depends on the global properties of the lottery, not just what happens in that state.

Most philosophers and economists agree risk aversion over utilities is irrational because it violates the independence axiom / sure-thing principle, which is one of the foundations of objective / subjective expected utility theory.
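
As an illustration of what a non-linear, globally-sensitive weighting looks like, here is a minimal sketch of a Buchak-style risk-weighted expected utility. The risk function r(p) = p^2 is just one illustrative risk-averse choice, and the lottery payoffs are made up:

```python
def expected_utility(lottery):
    # lottery: list of (utility, probability) pairs
    return sum(u * p for u, p in lottery)

def risk_weighted_eu(lottery, r=lambda p: p ** 2):
    # Order outcomes from worst to best, then weight each utility increment
    # by r(probability of doing at least that well). With r(p) = p this
    # collapses back to expected utility.
    outcomes = sorted(lottery)
    total = outcomes[0][0]
    for i in range(1, len(outcomes)):
        prob_at_least = sum(p for _, p in outcomes[i:])
        total += r(prob_at_least) * (outcomes[i][0] - outcomes[i - 1][0])
    return total

sure_thing = [(1.0, 1.0)]          # 1 util for certain
gamble = [(0.0, 0.5), (3.0, 0.5)]  # expected value 1.5 utils

print(expected_utility(gamble) > expected_utility(sure_thing))   # True: EU prefers the gamble
print(risk_weighted_eu(gamble) > risk_weighted_eu(sure_thing))   # False: the risk-weighted agent does not
```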

One way a person could rationally have seemingly risk-averse preferences is by placing a higher value on the first bit of good they do than on the second bit of good they do, perhaps because doing some good makes you feel better. This would technically be selfish.

But I'm pretty sure this isn't what most people who justify donating to global poverty out of risk aversion actually mean. They generally mean something like "we should place a lot of weight on evidence because we aren't actually very good at abstract reasoning". This would mean their subjective probability that an x-risk intervention is effective is very low. So it's not technically risk aversion. It's just having a different subjective probability. This may be an epistemic failure. But there's nothing selfish about it.

I wrote a paper on this a while back in the context of risk aversion justifying donating to multiple charities. This is a shameless plug. https://docs.google.com/document/d/1CHAjFzTRJZ054KanYj5thWuYPdp8b3WJJb8Z4fIaaR0/edit#heading=h.gjdgxs

Comment author: michaelchen 06 August 2016 01:27:02AM 0 points [-]

Did you change the wordmark and the color of the logo? I'm curious as to the thought process behind that.

Comment author: JamesSnowden 11 August 2016 03:04:45PM 1 point [-]

We wanted to differentiate the website slightly from the eaglobal site while maintaining brand coherence, so we went for a slightly different shade of blue, which feels a bit 'calmer'.

Not wedded to it though and may change back. Which do you prefer?
