Comment author: MichaelPlant 09 August 2017 05:10:25PM 0 points

Ah, hadn't spotted that. Must have been caused by copying and pasting it across from a Word doc. Can you suggest a way to reformat it so that clicking citations in the text takes you down to the footnotes, and clicking them in the footnotes takes you back to where they appear in the text?

Comment author: Dan_Keys 09 August 2017 05:31:51PM 1 point

It might be possible to fix in a not-too-tedious way, by using find-replace in the source code to edit all of the broken links (and anchors?) at once.

Comment author: Ajeya 04 December 2016 10:47:23PM * 2 points

It seems like "deeply committed" is doing a lot of work there. In the last EA survey, the median donation among respondents who identified as "EA", listed "earning to give" as their career, were not students, and believed they should give now rather than later was $1,933. At typical starting software engineer salaries (which I would guess is a typical career for a median "earning to give" EA), this represents a donation of roughly 1-5% of salary. This suggests the pledge would increase the donations of over 50% of EAs who list their primary career path as earning to give (so the argument that the mental effort needed to keep the pledge would distract from their careers doesn't apply). Link to analysis here:

Edit: Speaking for myself only, not my employer.
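[The 1-5% figure above is just the ratio of the median donation to an assumed starting salary. A minimal sketch, where the salary figures are illustrative assumptions and not survey data:]

```python
median_donation = 1933  # median 2014 donation, from the survey analysis cited above

# Illustrative starting-software-engineer salaries (assumptions, not survey data)
salaries = (40_000, 100_000, 190_000)
shares = {s: 100 * median_donation / s for s in salaries}
for s, pct in shares.items():
    print(f"${s:,}: {pct:.1f}% of salary")  # 4.8%, 1.9%, 1.0%
```

[Any salary in that assumed range puts the median donation between about 1% and 5%, which is where the 1-5% figure comes from.]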

Comment author: Dan_Keys 05 December 2016 01:14:55AM 7 points

It appears that this analysis did not account for when people became EAs. It looked at donations in 2014, among people who in November 2015 were nonstudent EAs on an earning to give path. But less than half of those people were nonstudent EAs on an earning to give path at the start of 2014.

In fact, less than half of the people who took the Nov 2015 survey were EAs at the start of 2014. I've taken a look at the dataset, and among the 1171 EAs who answered the question about 2014 donations:
40% first got involved in EA in 2013 or earlier
21% first got involved in EA in 2014
28% first got involved in EA in 2015
11% did not answer the question about when they got involved in EA

This makes all of the analyses of median 2014 donation extremely misleading, unless they're limited to pre-2014 EAs (which they generally have not been).

I'm hoping that the next EA survey will do better with this issue. I believe the plan is to wait until January in order to ask about 2016 donations, which is a good start. Hopefully they will also focus on pre-2016 EAs when looking at typical donation size, since the survey will include a bunch of new EAs who we wouldn't necessarily expect to see donating within their first few months as an EA.

(Also speaking for myself only, not my employer.)

Comment author: Dan_Keys 12 September 2016 11:17:20PM 1 point

If the prospective employee is an EA, then they are presumably already paying lots of attention to the question "How much good would I do in this job, compared with the amount of good I would do if I did something else instead?" And the prospective employee has better information than the employer about what that alternative would be and how much good it would do. So it's not clear how much is added by having the employer also consider this.

Comment author: ChrisCundy 30 July 2016 07:24:28PM * 3 points

That is a very good point, and ties in to vipulnaik's point below about starting the survey collection time just after the start of a year so that donation information can be recorded for the immediately preceding year.

I've quickly run the numbers: the median 2014 donation for the 467 people who got involved in 2013 or earlier was $1,500, significantly higher than that for EAs overall. This excludes people who didn't say what year they got involved, so it probably cuts out a few people who did get involved before 2014 but can't remember. Also, if there is constant attrition from the EA movement, you'd expect the pre-2014 EAs to be more committed as a whole.

This is a very good point and is making me lean towards vipulnaik's suggestion for future surveys, as this problem will be just as pressing if the movement continues to grow at the rate it has done.

Comment author: Dan_Keys 30 July 2016 11:08:58PM 0 points

Thanks for looking this up quickly, and good point about the selection effect due to attrition.

I do think that it would be informative to see the numbers when also limited to nonstudents (or to people above a certain income, or to people above a certain age). I wouldn't expect to see much donated from young low- (or no-) income students.

Comment author: Dan_Keys 30 July 2016 04:00:40AM 6 points

For the analysis of donations, which asked about donations in 2014, I'd like to see the numbers for people who became EAs in 2013 or earlier (including the breakdowns for non-students and for donations as % of income for those with income of $10,000 or more).

37% of respondents first got involved with EA in 2015, so their 2014 donations do not tell us much about the donation behavior of EAs. Another 24% first got involved with EA in 2014, and it's unclear how much their 2014 donations tell us given that they only began to be involved in EA midyear.

Comment author: Dan_Keys 26 April 2016 08:05:55AM 7 points

My guess (which, like Michael's, is based on speculation and not on actual information from relevant decision-makers) is that the founders of Open Phil thought about institutional philosophy before they looked in-depth at particular cause areas. They asked themselves questions like:

How can we create a Cause Agnostic Foundation, dedicated to directing money wherever it will do the most good, without having it collapse into a Foundation For Cause X as soon as its investigations conclude that currently the highest-EV projects are in cause area X?

Do we want to create a Cause Agnostic Foundation? Would it be a bad thing if a Cause Agnostic Foundation quickly picked the best cause and then transformed into the Foundation For Cause X?

Apparently they concluded that it was worth creating a (stable) Cause Agnostic Foundation, and that this would work better if they directed significant amounts of resources towards several different cause areas. I can think of several arguments for this conclusion:

  1. Spreading EA Ideas. It's easier to spread the ideas behind effective altruism (and to create a world where more resources are devoted to attempts at effective altruism) if there is a prominent foundation which is known for the methodology that it uses to choose causes rather than for its support of particular causes. And that works best if the foundation gives to several different cause areas.

  2. Diminishing Returns to Prestige. Donations can provide value by conferring prestige, not just by transferring money, and prestige can have sharply diminishing returns to amount donated. For example, giving to your alma mater, whether it's $10 or $10,000, lets them say that a higher percentage of alumni are donors. One might hope that this prestige benefit (with diminishing returns) would apply to many of the grants from a Cause Agnostic Foundation, and that it will be well-regarded enough to bring other people's attention to the causes & organizations that it supports.

  3. Ability to Pivot. If a foundation focuses on just one or two cause areas (and hires people to work on those cause areas, publicizes its reasons for supporting those cause areas, builds connections with other organizations in those cause areas, etc.) that can make it hard for it to keep an open mind about cause areas and potentially pivot to a different cause area which starts looking more promising a few years later.

  4. Learning. We can learn more if we pursue several different cause areas than if we just focus on one or two. This can include things like: getting better at cause prioritization by doing it a lot, getting better at evaluating organizations by dealing with some organizations that are in cause areas where progress is relatively easy to track, and learning how to interact with governments in the context of criminal justice reform and then being better able to pursue projects involving government in other cause areas.

  5. Hits. A foundation which practices hits-based-giving can tolerate a lot of risk, but they may need to have at least some visible hits over the years in order to remain institutionally strong. Diversifying across cause areas can help that happen.

My sense is that this is an incomplete list; there are other arguments like these.

It's worth noting that many of these lines of reasoning are specific to a foundation like Open Phil, and would not apply to a single wealthy donor looking to donate his or her own money.

Comment author: Jeff_Kaufman 20 February 2016 06:08:47AM * 3 points

it looks like you're using a one-sided t-test to get your p-value.

I agree that a two-sided test would be the right thing to use here, but p-value calculations aren't something I fully understand. Is this calculation one-sided or two-sided?

Comment author: Dan_Keys 20 February 2016 06:50:27AM 2 points

I can't tell what's being done in that calculation.

I'm getting a p-value of 0.108 from a Pearson chi-square test (with cell values 55, 809; 78, 856). A chi-square test and a two-tailed t-test should give very similar results with these data, so I agree with Michael that it looks like your p=0.053 comes from a one-tailed test.
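[For reference, the Pearson chi-square computation for a 2x2 table can be sketched in a few lines of Python, using the cell counts given above. With one degree of freedom, the chi-square statistic is a squared standard normal, so the tail probability is erfc(sqrt(chi2/2)):]

```python
import math

def pearson_chi2_2x2(a, b, c, d):
    """Pearson chi-square test (no continuity correction) for the
    2x2 table [[a, b], [c, d]]. Returns (chi2, two_tailed_p).
    With 1 degree of freedom, p = erfc(sqrt(chi2 / 2))."""
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

chi2, p = pearson_chi2_2x2(55, 809, 78, 856)
print(round(chi2, 2), round(p, 3))  # 2.58 0.108
```

[Halving the two-tailed p gives about 0.054, which is close to the reported one-tailed p = 0.053.]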

Comment author: Dan_Keys 19 February 2016 02:09:45AM 4 points

A quick search into the academic research on this topic roughly matches the claims in this post.

Meta-analyses by Allen (1991) (pdf, blog post summary) and O'Keefe (1999) (pdf, blog post summary) defined "refutational two-sided arguments" as arguments that include 1) arguments in favor of the preferred conclusion, 2) arguments against the preferred conclusion, and 3) arguments which attempt to refute the arguments against the preferred conclusion. Both meta-analyses found that refutational two-sided arguments were more persuasive than one-sided arguments (which include only the first of those 3 types of arguments), which in turn were more persuasive than nonrefutational two-sided arguments (which include the first 2 of those 3 types of arguments).

So: surveying both sides of the argument, and making the case for why one side holds more weight than the other, does seem to lead to more convincing writing.

These results are at a fairly broad level of generality. I don't know if any research has looked at questions like whether it matters if you include the strongest arguments against the preferred conclusion (vs. only including straw man arguments) or if it matters if you act as if the arguments against the preferred conclusion have been completely refuted (vs. somewhat outweighed by the arguments in favor of the preferred conclusion).

A quick skim through the list of articles citing Allen and O'Keefe's papers turned up some studies which look for additional sources of variability which might moderate this effect, but I didn't notice any that challenge the general pattern or which get into really good detail on whether normatively good arguments (e.g., non-straw-man, measured conclusions) are more convincing.

Comment author: Dan_Keys 30 January 2015 06:20:27PM 3 points

Have you looked at the history of your 4 metrics (Visitors, Subscribers, Donors, Pledgers) to see how much noise there is in the baseline rates? The noisier they are, the more uncertainty you'll have in the effect size of your intervention.
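[To make the noise point concrete: if the counts in each period behave roughly like independent Poisson arrivals (an assumption for this sketch, not a claim about the actual data), the standard error of a before/after difference is easy to estimate, since a Poisson count's variance equals its mean:]

```python
import math

def diff_and_se(before, after):
    """Difference between two event counts and its standard error,
    assuming each count is an independent Poisson draw
    (variance of a Poisson count equals its mean)."""
    diff = after - before
    se = math.sqrt(before + after)
    return diff, se

# Hypothetical new-subscriber counts before/after a pamphlet run
diff, se = diff_and_se(120, 150)
print(diff, round(se, 1))  # 30 16.4
```

[In this hypothetical, the observed bump of 30 is under two standard errors, so it would be hard to distinguish from baseline noise; real month-to-month variation is often larger than Poisson, which only worsens the problem.]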

Could you have the pamphlets only give a URL that no one else goes to, and then directly track how many new subscribers/donors/pledgers have been to that URL?