
Summary

  • Median donations were slightly higher than in 2016 and total donations much higher
  • A small number of very large donors account for the majority of the totals donated
  • A majority of EAs report donating less than they would like due to financial constraints

This post explores donation data in the 2018 EA Survey, investigating how much people are donating, where they are donating and what influences their donations.

1891 out of 2607 (73%) self-identified EAs in our sample provided data about their donations. This is a significant increase from the 2017 Survey, in which we had donation data from 1019 out of 1853 EAs (54.9%).

Totals Donated

As in previous years, there was a very wide range in amounts donated (note the 2018 Survey collects reports on amounts donated in 2017). All amounts are in USD ($).



This marks a significant increase in total amounts donated and median donations compared to 2015 and 2016 data.



As the graph below shows, a very small number of very large donations dwarf the size of most others.



The following histogram shows the number of donations of different sizes being made by EAs in our sample (note the log x axis).



A commenter on a previous EA Survey post asked about the proportion of total EA donations that came from donations of certain sizes and requested a cumulative donations graph (below). Perhaps, even if the largest donations are many times larger than the rest, much of the total might still be coming from a very large number of smaller donors?



As the graph above shows, however, a relatively small portion of the total donations recorded in our sample comes from those donating smaller amounts. Individuals donating $1000 or less in 2017 (47.8% of donors in our sample) accounted for $267,942.50 of donations, or about 1.5% of the total. Those more in the ‘middle’, donating between $1000 and $10,000, made up 40% of donors and gave $2,569,359.59, a substantial sum relative to most EA projects, yet still only 14% of total donations. Conversely, donors giving more than $100,000 (0.7% of donors) accounted for 57% of donations.
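For readers who want to reproduce this sort of breakdown from their own data, a minimal sketch of the bracket-share calculation is below. The donation values and bracket cut-offs are illustrative assumptions, not the survey's actual data or variable names.

```python
import pandas as pd

# Illustrative donation amounts in USD; substitute the survey's reported 2017 donations.
donations = pd.Series([50, 200, 1_000, 3_000, 12_000, 150_000], name="donation_usd")

sorted_donations = donations.sort_values().reset_index(drop=True)
total = sorted_donations.sum()

# Share of donors and share of total donations at or below each size bracket.
for cutoff in [1_000, 10_000, 100_000]:
    below = sorted_donations <= cutoff
    print(f"<= ${cutoff:,}: {below.mean():.1%} of donors, "
          f"{sorted_donations[below].sum() / total:.1%} of total donated")
```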



As the figures in the table above show, a donation of $1000 per year (or 5% of a $20,000 salary) would place one in the top half of EA donors (specifically, at the 55th percentile), whereas being in the top 10% of donors would require donating at least $10,000, and the top 1% more than $75,000.

Percentages of Income Donated

We also looked at the percentage of income that EAs were donating, based on the 1798 EAs who disclosed both income and donation data.[1] As in previous years, most EAs were donating significantly less than the 10% Giving What We Can Pledge. However, as the graph below shows, there is a marked ‘bump’ in donors giving around the 10% figure, perhaps due to the Giving What We Can Pledge target at that level, or to the figure’s wider popularity as a giving target (e.g. in tithing).



The above graph shows that a small number of respondents reported donating more than 100% of their income in 2017. In all but three of these cases, the sums reported for donations and income were very low (less than a couple of thousand dollars), so we do not view them as particularly implausible, though they may well represent unusual situations.



What Explains Low Donations?

In previous years, readers have commented on the large number of EAs who appear to be donating $0 or close to $0, and expressed surprise at low donation levels overall. In earlier years, when donations and the Giving What We Can Pledge were emphasised more, compared to now when (directly) impactful career choice and upskilling EAs are emphasised more, this was perhaps more of a cause for surprise and concern. However, it is still worth investigating what explains low levels of donation.

Students

Unsurprisingly, many EAs are students (45.2% in our sample this year), who might therefore be expected to donate lower amounts.

True to expectations, we found that non-students donated significantly more than students:



Employment

Similarly, many EAs in our sample, though not students, may not be employed or fully employed. When we exclude both students and those who are not employed full-time, median donations are, again, substantially higher:



Despite this, the percentage of income donated even among full-time employed non-students was very low, with a median of 3.7%. Indeed, donations only exceed 10% of income at the 80th percentile of full-time employed non-students (10.591%), with the 90th percentile donating 14.52%.

Income

The higher donations among full-time employed non-students may well be largely due to higher income. Indeed, within the sample as a whole, we found that donations in 2017 were strongly correlated with the average of household and individual income in 2017 (r=0.9232, p<0.001) in the raw data. However, we should note that there are some large outliers driving this very strong relationship.



To account for full-time employed non-students who were nevertheless on a low income, we also looked at full-time employed non-students who reported earning more than $20,000 USD in 2017 (762 out of 1050 full-time employed non-students, of whom 747 reported donation data).



While the median donation among this group was substantially higher than for the sample as a whole, the percent donated remains quite far short of the level of the Giving What We Can Pledge.

Self-reported reasons for donating less than desired

This year, for the first time, we asked respondents whether they were donating as much or less than they wanted to.

Across the sample as a whole, a majority (56.99%) reported donating less than they would like due to financial constraints, and a further 17.36% reported donating less than they would like for some other reason, whereas less than a quarter (24.42%) reported donating as much as they would like.



We also hand-coded the qualitative responses in the “please specify why” box. Responses could be classified under multiple categories at once. Many people who selected the ‘other reason’ option also specified financial constraints, meaning that the total for that category was even higher. Most of the coded categories are fairly self-explanatory. However, “Social” refers to social influence or social norms, and most responses in this category explicitly referred to a spouse or partner. “Efficacy” refers to concerns about locating an effective charity to donate to, or doubt that any charities were effective donation targets. The graph below shows the frequency of these other reasons.



Other Influences on Giving

Career

EAs work in a variety of careers, and perhaps some people aren't donating money but are instead donating their time, or taking a lower salary to work at a directly impactful organisation. Indeed, those working in research roles donate substantially less than their peers and have a lower median income.



Giving What We Can Pledge

The observation that so many EAs in our survey were donating much less than 10% of their income to charity raises the question: how much are self-reported GWWC Pledge takers donating?



According to this analysis, the median percentage of income donated by someone in our sample who had taken the GWWC Pledge was 7.22%, short of the 10% target. Nevertheless, this could be influenced by GWWC Pledge takers being students, not being employed, or having only recently taken the Pledge. We will be exploring these issues in more detail in a dedicated post.

Year First Heard of EA

Median donations, income and percentage of income donated all increase fairly steadily the longer respondents have been in EA. 2010 bucks the trend slightly, owing to one very high-income, high-donation individual and the small number of EAs who first heard of EA that year. One intuitive explanation is that those who have more recently heard of EA tend to be younger, more likely to be students (or earlier in their careers) and so have lower incomes. In addition, newer EAs might also be expected to be less likely to be extremely dedicated or willing to donate large sums right away. There may also be some survivorship bias: those EAs who report first hearing of EA in earlier years, and are still taking the survey in 2018, may be more likely to be dedicated, highly involved (and perhaps higher-donating) EAs.



This veteran-versus-newcomer difference is apparent among non-students, pledge takers, those earning to give, and especially among high-income individuals. Without longitudinal data it is difficult to know whether this is due to a time effect or to group-switching over time. There is very little difference between newcomer and veteran non-pledge-takers, students, other career types, and individuals at other income levels. This makes intuitive sense, as these groups are likely either to have low incomes now or not to be expressly committed to donating a significant portion of their income, irrespective of how long they have been in EA.



(Right click and open in new tab to expand image)

Predictors of Donation

We examined income, student status, number of years involved with EA, and GWWC membership as potential predictors of donations. Individual income was the strongest predictor of donation amount, followed by a positive effect of GWWC membership interacting with income (link to regression table). Before analysis, two outliers with large influence were removed (individual incomes >$5 million), and the data were log-transformed, centered and scaled to improve normality. Household income was strongly correlated with individual income (~80%) and was therefore excluded from the model. This model (AIC=3114) was slightly preferred over a model with no interaction between GWWC membership and income (AIC=3121), and explained about 45% of the variation in the data. It was also preferred over a simple regression of donations on income (AIC=3498), which accounted for 37% of the variation in the normalized data.
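As an illustration only (not the actual analysis code, which is in the linked regression document), a minimal sketch of a model with this structure, using synthetic data and hypothetical variable names, might look like the following.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in for the survey extract; columns and values are illustrative only.
n = 1000
income = rng.lognormal(mean=10.5, sigma=1.0, size=n)
gwwc = rng.integers(0, 2, size=n)
student = rng.integers(0, 2, size=n)
years_in_ea = rng.integers(0, 10, size=n)
donation = income * (0.02 + 0.05 * gwwc) * (1 - 0.5 * student) * rng.lognormal(0, 0.5, size=n)

df = pd.DataFrame({"donation": donation, "income": income, "gwwc": gwwc,
                   "student": student, "years_in_ea": years_in_ea})

# Log-transform the skewed money variables, then center and scale the continuous predictors.
df["log_donation"] = np.log1p(df["donation"])
df["log_income"] = np.log1p(df["income"])
for col in ["log_income", "years_in_ea"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

# Model with a GWWC-by-income interaction, compared by AIC to a model without it.
with_interaction = smf.ols("log_donation ~ log_income * gwwc + student + years_in_ea", data=df).fit()
no_interaction = smf.ols("log_donation ~ log_income + gwwc + student + years_in_ea", data=df).fit()
print(f"AIC with interaction: {with_interaction.aic:.0f}, without: {no_interaction.aic:.0f}")
print(f"R-squared with interaction: {with_interaction.rsquared:.2f}")
```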

As previously noted, there was a small negative impact of being a student on donation amount. As seen in the figure below, this effect was slightly mitigated by being a member of GWWC.


There was also a very small negative effect of having been aware of EA for a shorter time. There was little correlation between years of EA involvement and individual income (16%), so the explanation for this effect may have more to do with years of commitment to charitable giving than with income-related factors such as student status.

Which Charities are EAs Donating to?

We received information about which specific charities respondents donated to from 494 of the 1363 EAs who reported donating anything in 2017. Given this, figures for total donations to specific charities should be treated with caution.

As in previous years, GiveWell charities, led by GiveDirectly, received among the most reported donations. However, unlike in previous years, CEA tops the list for total reported donations. The Humane League attracted substantially more funding (mostly explained by one large donor) and more donors than in 2016, among those who reported donations. In addition, AMF, GiveDirectly, GiveWell, and MIRI had the largest numbers of individual donors.





Conclusions

Total EA donations within our sample are dominated by a fairly small number of very large donors. Nevertheless, median donations do seem to be slowly increasing, compared to earlier years. Furthermore, median donations and percentages of income donated are substantially higher when excluding students, those not fully employed or those on a low income.

[1] Unless otherwise stated “income” refers to the average of individual and household income, which were reported separately. These measures were extremely highly correlated and using either of the individual measures made no difference to our analyses where we tested this.

Updates and corrections

Corrected percentage donated values

Added column labels to percentiles tables at request

Credits

This post was written by David Moss, with contributions from Neil Dullaghan and Kim Cuddington.

Analysis conducted by Rethink Priorities staff, David Moss, Neil Dullaghan, Kim Cuddington and Peter Hurford.

Thanks to Peter Hurford and Tee Barnett for editing.

The annual EA Survey is a project of Rethink Charity with analysis and commentary from researchers at Rethink Priorities.

Supporting Documents

Other articles in the 2018 EA Survey Series:

I - Community Demographics & Characteristics

II - Distribution & Analysis Methodology

III - How do people get involved in EA?

IV - Subscribers and Identifiers

VI - Cause Selection

VII- EA Group Membership

VIII- Where People First Hear About EA and Higher Levels of Involvement

IX- Geographic Differences in EA

X- Welcomingness- How Welcoming is EA?

XI- How Long Do EAs Stay in EA?

XII- Do EA Survey Takers Keep Their GWWC Pledge?

Prior EA Surveys conducted by Rethink Charity:

The 2017 Survey of Effective Altruists

The 2015 Survey of Effective Altruists: Results and Analysis

The 2014 Survey of Effective Altruists: Results and Analysis

Comments

Thanks for the writeup!

Questions:

1. Were income numbers pre or post-tax?

2. Do you have a number for average earnings of non-students who are earning to give? $52,000 is a pretty low number for that category.

3. How did the survey define the difference between "earning to give" and "other", if at all?

I'm really looking forward to the dedicated post that will give us numbers on non-students in GWWC; hitting 10% at the median would be nice.

Thanks!

  1. Were income numbers pre or post-tax?

All pre-tax.

  2. Do you have a number for average earnings of non-students who are earning to give? $52,000 is a pretty low number for that category.

The numbers are likely lowered (as they were elsewhere) by a lot of fairly new, lower-earning/donating people who are just starting out on that career path. Among non-student earners-to-give, the median donation was $3000 and the median income $70,000. Only above the 63rd percentile in this category were people earning more than $100,000.

  3. How did the survey define the difference between "earning to give" and "other", if at all?

These were just fixed response options without additional definition.

It seems plausible that people who earn on the very high end of the spectrum might not have filled in the survey due to time constraints (selection bias).

Thanks for doing this! Some nitpicking on this graph: https://i.ibb.co/wLd1vSg/donations-income-scatter.png (donations and income)

1) the trendline looks a bit weird. Did you force it to go through (0,0)?

2) Your axis labels initially go up by factors of 100, then the last one only a factor of 10.

Thanks for your comment Elizabeth.

The axis was just mislabelled (one missing 0). We updated the graph to fix that.

As to the trendline, we just used a line of best fit, which assumes a linear relationship. The low R^2 (~30%) of this linear Donations~Income regression explains why it "looks a bit weird". It was used as an easy-to-interpret visual that depicted a simplified relationship between income and donations, but one which demonstrated the correct direction of effect. This does have the disadvantage of being prone to overfitting, and, as we noted, "there are some large outliers driving this very strong relationship". We might expect a better fit from a nonlinear relationship; however, the later analysis, with differing linear responses for different donor groups, was a reasonable fit.

Nice work! One more way of teasing out the origin of relatively low donations is asking about net worth. A person may be a full-time nonstudent and have a good salary, but still have college debt and therefore be hesitant to donate a lot.

Thanks for the suggestion! That seems likely to be at least one of the things that is being picked up by the 'financial constraint' responses.

The linear trend line in https://i.ibb.co/BgBkLZW/regression-graph.png looks like a poor match. Instead I'd model it as there being multiple populations, where one major population has a very steep trendline.

I feel like I might be missing something obvious, but under the "other influences on giving" section, are there really 514 researchers that filled out the survey?

The people who selected 'research' were disproportionately students compared to the other categories. Excluding all students across categories, 251 people selected research, and median income and donations were still significantly lower.

Thanks for these interesting results. I have a minor technical question (which I don't think was covered in the methodology post, nor in the Github repository from a quick review):

How did you select the variables (and interaction term) for the regression model? A priori? Stepwise? Something else?

Thanks Greg. These were selected a priori (though informed by our prior analyses of the data).

Due to missing data there was some difficulty doing stepwise elimination with the complete dataset. We've added a model including all interactions to the regression document. This had a slightly better AIC (3093 vs 3114).

Thanks. I should say that I didn't mean to endorse stepwise when I mentioned it (for reasons Gelman and commenters note here), but that I thought it might be something one might have tried given it is the variable selection technique available 'out of the box' in programs like STATA or SPSS (it is something I used to use when I started doing work like this, for example).

Although not important here (but maybe helpful for next time), I'd caution against using goodness-of-fit estimators (e.g. AIC going down, R^2 going up) too heavily in assessing the model, as one tends to end up with over-fitting. I think the standard recommendations are something like:

  • Specify a model before looking at the data, and caveat any further explanations as post-hoc (which sounds like essentially what you did).
  • Split your data into an exploration and confirmation set, where you play with whatever you like on the former, then use the model you think is best on the latter and report these findings (better, although slightly trickier, are things like k-fold cross validation rather than a single holdout).
  • LASSO, Ridge regression (or related regularisation methods) if you are going to select predictors 'hypothesis free' on your whole data (a rough sketch of this approach is below).
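For concreteness, a rough sketch of that last approach (tuning a LASSO penalty by k-fold cross-validation on training data and checking fit on a held-out set, using scikit-learn and synthetic data rather than the survey's variables) might look like:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Illustrative predictors (e.g. log income, student status, years in EA, GWWC membership).
X = rng.normal(size=(500, 4))
y = 1.5 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(scale=0.5, size=500)  # synthetic outcome

# Hold out a confirmation set; tune the L1 penalty by 5-fold cross-validation on the rest.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)
lasso = LassoCV(cv=5).fit(scaler.transform(X_train), y_train)

print("Selected coefficients:", lasso.coef_)
print("Held-out R^2:", lasso.score(scaler.transform(X_test), y_test))
```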

(Further aside: Multiple imputation methods for missing data might also be worth contemplating in the future, although it is a tricky judgement call).

Thanks Greg, I appreciate the feedback.

Some of this depends on what our goal is here. Is it to maximize 'prediction' and if so, why? Or is it something else? ... Maybe to identify particularly relevant associations in the population of interest.

For prediction, I agree it’s good to start with the largest amount of features (variables) you can find (as long as they are truly ex-ante) and then do a fancy dance of cross-validation and regularisation, before you do your final ‘validation’ of the model on set-aside data.

But that doesn’t easily give you the ability to make strong inferential statements (causal or not), about things like ‘age is likely to be strongly associated with satisfaction measures in the true population’. Why not? If I understand correctly:

The model you end up with, which does a great job at predicting your outcome

  1. … may have dropped age entirely or “regularized it” in a way that does not yield an unbiased or consistent estimator of the actual impact of age on your outcome. Remember, the goal here was prediction, not making inferences about the relationship between the outcome and any particular variable or set of variables …

  2. … may include too many variables that are highly correlated with the age variable, thus making the age coefficient very imprecise

  3. … may include variables that are actually part of the 'age effect' you cared about, because they are things that go naturally with age, such as mental agility

  4. Finally, the standard ‘statistical inference’ (how you can quantify your uncertainty) does not work for these learning models (although there are new techniques being developed)

By the way, in this year's post (or, better yet, see the dynamic document here), our predictive models use elastic-net and random-forest modeling approaches with validation (cross-fold validation for tuning on training data, with predictive power and model performance measured on set-aside testing data).
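To illustrate that workflow (a hedged sketch with synthetic data, not the code from the dynamic document), an elastic-net versus random-forest comparison with a set-aside test set could look roughly like:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic stand-in for the survey features and outcome; names and values are illustrative.
X = rng.normal(size=(800, 6))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=1.0, size=800)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Tune the elastic-net penalty by cross-validation on the training data only.
enet = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X_train, y_train)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Report predictive performance on the set-aside test data.
print("Elastic net test R^2:", enet.score(X_test, y_test))
print("Random forest test R^2:", forest.score(X_test, y_test))
```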

For missing data, we do a combination of simple imputations (for continuous variables) and 'coding non-responses as separate categories' (for categorical data).

I know I'm like 1 year late, but do you have the raw data still?

The two tables showing percentiles are missing column labels:

https://i.ibb.co/Vjbv1rc/image.png

https://i.ibb.co/jL0g7t7/table-4.png

Thanks. Updated.

Climate Change is mentioned as the 3rd most popular cause in the Cause Selection section; however, in this (Donation Data) section I didn't see anything mentioning Climate Change, and I'm not sure what conclusion to draw from this. Does this mean the people who say Climate Change is their most prioritized cause don't donate much to it? I couldn't find the GitHub repo that someone else mentioned.

Thanks for asking. Unfortunately, there weren't any Climate Change-specific charities among the 15 specific charities which we included as default options for people to write in their donation amounts. That said, among the "Other" write-in responses, there were 42/474 (8.86%) mentions of Cool Earth, so that was clearly a popular choice. There were no other frequently mentioned Climate Change charities.

As it happens, people who selected Climate Change as their top cause area also donated substantially less (median $358).
