Comment author: ThomasSittler 22 May 2018 08:39:04PM 4 points [-]

Here's a draft chart of meta-ethics I made recently. When I'm less busy I'll hopefully be publishing it with an accompanying blog post.

Comment author: ThomasSittler 22 May 2018 08:27:54PM *  4 points [-]

Hi Eva! Really enjoyed the podcast. This is somewhat away from the topic of the OP, but I'm curious what you think of Glennerster & Bates' article on generalisability, and how it relates to your own work with AidGrade and your paper on heterogeneous treatment effects. Glennerster & Bates seem to be saying that policy decisions should be (are?) made using a much more fine-grained and theory-informed approach than averaging effect sizes across studies. A study contains much more information than an average effect on an outcome variable. So maybe the problem of effect size heterogeneity is in practice less worrying than we might otherwise have thought, and than the OP suggests?

Of course it's still valuable to have data on that kind of heterogeneity.

To draw a tentative connection between two of your research interests, I was also wondering about the following. Researcher forecasts (averaged together) predict effect sizes quite well. Could this be precisely because researchers are using detailed contextual knowledge and background theory to build models of the world from existing research? These models of the world then allow them to predict effect sizes better than a "naive" average of raw effect sizes. On this view, field research is still very valuable, even though effect sizes are highly heterogeneous across contexts. This is an optimistic take which I wouldn't full-throatedly endorse. What do you think of it?

Comment author: MichaelPlant 06 May 2018 08:43:29PM 3 points [-]

It seems strange to override what your future self wants to do,

I think you're just denying the possibility of value drift here. If you think it exists, then commitment strategies could make sense. If you don't, they won't.

Comment author: ThomasSittler 07 May 2018 09:14:01AM 2 points [-]

Michael -- keen philosopher that you are, you're right ;)

The part you quote does ultimately deny that value drift is something we ought to combat (holding constant information, etc.). That would be my (weakly held) view on the philosophy of things.

In practice, though, there may be large gains from compromise between time-slices, compared to the two extremes of always doing what your current self wants, or using drastic commitment devices. So we could aim to get those gains so long as we're unsure about the philosophy.

Comment author: ThomasSittler 06 May 2018 05:20:21PM *  7 points [-]

Thanks for the post. I'm sceptical of lock-in (or, more Homerically, tie-yourself-to-the-mast) strategies. It seems strange to override what your future self wants to do, if you expect your future self to be in an equally good epistemic position. If anything, future you is better informed and wiser...

I know you said your post just aims to provide ideas and tools for how you can avoid value drift if you want to do so. But even so, in the spirit of compromise between your time-slices, solutions that destroy less option value are preferable.

Comment author: Gregory_Lewis 05 May 2018 01:06:42AM *  7 points [-]

It's very easy for any of us to call "EA" as we see it and naturally make claims about the preferences of the community. But this would be very clearly circular. I'd be tempted to defer to the EA Survey. AI was only the top cause of 16% of the EA Survey. Even among those employed full-time in a non-profit (maybe a proxy for full-time EAs), it was the top priority of 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare.

As noted in the fb discussion, it seems unlikely full-time non-profit employment is a good proxy for 'full-time EAs' (i.e. those working full time at an EA organisation - E2Gers would be one of a few groups who should also be considered 'full-time EAs' in the broader sense of the term).

For this group, one could stipulate that every group which posts updates to the EA newsletter counts as an EA org (I looked at the last half-dozen or so issues, so any group which didn't have an update is excluded, but the omissions are likely minor). Totting up a headcount of staff (I didn't correct for FTE, and excluded advisors/founders/volunteers/freelancers/interns - all of these decisions could be challenged) and recording the prevailing focus of each org gives something like this:

  • 80000 hours (7 people) - Far future
  • ACE (17 people) - Animals
  • CEA (15 people) - Far future
  • CSER (11 people) - Far future
  • CFI (10 people) - Far future (I only included their researchers)
  • FHI (17 people) - Far future
  • FRI (5 people) - Far future
  • Givewell (20 people) - Global poverty
  • Open Phil (21 people) - Far future (mostly)
  • SI (3 people) - Animals
  • CFAR (11 people) - Far future
  • Rethink Charity (11 people) - Global poverty
  • WASR (3 people) - Animals
  • REG (4 people) - Far future [Edited after Jonas Vollmer kindly corrected me]
  • FLI (6 people) - Far future
  • MIRI (17 people) - Far future
  • TYLCS (11 people) - Global poverty

Totting this up, I get that roughly two-thirds of people work at orgs which focus on the far future (66%), versus 22% at global poverty orgs and 12% at animal orgs. Although it is hard to work out what proportion of the far future work is AI-focused, I'm pretty sure it is the majority, so 45% AI wouldn't be wildly off-kilter if we thought the EA handbook should represent the balance of 'full time' attention.
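
For transparency, here's a minimal sketch of the tally in code, using only the headcounts and focus labels listed above (rounding explains any small discrepancies):

```python
# Headcounts and prevailing focus exactly as listed above (all figures approximate).
staff = {
    "80000 Hours": (7, "far future"),   "ACE": (17, "animals"),
    "CEA": (15, "far future"),          "CSER": (11, "far future"),
    "CFI": (10, "far future"),          "FHI": (17, "far future"),
    "FRI": (5, "far future"),           "GiveWell": (20, "global poverty"),
    "Open Phil": (21, "far future"),    "SI": (3, "animals"),
    "CFAR": (11, "far future"),         "Rethink Charity": (11, "global poverty"),
    "WASR": (3, "animals"),             "REG": (4, "far future"),
    "FLI": (6, "far future"),           "MIRI": (17, "far future"),
    "TYLCS": (11, "global poverty"),
}

totals = {}
for headcount, focus in staff.values():
    totals[focus] = totals.get(focus, 0) + headcount

grand_total = sum(totals.values())  # 189 people in total
for focus, headcount in totals.items():
    print(f"{focus}: {headcount} people ({headcount / grand_total:.0%})")
# far future: 124 people (66%)
# animals: 23 people (12%)
# global poverty: 42 people (22%)
```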

I doubt this should be the relevant metric for how to divvy up space in the EA handbook. It is also unclear what role considerations of representation should play in selecting content, or, if they do play one, which community is the key one to proportionately represent.

Yet I think I'd be surprised if it wasn't the case that among those working 'in' EA, the majority work on the far future, and a plurality work on AI. It also agrees with my impression that the most involved in the EA community strongly skew towards the far future cause area in general and AI in particular. I think they do so, bluntly, because these people have better access to the balance of reason, which in fact favours these being the most important things to work on.

Comment author: ThomasSittler 06 May 2018 05:12:13PM *  5 points [-]

+1 for doing a quick empirical check and providing your method.

But the EA newsletter is curated by CEA, no? So it also partly reflects CEA's own priorities. You and others have noted in the discussion below that a number of plausibly full-time EA organisations are not included in your list (e.g. BERI, Charity Science, GFI, Sentience Politics).

I'd also question the view that OpenPhil is mostly focused on the long-term future. When I look at OpenPhil's grant database and count

  • Biosecurity and Pandemic Preparedness
  • Global Catastrophic Risks
  • Potential Risks from Advanced Artificial Intelligence
  • Scientific Research

as long-term future focused, I get that 30% of grant money was given to the long-term future.
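
For what it's worth, here is a rough sketch of how one could reproduce that figure from a CSV export of the grant database. The file name and the 'Focus Area' / 'Amount' column names are placeholders for illustration, not necessarily what the actual export uses:

```python
import csv

# The four focus areas counted as long-term future, per the list above.
LTF_AREAS = {
    "Biosecurity and Pandemic Preparedness",
    "Global Catastrophic Risks",
    "Potential Risks from Advanced Artificial Intelligence",
    "Scientific Research",
}

def ltf_share(path: str) -> float:
    """Fraction of total grant money going to long-term-future focus areas.

    Assumes a CSV with 'Focus Area' and 'Amount' columns (hypothetical names).
    """
    ltf = total = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            amount = float(row["Amount"].replace("$", "").replace(",", ""))
            total += amount
            if row["Focus Area"] in LTF_AREAS:
                ltf += amount
    return ltf / total

# print(f"{ltf_share('openphil_grants.csv'):.0%}")  # ~30% by my count
```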

Comment author: ThomasSittler 04 May 2018 06:55:37PM *  14 points [-]

Now that the formatting has been fixed, a few points on the substance. First of all, this is obviously a huge improvement on the old EA handbook. So much high-quality intellectual content has been produced in the last three years. I think it's very important, as a community-building tool, to have a curated selection of this content, and the new handbook is a welcome step towards that goal.

I think the first seven parts are good choices, and appear in a good order:

  • Introduction to Effective Altruism
  • Efficient Charity — Do Unto Others
  • Prospecting for Gold
  • Crucial Considerations and Wise Philanthropy
  • The Moral Value of Information
  • The Long-Term Future
  • A Proposed Adjustment to the Astronomical Waste Argument

Next in "promising causes", we first get three articles in a row about AI:

  • Three Impacts of Machine Intelligence
  • Potential Risks from Advanced AI
  • What Does (and Doesn’t) AI Mean for Effective Altruism?

I understand that these are good pieces, and you want to showcase the good thinking the EA community has done. But three out of the eight in "promising causes" seems like a heavy focus on AI. If we condition on long-termism, which plausibly excludes the Global Health and Animal Welfare articles as written, it's three out of six on AI. Additionally, the fact that AI comes first lends it still more importance.

Another asymmetry is that while

  • The long-term future
  • Animal Welfare
  • Global health and development

have a discussion of pros and cons, the three AI articles talk about what to do if you've already decided to focus on AI. This creates the implicit impression that if you accept long-termism, it's a foregone conclusion that AI is a priority, or even the priority.

Next we have:

  • Biosecurity as an EA Cause Area
  • Animal Welfare
  • Effective Altruism in Government
  • Global Health and Development
  • How valuable is movement growth?

I'm especially worried that, among these, animal welfare and global health and development are the only two that follow the pro-and-con structure. My worry is not that objections are discussed; it's that these pieces feel much more like mass-outreach pieces than the others in the same section. They are

  • more introductory in content
  • in tone, more like a dry memo**, or a broad survey for outsiders

The second point is somewhat nebulous, but, I think, actually very important. While the other cause area articles feel like excerpts from a lively intellectual conversation among EAs, the global health and animal welfare articles do not. This seems unnecessary since EAs have produced a lot of excellent, advanced content in these areas. A non-exhaustive list:

  • The entire GiveWell blog
  • Large parts of Brian Tomasik's website
  • Lewis Bollard's talks at EA Global
  • Rachel Glennerster's post "Where economists can learn from philosophers and effective altruism"
  • Tobias Leenaert's talks on cognitive dissonance and behaviour change driving value change
  • Sentience Institute's "Summary of Evidence for Foundational Questions in Effective Animal Advocacy"
  • Toby Ord's "the moral imperative towards cost-effectiveness"

So it should hopefully be possible to significantly improve the handbook for an edition 2.1, to come out soon :)

** (I think Jess did a great job writing these, but they were originally meant for a different context)

Comment author: ThomasSittler 03 May 2018 08:01:02PM 1 point [-]

I'm a bit confused by "There are many other promising areas to work on, and it is difficult to work out which is most promising" as the section heading for "promising causes". It sounds like you're contrasting affecting the long-term future with other promising causes, but the section "promising causes" contains many examples of affecting the long-term future.

Comment author: ThomasSittler 03 May 2018 07:57:54PM 3 points [-]

This looks really good now that some of the formatting errors have been removed. Thanks! :)

Comment author: ThomasSittler 03 May 2018 07:51:06PM *  3 points [-]

Thanks for the post :)

If we make any kind of reasonable assumptions about renting, house price increases and mortgage repayments, it makes a lot of sense for people to save to purchase their own home as soon as possible.

Could you provide a source for this claim? If this were true, we would expect that it's possible to make a lot of money by buying property and renting it out. This would imply that the market for housing is hugely inefficient.

Comment author: Denkenberger 25 April 2018 12:48:48AM 1 point [-]

GWWC says 4.8% per year attrition. If we take the OP data to imply a half-life of 5 years with exponential decay, that is about 13% attrition per year. That would mean an expected duration of being an EA of roughly eight years. I think I remember reading somewhere that GWWC was only assuming three years of donations, so eight years sounds a lot better to me. Another thought is that the pledge has been compared with marriage, so we could look at the average duration of a marriage. When I looked into this, it appeared to be fairly bimodal, with many ending relatively quickly, but many lasting 'till death do us part'. GWWC argues that consistent exponential decay would be too pessimistic. If we believe the 13% per year attrition, that means we need to recruit 13% more people each year just to stay the same size.
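
To spell out that arithmetic (a minimal sketch, assuming memoryless exponential decay with a 5-year half-life):

```python
import math

half_life = 5.0                        # years, per the OP's retention data
lam = math.log(2) / half_life          # decay rate, ~0.14 per year
annual_attrition = 1 - math.exp(-lam)  # ~0.13, i.e. ~13% per year
mean_duration = 1 / lam                # ~7.2 years (the discrete version, 1/0.13, gives ~7.7)
```

Either way it rounds to roughly eight years.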

Comment author: ThomasSittler 25 April 2018 12:32:04PM 2 points [-]

From your link:

Some members leave Giving What We Can, and therefore can be assumed not to actually donate the money they pledged. Others we lose contact with, so that we don’t know whether they donate the money they pledged. The rate of people leaving has so far been 1.7% of members per year.[9]

Other people lose contact with Giving What We Can. The rate of people going silent has been 4.7% per year (we have counted people as silent if we haven’t had any contact with them for over 2 years). It seems likely that members who go silent still donate some amount, but it is likely to be less than the amount they pledged. We have assumed that this will be around one-third of their original pledge (for example, if a person pledging the standard 10% of their income has gone silent, we’ve only counted 3.33% of their pledge in this calculation).

Given these numbers, the total of those ceasing donations per year is 4.8%.[10] We’ve assumed that this percentage will remain constant over time. This means that after, say, 30 years, each member has a 23% chance of still donating, which we believe is a plausible estimate.
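
(As a quick check on the quoted figure: 0.952^30 ≈ 0.23, so the 23% estimate follows directly from assuming a constant 4.8% annual rate of ceasing to donate.)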

It might be useful to know what "no contact for 2 years" means exactly. Not because I'm trying to nitpick, but because the way we operationalise these metrics sometimes makes a big difference.
