Comment author: Ronja_Lutz 16 May 2018 04:54:19PM 3 points [-]

Hi, thanks for your comment :) Seems like I should have made that clearer! Since what I'm doing is applying Will's approach, the approach is not itself new. I haven't seen it discussed with regard to the worldview-split problem, but since I ended up condensing different "worldviews" into a decision between two theories, it turned out to be basically the same (which is not without problems, for that matter). I still found it valuable to try out this process in practice, and since I expect many people not to have read Will's thesis, I hoped this would provide them with an example of such a process. One person told me they found it valuable to use this way of thinking for themselves, and someone else said they were more inclined to read the actual thesis now, so I think there is some value in this article, and the issue might be more about the way I'm framing it. If you have an idea for a framing you would have found more useful, I'd be happy to hear it. Do you think just adding a sentence or two at the start of the article might do?

Comment author: MichaelPlant 17 May 2018 03:39:34PM 2 points [-]

Yeah, I think it would be good to put the research in context - true for posts here as for other pieces of work - so readers know what sort of hat they should be wearing and whether this is relevant for them.

Comment author: MichaelPlant 16 May 2018 02:53:44PM 3 points [-]

Hello Ronja. I think it would be helpful to know if you think this is different from other approaches to moral uncertainty and, if so, which ones. I don't know if you take yourself to be doing something novel or providing an example using an existing theory.

Comment author: MichaelPlant 13 May 2018 11:05:26PM 13 points [-]

I appreciate the write up and think founding charities could be a really effective thing to do.

I do wonder if this might be an overly rosy picture, for a couple of reasons.

  1. Are there any stories of EAs failing to start charities? If there aren't, that would be a bit strange and I'd want to know why there were no failures. If there are, what happened and why didn't they work? I'm a bit worried about a survivorship effect making it falsely look like starting charities is easy. (On a somewhat related note, your post may prompt me to finally write up something about my own unsuccessful attempt to start a start-up.)

  2. Some of the charities you mention are offshoots/sister charities of each other - GWWC and 80k, Charity Science Health and Fortify Health. This suggests to me it might be easier to found a second charity than a first one. OPP and GiveWell also fit this mold.

  3. Including AMF is, in some sense, a bit odd, because it wasn't (I gather) founded with the intention of being the most effective charity. I say it's odd because, if it hadn't existed, the EA world would have found another charity that it deemed to be the most effective. Unless AMF thought they would be the most effective, they sort of 'got lucky' in that regard.

Comment author: ThomasSittler 06 May 2018 05:20:21PM *  7 points [-]

Thanks for the post. I'm sceptical of lock-in (or, more Homerically, tie-yourself-to-the-mast) strategies. It seems strange to override what your future self wants to do, if you expect your future self to be in an equally good epistemic position. If anything, future you is better informed and wiser...

I know you said your post just aims to provide ideas and tools for how you can avoid value drift if you want to do so. But even so, in the spirit of compromise between your time-slices, solutions that destroy less option value are preferable.

Comment author: MichaelPlant 06 May 2018 08:43:29PM 3 points [-]

It seems strange to override what your future self wants to do,

I think you're just denying the possibility of value drift here. If you think it exists, then commitment strategies could make sense. If you don't, they won't.

Comment author: Gregory_Lewis 05 May 2018 01:06:42AM *  7 points [-]

It's very easy for any of us to call "EA" as we see it and naturally make claims about the preferences of the community. But this would be very clearly circular. I'd be tempted to defer to the EA Survey. AI was the top cause for only 16% of EA Survey respondents. Even among those employed full-time in a non-profit (maybe a proxy for full-time EAs), it was the top priority of 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare.

As noted in the fb discussion, it seems unlikely full-time non-profit employment is a good proxy for 'full-time EAs' (i.e. those working full time at an EA organisation - E2Gers would be one of a few groups who should also be considered 'full-time EAs' in the broader sense of the term).

For this group, one could stipulate that every group which posts updates to the EA newsletter counts as an EA org (I looked at the last half-dozen or so issues, so any group which didn't have an update is excluded, but likely minor). Totting up a headcount of staff (I didn't correct for FTE, and excluded advisors/founders/volunteers/freelancers/interns - all of these decisions could be challenged) and recording the prevailing focus of each org gives something like this:

  • 80000 hours (7 people) - Far future
  • ACE (17 people) - Animals
  • CEA (15 people) - Far future
  • CSER (11 people) - Far future
  • CFI (10 people) - Far future (I only included their researchers)
  • FHI (17 people) - Far future
  • FRI (5 people) - Far future
  • GiveWell (20 people) - Global poverty
  • Open Phil (21 people) - Far future (mostly)
  • SI (3 people) - Animals
  • CFAR (11 people) - Far future
  • Rethink Charity (11 people) - Global poverty
  • WASR (3 people) - Animals
  • REG (4 people) - Far future [Edited after Jonas Vollmer kindly corrected me]
  • FLI (6 people) - Far future
  • MIRI (17 people) - Far future
  • TYLCS (11 people) - Global poverty

Totting this up, I get roughly two-thirds of these people working at orgs which focus on the far future (66%), 22% at global poverty orgs, and 12% at animal orgs. Although it is hard to work out the AI | far future proportion, I'm pretty sure it is the majority, so 45% AI wouldn't be wildly off-kilter if we thought the EA handbook should represent the balance of 'full-time' attention.
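
For the arithmetic, here is a minimal sketch of the tally in Python, using only the headcounts and cause labels from the list above (Open Phil is counted wholly as far future, and percentages are rounded):

```python
# Headcounts and prevailing cause focus for each org, copied from the list above.
orgs = {
    "80000 Hours": (7, "Far future"),
    "ACE": (17, "Animals"),
    "CEA": (15, "Far future"),
    "CSER": (11, "Far future"),
    "CFI": (10, "Far future"),
    "FHI": (17, "Far future"),
    "FRI": (5, "Far future"),
    "GiveWell": (20, "Global poverty"),
    "Open Phil": (21, "Far future"),
    "SI": (3, "Animals"),
    "CFAR": (11, "Far future"),
    "Rethink Charity": (11, "Global poverty"),
    "WASR": (3, "Animals"),
    "REG": (4, "Far future"),
    "FLI": (6, "Far future"),
    "MIRI": (17, "Far future"),
    "TYLCS": (11, "Global poverty"),
}

# Tally headcounts by cause and report each cause's share of all staff.
totals = {}
for headcount, cause in orgs.values():
    totals[cause] = totals.get(cause, 0) + headcount

grand_total = sum(totals.values())  # 189 staff across the 17 orgs
for cause, n in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {n} people ({100 * n / grand_total:.0f}%)")

# Far future: 124 people (66%)
# Global poverty: 42 people (22%)
# Animals: 23 people (12%)
```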

I doubt this should be the relevant metric for how to divvy up space in the EA handbook. It also seems unclear how considerations of representation should play into selecting content or, if they should, which community is the key one to proportionately represent.

Yet I think I'd be surprised if it wasn't the case that among those working 'in' EA, the majority work on the far future, and a plurality work on AI. It also agrees with my impression that the most involved in the EA community strongly skew towards the far future cause area in general and AI in particular. I think they do so, bluntly, because these people have better access to the balance of reason, which in fact favours these being the most important things to work on.

Comment author: MichaelPlant 05 May 2018 01:01:23PM *  6 points [-]

'full-time EAs' (i.e. those working full time at an EA organisation - E2Gers would be one of a few groups who should also be considered 'full-time EAs' in the broader sense of the term).

I think this methodology is pretty suspicious. There are more ways to be a full-time EA (FTEA) than working at an EA org, or even E2Ging. Suppose someone spends their time working on, say, poverty out of a desire to do the most good, and thus works at a development NGO or for a government. Neither development NGOs nor governments will count as an 'EA org' on your definition because they won't be posting updates to the EA newsletter. Why would they? The EA community has very little comparative advantage in solving poverty, so what would be the point in, say, Oxfam or DFID sending update reports to the EA newsletter? It would frankly be bizarre for a government department to update the EA community. We might say "ah, but people who work on poverty aren't really EAs", but that would just beg the question.

Comment author: MichaelPlant 05 May 2018 11:38:31AM 3 points [-]

I downvoted this post because I found it unhelpful for someone to create half a dozen quite short posts on essentially the same topic. I'd encourage you to combine these into fewer posts, or possibly just a single post linking to a Google Doc which contains the information.

Comment author: Khorton 02 May 2018 01:14:31PM *  7 points [-]

I'm impressed and pleased that you gave credit to Linda for the original idea. Well done for perpetuating good social norms.

Comment author: MichaelPlant 02 May 2018 01:29:13PM 4 points [-]

I'm pleased and impressed you thanked someone for perpetuating good social norms, which I think helps perpetuate those norms, and have therefore upvoted your comment (#meta).

Comment author: Evan_Gaensbauer 01 May 2018 11:37:26PM 1 point [-]

I think you mean "unconvinced"?

Comment author: MichaelPlant 02 May 2018 01:27:11PM 0 points [-]

Thanks. Edited.

Comment author: MichaelPlant 01 May 2018 09:50:02PM *  4 points [-]

I think this idea is interesting, but I'm unconvinced by the form you've chosen. As I understand it, it seems to involve quite a lot of vetting and EA time before projects reach the stage where they can ask people for funding. What's your objection to having an EA equivalent of GoFundMe/Kickstarter where people can just upload their projects and then ask for funding? I imagine this could also work on the system that projects are time-limited, and if they don't receive the funding they seek, all the money gets returned to potential donors.

Comment author: Halstead 24 April 2018 06:46:46PM 12 points [-]

This comment comes across as a tad cult-y.

Comment author: MichaelPlant 24 April 2018 07:28:02PM 6 points [-]

I did think that while writing it, and it worried me too. Despite that, the thought doesn't strike me as totally stupid. If we think it's reasonable to talk about commitment devices in general, this seems like one we ought to talk about in particular in one's choice of partner. If you want to do X, finding someone who supports you towards your goal of achieving X seems rather helpful, whereas finding a partner who will discourage you from achieving X seems unhelpful. Nevertheless, I accept that one of the obvious warning signs of being in a cult is the cult leaders telling you to date only people inside the cult lest you get 'corrupted'...
