Comment author: MichaelPlant 06 June 2018 05:47:11PM 2 points

I sympathise with Gregory (Lewis)'s point about it not being an attractive role for an EA. It might work better if billed as a short-duration role, possibly for someone who wants to develop operational experience to then use in another EA org.

Comment author: Arepo 07 June 2018 12:20:09PM 2 points

Is there any particular reason why the role needs to be filled by an EA? I think we as a community are too focused on hiring internally in general, and in this case almost no engagement with the ideas of EA seems necessary - they just need to be good at running a hotel (and ok with working around a bunch of oddballs).

Comment author: Arepo 04 June 2018 10:15:51PM 7 points

Hey Greg, this is a super interesting project - I really hope it takes off. Some thoughts on your essay:

1) Re the hotel name, I feel like this decision should primarily be made with the possibility of paying non-EAs in mind. EAs will - I hope - hear of the project by reputation rather than name, so the other guests are the ones you're most likely to need to make a strong first impression on. 'Effective Altruism Hotel' definitely seems poor in that regard - 'Athena' seems ok (though maybe there are some benefits to renaming for the sake of renaming if the hotel was failing when you bought it)

2) > Another idea for empty rooms is offering outsiders the chance to purchase a kind of “catastrophic risk insurance”; paying, say, £1/day to reserve the right to live at the hotel in the event of a global (or regional) catastrophe.

This seems dubious to me (it's the only point of your essay I particularly disagreed with). It's a fairly small revenue stream for you, but it means you're attracting people who're that little bit more willing to spend on their own self-interest (ie that little bit less altruistic), and it penalises people who just hadn't heard of the project. Meanwhile, in the event of an actual catastrophe, what practical effect would it have? Would you turn away people who had shown up early when the sponsors arrived for their room?

If you want an explicit policy on using it as a GCR shelter, it seems like 'first come first served' would be at least as meritocratic, require less bureaucracy and offer a much more enforceable Schelling point.

3) As you say, I think this will be more appealing the more people it has involved from the beginning, so I would suggest aggressively marketing the idea in all EA circles which seem vaguely relevant, subject to the agreement of the relevant moderators - not that high a proportion of EAs read this forum, and of those who do, not that many will see this post. It's a really cool idea that I hope people will talk about, but again they'll do so a lot more if it's already seen as a success.

4) You describe it in the link, but maybe worth describing the Trustee role where you first mention it - or at least linking to it at that point.

Comment author: Peter_Hurford 09 March 2018 01:34:24AM 1 point

I have upvoted Joey's comment to indicate agreement, and I have downvoted the original post for the same reason.

Right now there's a collective action problem and we're very lucky that so few organizations post job ads here, despite there being a clear incentive for more organizations to try to hire via the EA Forum. Low-effort job ads clutter the EA Forum and bury great posts that some of us have spent dozens of hours writing.

Comment author: Arepo 16 March 2018 02:54:35PM 2 points

K. I'll consider my wrist duly slapped!

Comment author: Arepo 12 March 2018 01:35:31AM 2 points

Great stuff! A few quibbles:

  • It feels odd to specify an exact year EA (or any movement) was 'founded'. GiveWell (surprisingly not mentioned other than a logo on slide 6) have been around since 2007; MIRI since 2000; FHI since 2005; Giving What We Can since 2009. Some or all of these (eg GWWC) didn't exactly have a clear founding date, though, rather becoming more like their modern organisations over the years. One might not consider some of them more strictly 'EA orgs' than others - but that's kind of the point.

  • I'd be wary of including 'moral offsetting' as an EA idea. It's fairly controversial, and sounds like the sort of thing that could turn people off the other ideas

  • Agree with others that the heavy use of the word 'utilitarianism' seems unnecessary and not strictly accurate (any moral view that includes an idea of aggregation is probably sufficient, which is arguably all of them to some degree).

  • Slide 12 talks about suffering exclusively; without getting into whether happiness can counterweigh it, it seems like it could mention positive experiences as well

  • I'd be wary of criticising intuitive morality for not updating on moral uncertainty. The latter seems like a fringe idea that's received a lot of publicity in the EA community, but that's far from universally accepted even by eg utilitarians and EAs

  • On slide 18 it seems odd to have an 'other' category on the right, but omit it on the left, which instead has a tiny 'clothing' category. Presumably animals are used and killed in other contexts than those four, so why not just replace clothing with 'other' - which I think would make the graph clearer

  • I also find the colours on the same graph a bit too similar - my brain keeps telling me that 'farm' is the second biggest categorical recipient when I glance at it, for example

  • I haven't read the Marino paper and now want to, 'cause it looks like it might update me against this, but provisionally: it still seems quite defensible to believe that chickens experience substantially less total valence per individual than larger animals, esp mammals, even if it's becoming rapidly less defensible to believe that they don't experience something qualitatively similar to our own phenomenal experiences. [ETA] Having now skim-read it, I didn't update much on the quantitative issue (though it seems fairly clear chickens have some phenomenal experience, or at least there's no defensible reason to assume they don't)

  • Slide 20 'human' should be pluralised

  • Slide 22's 'important' and 'unimportant' seem like loaded terms. I would replace them with something more factual, like (ideally phrased much less clunkily than) 'causes a large magnitude of suffering' and 'causes a comparatively small magnitude of suffering'

  • I don't understand the phrase 'aestivatable future light-cone'. What's aestivation got to do with the scale of the future? (I know there are proposals to shepherd matter and energy to the later stages of the universe for more efficient computing, but that seems way beyond the scope of this presentation, and presumably not what you're getting at)

  • I would change 'the species would survive' on slide 25 to 'would probably survive', and maybe caveat it further, since the relevant question for expected utility is whether we could reach interstellar technology after being set back by a global catastrophe, not whether it would immediately kill us (cf eg https://www.openphilanthropy.org/blog/long-term-significance-reducing-global-catastrophic-risks) - similarly I'd be less emphatic on slide 27 about the comparative magnitude of climate change vs the other events as an 'X-risk', esp where X-risk is defined as it is here: https://nickbostrom.com/existential/risks.html

  • Where did the 10^35 number for future sentient lives on slide 26 come from? These numbers seem to vary wildly among futurists, but that one actually seems quite small to me. Bostrom estimates 10^38 lives lost for just a century's delayed colonization. Getting more wildly speculative, Isaac Arthur, my favourite futurist, estimates that a galaxy of Matrioshka brains could emulate 10^44 minds - it's slightly unclear, but I think he means running them at normal human subjective speed, which would give them about 10^12 times the length of a human life between now and the end of the stelliferous era. The number of galaxies in the Laniakea supercluster is approx 10^5, so that would be about 10^61 lives in total (rough arithmetic spelled out at the end of this comment), which we can shade by a few orders of magnitude to account for inefficiencies etc and still end up with a vastly higher number than yours. And if Arthur's claims about farming Hawking radiation and gravitational energy in the post-stellar eras are remotely plausible, then the number of sentient beings in the Black Hole era would dwarf that number again! (ok, this maybe turned into an excuse to talk about my favourite v/podcast)

  • Re slide 29, I think EA has long stopped being 'mostly moral philosophers & computer scientists', if it ever strictly was, although they're obviously (very) overrepresented. To what end do you note this, though? It maybe makes more sense in the talk, but in the context of the slide it's not clear whether it's a boast about a great status quo or a call to arms about the need for change

  • I would say EA needs more money and talent - there are still tonnes of underfunded projects!
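
To spell out the rough arithmetic behind the 10^61 figure above (treating Arthur's numbers as speculative assumptions rather than established estimates):

10^44 minds per galaxy × 10^12 sequential human lifetimes before the end of the stelliferous era × 10^5 galaxies in Laniakea ≈ 10^61 lives

which can then be shaded down a few orders of magnitude for inefficiencies and still come out far above 10^35.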

Comment author: Joey 08 March 2018 06:52:35PM 13 points

Personally I am not really a fan of job postings being put on this forum. Between all the different EA organizations it would be pretty easy for every second post to be a job ad, and I think that would weaken the forum content for most users. The "Effective Altruism Job Postings" group does a pretty good job at consolidating all jobs that are EA relevant in a central space without cluttering up a space like this.

Comment author: Arepo 09 March 2018 12:34:36AM 0 points

I'm agnostic on the issue. FB groups have their own drawbacks, but I appreciate the clutter concern. In the interest of balance, perhaps anyone who agrees with you can upvote your comment, anyone who disagrees can upvote this comment (and hopefully people won't upvote them for any other reason) and if there's a decent discrepancy we can consider the question answered?

Comment author: Arepo 08 March 2018 12:10:13AM 4 points

Seconding Evan - it's great to have this laid out as a clear argument.

Re this:

> In this way, any kind of broad based outreach is risky because it’s hard to reverse. Once your message is out there, it tends to stick around for years, so if you get the message wrong, you’ve harmed years of future efforts. We call this the risk of “lock in”.

I think there are some ways that this could still pan out as net positive, in reverse order of importance:

1) It relies on the arguments against E2G as a premium EA cause, which I'm still sceptical of given numerous very large funding gaps in EA causes and orgs. Admittedly, in the case of China (and other semi-developed countries) the case against E2G seems stronger, since the potential earnings are substantially lower, and the potential for direct work might be as strong or stronger.

2) Depending on how you discount over time, and (relatedly) how seriously you take the haste consideration, getting a bunch of people involved sooner might be worth slower takeup later.

3) You mentioned somewhere in the discussion that you've rarely known anyone to be more amenable to EA because they'd previously encountered the ideas, but this seems like underestimating the nudge effects on which 99% of marketing is based. Almost no-one ever consciously thinks 'given that advert, I'm going to buy that product' - but when you see the product on the shelf, it just feels marginally more trustworthy because you already 'know' it. It seems like mass-media EA outreach could function similarly. If so, lock-in might be a price worth paying.


This isn't to say that I think your argument is wrong, just that I don't yet think it's clear-cut.

It also seems like the risk/reward ratio might vary substantially from country to country, so it's perhaps worth thinking about each major economy separately, at least?

To the degree that the argument does vary from country to country, I wonder whether there's any mileage in running some experiments with outreach in less economically significant countries, esp ones with historically similar cultures? Eg perhaps for China, it would be worth trialling a comparatively short-termist strategy in Taiwan.

Comment author: Ben_Todd 07 March 2018 10:40:45PM 0 points

Another consideration I'm not sure of is that a mainly English-speaking community will be easier to coordinate than one of the same size split across many languages and cultures, so this might be a reason to focus initially on one language (to the extent that efforts across different languages funge with each other).

Comment author: Arepo 07 March 2018 11:34:39PM 2 points

This seems no more of a concern to me (if anything, less of one) than the countervailing consideration that having a diversity of languages and cultures would help the movement avoid becoming tribalised.

Also, re the idea of coordination, cf my comment above about 'thought leaders'. I know it's something Will's been pushing for, but I'm concerned about the overconcentration of influence in eg EA Funds (although that's a slightly different issue from an overemphasis on the ideas of certain people)

Comment author: DavidMoss 05 March 2018 10:04:02PM 4 points

> There is a strong bias in favour of growth of various kinds in EA.

This seemed more the case a couple of years ago. I think the pendulum has swung pretty hard in the other direction among EA thought leaders.

Comment author: Arepo 07 March 2018 06:03:21PM 7 points

Somewhat tangentially, am I unusual in finding the idea of 'thought leaders' for a movement about careful and conscientious consideration of ideas profoundly uncomfortable?

-3

Founders Pledge is seeking a Community Manager

Job description   ROLE SUMMARY You will be the face of the community who engages, activates and coordinates our top priority members by building meaningful relationships and making sure they are getting the most out of their experience. This will involve an in depth understanding of our pledgers, a gregarious...
Comment author: Henry_Stanley 09 February 2018 12:18:43AM 5 points

Can confirm that the funds are held as cash, not invested.

Comment author: Arepo 11 February 2018 09:38:21PM 4 points

Huh, that seems like a missed opportunity. I know very little about investing, but aren't there short-term investments with modest returns that would have a one-off setup cost for the fund, such that all future money could go into them fairly easily?
