Comment author: Liam_Donovan 10 June 2018 06:13:10AM * 1 point

Following on from vollmer's point, it might be reasonable to have a blanket rule against policy/PR/political/etc. work -- anything that is irreversible and difficult to evaluate. "Not being able to get funding from other sources" is definitely a negative signal, so it seems worthwhile to restrict guests to projects whose worst possible outcome is unproductively diverting resources.

On the other hand, I really can't imagine what harm research projects could do; I guess the worst-case scenario is someone so persuasive they can convince lots of EAs of their ideas, but so bad at research that their ideas are all wrong, which doesn't seem very likely. (Why not "malicious and persuasive people"? The community can probably identify those more easily by the subjects they write about.)

Furthermore, guests' ability to engage in negative-EV projects will be constrained by the low stipend and terrible location (if I wanted to engage in Irish republican activism, living at the EA hotel wouldn't help very much). I think the largest danger to be alert for is reputational risk, especially from bad popularizations of EA, since this is easier to do remotely (one example is Intentional Insights, the only negative-EV EA project I know of).

Comment author: vollmer 10 June 2018 04:26:08PM 1 point

I agree research projects are more robustly positive. Information hazards are one main way in which they could do a significant amount of harm.

Comment author: Greg_Colbourn 07 June 2018 05:48:58PM * 1 point

If they're students, they'll most likely be studying at a university outside Blackpool and might not be able to do so remotely.

Regarding studying, it would mainly be suitable for those doing so independently online (it’s possible to take many world-class courses on EdX and Coursera for free). But it could also be of use to university students outside of term time (say, to do extra classes online, or an independent research project, over the summer).

they'll very likely be able to get funding from an EA donor

As John Maxwell says, I don’t think we are there yet with current seed funding options.

the hotel would mainly support work that the EA community as a whole would view as lower-quality

This might indeed be so, but given the much lower costs, it’s possible that the quality-adjusted-work-per-£-spent rate could still be equal to, or higher than, the community average.

... without the leadership and presence of highly experienced EAs (who work there as e.g. hotel managers / trustees).

I think it’s important to have experienced EAs in these positions for this reason.

Regarding “bad” EA projects, only one comes to mind, and it doesn’t seem to have caused much lasting damage. In the OP, I say that the “dynamics of status and prestige in the non-profit world seem to be geared toward being averse to risk-of-failure to a much greater extent than in the for-profit world (see e.g. the high rate of failure for VC-funded start-ups). Perhaps we need to close this gap, considering that the bottom-line results of EA activity are often considered in terms of expected utility.” Are PR concerns a solid justification for this discrepancy between EA and VC? Or do Spencer Greenberg’s concerns about start-ups mean that EA is right in this regard and it’s VC that is wrong (even in terms of their approach to maximising monetary value)?

the enthusiasm for this project may be partly driven by social reasons

There’s nothing wrong with this, as long as people participating at the hotel for largely social reasons pay their own way (and don’t disrupt others’ work).

Comment author: vollmer 08 June 2018 04:30:16PM * 4 points

Regarding “bad” EA projects, only one comes to mind, and it doesn’t seem to have caused much lasting damage. In the OP, I say that the “dynamics of status and prestige in the non-profit world seem to be geared toward being averse to risk-of-failure to a much greater extent than in the for-profit world (see e.g. the high rate of failure for VC-funded start-ups). Perhaps we need to close this gap, considering that the bottom-line results of EA activity are often considered in terms of expected utility.” Are PR concerns a solid justification for this discrepancy between EA and VC? Or do Spencer Greenberg’s concerns about start-ups mean that EA is right in this regard and it’s VC that is wrong (even in terms of their approach to maximising monetary value)?

Just wanted to flag that I disagree with this for a number of reasons. E.g. I think some of EAF's sub-projects probably had a negative impact, and I'm skeptical that these plus InIn were the only ones. I might write an EA Forum post about how EA projects can have negative impacts at some point, but it's not my current priority. See also this Facebook comment for some of the ideas.

Regarding your last point, VCs are maximizing their own profit, but Spencer talks about externalities.

Comment author: John_Maxwell_IV 07 June 2018 09:19:36AM 7 points

If they're launching a great new project, they'll very likely be able to get funding from an EA donor

EA Grants rejected 95% of the applications they got.

Comment author: vollmer 08 June 2018 04:23:41PM * 2 points

Sure, but an EA hotel seems like a weird way to address this inefficiency: only a few people with worthwhile projects can move to Blackpool to benefit from it, the funding is not flexible, it's hard to target this well, the project has some time lag, etc. The most reasonable approach to fixing this is simply to give more money to some of the projects that didn't get funded.

Maybe CEA will accept 20-30% of EA Grants applications in the next round, or other donors will jump in to fill the gaps. (I'd expect that a lot of the grant applications, maybe half, might have been submitted by people not really familiar with EA, and that some of the others weren't worth funding.)

Comment author: MichaelPlant 06 June 2018 05:49:40PM 1 point

Furthermore, people have repeatedly brought up the argument that the first "bad" EA project in each area can do more harm than an additional "good" EA project, especially if you consider tail risks, and I think this is more likely to be true than not. E.g. the first political protest for AI regulation might in expectation do more harm than a thoughtful AI policy project could prevent. This provides a reason for EAs to be risk-averse. (Specifically, I tentatively disagree with your claims that "we’re probably at the point where there are more false negatives than false positives, so more chances can be taken on people at the low end", and that we should invest "a small amount".) Related: Spencer Greenberg's idea that plenty of startups cause harm.

I thought this was pretty vague and abstract. You should say why you expect this particular project to suck!

It seems plausible that most EAs who do valuable work won't be able to benefit from this. If they're students, they'll most likely be studying at a university outside Blackpool and might not be able to do so remotely.

I also wonder what the target market is. EAs doing remote work? EAs who need really cheap accommodation for a certain time?

Comment author: vollmer 06 June 2018 09:16:44PM 0 points

I thought this was pretty vague and abstract. You should say why you expect this particular project to suck!

I wasn't making a point about this particular project, but about all the projects it would help.

Comment author: vollmer 06 June 2018 04:56:20PM * 9 points

First, big kudos for your strong commitment to put your personal funding into this, and for the guts and drive to actually make it happen!

That said, my overall feelings about the project are mixed, mainly for the following reasons (which you also partly discuss in your post):

It seems plausible that most EAs who do valuable work won't be able to benefit from this. If they're students, they'll most likely be studying at a university outside Blackpool and might not be able to do so remotely. If they're launching a great new project, they'll very likely be able to get funding from an EA donor, and there will be major benefits from being in a big city or existing hub such as Oxford, London, or the Bay (so donors should be enthusiastic about covering the living costs of these places). While it's really impressive how low the rent at the hotel will be, rent is rarely a major reason for a project's funding constraints (at least outside the SF Bay Area).

Instead, the hotel could become a hub for everyone who doesn't study at a university or work on a project that EA donors find worth funding, i.e. the hotel would mainly support work that the EA community as a whole would view as lower-quality. I'm not saying I'm confident this will happen, but I think the chance is non-trivial without the leadership and presence of highly experienced EAs (who work there as e.g. hotel managers / trustees).

Furthermore, people have repeatedly brought up the argument that the first "bad" EA project in each area can do more harm than an additional "good" EA project, especially if you consider tail risks, and I think this is more likely to be true than not. E.g. the first political protest for AI regulation might in expectation do more harm than a thoughtful AI policy project could prevent. This provides a reason for EAs to be risk-averse. (Specifically, I tentatively disagree with your claims that "we’re probably at the point where there are more false negatives than false positives, so more chances can be taken on people at the low end", and that we should invest "a small amount".) Related: Spencer Greenberg's idea that plenty of startups cause harm.

The fact that this post got way more upvotes than posts about other projects that are similarly exciting in my view (such as Charity Entrepreneurship) also makes me think that the enthusiasm for this project may be partly driven by social reasons (it feels great to have a community hotel hub with like-minded people) as opposed to people's impact assessments. But maybe there's something I'm overlooking, e.g. maybe this post was just shared much more on social media.

What happens if you concentrate a group of EAs who wouldn't get much funding from the broader community in one place and help them work together? I don't know. It could be very positive or very negative. Or it might not lead to much at all. Overall, I think it may not be worth the downside risks.

Comment author: vollmer 06 June 2018 04:19:44PM 4 points

This is amazing, congrats on making this happen!

Comment author: james_aung 25 April 2018 03:50:07PM 5 points

Hey! Thanks for the comment.

I think it captures a few different notions. I'll try to spell out a few salient ones:

1) Pushes back against the idea that an outreach talk needs to cover all aspects of EA. E.g. I think some 45-minute intro EA talks end up being really unsatisfactory, as they only have time to skim lightly across loads of different concepts and cause areas. Instead, I think it could be OK, and even better, to do outreach talks that don't introduce all of EA but do demonstrate a cool and interesting facet of EA epistemology. E.g. I could imagine a talk on differential vs. absolute technological progress as a way to attract new people.

2) Pushes back against running introductory discussion groups. Sometimes it feels like you need to guide someone through the basics, but I've found that often you can just lend people books or send them articles and they'll be able to pick up the same stuff without it taking up your time.

3) Reframes particular community niches, such as a technical AI safety paper reading group, as also being potential entry points into the broader community. E.g. people find out about the AI group because they study computer science and find it interesting, and then get introduced to EA.

Comment author: vollmer 03 May 2018 01:44:10PM 2 points

I'm still confused: Intuitively, I would understand "Don't introduce EA" as "Don't do introductory EA talks". The "don't teach" bit also confuses me.

My personal best guess is that EA groups should do regular EA intro talks (maybe 1-2 per year), and should make people curious by touching on some of the core concepts to motivate the audience to read up on these EA ideas on their own. In particular, presenting arguments where relatively uncontroversial assumptions lead to surprising and interesting conclusions ("showing how deep the rabbit hole goes") often seems to spark such curiosity. My current best guess is that we should aim to "teach" such ideas in "introductory" EA talks, so I'd be interested in whether you disagree with this.

Comment author: vollmer 21 April 2018 02:27:57PM * 0 points

Thanks for writing this! EAF is considering launching a "REG for crypto" by the next giving season; this post might help us get started.

Comment author: vollmer 10 March 2018 04:08:46PM 0 points

Have you considered using a service that allows for anonymous conversations between you and the other person? This would enable you to respond to and discuss anonymous submissions. (I'm not sure this is needed; it's just a suggestion.)

Comment author: Evan_Gaensbauer 06 March 2018 04:56:55PM 2 points

I've tried to initiate projects to translate EA content into non-English languages in the past. I was looking for EAs who were (close to) fluent in a language and local to where outreach would take place. This was a couple of years ago, so the local EA communities outside the English-speaking world were new and small, and didn't have enough people to start up their own translation projects. Given the arguments in Ben's post, I don't think much was necessarily lost in not having capitalized on the opportunity to translate EA content into other languages.

The most successful case of translating EA content, and moreover of generating brand-new EA content, outside of English is in Germany. This was started by EAs who were native German speakers, through the work of their EA Foundation (EAF). Depending on how much one thinks their circumstances could generalize, it might be best for the movement to work with local groups that develop successfully over a few years to generate new content in other languages. This content could be tailored in its messaging to the local culture.

Comment author: vollmer 10 March 2018 03:51:02PM * 7 points

Based on EAF's experience in Germany and Switzerland, I strongly agree with Ben's main points in the post. In the early days we made several mistakes that could have been prevented fairly easily. In particular, it seems hard to correct the perception that EA is just about donating (to GiveWell top charities). It also remains very difficult to counter the impression that EA is mainly the practical implementation of Singer's views; e.g. Singer's views on infanticide get quoted in many media articles about EA.

Some of the challenges that might have led to this:

  • DGB and Singer's EA book were translated into German, but much of the more advanced content is only available in English.
  • Quickly translating English content is easy. However, it takes much more time to ensure high quality both in terms of language and framings/nuance, and it's even more challenging to keep these translations up to date. See the "fidelity model" blog post referenced above for more discussion of this.
  • The media frequently interview members of the community. Community members are more or less up to date with recent EA publications and can explain EA well, but the media very proactively ask about charitable donations and related issues. It takes a lot of active effort and interview experience to counter this pigeonholing, which is hard to do without much practice. I personally find it pretty hard to give good guidance on this.

So, in conclusion, I think the expansion to Germany, Switzerland, and Austria could have gone much better still, and while I agree it could be deemed the most successful case of translation of EA content, I think it was worse than what we should be aiming for.
