Comment author: ColinBested 25 July 2018 04:35:40AM 1 point [-]

Thanks for this Holly.

I plan to share your article and talk about it in an upcoming workshop I am doing on Self Care for the Altruistic (which will mostly consist of using 80k's strategies, in addition to taking suggestions from participants and other EA folks).

I resonated with most of what you wrote about. You are definitely not alone in having those feelings. (I have also been thinking and writing about sustaining altruistic motivation, and I found your article to be a helpful addition).

Another feeling that I come across might be an extension of 'social approval and connection' along with 'other values'. I'll call it Representing EA (or EA brand/perception management). Especially as a community builder, a mixed feeling comes up in moments where I think that I am helping shape someone's early perception of EA, yet I know that the ideas and practical realities are more complex, challenging, and individualised than I can easily portray in words - especially when I am, at the same time, trying to honestly mention the appealing parts of EA to someone who seems to be really interested in EA ideas and values. A feeling kind of like: I cannot do a good job of explaining this to you until you see more of it for yourself, yet I still feel the need to do a good job of explaining this to you, and I hope I don't give you the wrong ideas even though the decisions are ultimately up to you.

Comment author: hollymorgan 10 September 2018 08:20:21PM 0 points [-]

Just seen this!

Cool idea with the workshop :-) I think Julia Wise is putting together a template Self Care event for EA group organisers, if there are any takeaways you'd like to share with her.

I think I experience something similar to what you do with Representing EA. I have always hated doing anything that resembles a pitch, and unfortunately I think people can usually see my heart sink when they say, "So tell me more about effective altruism." Actually, the last time I was asked to do something like this, my list of caveats, clarifications and nuances was getting so long that I consciously skipped one, and when this later became apparent I got cut off with " - because you were trying to manipulate me?" In some ways, I think online is higher fidelity than offline. Conversation doesn't have footnotes ;-)

Comment author: Farhan 05 August 2018 01:46:15PM 3 points [-]

Thank you so much Holly. It is such a relief to know there are people who care. Hearing you say you've contacted some organisations gives me so much hope. Please don't hesitate to contact me if you need more details or footage of the crisis. I'm so eager to start a brainstorming session here, because I'm at a loss for ideas on how to act as productively as possible. We're emailing international news outlets, but I'm unsure how effective that will prove to be. Violence is continuing here and there do not seem to be any signs of it subsiding. I'm dreading that Dhaka may be becoming a war zone. As far as solutions go, I can't think much further than international intervention. I hope more people see this post so that there is more to discuss. I believe this place is brilliant for constructive thinking.

Best regards.

Comment author: hollymorgan 05 August 2018 11:30:35PM *  1 point [-]

Emailing international news outlets sounds like a good start to me, but I expect you know a lot more than I do about the most useful action to take here. For all the strengths of this community, rapid response to crises is generally not one of them, and while I hope that some of the other readers are able to offer useful expertise/networks, I want to set realistic expectations of how we might be able to help, and I encourage you to continue reaching out to other communities and organisations (e.g. perhaps Amnesty International... I would be very surprised if they haven't caught it on the news by now, but perhaps it helps them to be in contact with more locals who are keen to work with international support?). I'm struggling to think of the right words here, but basically I just want to say: ❤️❤️❤️❤️❤️❤️❤️❤️❤️❤️❤️❤️❤️❤️❤️

Comment author: hollymorgan 05 August 2018 11:10:54AM 9 points [-]

Farhan, I'm so so sorry this is happening, this is horrific. I've reached out to a couple of organisations who may be in a position to help and will let you know if that's the case. But I expect many other people here will have better ideas and I hope they can offer useful advice. Much love to you for trying to mobilise more support on this (and for reining in what must be a heartbreaking urge to just scrawl "THIS IS AWFUL AARGHH HELP??!!" in order to put together such an informative post).


Why are you here? An origin stories thread.

tl;dr I think origin stories are useful. Please share yours if you like. Here's mine. Introduction: I generally find "origin stories" - personal accounts of how people first become involved with EA - to be quite illuminating, and I think that the bulk of their value comes from their...
Comment author: Evan_Gaensbauer 04 August 2018 02:13:45AM 0 points [-]

I meant the EA Foundation, which I was under the impression received incubation from CEA. Since apparently my vague perception of those events might be wrong, I've switched the example of one of CEA's incubees to ACE.

Comment author: hollymorgan 04 August 2018 03:42:42AM 13 points [-]

That one is accurate.

Also "incubees" is my new favourite word.

Comment author: Evan_Gaensbauer 03 August 2018 10:38:38PM *  8 points [-]

helps them recruit people

Do you mind clarifying what you mean by "recruit people"? I.e., do you mean they recruit people to attend the workshops, or to join the organizational staff?

I have spoken with four former interns/staff who pointed out that Leverage Research (and its affiliated organizations) resembles a cult according to the criteria listed here.

In this comment I laid out the threat to EA as a cohesive community when those within it, like EA's worst detractors and those of adjacent communities, level blanket accusations that an organization is a cult. Also, that comment could only mention a handful of people describing Leverage as cult-like, while admitting they could not recall any specific details. I already explained that such a report does not qualify as a fact, nor even an anecdote, but as hearsay, especially since further details aren't being provided.

I'm disinclined to take seriously more hearsay of a mysterious impression of Leverage as cultish, given the bad faith in which my other interlocutor was acting. Since none of the former interns or staff behind this hearsay of Leverage being like a cult are coming forward to corroborate which features of a cult from the linked Lifehacker article Leverage shares, I'm unconvinced that your report and the others of Leverage being like a cult aren't being taken out of context from the individuals you originally heard them from, nor that this post and the comments aren't a deliberate attempt to do nothing but tarnish Leverage.

Paradigm Academy was incubated by Leverage Research, as many organizations in and around EA are by others (e.g., MIRI incubated CFAR; CEA incubated ACE, etc.). As far as I can tell now, as with those other organizations, Paradigm and Leverage should be viewed as two distinct organizations. So that itself is not a fact about Leverage, which I also went over in this comment.

The EA Summit 2018 website lists LEAN, Charity Science, and Paradigm Academy as "participating organizations," implying they're equally involved. However, Charity Science is merely giving a talk there. In private conversation, at least one potential attendee was told that Charity Science was more heavily involved. (Edit: This issue seems to be fixed now.)

As I stated in that comment as well, there is a double standard at play here. EA Global each year is organized by the CEA. They aren't even the only organization in EA with the letters "EA" in their name, nor are they exclusively considered among EA organizations able to wield the EA brand. And yet despite all this, nobody objects on priors to the CEA as a single organization branding these events each year. Nor should we. Of course, none of this is necessary to invalidate the point you're trying to make. Julia Wise, as the Community Liaison for the CEA, has already clarified that the CEA themselves support the Summit.

So the EA Summit has already been legitimized by multiple EA organizations as a genuine EA event, including the one which is seen as the default legitimate representation for the whole movement.

(low confidence) I've heard through the grapevine that the EA Summit 2018 wasn't coordinated with other EA organizations except for LEAN and Charity Science.

As above, the fact that the EA Summit wasn't coordinated by more than one organization means nothing. There are already EA retreat- and conference-like events organized by local university groups and national foundations all over the world, which have gone well, such as the Czech EA Retreat in 2017. So the idea that EA should be so centralized that only registered non-profits with a given caliber of prestige in the EA movement, or those they approve, can organize events viewed as legitimate by the community is unfounded. Not even the CEA wants that much centralization. Nobody does. So whatever point you're trying to prove about the EA Summit using facts about Leverage Research is still invalid.

For what it's worth, while no other organizations are officially participating, here are some effective altruists who will be speaking at the EA Summit, and the organizations they're associated with. At EAG, this would be sufficient to identify those organizations as welcome and included in spirit, so the same standard should apply to the EA Summit.

  • Ben Pace, Ray Arnold and Oliver Habryka: LessWrong isn't an organization, but it has played a formative role in EA, and with LW's new codebase being the kernel of the next version of the EA Forum, Ben and Oliver, as admins and architects of the new LW, are as important representatives of this online community as any in EA's history.

  • Rob Mather is the ED of the AMF. AMF isn't typically regarded as an "EA organization" because it's not a metacharity that depends directly on the EA movement. But for GiveWell's top-recommended charity since EA began, which continues to receive more donations from effective altruists than any other, to not be given consideration would be senseless.

  • Sarah Spikes runs the Berkeley REACH.

  • Holly Morgan is a staffer for the EA London organization.

In reviewing these speakers, and seeing so many from LEAN and Rethink Charity, with Kerry Vaughan being a director for individual outreach at CEA, I see what the EA Summit is trying to do. They're trying to use speakers at the event to rally local EA group organizers from around the world to more coordinated action and spirited projects. Which is exactly what the organizers of the EA Summit have been saying the whole time. This is also why I was invited to attend the EA Summit: as an organizer for rationality and EA projects in Vancouver, Canada, trying to develop a system for organizing local groups to do direct work that could scale both here and in cities everywhere; and as a very involved volunteer online community organizer in EA. It's also why one of the event organizers consulted with me, before they announced the EA Summit, on how they thought it should be presented to the EA community.

This isn't counterevidence to being skeptical of Leverage. This is evidence against the thesis, advanced in these facts about Leverage Research, that the EA Summit is nothing but a launchpad for Leverage's rebranding within the EA community as "Paradigm Academy." No logical evidence has been presented that the tenuous links between Leverage and the organization of the 2018 EA Summit mean the negative reputation Leverage has acquired over the years should be transferred onto the upcoming Summit.

Comment author: hollymorgan 04 August 2018 01:48:26AM *  11 points [-]

CEA incubated EAF

I don't think this is accurate. (Please excuse the lack of engagement with anything else here; I'm just skimming some of it for now but I did notice this.)

[Edit: Unless you meant EA Funds (rather than Effective Altruism Foundation, as I read it)?]

Comment author: hollymorgan 23 July 2018 06:53:16AM *  37 points [-]

Upvoted because I think it's a good community norm for people to call each other out on things like this.

However, with the rapid upvoting, and human attention span being what it is, I'm a bit worried that for many readers the main takeaway of this post will be something not far from "Nick Beckstead = bad". So in an effort to balance things out a bit in our lizard brains...

Ode to Nick Beckstead

  • I personally can't think of anyone I'd attribute more credit to for directing funding towards AI Safety work (and in the likely case that I'm wrong, I'd still be surprised if Nick wasn't in the top handful of contributors)

  • Nick was an EA before it was cool: a founding trustee of CEA, he helped launch the first Giving What We Can student groups, helped launch The Life You Can Save, etc.

  • Rob Wiblin calls him "one of the smartest people I know" with "exceptional judgement"

  • On top of all the public information, in private I've found him to be impressively even-handed in his thinking and dealings with people, and one of the most emotionally supportive people I've known in EA. [Edit, h/t Michelle_Hutchinson: Disclaimer: I work for an EA community-building organisation that was offered an EA Community Grant last month by CEA.]

Comment author: John_Maxwell_IV 12 July 2018 02:59:12PM *  1 point [-]

Centralised coordination/control is a way to counteract that.

OpenPhil funding OpenAI might be a case of a "central" organization taking unilateral action that's harmful. Elsewhere in this thread, vollmer also mentions that he thinks some of EAF's subprojects probably had negative impact--presumably the EAF is relatively "central".

If we think that "individuals underestimate potential downsides relative to their estimations concerning potential upsides", why do we expect funders to be immune to this problem? There seems to be an assumption that if you have a lot of money, you are unusually good at forecasting potential downsides. I'm not sure. People like Joey and Paul Christiano have offered prizes for the best arguments against their beliefs. I don't believe OpenPhil has ever done this, despite having a lot more money.

In general, funding doesn't do much to address the unilateralist's curse because any single funder can act unilaterally to fund a project that all the other funders think is a bad idea. I once proposed an EA donor's league to address this problem, but people weren't too keen on it for some reason.

it doesn't seem clear to me that "the fact that some EA thinks it's a good idea" is sufficient grounds to attribute positive expected value to a project, given no other information

Here's a thought experiment that might be helpful as a baseline scenario. Imagine you are explaining effective altruism to a stranger in a loud bar. After hearing your explanation, the stranger responds "That's interesting. Funny thing, I gave no thought to EA considerations when choosing my current project. I just picked it because I thought it was cool." Then they explain their project to you, but unfortunately, the bar is too loud for you to hear what they say, so you end up just nodding along pretending to understand. Now assume you have two options: you can tell the stranger to ditch their project, or you can stay silent. For the sake of argument, let's assume that if you tell the stranger to ditch their project, they will ditch it, but they will also get soured on EA and be unreceptive to EA messages in the future. If you stay silent, the stranger will continue their project and remain receptive to EA messages. Which option do you choose?

My answer is, having no information about the stranger's project, I have no particular reason to believe it will be either good or bad for the world. So I model the stranger's project as a small random perturbation on humanity's trajectory, of the sort that happen thousands of times per day. I see the impact of such perturbations as basically neutral on expectation. In the same way the stranger's project could have an unexpected downside, it could also have an unexpected upside. And in the same way that the stranger's actions could have some nasty unforeseen consequence, my action of discouraging the stranger could also have some nasty unforeseen consequence! (Nasty unforeseen consequences of my discouragement action probably won't be as readily observable, but that doesn't mean they won't exist.) So I stay silent, because I gain nothing on expectation by objecting to the project, and I don't want to pay the cost of souring the stranger on EA.

Suppose you agree with my argument above. If so, do you think that we should default to discouraging EAs from doing projects in the absence of further information? Why? It seems a bit counterintuitive/implausible that being part of the EA community would increase the odds that someone's project creates a downside. If anything, it seems like being plugged into the community should increase a person's awareness of how their project might pose a risk. (Consider the EA hotel in comparison to an alternative of having people live cheaply as individuals. Being part of a community of EAs = more peer eyeballs on your project = more external perspectives to spot unexpected downsides.) And in the same way giving strangers default discouragement will sour them on EA, giving EAs default discouragement on doing any kind of project seems like the kind of thing that will suck the life out of the movement.

I don't want to be misinterpreted, so to clarify:

  • I am in favor of people discouraging projects if, after looking at the project, they actually think the project will be harmful.

  • I am in favor of bringing up considerations that suggest a project might be harmful with the people engaged in it, even if you aren't sure about the project's overall impact.

  • I'm in favor of people trying to make the possibility of downsides mentally available, so folks will remember to check for them.

  • I'm in favor of more people doing what Joey does and offering prizes for arguments that their projects are harmful.

  • I'm in favor of people publicly making themselves available to shoot holes in the project ideas of others.

  • I'm in favor of people in the EA community trying to coordinate more effectively, engage in moral trade, cooperate in epistemic prisoner's dilemmas, etc.

  • In general, I think brainstorming potential downsides has high value of information and people should do it more. But a gamble can still be positive expected value without having purchased any information! (Also, in order to avoid bias, maybe you should try to spend an equal amount of time brainstorming unexpected upsides.)

  • I think it may be reasonable to focus on projects which have passed the acid test of trying to think of plausible downsides (since those projects are likely to be higher expected value).

But I don't really see what purpose blanket discouragement serves.

Comment author: hollymorgan 16 July 2018 12:40:02PM *  0 points [-]

The OpenPhil/OpenAI article was a good read, thanks, although I haven't read the comments on either post or Ben's latest thoughts, and I don't really have an opinion either way on the value/harm of OpenPhil funding OpenAI if they did so "to buy a seat on OpenAI’s board for Open Philanthropy Project executive director Holden Karnofsky". But of course, I wasn't suggesting that centralised action is never harmful; I was suggesting that it's better on average [edit: in UC-type scenarios, which I'm not sure your two examples are - this stuff is confusing!]. It's also ironic that part of the reason funding OpenAI might have been a bad idea seems to be that it creates more of a Unilateralist's Curse scenario (although I did notice that the first comment claims this is not their current strategy): "OpenAI’s primary strategy is to hire top AI researchers to do cutting-edge AI capacity research and publish the results, in order to ensure widespread access."

If we think that "individuals underestimate potential downsides relative to their estimations concerning potential upsides", why do we expect funders to be immune to this problem?

Excellent question. No strong opinion as I'm still in anecdote territory here, but I reckon emotional attachment to one's own grand ideas is what's driving the underestimation of risk, and you'd expect funders to be able to assess ideas more dispassionately.

I'm not sure that EA is all that relevant to the answer I'd give in your thought experiment. If they didn't have much power then I'd say go for it. If their project would have large consequences before anyone else could step in I'd say stop. As I said before, "I currently still think the EA Hotel has positive expected value - I don't think it's giving individuals enough power for the Unilateralist's Curse to really apply." I genuinely do expect the typical idea someone has for improving the status quo to be harmful, whether they're an EA or a stranger in a bar. Most of the time it's good to encourage innovation anyway, because there are feedback mechanisms/power structures in place to stop things getting out of hand if they start to really not look like good ideas. But in UC-type scenarios i.e. where those checks are not in place, we have a problem.

We might be talking past each other. Perhaps we agree that: In your typical real-life scenario i.e. where an individual does not have unilateral power, we should encourage them to pursue their altruistic ideas. Perhaps this was even what you were saying originally, and I just misinterpreted it.

[Edit: I'm pretty sure we're talking past each other to at least some extent. I don't think there should be "blanket discouragement". I think the typical project that someone/an EA thinks is a good idea is in fact a bad idea, but that they should test it anyway. I do think there should be blanket discouragement of actions with large consequences that can be taken by a small minority without the endorsement of others (eg. relating to reputational risk or information hazards).]

Comment author: Henry_Stanley 15 July 2018 11:17:53PM 6 points [-]

I like this a lot.

Random plug: I know a lot of EAs (including myself) use goal-and-task-tracking tool Complice; you can assign an accountability partner who sees your progress (and you see theirs). You can also share a link like this that lets anyone get updates on your public goals, which could potentially be quite motivating.

Comment author: hollymorgan 16 July 2018 11:33:31AM 0 points [-]

Ooo yes good shout. I'll include it at some point.

Comment author: jglamine 15 July 2018 10:39:45PM 3 points [-]

Thanks for posting, this seems worth doing. I've set up a 30 minute meeting with a friend!

Comment author: hollymorgan 16 July 2018 11:31:38AM 0 points [-]

Thanks for the feedback, hope it goes well :-)
