Comment author: Evan_Gaensbauer 03 August 2018 10:38:38PM *  8 points [-]

helps them recruit people

Do you mind clarifying what you mean by "recruit people"? I.e., do you mean they recruit people to attend the workshops, or to join the organizational staff?

I have spoken with four former interns/staff who pointed out that Leverage Research (and its affiliated organizations) resembles a cult according to the criteria listed here.

In this comment I laid out the threat to EA as a cohesive community when those within it, like EA's worst detractors, level blanket accusations that an organization is a cult. Also, that comment could only mention a handful of people describing Leverage as cult-like, while admitting they could not recall any specific details. I already explained that such a report does not qualify as a fact, nor even an anecdote, but as hearsay, especially since further details aren't being provided.

I'm disinclined to take seriously more hearsay about a vague impression of Leverage as cultish, given the bad faith in which my other interlocutor was acting. Since none of the former interns or staff behind this hearsay are coming forward to corroborate which features of a cult from the linked Lifehacker article Leverage shares, I'm unconvinced that your report, or the others, aren't being taken out of context from the individuals you originally heard them from, nor that this post and its comments aren't a deliberate attempt to do nothing but tarnish Leverage.

The EA Summit 2018 website lists LEAN, Charity Science, and Paradigm Academy as "participating organizations," implying they're equally involved. However, Charity Science is merely giving a talk there. In private conversation, at least one potential attendee was told that Charity Science was more heavily involved. (Edit: This issue seems to be fixed now.)

Paradigm Academy was incubated by Leverage Research, as many organizations in and around EA are by others (e.g., MIRI incubated CFAR; CEA incubated ACE, etc.). As far as I can tell now, like with those other organizations, Paradigm and Leverage should be viewed as two distinct organizations. So that itself is not a fact about Leverage, which I also went over in this comment.


As I stated in that comment as well, there is a double standard at play here. EA Global each year is organized by the CEA. They aren't even the only organization in EA with the letters "EA" in their name, nor are they the only EA organization considered able to wield the EA brand. And yet despite all this, nobody objects on priors to the CEA, a single organization, branding these events each year. Nor should we. Of course, none of this is necessary to invalidate the point you're trying to make: Julia Wise, as the Community Liaison for the CEA, has already clarified that the CEA themselves support the Summit.

So the EA Summit has already been legitimized by multiple EA organizations as a genuine EA event, including the one which is seen as the default legitimate representation for the whole movement.

(low confidence) I've heard through the grapevine that the EA Summit 2018 wasn't coordinated with other EA organizations except for LEAN and Charity Science.

As above, that the EA Summit wasn't coordinated by more than one organization means nothing. There are already EA retreat- and conference-like events organized by local university groups and national foundations all over the world, which have gone well, such as the Czech EA Retreat in 2017. So the idea that EA should be so centralized that only registered non-profits with some given caliber of prestige in the EA movement, or those they approve, can organize events viewed as legitimate by the community is unfounded. Not even the CEA wants that much centralization. Nobody does. So whatever point you're trying to prove about the EA Summit using facts about Leverage Research is still invalid.

For what it's worth, while no other organizations are officially participating, here are some effective altruists who will be speaking at the EA Summit, and the organizations they're associated with. At EAG, this would be sufficient to identify those organizations as welcome and included in spirit. The same standard should apply to the EA Summit.

  • Ben Pace, Ray Arnold and Oliver Habryka: LessWrong isn't an organization, but it's played a formative role in EA, and with LW's new codebase being the kernel of the next version of the EA Forum, Ben and Oliver, as admins and architects of the new LW, are as important representatives of this online community as any in EA's history.

  • Rob Mather is the ED of the AMF. AMF isn't typically regarded as an "EA organization" because it isn't a metacharity directly dependent on the EA movement. But for GiveWell's top-recommended charity since EA began, which continues to receive more donations from effective altruists than any other, to not be given consideration would be senseless.

  • Sarah Spikes runs the Berkeley REACH.

  • Holly Morgan is a staffer for the EA London organization.

In reviewing these speakers, and seeing so many from LEAN and Rethink Charity, with Kerry Vaughan being a director for individual outreach at CEA, I see what the EA Summit is trying to do. They're trying to use speakers at the event to rally local EA group organizers from around the world toward more coordinated action and spirited projects. Which is exactly what the organizers of the EA Summit have been saying the whole time. This is also why I was invited to attend the EA Summit: as an organizer for rationality and EA projects in Vancouver, Canada, trying to develop a system for organizing local groups to do direct work that can scale both here and in cities everywhere; and as a very involved volunteer online community organizer in EA. It's also why one of the event organizers consulted with me, before they announced the EA Summit, on how they thought it should be presented in the EA community.

This isn't counterevidence to being skeptical of Leverage. This is evidence counter to the thesis, advanced in these facts about Leverage Research, that the EA Summit is nothing but a launchpad for Leverage's rebranding within the EA community as "Paradigm Academy." No evidence has been presented that the tenuous links between Leverage and the organization of the 2018 EA Summit entail that the negative reputation Leverage has acquired over the years should be transferred onto the upcoming Summit.

Comment author: Habryka 05 August 2018 06:48:43PM *  7 points [-]

(While LessWrong.com was historically run by MIRI, the new LessWrong is indeed for most intents and purposes an independent organization (while legally under the umbrella of CFAR) and we are currently filing documents to get our own 501c3 registered, and are planning to stick around as an organization for at least another 5 years or so. Since we don't yet have a name that is different from "LessWrong", it's easy to get confused about whether we are an actual independent organization, and I figured I would comment to clarify that.)

Comment author: Larks 19 July 2018 09:29:24PM 3 points [-]

Hey, first of all, thanks for what I'm sure must have been a lot of work behind this. Many of these ideas seem very sensible.

Am I right in assuming that the scale for the upvotes was intended to be roughly-but-not-exactly logarithmic? And do downvotes scale the same way?

Comment author: Habryka 20 July 2018 04:55:30PM 2 points [-]

Yep, we chose them based on a logarithmic scale, and then nudged them around to fit rounder numbers.

And yep, vote power also applies to downvotes, at least on LW, and I would be somewhat surprised if we did something else here.

Comment author: MoneyForHealth 19 July 2018 11:53:42PM *  2 points [-]

DON'T BE A NECROMANCER!

LessWrong 2.0 resurrected all posts deleted by their original posters, which then had to be individually deleted again by users who may or may not have been aware that this had happened. Please ensure this isn't replicated with Effective Altruism Forum 2.0. If I can't control my content, I'll move somewhere else where I can. <throw away account>

Comment author: Habryka 20 July 2018 04:51:19PM 4 points [-]

Huh, I am unaware of this. Feel free to ping us on Intercom about any old posts you want deleted. The old database was somewhat inconsistent about the ways it marked posts as deleted, so there is a chance we missed some.

Comment author: gray 26 October 2017 07:41:00PM 16 points [-]

Georgia here - The direct context, "Research also shows that diverse teams are more creative, more innovative, better at problem-solving, and better at decision-making," is true based on what I found.

What I found also seemed pretty clear that diversity doesn't, overall, have a positive or negative effect on performance. Discussing that seems important if you're trying to argue that it'll yield better results, unless you have reason to think that EA is an exception.

(E.g., it seems possible that business teams aren't a good comparison for local groups or nonprofits, or that most teams in an EA context do more research/creative/problem-solving type work than business teams, so the implication "diversity is likely to help your EA team" would be possibly valid - but whatever premise that's based on would need to be justified.)

That said, obviously there are reasons to want diversity other than its effect on team performance, and I generally quite liked this article.

Comment author: Habryka 26 October 2017 10:18:01PM *  22 points [-]

As a relevant piece of data:

I looked into the 4 sources you cite in your article as improving the effectiveness of diverse teams and found the following:

  • One didn't replicate, and the replication found the opposite effect with a much larger sample size (which you link to in your article)
  • One is a Forbes article that cites a variety of articles, two of which I looked into; they didn't say at all what the Forbes article claimed they say, usually reporting "we found no significant effects"

  • One study you cited directly found the opposite of the result you seemed to imply, with its results table looking like this:

https://imgur.com/a/dRms0

And the results section of the study explicitly saying:

"whereas background diversity displayed a small negative, yet nonsignificant, relationship with innovation (.133)."

(the thing that did have a positive relation was "job-related diversity" which is very much not the kind of diversity the top-level article is talking about)

  • The only study you cited that did seem to find some positive effects was one with the following results table:

https://imgur.com/a/tgS6q

Which found some effects on innovation, though overall it found very mixed effects of diversity, with its conclusion stating:

"Based on the results of a series of meta-analyses, we conclude that cultural diversity in teams can be both an asset and a liability. Whether the process losses associated with cultural diversity can be minimized and the process gains be realized will ultimately depend on the team’s ability to manage the process in an effective manner, as well as on the context within which the team operates."

Comment author: Habryka 26 October 2017 09:40:06PM *  34 points [-]

As a general note for the discussion: Given the current incentive landscape in the parts of society most EAs are part of, I expect opposition to this post to be strongly underrepresented in the comment section.

As a datapoint, I have many disagreements with this article, but based on negative experiences with similar discussions, I do not want to participate in a longer discussion around it. I don't think there is an easy fix for this, but it seems reasonable for people reading the comments to be aware that they might be getting a very selective set of opinions.

Comment author: Julia_Wise  (EA Profile) 01 January 2017 08:29:35PM 4 points [-]

Clarification: You're using EAF to mean EA Forum, while I usually see it used to mean EA Foundation.

Comment author: Habryka 01 January 2017 09:18:19PM 0 points [-]

Ah, I was really confused during the whole article and thought that this had something to do with traffic to the EA Foundation websites. Thanks for clarifying!

Comment author: Jeff_Kaufman 04 November 2016 01:22:27PM 2 points [-]

Any outcome yet?

Comment author: Habryka 07 November 2016 10:44:14PM 6 points [-]

And the results are in!

The bet was resolved with 6 yes votes, and 4 no votes, which means a victory for Carl Shulman. I will be sending Carl $10, as per our initial agreement.

Comment author: kbog  (EA Profile) 26 October 2016 04:43:02PM 0 points [-]

What will you do about people who don't reply to your messages?

Comment author: Habryka 28 October 2016 10:01:06PM *  5 points [-]

(I haven't run this by Carl yet, but this is my current plan for how to interpret the incoming data)

Since our response rates were somewhat lower than expected (mostly because we chose an account that was friends with only one person from our sample, so messages probably ended up in people's secondary inbox), we decided to only send messages until we get 10 responses to (1), since we don't want to spam a ton of people with a somewhat shady-looking question (I think two people expressed concern about conducting a poll like this).

Since our stopping criterion is 10 people, we will also stop if we get more than 7 yes responses, or more than 3 no responses, before we reach 10 people.
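For concreteness, the stopping rule described above can be sketched as a small function. This is a hypothetical illustration of the stated rule only; the actual poll was run by hand over Facebook messages:

```python
def poll_is_finished(yes_count: int, no_count: int) -> bool:
    """Stopping rule for the 7-out-of-10 bet poll.

    Stop once 10 responses are in, or as soon as the outcome is
    already decided: more than 7 yes responses clears the 7/10
    threshold regardless of the rest, and more than 3 no responses
    makes 7 yes out of 10 unreachable.
    """
    total = yes_count + no_count
    if total >= 10:
        return True
    if yes_count > 7:   # at least 8 yes: threshold already met
        return True
    if no_count > 3:    # at least 4 no: at most 6 yes possible
        return True
    return False

# Example: 8 yes / 1 no ends polling early; 5 yes / 2 no keeps going.
```

Stopping early when the outcome is decided avoids messaging more people than necessary, which matters given the concern about the poll looking like spam.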

Comment author: Gleb_T  (EA Profile) 25 October 2016 07:48:53PM 3 points [-]

I'm fine taking a random sample of 20 people.

Regarding positive connections, the claim made by Oliver is what we're trying to measure - that I made "significantly worse" the experience of being a member of the EA community for "something like 80%" of the people there. I had not made any claims about my positive connections.

Comment author: Habryka 25 October 2016 08:41:24PM *  8 points [-]

After some private conversation with Carl Shulman, who thinks that I am miscalibrated on this, and whose reasoning I trust quite a bit, I have updated away from me winning a bet with the words "significantly worse" and also think it's probably unlikely I would win a bet with 8/10, instead of 7/10.

I have however taken on a bet with Carl with the exact wording I supplied below, i.e. with the words "net negative" and 7/10. Though given Carl's track record of winning bets, I feel a sense of doom about the outcome of that bet, and on some level expect to lose it as well.

At this point, my epistemic status on this is definitely more confused, and I assign significant probability to my overestimating the degree to which people will report that InIn or Gleb had a negative impact on their experience (though I am even more confused about whether I am just updating on people's reports, or on the actual effects on the EA community, both of which seem like plausible candidates to me).

Comment author: Gleb_T  (EA Profile) 25 October 2016 06:34:44PM 5 points [-]

I'll be happy to take that bet. So if I understand correctly, we'd choose a random 10 people on the EA FB group - ones who are not FB friends with you or me, to avoid potential personal factors coming into play - and then ask them if their experience of the EA community has been "significantly worsened" by InIn. If 8 or more say yes, you win. I suggest $1K to a charity of the winning party's choice? We can let a third party send the messages to prevent any framing effects.

Comment author: Habryka 25 October 2016 07:59:46PM *  4 points [-]

Since the majority of the FB group is inactive, I propose that we limit ourselves to the 50 or 100 most recently active members on the FB group, which will give a more representative sample of people who are actually engaging with the community (and since I don't want to get into debates of what precisely an EA is).

Given that I am friends with a large chunk of the core EA community, I don't think it's sensible to exclude my circle of friends, or your circle of friends for that matter.

Splitting this into two questions seems like a better idea. Here is a concrete proposal:

  1. Do you identify as a member of the EA community? [Yes] [No]
  2. Do you feel like the engagement of Gleb Tsipursky or Intentional Insights with the EA community has had a net negative impact on your experience as a member of the EA community? [Yes] [No]

I am happy to take a bet that, chosen from the top 50 most recent posters on the FB group (at this current point in time), 7 out of 10 people who said yes to the first question will say yes to the second. Or, since I would prefer a larger sample size, 14 out of 20 people.

(Since this is obviously a high-noise measurement, I only assign about 60% probability to winning this bet.)
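To illustrate why the bet is noisy (these numbers are my own, not anything from the thread): if each respondent independently answers yes with some probability p, the chance of winning the 7-out-of-10 bet is the binomial tail P(X ≥ 7), which swings substantially with p:

```python
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A win requires at least 7 of 10 yes responses. Under a few assumed
# per-respondent yes rates, the win probability varies a lot:
for p in (0.6, 0.7, 0.8):
    print(f"p={p}: P(win) = {p_at_least(7, 10, p):.2f}")
```

So a ~60% confidence in winning is roughly consistent with believing each respondent is about 70% likely to say yes, and modest errors in that belief move the win probability a lot.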

I sadly don't have $1000 left right now, but would be happy about a $50 bet.
