Comment author: Evan_Gaensbauer 03 August 2018 10:03:28PM *  4 points [-]
  1. The CEA, the very organization you juxtaposed with Leverage and Paradigm in this comment, has in the past been compared to a Ponzi scheme. Effective altruists who otherwise appreciated that criticism thought much of its value was lost in the comparison to a Ponzi scheme, and without it the criticism might have been better received. Additionally, LessWrong and the rationality community, CFAR and MIRI, and all of AI safety have for years been smeared as a cult by their detractors. The rationality community isn't perfect. There is no guarantee that interactions with a self-identified (aspiring) rationality community will go as "rationally" as an individual or small group of people interacting with the community, online or in person, hopes or expects. But the vast majority of effective altruists, even those who are cynical about these organizations or sub-communities within EA, disagree with how these organizations have been treated, because it poisons the well of good will in EA for everyone. In this comment, you stated your past experience with the Pareto Fellowship and Leverage left you feeling humiliated and manipulated. I've also been a vocal critic, in person, throughout the EA community of both Leverage Research and how Geoff Anders has led the organization. But elevating personal opposition into a public airing of opposition research, in an attempt to tarnish an event they're supporting alongside many other parties in EA, is not something I have ever done or will do. My contacts in EA and I have followed Leverage. I've refrained from making posts like this myself, because when I dug for context I found Leverage has changed from the impressions I'd formed of them. That's why at first I was skeptical of attending the EA Summit; but upon reflection, I realized the evidence didn't support concluding that Leverage is so incapable of change that anything they're associated with should be distrusted. What you're trying to do to Leverage Research is no different from what EA's worst critics do: not in an effort to change EA or its members, but to tarnish them. From within or outside of EA, criticizing any EA organization in such a fashion is below any acceptable epistemic standard in this movement.

  2. If the post and comments here are meant to be stating facts about Leverage Research, but what you're reporting are impressions that Leverage is like a cult, with no ability to remember specific details, those are barely facts. The only fact is that some people perceived Leverage to be like a cult in the past, and those are only anecdotes. Without details, they're only hearsay. Combined with the severity of the consequences if this hearsay were borne out, the inability to produce actual facts invalidates the point you're trying to make.

Comment author: kbog  (EA Profile) 05 August 2018 09:02:18AM *  4 points [-]

Your comments seem to be way longer than they need to be because you don't trust other users here. Like, if someone comes and says they felt like it was a cult, I'm just going to think "OK, someone felt like it was a cult." I'm not going to assume that they are doing secret blood rituals, I'm not going to assume that it's a proven fact. I don't need all these qualifications about the difference between cultishness and a stereotypical cult, I don't need all these qualifications about the inherent uncertainty of the issue, that stuff is old hat. This is the EA Forum, an internal space where issues are supposed to be worked out calmly; surely here, if anywhere, is a place where frank criticism is okay, and where we can extend the benefit of the doubt. I think you're wasting a lot of time, and implicitly signaling that the issue is more of a drama mine than it should be.

Comment author: ZachWeems 05 August 2018 12:01:52AM 0 points [-]

Meta:

It might be worthwhile to have some sort of flag or content warning for potentially controversial posts like this.

On the other hand, this could be misused by people who dislike the EA movement, who could use it as a search parameter to find and "signal-boost" content that looks bad when taken out of context.

Comment author: kbog  (EA Profile) 05 August 2018 08:40:59AM *  7 points [-]

This is a romp through meadows of daisies and sunflowers compared to what real Internet drama looks like. It's perfectly healthy for a bunch of people to report on their negative experiences and debate the effectiveness of an organization. It will only look controversial if you frame it as controversial; people will only think it is a big deal if you act like it is a big deal.

Comment author: throwaway2 04 August 2018 09:57:58AM 7 points [-]

Also, that comment was only able to provide mention of a handful of people describing Leverage like a cult, admitting they could not recall any specific details.

I could list a number of specific details, but not without violating the preferences of the people who shared their experiences with me, and not without causing even more unnecessary drama.

These details wouldn't make for a watertight case that they're a "cult". I deliberately didn't claim that Leverage is a cult. (See also this.) But the details are quite alarming for anyone who strives to have well-calibrated beliefs and an open-minded and welcoming EA community. I do think their cultishness led to unnecessary harm to well-meaning young people who wanted to do good in the world.

Comment author: kbog  (EA Profile) 05 August 2018 08:09:27AM *  10 points [-]

There's a big difference between feeling cultlike, as in "weird", "disorienting", "bizarre" etc, and exhibiting the epistemic flaws of a cult, as in having people be afraid to disagree with the thought leader, a disproportionate reverence for a single idea or corpus, the excommunication of dissenters, the application of one idea or corpus to explain everything in the world, instinctively explaining away all possible counterarguments, refusal to look seriously at outside ideas, and so on.

If you could provide any sanitized, abstracted details to indicate that the latter is going on rather than merely the former, then it would go a long way towards indicating that LR is contrary to the goal of well-calibrated beliefs and open-mindedness.

Comment author: kbog  (EA Profile) 05 August 2018 05:41:26AM *  6 points [-]

I honestly don't get all this stuff about not publishing your work. Time to brag (boy, will I get shit on for this comment), but it's really relevant to the issue here: I never even had a minor in the subject, but when I had a good philosophical argument I got it published in a journal, and it wasn't that hard. Peer reviewed, not predatory, went through three rounds of revisions. Not a prestigious journal by any stretch of the imagination, but it proves that I knew what I was doing, which is good enough. If you think that peer review is bullshit, fine: that means it's not that hard. With your supposedly superior understanding of academic incentives and meta-science and all that stuff, I'm sure you can dress up something so that it tickles the reviewers in the right way. Not wanting to mess with it most of the time is understandable, but you can still do us the courtesy of getting at least one or two things through the gauntlet, so that we aren't left scratching our heads in confusion about whether we're looking at Kripke or Timecube or something in between. MIRI did it, so you can too. Plus, it sounds like lots of this research is being kept hidden from public view entirely, which I just can't fathom.

The movement building sounds like good stuff, though; I'm happy to see that.

Comment author: kbog  (EA Profile) 02 August 2018 10:39:23AM *  2 points [-]

1% is very low. Personally, when I first heard of 1FTW my gut reaction was a sort of dismissive cynicism, like, "oh look how little they are doing while congratulating themselves". I think that people who are very morally driven on this issue (particularly people who hate wealthier people such as Wharton MBAs) might have similar reactions, and I worry that this increases the chance that they will have a generally dismissive attitude toward EA. Plus, I would think that a 5% or 10% pledge would be able to get at least 1/5 or 1/10 as many people, respectively, to sign up, in which case it would raise at least as much money in total.
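To spell out the arithmetic behind that last sentence, here's a toy back-of-the-envelope sketch in Python. The salary and sign-up numbers are invented purely for illustration, not taken from 1FTW's actual data:

```python
AVERAGE_SALARY = 60_000  # hypothetical average donor salary, in dollars

def total_donated(pledge_fraction: float, signups: int) -> float:
    """Total annual donations if `signups` people each give `pledge_fraction` of salary."""
    return pledge_fraction * AVERAGE_SALARY * signups

# If a 10% pledge attracted even a tenth as many people as a 1% pledge,
# it would raise the same amount; any better ratio and it raises strictly more.
print(total_donated(0.01, 1_000))  # 1% pledge, 1,000 sign-ups -> 600000.0
print(total_donated(0.10, 100))    # 10% pledge,  100 sign-ups -> 600000.0
```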

On the other hand, naively looking at donation quantities ignores the general social effects of getting a large number of influential people to grasp and support EA ideas. So, I think the core idea is good. If I were reassured that few people have a cynical response to your messaging then I think I'd consider it one of the very top uses of funding. Perhaps the messaging should be more stoic, but then you may get less positive interest.

Comment author: kbog  (EA Profile) 02 August 2018 10:24:34AM *  4 points [-]

Generally we think about maximizing happiness as an abstract moral claim. We're in favor of whatever really does maximize happiness in the long run, even if the direct strategy is different. So we're okay with the idea of promoting stoicism rather than positive psychology, even if we're utilitarian. The possibilities that we may become unstable, suffer greatly at the first sign of hardship, lose productivity due to addiction, etc. are all things that matter to a happiness maximizer in various ways, because they make people suffer in the long run.

On the other hand, this really seems like a difficult psychology question. What attitude promotes the best mental well-being in the long run? Maybe stoicism is more sustainable and robust. Or maybe it's not; maybe positive psychology is also a good route to acceptance. I think it's not clear.


The Ethics of Giving part one: Thomas Hill on the Kantian perspective on giving

A review and critique of the first section of the volume The Ethics of Giving: Philosophers' Perspectives on Philanthropy, edited by Paul Woodruff, with an emphasis on issues relevant to the decision making of Effective Altruists. Hill is a distinguished Kant scholar who looks at what Kantian theory has...
Comment author: kbog  (EA Profile) 20 July 2018 04:25:24PM *  3 points [-]

If we upvote someone's comments, that means we trust them to be a better authority, so we should give their votes greater weight in vote totals. So it seems straightforward that a weighted vote count is a better estimate of the quality of a comment.

The downside is that this can create a feedback loop for a group of people with particular views. Having normal votes go from 1x to 3x over the course of so many thousands of karma seems like too small a change to make this happen, but the scaling of strong votes all the way up to 16x seems very excessive and risky to me.
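To make concrete the kind of mechanism being discussed, here is a minimal Python sketch of karma-weighted scoring. The thresholds and formulas are invented for illustration and are not the forum's actual implementation; they just have the rough shape described above (normal votes worth 1-3 points, strong votes capped at 16):

```python
import math

def normal_vote_weight(voter_karma: int) -> int:
    """Hypothetical schedule: normal-vote weight grows slowly with karma, capped at 3."""
    if voter_karma < 1_000:
        return 1
    if voter_karma < 100_000:
        return 2
    return 3

def strong_vote_weight(voter_karma: int) -> int:
    """Hypothetical schedule: roughly logarithmic in karma, capped at 16."""
    return min(16, max(1, int(math.log10(max(voter_karma, 1)) ** 2)))

def comment_score(votes) -> int:
    """votes: iterable of (voter_karma, direction, is_strong); direction is +1 or -1."""
    total = 0
    for karma, direction, is_strong in votes:
        weight = strong_vote_weight(karma) if is_strong else normal_vote_weight(karma)
        total += direction * weight
    return total

# Three ordinary upvoters vs. one very high-karma strong downvoter:
# the single strong vote outweighs all three combined.
print(comment_score([(50, +1, False), (200, +1, False), (5_000, +1, False),
                     (150_000, -1, True)]))  # -> -12
```

Under a schedule like this, a handful of high-karma accounts can dominate vote totals, which is exactly the feedback-loop worry.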

Another downside is that it may encourage people to post stuff here that is better placed elsewhere, or left unsaid. I think that after switching to this system for a while, we should take a step back and see if there is too much crud on the forums.

Comment author: John_Maxwell_IV 20 July 2018 03:16:31AM *  6 points [-]

Great point. I think it's really interesting to compare the blog comments on slatestarcodex.com to the reddit comments on /r/slatestarcodex. It's a relatively good controlled experiment because both communities are attracted by Scott's writing, and slatestarcodex has a decent amount of overlap with EA. However, the character of the two communities is pretty different IMO. A lot of people avoid the blog comments because "it takes forever to find the good content". And if you read the blog comments, you can tell that they are written by people with a lot of time on their hands--especially in the open threads. The discussion is a lot more leisurely and people don't seem nearly as motivated to grab the reader's interest. The subreddit is a lot more political, maybe because reddit's voting system facilitates mobbing.

Digital institution design is a very high leverage problem for civilization as a whole, and should probably receive EA attention on those grounds. But maybe it's a bad idea to use the EA forum as a skunk works?


BTW there is more discussion of the subforums thing here.

Comment author: kbog  (EA Profile) 20 July 2018 04:17:04PM *  1 point [-]

My impression is that the subreddit comments can be longer, more detailed and higher quality than the blog comments. Maybe they are not better on average, but the outliers are far better and more numerous, and the karma sorting means the outliers are the ones that you see first.

Comment author: Jan_Kulveit 19 July 2018 03:17:07PM *  14 points [-]

Feature request: integrate the content from the EA fora into LessWrong in a similar way to alignmentforum.org.

Risks & dangers: I think there is a non-negligible chance the LW karma system is damaging the discussion and the community on LW in some subtle but important way.

Implementing the same system here makes the risks correlated.

I do not believe anyone among the development team or moderators really understands how such things influence people on the S1 level. It seems somewhat similar to likes on Facebook, and it's clear that likes on Facebook are able to mess with people's motivation in important ways. So the general impression is that people are playing with something possibly powerful, likely without deep understanding, and possibly with a bad model of what the largest impacts are (a focus on the ordering of content, versus subtle impacts on motivation).

In situations with such uncertainty, I would prefer the risks to be less correlated.

Edit: another feature request: allow adding co-authors to posts. A lot of texts are created by multiple people, and it would be nice if all the normal functionality worked for co-authored posts.

Comment author: kbog  (EA Profile) 20 July 2018 04:13:03PM 2 points [-]

This forum, with its conventional counting of votes, is currently correlated with the EA subreddit, and if we went with a like system then it would be correlated with Facebook. I'm not sure what else you could do, aside from having no likes or votes at all, which would clearly be bad because it would make it very hard to find the best content.
