
Habryka

18952 karma · Joined Sep 2014

Bio

Project lead of LessWrong 2.0, often helping the EA Forum with various site issues. If something is broken on the site, there's a good chance it's my fault (sorry!).

Comments: 1190

Topic contributions: 1

This is an extremely rich guy who isn't donating any of his money.

FWIW, I totally don't consider "donating" a necessary component of taking effective altruistic action. Most charities seem much less effective than the most effective for-profit organizations, and most of the good in the world seems to be achieved by for-profit companies.

I don't have a particularly strong take on Bryan Johnson, but using "donations" as a proxy seems pretty bad to me.

Less than a year ago DeepMind and Google Brain were two separate companies (both making cutting-edge contributions to AI development). My guess is that if you broke DeepMind off from Google, you would pretty quickly just get competition between DeepMind and Google Brain again (and more broadly make the situation around slowing things down a more multilateral one).

But more concretely, antitrust action makes all kinds of coordination harder. After an antitrust action that destroyed billions of dollars in economic value, the ability to get people in the same room and even consider coordinating goes down a lot, since coordinating itself might invite further antitrust action.

Huh, fwiw I thought this proposal would increase AI risk, since it would increase competitive dynamics (and generally make coordinating on slowing down harder). I at least didn't read this post as x-risk motivated (though I admit I was confused about what its primary motivation was).

Yeah, that's a decent link. I do think this comment is more about whether anti-recommendations for organizations should be held to a similar standard. My comment also included some criticisms of Sean personally, which I think also make sense to treat separately, though I definitely intend to try to debias my statements about individuals on this dimension, especially after my experiences with SBF.

Hmm, I agree that there was some aggression here, but I felt like Sean was the person who first brought up direct criticism of a specific person, and a very harsh one at that (harsher than mine, I think).

Like, Sean's comment basically said "I think it was directly Bostrom's fault that FHI died a slow painful death, and this could have been avoided with the injection of just a bit of competence in the relevant domain". My comment is more specific, but I don't really see it as harsher. I also have a prior against going into critiques of individual people, but that's what Sean did in this context (of course Bostrom's judgement is relevant, but I think in that case so is Sean's).

Pushback (in the form of arguments) is totally reasonable! It seems very normal that if someone is arguing for some collective path of action using non-shared assumptions, there is pushback.

The thing that feels weirder is to invoke social censure, or to insist on pushback when someone is talking about their own beliefs and not clearly advocating for some collective path of action. I really don't think it's common for people to push back when someone is expressing a personal belief that only affects their own actions.

In this case, I think it's somewhat ambiguous whether I was arguing for a collective path of action or just explaining my private beliefs. By making a public comment I at least asserted some claim to relevance for others, but I also didn't explicitly say that I was trying to get anyone else to change their behavior.

And in either case, invoking social censure because someone expressed a belief without also giving a comprehensive argument for it seems rare (not unheard of, since there are many places in the world where uniform ideologies are enforced, though I don't think EA has historically been such a place, nor does it want to be).

This also roughly matches my impression. I do think I would prefer the EA community to move towards either more centralized or less centralized governance in the relevant way, but I agree that given how things are, the EA Forum team has less leeway with moderation than the LW team.

 I think this might be one of the LTFF writeups Oli mentions (apologies if wrong), and seems like a good place to start

Yep, that's the one I was thinking about. I've changed my mind on some of the things in that section in the (many) years since I wrote it, but it still seems like a decent starting point.

This thread doesn't feel like a great place for this, though CSER is an organization I really do wish more people shared their assessments of. Also happy to have a call if your curiosity extends that far, and you would be welcome to write up the things I say in that call publicly (though of course that's a lot of work and I don't think you have any obligation to do so).
