JWS

3189 karma · Joined Jan 2023

Bio

Kinda pro-pluralist, kinda anti-Bay EA.

I have come here to extend the principle of charity to bad criticisms of EA and kick ass. And I'm all out of charity.

(my opinions are fully my own, and do not represent the views of any close associates or the company I work for)

Posts
6

Sorted by New

JWS · 1y ago · 1m read

Sequences
1

Criticism of EA Criticism

Comments
262

JWS · 1d

Again, I'm a fan of you and your approach, David, but I think you somewhat underestimate just how hostile/toxic Émile has been toward all of EA. I think it's very fair to substitute judgments of the critic for engagement with their criticism; it's the kind of thing we do all the time in real social settings. In a way, you seem to be applying a hardcore 'decoupling' mindset here.

Like, at the risk of being inflammatory, an intuition pump from your perspective might be:

It is possible that many complaints about Trump are true and also that Trump raises important concerns. I would not like to see personal criticism of Trump become a substitute for engagement with criticism by Trump.

I think many EAs view 'engagement with criticism by Torres' in the same way that you'd see 'engagement with criticism by Trump': that the critic is just so toxic/bad-faith that nothing good can come of engagement.

JWS · 1d

I think the main thing is their astonishing success. Like, whatever else anyone wants to say about Émile, they are damn hard-working and driven. It's just that in their case they are driven by fear and pure hatred of EA.

Approximately every major news media piece critical of EA (or covering EA with a critical lens, which has been basically the same thing over the last year and a half) seems to link to or quote Émile at some point as a reputable and credible source on EA.

Sure, those more familiar with EA might be able to see the hyperbole, but it's not, imo, far-fetched to think that Émile's immensely negative presentation of EA being picked up by major outlets has contributed to the decline of EA's reputation over the last couple of years.

Like, I wish we could "collectively agree to make Émile irrelevant", but EA can't do that unilaterally given the influence their[1] ideas and arguments have had. Those are going to have to be challenged or confronted sooner or later.

  1. ^

    That is, Émile's

Answer by JWS · Apr 29, 2024

To answer your question very directly: on confidence about millions of years in the future, I think the answer is "no", because I don't think we can be reasonably confident and precise about any significant belief concerning the state of the universe millions of years into the future.[1] I'd note that the article you link isn't very convincing for someone who doesn't share the same premises, though I can see it leading to 'nagging thoughts', as you put it.

Other ways to answer the latter question about human extinction could be:

  • That humanity is positive (if human moral value is taken to be larger than the effect on animals)
  • That humanity is net-positive (if the total effect of humanity is positive, most likely because of the belief that wild-animal suffering is even worse)
  • Option value, or the belief that humanity has the capacity to change (as others have stated)

In practice though, I think if you reach a point where you might consider it a moral course of action to make all of humanity extinct, perhaps treat this as a modus tollens of the principles that brought you to that conclusion, rather than as a logical consequence that you ought to believe and act on. (I see David made a similar comment at basically the same time.)
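
(To spell out the inference pattern in a minimal sketch - the notation is mine, not the original article's, with P standing for your principles and C for the pro-extinction conclusion:)

\[
\frac{P \rightarrow C \qquad \neg C}{\neg P}\ \text{(modus tollens)}
\qquad \text{rather than} \qquad
\frac{P \rightarrow C \qquad P}{C}\ \text{(modus ponens)}
\]

That is, treat the unacceptability of C as evidence against P, rather than treating P as a licence to accept C.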

  1. ^

    Some exceptions for physics, especially outside of our lightcone, yada yada, but I think this holds for the class of beliefs similar to this question (hence my wording: 'significant beliefs').

JWS · 12d

I don't understand your lack of understanding. My point is that you're acting like a right arse.

When people make claims, we expect some justification proportional to the claims made. You made hostile claims that didn't follow on from the prior discussion,[1] and, in my view, nasty and personal insinuations as well, and didn't have anything to back them up.

I don't understand how you wouldn't think that Sean would be hurt by it.[2] So to me, you behaved like an arse, knowing that you'd hurt someone, didn't justify it, got called out, and are now complaining.

So I don't really have much interest in continuing this discussion for now, or much of an opinion at the moment on your behaviour or your 'integrity'.

  1. ^

    Like nobody was discussing CSER/CFI or Sean directly until you came in with it

  2. ^

    Even if you did think it was justified

JWS · 12d

Sorry Oli, but what is up with this (and your following) comment?

From what I've read from you,[1] you seem to value what you call "integrity" almost as a deontological good above all others, and this has gained you many admirers. But to my mind, high-integrity actors don't make the claims you've made in both of these comments without bringing examples or evidence. Maybe you're reacting to Sean's use of 'garden variety incompetence', which you think is unfair to Bostrom's attempts to toe the fine line between independence and managing university politics, but still, I feel you could have done better here.

To make my case:

  • When you talk about "other organizations... become a hollow shell of political correctness and vapid ideas", you have to be referring to CSER & Leverhulme here, right? It's the only context that makes sense.
    • If not, I feel like that's very misleadingly phrased.
    • But if it is, then calling those organisations 'hollow shells' of 'vapid ideas' is really rude, and if you're going to go there, at least have the proof to back it up.
  • Now, that just might be you having very different politics from the CSER & Leverhulme people. But then you say "he [Bostrom] didn't compromise on the integrity of the institution he was building", which again I read as you directly contrasting with CSER & Leverhulme - or even Sean personally.
    • Is this true? Surely organisations can have different politics, or even worse ideas, without compromising on integrity?
    • If they did compromise on integrity, it feels like you should share what those compromises were.
    • If it is directed at Sean personally, that feels very nasty. Making assertions about someone's integrity without solid proof isn't just speculation; it's harmful to the person and also poor 'epistemic hygiene' for the community at large.
  • You say "the track record here speaks quite badly to Sean's allocation of responsibility by my lights". But I don't know what 'track record' you're speaking about here. Is it at FHI? CSER & Leverhulme? Sean himself?
  • Finally, this trio of claims in your second comment really rubbed me[2] the wrong way. You say that you think:
    • "CSER and Leverhulme, which I think are institutions that have overall caused more harm than good and I wish didn't exist"
      • This is a huge claim imo. More harm than good? So much so that you wish they didn't exist? With literally no evidence apart from it being your opinion?
    • "Sean thought were obvious choices were things that would have ultimately had long-term bad consequences"
      • I assume this is about relationship management with the university, perhaps? But I don't know what to make of it, because you don't say what these 'obvious choices' are, or why you think they're so likely to have bad consequences.
    • "I also wouldn't be surprised if Sean's takes were ultimately responsible for a good chunk of associated pressure and attacks on people's intellectual integrity"
      • This might be the worst one. Why are Sean's takes responsible? What were the attacks on people's integrity? Was this something Sean did on purpose?
      • I don't know what history you're referring to here, and the language used is accusatory and hostile. It feels like really bad form to write this without clarifying what you're referring to, for people (like me) who don't know the context you're talking about.

Maybe from your perspective you feel like you're just floating questions here and sharing a personal view, but given the content of what you've said, I think it would have been better if you had either brought more examples or been less hostile.

  1. ^

    And I feel like I've read quite a bit: here, on LW, and on your Twitter

  2. ^

    And, given the votes, a lot of other readers too, including some who may have agreed with your first comment

JWS · 1mo

(I'm going to wrap up a few disparate threads together here; this will probably be my last comment on this post, modulo a reply for clarification's sake. Happy to discuss further with you, Rob, or anyone else via DMs/Forum Dialogue/whatever.)

(to Rob & Oli - there is a lot of inferential distance between us, and that's ok; the world is wide enough to handle that! I don't mean to come off as rude/hostile, and apologies if I got the tone wrong)

Thanks for the update, Rob; I appreciate you tying this information together in a single place. And yet... I can't help but still feel some of the frustrations of my original comment. Why does this person not want to share their thoughts publicly? Is it because they don't like the EA Forum? Because they're scared of retaliation? It feels like this would be useful and important information for the community to know.

I'm also not sure what to make of Habryka's response here and elsewhere. I think there is a lot of inferential distance between Oli and me, but his response does come off to me as a "social experiment in radical honesty and perfect transparency", which is a vibe I often get from the Lightcone-adjacent world. And, like, with all due respect, I'm not really interested in that whole scene. I'm more interested in questions like:

  1. Were any senior EAs directly involved in the criminal actions at FTX/Alameda?
  2. What warnings were given about SBF to senior EAs before the FTX blowup, particularly around the 2018 Alameda falling-out, as recounted here?
    1. If these warnings were ignored, what prevented people from deducing that SBF was a bad actor?[1]
    2. Critically, if these warnings were accepted as true, who decided to keep this secret, suppress it from the community at large, and not act on it?
  3. Why did SBF end up with such a dangerous set of beliefs about the world? (I think they're best described as 'risky beneficentrism' - see my comment here and Ryan's original post here)
  4. Why have the results of these investigations, or some legally-cleared version, not been shared with the community at large?
  5. Do senior EAs have any plan to respond to the hit to EA-morale as a result of FTX and the aftermath, along with the intensely negative social reaction to EA, apart from 'quietly hope it goes away'?

Writing it down, 2.b strikes me as what I mean by 'naive consequentialism', if it happened: people had information that SBF was a bad character who had done harm, but calculated (or assumed) that he'd do more good being part of/tied to EA than otherwise. The kind of signalling you described as naive consequentialism doesn't really seem pertinent here, as interesting as the philosophical discussion can be.

tl;dr - I think there's a place for discussion about what norms EA 'should' have, or what norms senior EAs should act by, especially in a post-FTX, influencing-AI-policy world, but that's different from the 'minimal viable information-sharing' that can help the community heal, hold people to account, and help make the world a better place. It does feel like the lack of communication is harming that, and I applaud you and Oli for pushing on it, but sometimes I wish you would both be less vague too. Some of us don't have the EA history and context that you both do!

epilogue: I hope Rebecca is doing well. But this post & all the comments make me feel more pessimistic about the state of EA (as a set of institutions/organisations, not ideas) post-FTX. Wounds might have faded, but they haven't healed 😞

  1. ^

    Not that people should have guessed the scale of his wrongdoing ex ante, but was there enough to start downplaying and disassociating?

JWS · 1mo

My guess is there's something ideological or emotional behind these kind of EA critiques,

Something I've come across while looking into and responding to EA criticism over the last few months is that a lot of EA critics seem to absolutely hate EA[1], with a burning zeal. And I'm not really sure why, or what to do with it - it feels like an underexplored question/phenomenon for sure.

 

  1. ^

    Or at least, what they perceive EA/EAs to be

JWS · 1mo

What are you referring to when you say "Naive consequentialism"?[1] Because I'm not sure that it's what others reading this might take it to mean.

Like, you seem critical of the current plan to sell Wytham Abbey, but I think many critics view the original purchase as itself an act of naive consequentialism, one that ignored the side effects it has had, such as reinforcing negative views of EA. Can both the purchase and the sale be cases of NC? Are they the same kind of thing?

So I'm not sure the 3 respondents from the MCF and you have the same thing in mind when you talk about naive consequentialism - and I'm not quite sure what I have in mind either.

  1. ^

    Both here and in this other example, for instance

My deductions were here; there are two main candidates given the information available (if it is reliable).
