
Rafael Harth

652 karma · Joined Jul 2022

Comments (35)

I read your first paragraph and was like "disagree", but when I got to the examples, I was like "well, of course I agree here, but that's only because those analogies are stupid".

At least one analogy I'd defend is the Sorcerer's Apprentice one. (Some have argued that the underlying model has aged poorly, but I think that's a red herring since it's not the analogy's fault.) I think it does share important features with the classical x-risk model.

Not my conclusion! I only linked to the post/copied and reformatted the text -- the author is Ozy Brennan.

I agree, and I got permission from Ozy to include the full text, so now it's here.

Ah, sorry! I had vaguely remembered that comment and it took me a while to find it, and I think I was so annoyed in that moment that I just assumed the context would fit without checking (which I realize makes no sense since you didn't even write the original post). I'll edit my comment.

Reasonable disagreement, I think, should be as in the legal sense for doubt: disagreement based on clear reasons, not disagreement from people who are generally reasonable.

With this definition, any and all ability to enforce norms objectively goes out the window. A follower of {insert crazy idea X} would be equally justified to talk about unambiguous delusions in doubters of X, and anyone disputing it would have to get into a debate about the merits of X rather than pointing out that plenty of people disagree with X so it doesn't seem unambiguous.

We already have plenty of words to express personal opinions about a topic. Why would you want to define words that talk about consensus to also refer to personal opinions only? That just takes away our ability to differentiate between them. Why would we want that? Whether or not most people think something is itself useful information.

And there's also a motte-and-bailey thing going on here. Because if you really only want to talk about what you personally think, then there wouldn't be a reason to talk about unambiguous falsehoods. You've used the word "unambiguous" because it conveys a sense of objectivity, and when challenged, you defend by saying that you personally feel that way.

I’d see a lot more use to engaging with your point if instead of simply asserting that people could disagree, you explain precisely which you disagree with and why.

This is the second time you've tried to turn my objection into an object-level debate, and it completely misses the point. You don't even know if I have any object-level disagreements with your post. I critiqued your style of communication, which I believe to be epistemically toxic, not the merits of your argument.

The example there is rhetorically effective not because there is an analogy between what the New York Times does and what this post did, but because there isn’t.

I objected to the comparison because it's emotionally loaded. "You're worse than {bad thing}" isn't any less emotionally loaded than "you're like {bad thing}".

People are still arguing about the falsehoods, but it’s unclear to me either that those arguments have any substance or that they’re germane to the core of my point.

Well yes, I would have been very surprised if you thought they had substance given your post. But the term "unambiguous" generally implies that reasonable people don't disagree, not just that the author feels strongly about them. This is one of the many elements of your post that made me describe it as manipulative; you're taking one side on an issue that people are still arguing about and calling it unambiguous.

There are plenty of people who have strong feelings about the accusations but don't talk like that. Nonlinear themselves didn't talk like that! Throughout these discussions, there's usually an understanding that we differentiate between how strongly we feel about something and whether it's a settled matter, even from people who perceive the situation as incredibly unfair.

I find this post to be emotionally manipulative to a pretty substantial degree. It opens by explicitly asking readers to picture a scene involving the NYT, which most people will have a negative association with:

Picture a scene: the New York Times is releasing an article on Effective Altruism (EA) with an express goal to dig up every piece of negative information they can find. They contact Émile Torres, David Gerard, and Timnit Gebru, collect evidence about Sam Bankman-Fried, the OpenAI board blowup, and Pasek's Doom, start calling Astral Codex Ten (ACX) readers to ask them about rumors they'd heard about affinity between Effective Altruists, neoreactionaries, and something called TESCREAL. They spend hundreds of hours over six months on interviews and evidence collection, paying Émile and Timnit for their time and effort. The phrase "HBD" is muttered, but it's nobody's birthday.

... and then proceeds to try to build an analogy between the NYT and Ben's post.

When Nonlinear posted their reply, there was some controversy over whether the section "Sharing Information on Ben Pace" was okay or not. I defended it because, rightly or wrongly, Nonlinear had their reputation destroyed to a significant extent by Ben's post, and I thought and still think that gives them some leeway when it comes to emotional appeals in their post. Nothing like that applies here, yet this post is written like a novel rather than an impartial analysis. (To be clear, there's much more to this than just the central analogy, but I'm not going to get into details.)

I think emotional manipulation is always bad because it's orthogonal to truth, but it's super extra bad when the topic under discussion is this controversial and emotionally loaded in the first place. I don't expect many to agree with this, but I honestly think that may be because you don't realize the extent to which you're being manipulated, since this is more subtle than what Nonlinear did. I'd argue that makes it worse; Nonlinear's section on Ben actually told the readers what it was doing. Again, I don't expect much agreement here, but I think incentivizing stuff like this is a bad idea, and it's especially bad if we only incentivize it when the author doesn't admit what they're doing.

The post itself also seems premature for the reasons habryka already mentioned. Both Ben's post and Nonlinear's reply swung public opinion widely; why do we expect that Ben's reply won't do the same? And if it does, then it seems unhelpful to make a post framing it as a settled matter. It's also not true that they're unambiguous falsehoods, since most of them leave plenty of room for argument, as demonstrated by people still arguing about them.
