Raemon
I work for Habryka, so my opinion here should be discounted. (for what it's worth I think I have disagreed with some of his other comments this week, and I think your post did update me on some other things, which I'm planning to write up). But re:

incorrectly predicted what journalists would think of your investigative process, after which we collaborated on a hypothetical to ask journalists, all of whom disagreed with your decision.

this seems egregiously inaccurate to me. Two of the three journalists said some flavor of "it's complicated" on the topic of whether to delay publication, for very similar reasons to what Habryka has mentioned. It seems like at best it was a wash, and it felt pretty weird in the OP that you wrote them up as if they supported your thesis.

What’s wrong with “make a specific targeted suggestion for a specific person to do the thing, with an argument for why this is better than whatever else the person is doing?”, like Linch suggests?

This can still be hard, but I think the difficulty lives in the territory, and it's an achievable goal for someone who follows the EA Forum and pays attention to which organizations do what.

It seemed useful to dig into "what actually are the useful takeaways here?", to try and prompt some more action-oriented discussion.

The particular problems Elizabeth is arguing for avoiding:

  • Active suppression of inconvenient questions
  • Ignore the arguments people are actually making
  • Frame control / strong implications not defended / fuzziness
  • Sound and fury, signifying no substantial disagreement
  • Bad sources, badly handled
  • Ignoring known falsehoods until they're a PR problem

I left off "Taxing Facebook" because it feels like the wrong name (since it's not really platform specific). I think the particular behavior she was commenting on there was something like "persistently bringing up your pet issue whenever a related topic comes up."

Many of the behaviors here are noteworthy in that a single instance of them isn't necessarily that bad. It can be reasonable to bring up your pet issue once or twice, but if there's a whole crowd of people who end up doing it every single time, it becomes costly enough to tax conversation and have systematic effects.

"Replying to an article as if it made a claim it didn't really make" is likewise something that's annoying if it just came up once, but adds up to a systemic major cost when either one person or a crowd of people are doing it over and over.

I'm not actually sure what to do about this, but it seemed like a useful frame for thinking about the problem.

Is your concrete suggestion/ask "get rid of the karma requirement?"

Quick note: I don't think there's anything wrong with asking "are you an English speaker" for this reason, I'm just kinda surprised that that seemed like a crux in this particular case. Their argument seemed cogent, even if you disagreed with it.

The comments/arguments about the community health team mostly make me think something more like "it should change its name" than that it should be disbanded. I think it's good to have a default whisper network to report things to and surreptitiously check in with, even if they don't really enforce/police things. If the problem is that people have a false sense of security, I think there are better ways to avoid that problem.

Just maintaining the network is probably a fair chunk of work.

That said – I think one problem is that the comm-health team has multiple roles. I'm honestly not sure I understand all the roles they consider themselves to have taken on. But it seems likely to me that at least some of those roles are "try to help individuals" and at least some of those roles are more like "protect the ecosystem as whole" and "protect the interests of CEA in particular", and those might come into conflict with the "help individuals" one. And it's hard to tell from the outside how those tradeoffs get made.

I know a person who maintained a whisper network in a local community, who I'd overall trust more than CEA in that role, because basically their only motivation was "I want to help my friends and have my community locally be safe." And in some sense this is more trustworthy than "also, I want to help the world as a whole flourish", because there's fewer ways for them to end up conflicted or weighing multiple tradeoffs.

But, I don't think the solution can necessarily be "well, give the Whisper Network Maintenance role to less ambitious people, so that their motives are pure", because, well, less ambitious people don't have as high a profile and a newcomer won't know where to find them. 

In my mind this adds up to "it makes sense for CEA to keep a public node of a whisper network running, but it should be clearer about its limitations, and upfront that there are some limits to what people can/should trust/expect from it." (And, ideally, there should maybe be a couple of different overlapping networks, so in situations where people don't trust CEA, they have alternatives. i.e. Healthy Competition is good, etc.)

But a glum aphorism comes to mind: the frame control you can expose is not the true frame control.

I think it's true that frame control (or, manipulation in general) tends to be designed to make it hard to expose, but, I think the actual issue here is more like "manipulation is generally harder to expose than it is to execute, so, people trying to expose manipulation have to do a lot of disproportionate work."

Part of the reason I think it was worth Ben/Lightcone prioritizing this investigation is as a retroactive version of "evaluations."

Like, it is pretty expensive to "vet" things. 

But if orgs know that practices that lead to people getting hurt (whether intentionally or not) are reasonably likely to eventually come to light, they are more likely to proactively put effort into avoiding that sort of outcome.


(crossposted from LessWrong)

This is a pretty complex epistemic/social situation. I care a lot about our community having some kind of good process for aggregating information, so that individuals can integrate it, update, and decide what to do with it.

I think a lot of disagreements in the comments here and on LW stem from people having an implicit assumption that the conversation here is about "should [any particular person in this article] be socially punished?". In my preferred world, before you get to that phase there should be at least some period focused on "information aggregation and Original Seeing."

It's pretty tricky, since in the default world, "social punishment?" is indeed the conversation people jump to. And in practice, it's hard to have a conversation focused purely on epistemic evaluation without getting into judgment, or without speech acts becoming "moves" in a social conflict.

But, I think it's useful to at least (individually) inhabit the frame of "what is true, here?" without asking questions like "what do those truths imply?".

With that in mind, some generally useful epistemic advice that I think is relevant here:

Try to have Multiple Hypotheses

It's useful to have at least two, and preferably three, hypotheses for what's going on in cases like this. (Or, generally whenever you're faced with a confusing situation where you're not sure what's true). If you only have one hypothesis, you may be tempted to shoehorn evidence into being evidence for/against that hypothesis, and you may be anchored on it.

When I have at least two hypotheses (and, like, "real ones" that both seem plausible), I find it easier to take in new bits of data and then ask "okay, how would this fit into two different plausible scenarios?", which activates my "actually check" process.

I think three hypotheses are better than two, because with two you can still end up with all the evidence weighing in on a single one-dimensional spectrum. Three hypotheses a) help you do 'triangulation', and b) help remind you to actually ask "what frame should I be using here? what additional hypotheses might I not have thought of yet?"

Multiple things can be going on at once

If two people have a conflict, it could be the case that one person is at fault, or both people are at fault, or neither (i.e. it was a miscommunication or something).

If one person does an action, it could be true, simultaneously, that:

  • They are somewhat motivated by [Virtuous Motive A]
  • They are somewhat motivated by [Suspicious Motive B]
  • They are motivated by [Random Innocuous Motive C]

I once was arguing with someone, and they said "your body posture tells me you aren't even trying to listen to me or reason correctly, you're just trying to do a status monkey smackdown and put me in my place." And, I was like "what? No, I have good introspective access and I just checked whether I'm trying to make a reasoned argument. I can tell the difference between doing The Social Monkey thing and the "actually figure out the truth" thing."

What I later realized is that I was, like, 65% motivated by "actually wanna figure out the truth", and like 25% motivated by "socially punish this person" (which was a slightly different flavor of "socially punish" than, say, when I'm having a really tribally motivated Facebook fight, so I didn't recognize it as easily).

Original Seeing vs Hypothesis Evaluation vs Judgment

OODA Loops include four steps: Observe, Orient, Decide, Act

Often people skip over steps. They think they've already observed enough and don't bother looking for new observations, or it doesn't even occur to them to do that explicitly. (I've noticed that I often skip to the orient step, where I figure out "how do I organize my information? what sort of decision am I about to make?", without actually doing the observe step, where I'm purely focused on gathering raw data.)

When you've already decided on a schema-for-thinking-about-a-problem, you're more likely to take new info that comes in and put it in a bucket you think you already understand.

Original Seeing is different from "organizing information".

They are both different from "evaluating which hypothesis is true"

They are both different from "deciding what to do, given Hypothesis A is true"

Which is in turn different from "actually taking actions, given that you've decided what to do."

I have a sort of idealistic dream that someday, a healthy rationalist/EA community could collectively be capable of raising hypotheses without people anchoring on them, and of sharing information in a way people can robustly trust won't get automatically leveraged into a conflict/political move. I don't think we're close enough to that world to advocate for it in-the-moment, but I do think it's still good practice for people individually to spend at least some of their time in each node of the OODA loop, and to track which node they're currently focusing on.
