Susan II

Vulnerable EAs also want to follow only good norms while disposing of the bad ones!

If you offer people the heuristic "figure out if it's reasonable and only obey it if it is," they will often fail.

You mention clear-cut examples, but oftentimes they will be very grey, or they will seem grey from the inside. There may be several strong arguments why the norm isn't a good one; the bad actor will be earnest, apologetic, and trying to let you have your norm even though they don't believe in it. They may seem like a nice, reasonable person trying to do the right thing in an awkward situation.

Following every norm would be quite bad. Socially enforced gendered cosmetics are disgusting and polyamory is pretty nifty.

Nonetheless, we must recognize that the same process that produces "polyamory is pretty nifty" will also produce in many people: "there's no reason I can't have a friendly relationship with my employer rather than an adversarial one" (these are the words they will use to describe the situation while living in their employer's house) and "I can date my boss if we are both ethical about it."

We must not look down on these people as though we'd never fall for it - everyone has things they'd fall for, no matter how smart they are.

My suggestion is to outsource. Google your situation. Read Reddit threads. Talk to friends, DM people who have the same job as you (and who you are certain have zero connection to your boss) - chances are they'll be happy to talk to someone in the same position.

A few asides, noting that these are basics and incomplete.

  • If someone uses the phrase "saving the world" with anything approaching consistency, run. Legitimate people who are working on legitimate problems do not rely on this drama. The more exciting the narrative and the more prominent a role the leader plays in it, the more skeptical you should be.
    • (Ah, you might say, but facts can't be too good to be true: they are simply true or false. My answer to that would be the optimizer's curse.)
  • If someone compares themselves to Professor Quirrell, run. In a few years, we'll have enough abusers who identified with him to fill a scrapbook.
    • If there's a dumb enough schmuck in EA to compare themselves to Galileo/da Vinci, exit calmly while giggling.
  • If someone is willing to break a social contract for utilitarian benefit, assume they'll break other social contracts for personal benefit, e.g. sex.
  • If you are a somewhat attractive woman with unusual epistemic rigor, assume people will try to take advantage of that.
    • If someone wants unusual investment from you in a relationship, outsource.
    • If they say they're uncomfortable with how much you talk to other people, this must be treated as an attempt to subvert you.
  • Expect to hear "I have a principled objection to lying and am utterly scandalized whenever someone does it" many times, and be prepared to catch that person lying.
  • If someone pitches you on something that makes you uncomfortable, but for which you can't figure out your exact objection - or if their argument seems wrong but you don't see the precise hole in their logic - it is not abandoning your rationality to listen to your instinct.
  • If someone says "the reputational risks to EA of you publishing this outweigh the benefits of exposing x's bad behavior. If there's even a 1% chance that AI risk is real, then this could be a tremendously evil thing to do," nod sagely, then publish that they said that.
  • Those last two points need a full essay to be conveyed well but I strongly believe them and think they're important.

I think censorship would be a bad choice here, because the EA Forum hasn't discussed these concepts previously (in any routine way, at least - I'm sure there is a screed or two that could be dug up from a mound of downvotes) and is unlikely to in the future.

I would agree that race/IQ debates on the EA forum are unlikely to produce anything of value. But it's my experience that if you have free discussion rights and one banned topic, that causes more issues than just letting people say their piece and move on.

I'd also agree that EA isn't meant to be a social club for autists - but from a cynical perspective, the blithely curious and alien-brained are also a strategic resource and snubbing them should be avoided when possible.

If people are still sharing takes on race/IQ two weeks from now, I think that would be a measurable enough detraction from the goal of the forum to support the admins telling them to take it elsewhere. But I would be surprised if it were an issue.

  • "There is a racial gap on IQ test scores and it's really disturbing. We're working really hard to fix it and we will fix it one day - but it's a tough complicated problem and no one's sure what angle to attack it from."
  • "Black people score worse than white people on IQ tests."
  • "Black people have lower IQs than white people."
  • "Black people are dumber than white people."

The first statement would be viewed positively by most, the second would get a raised eyebrow and an "And what of it?", the third is on thin fucking ice, and the fourth is utterly unspeakable.

2-4 aren't all that different in terms of fact-statements, except that IQ ≠ intelligence, so some accuracy is lost moving to the last. It's just that the first makes it clear which side the speaker is on, the second states an empirical finding, and the next two look like they're... attacking black people, I think?

I would consider the fourth a harmful gloss - but it doesn't state that there is a genetic component to IQ; that's only in the reader's eye. This makes sense in the context of Bostrom posing outrageous but Arguably Technically True things to inflame the reader.

  • "Poor people are dumber than rich people."

I think people would be mad at this, because they feel like poor people are being attacked and want to defend them. They would think, 'Oh, you're saying that rich people got there by being so smart and industrious, and if some single mom dies of a heart attack at 30 it's a skill issue.' But no one said that.

  • "People who go to worse schools are dumber than those who go to better schools."

And this would be uncontested.

  • "Vaccines cause an increased rate of heart cancer."

If someone says that, you'd probably assume they were pushing an antivax agenda and raise an eyebrow, even if they can produce a legitimate study showing it. (I don't think there is one; I made up that example.) So I am sympathetic to being worried about agenda-pushing that consists of stating selectively true things.

Man, this shit is exhausting. Maybe CEA has the right idea here: they disavow the man's words without disavowing the man and then go back to their day.

I worry that most people here don't have timelines, just vibes.

And when AI does something scary, they go, "Look, I was espousing doomy vibes and then AI did something that looks doomy! Therefore I am worth paying more attention to!"

Or, "Hm, I was more into global development but the vibes are AI now. Maybe I should pull my old doomist uniform out of the closet."

If that sounds like something you're doing, reader, maybe reconsider?

I feel like this is pretty important. I think this is basically fine if it's a billionaire who thinks CEA needs real estate, and less fine if it is incestuous funding from another EA group.

As opposed to speaking with Congressmen, is "prepare a scientific report and meet with the NIH director/his advisors" a plausible mechanism at all for shutting down the specific research grant Soares linked?

Or if not, becoming NIH peer reviewers?

And—despite valiant effort!—we've been able to do approximately nothing.

Why not?

I apologize for an amateur question but: what all have we tried and why has it failed?