RobBensinger (Machine Intelligence Research Institute)

Every aspect of that summary of how MIRI's strategy has shifted seems misleading or inaccurate to me.

I find myself agreeing with Nora on temporary pauses - and I don't really understand the model by which a 6-month, or a 2-year, pause helps, unless you think we're less than 6 months, or 2 years, from doom.

This doesn't make a lot of sense to me. If we're 3 years away from doom, I should oppose a 2-year pause because of the risk that (a) it might not work and (b) it will make progress more discontinuous?

In real life, if smarter-than-human AI is coming that soon then we're almost certainly dead. More discontinuity implies more alignment difficulty, but on three-year timelines we have no prospect of figuring out alignment either way; realistically, it doesn't matter whether the curve we're looking at is continuous vs. discontinuous when the absolute amount of calendar time to solve all of alignment for superhuman AI systems is 3 years, starting from today.

I don't think "figure out how to get a god to do exactly what you want, using the standard current ML toolbox, under extreme time pressure" is a workable path to humanity surviving the transition to AGI. "Governments instituting and enforcing a global multi-decade pause, giving us time to get our ducks in order" does strike me as a workable path to surviving, and it seems fine to marginally increase the intractability of unworkable plans in exchange for marginally increasing the viability of plans that might work.

If a "2-year" pause really only buys you six months, then that's still six months more time to try to get governments to step in.

If a "2-year" pause buys you zero time in expectation, and doesn't help establish precedents like "now that at least one pause has occurred, more ambitious pauses are in the Overton window and have some chance of occurring", then sure, 2-year moratoriums are useless; but I don't buy that at all.

Daniel Wyrzykowski replies:

The contract is signed for when bad things and disagreements happen, not for when everything is going well. In my opinion “I had no contract and everything was good” is not as good an example as “we didn’t have a contract, had a major disagreement, and everything still worked out” would be.

Even though I hate bureaucracy and admin work and I prefer to skip as much as reasonable to move faster, my default is to have a written agreement, especially if working with a given person/org for the first time. Generally, the weaker party should have the final say on forgoing a contract. This is especially true the more complex and difficult the situation is (e.g. living/travelling together, being in a romantic relationship).

I agree with the general view that both signing and not signing have pros and cons, and sometimes it's better to not sign and avoid the overhead.

Elizabeth van Nostrand replies:

I feel like people are talking about written records like they're a huge headache, but they don't need to be. When freelancing I often negotiate verbally, then write an email with the terms to the client, who can confirm or correct them. I don't start work until they've confirmed acceptance of some set of terms. This has enough legal significance that it lowers my business insurance rates, and it takes seconds if people are genuinely on the same page.

What my lawyer parent taught me was that contracts can't prevent people from screwing you over (that's impossible). At my scale, and probably in most cases described here, the purpose of a contract is to prevent misunderstandings between people of goodwill. And it's easy to do notably better than Nonlinear did here.

Linda responds:

This is a good point. I was thinking in terms of legal vs informal, not in terms of written vs verbal. 

I agree that having something written down is basically always better. Both for clarity, as you say, and because people's memories are not perfect. And it has the added bonus that if there is a conflict, you have something to refer back to.

Duncan Sabien replies:

[...]

While I think Linda's experience is valid, and probably more representative than mine, I want to balance it by pointing out that I deeply, deeply, deeply regret taking a(n explicit, unambiguous, crystal clear) verbal agreement, and not having a signed contract, with an org pretty central to the EA and rationality communities.  As a result of having the-kind-of-trust that Linda describes above, I got overtly fucked over to the tune of many thousands of dollars and many months of misery and confusion and alienation, and all of that would've been prevented by a simple written paragraph with two signatures at the bottom.

(Such a paragraph would've either prevented the agreement from being violated in the first place, or would at least have made the straightforward violation that occurred less of a thing that people could subsequently spin webs of fog and narrativemancy around, to my detriment.)

As for the bit about telling your friends and ruining the reputation of the wrongdoer ... this option was largely NOT available to me, for fear-of-reprisal reasons as well as not wanting to fuck up the subsequent situation I found myself in, which was better, but fragile.  To this day, I still do not feel like it's safe to just be fully open and candid about the way I was treated, and how many norms of good conduct and fair dealings were broken in the process.  The situation was eventually resolved to my satisfaction, but there were years of suffering in between.

[...]

Cross-posting Linda Linsefors' take from LessWrong:

I have worked without legal contracts for people in EA I trust, and it has worked well.

Even if all the accusations against Nonlinear are true, I still have pretty high trust for people in EA or LW circles, such that I would probably agree to work with no formal contract again.

The reason I trust people in my ingroup is that if either of us screws over the other person, I expect the victim to tell their friends, which would ruin the reputation of the wrongdoer. For this reason both people have a strong incentive to act in good faith. On top of that, I'm willing to take some risk to skip the paperwork.

When I was a teenager I worked a bit under legally very sketchy circumstances. They would send me to work in some warehouse for a few days, and draw up the contract for that work afterwards, including me falsifying the date for my signature. This is not something I would have agreed to with a stranger, but the owner of the company was a friend of my parents, and I trusted my parents to slander them appropriately if they screwed me over.

I think my point is that this is not uncommon, because doing everything by the book is so much overhead, and sometimes it's not worth it.

I think being able to leverage reputation-based and/or ingroup-based trust is immensely powerful, and not something we should give up on.

For this reason, I think the most serious sin committed by Nonlinear is their alleged attempt to silence critics.
Update to clarify: This is based on the fact that people have been scared of criticising Nonlinear. Not based on any specific wording of any specific message.
Update: On reflection, I'm not sure if this is the worst part (if all accusations are true). But it's pretty high on the list.

I don't think making sure that no EA ever gives paid work to another EA without a formal contract will help much. The most vulnerable people are those new to the movement, who are exactly the people who will not know what the EA norms are anyway. An abusive org can still recruit people without contracts and just tell them this is normal.

I think a better defence mechanism is to track who is trustworthy or not, by making sure information like this comes out. And it's not like having a formal contract prevents all kinds of abuse.

Update based on responses to this comment: I do think having a written agreement, even just an informal expression of intentions, is almost always strictly superior to not having anything written down. When writing this comment I was thinking in terms of formal contract vs informal agreement, which is not the same as verbal vs written.

I'd be happy to talk with you way more about rationalists' integrity fastidiousness, since (a) I'd expect this to feel less scary if you have a clearer picture of rats' norms, and (b) talking about it would give you a chance to talk me out of those norms (which I'd then want to try to transmit to the other rats), and (c) if you ended up liking some of the norms then that might address the problem from the other direction.

In your previous comment you said "it doesn’t seem obviously unethical to me for Nonlinear to try to protect its reputation", "That’s a huge rationalist no-no, to try to protect a narrative", and "or to try to affect what another person says about you". But none of those three things are actually rat norms AFAIK, so it's possible you're missing some model that would at least help it feel more predictable what rats will get mad about, even if you still disagree with their priorities.

Also, I'm opposed to cancel culture (as I understand the term). As far as I'm concerned, the worst person in the world deserves friends and happiness, and I'd consider it really creepy if someone said "you're an EA, so you should stop being friends with Emerson and Kat, never invite them to parties you host or discussion groups you run, etc." It should be possible to warn people about bad behavior without that level of overreach into people's personal lives.

(I expect others to disagree with me about some of this, so I don't want "I'd consider it really creepy if someone did X" to shut down discussion here; feel free to argue to the contrary if you disagree! But I'm guessing that a lot of what's scary here is the cancel-culture / horns-effect / scapegoating social dynamic, rather than the specifics of "which thing can I get attacked for?". So I wanted to speak to the general dynamic.)

Emerson approaches me to ask if I can set up the trip. I tell him I really need the vacation day for myself. He says something like “but organizing stuff is fun for you!”.

[...]

She kept insisting that I’m saying that because I’m being silly and worry too much and that buying weed is really easy, everybody does it.

😬 There's a ton of awful stuff here, but these two parts really jumped out at me. Trying to push past someone's boundaries by imposing a narrative about the type of person they are ('but you're the type of person who loves doing X!' 'you're only saying no because you're the type of person who worries too much') is really unsettling behavior.

I'll flag that this is an old remembered anecdote, and those can be unreliable, and I haven't heard Emerson or Kat's version of events. But it updates me, because Chloe seems like a pretty good source and this puzzle piece seems congruent with the other puzzle pieces.

E.g., the vibe here matches something that creeped me out a lot about Kat's text message to Alice in the OP, which is the apparent attempt to corner/railroad Alice into agreement via a bunch of threats and strongly imposed frames, followed immediately by Kat repeatedly stating as fact that Alice will of course agree with Kat: "[we] expect you will do the same moving forward", "Sounds like you've come to the same conclusion", "It sounds like we're now on the same page about this".

Working and living with Nonlinear had me forget who I was, and lose more self worth than I had ever lost in my life. I wasn’t able to read books anymore, nor keep my focus in meetings for longer than 2 minutes, I couldn’t process my own thoughts or anything that took more than a few minutes of paying attention.

😢 Jesus.

Maybe I’m wrong— I really don’t know, and there have been a lot of “I don’t know” kind of incidents around Nonlinear, which does give me pause— but it doesn’t seem obviously unethical to me for Nonlinear to try to protect its reputation. 

I think it's totally normal and reasonable to care about your reputation, and there are tons of actions someone could take for reputational reasons (e.g., "I'll wash the dishes so my roommate doesn't think I'm a slob", or "I'll tweet about my latest paper because I'm proud of it and I want people to see what I accomplished") that are just straightforwardly great.

I don't think caring about your reputation is an inherently bad or corrupting thing. It can tempt you to do bad things, but lots of healthy and normal goals pose temptation risks (e.g., "I like food, so I'll overeat" or "I like good TV shows, so I'll stay up too late binging this one"); you can resist the temptation without stigmatizing the underlying human value.

In this case, I think the bad behavior by Nonlinear also would have been bad if it had nothing to do with "Nonlinear wants to protect its reputation".

Like, suppose Alice honestly believed that malaria nets are useless for preventing malaria, and Alice was going around Berkeley spreading this (false) information. Kat sends Alice a text message saying, in effect, "I have lots of power over you, and dirt I could share to destroy you if you go against me. I demand that you stop telling others your beliefs about malaria nets, or I'll leak true information that causes you great harm."

On the face of it, this is more justifiable than "threatening Alice in order to protect my org's reputation". Hypothetical-Kat would be fighting for what's true, on a topic of broad interest where she doesn't stand to personally benefit. Yet I claim this would be a terrible text message to send, and a community where this was normalized would be enormously more toxic than the actual EA community is today.

Likewise, suppose Ben was planning to write a terrible, poorly-researched blog post called Malaria Nets Are Useless for Preventing Malaria. Out of pure altruistic compassion for the victims of malaria, and a concern for EA's epistemics and understanding of reality, Hypothetical-Emerson digs up a law that superficially sounds like it forbids Ben writing the post, and he sends Ben an email threatening to take Ben to court and financially ruin him if he releases the post.

(We can further suppose that Hypothetical-Emerson lies in the email 'this is a totally open-and-shut case, if this went to trial you would definitely lose', in a further attempt to intimidate and pressure Ben. Because I'm pretty danged sure that's what happened in real life; I would be amazed if Actual-Emerson actually believes the things he said about this being an open-and-shut libel case. I'm usually reluctant to accuse people of lying, but that just seems to be what happened here?)

Again, I'd say that this Hypothetical-Emerson (in spite of the "purer" motives) would be doing something thoroughly unethical by sending such an email, and a community where people routinely responded to good-faith factual disagreements with threatening emails, frivolous lawsuits, and lies, would be vastly more toxic and broken than the actual EA community is today.

I'm not sure I've imagined a realistic justifying scenario yet, but in my experience it's very easy to just fail to think of an example even though one exists. (Especially when I'm baking in some assumptions without realizing I'm baking them in.)
