
I'm unsatisfied with how little dialogue I've seen about EA capturing itself as a self-perpetuating but misaligned engine.

Doing Actual Thing

Zvi:

We need to culturally establish the act of Doing Actual Thing.

He extensionally describes what this means in the post, but I think the old Eliezer material on "trying to try", as opposed to trying to win, is also relevant. From both Methods and the Sequences one can learn that "trying" in many cases means "getting a head start at generating excuses for when you fail". This will form the core of my interpretation of what Zvi means by "Doing Actual Thing".

In short, Doing Actual Thing is when you're razor-focused on winning and you've cleanly excised the urge to get a head start on generating excuses just in case you fail (note this is different from having backup plans and putting nonzero credence on futures that include failure; generating excuses is about mitigating the emotional impact of failure). Pressures toward Not Doing Actual Thing include:

  • Will I still have a social life if I lose?
  • Will I still have a job if I lose?
  • Can I retain status if I lose?

Another class of pressures requires me to recapitulate the great covid lesson of social reality vs. physical reality (I forget which rationalist citation goes here; I'm building off of a discussion at NYC Solstice): we recall that the CDC does not tell officials which policies keep constituents safe and healthy (physical reality), only which policies keep officials out of trouble (social reality). I.e., you can fail ("my constituents got sick and died") and save face ("but according to the CDC recommendations, I tried my best").

Are EAs immune to social reality problems? I'll suggest not by enumerating more pressures toward Not Doing Actual Thing:

  • Sufficient in-movement, high-status epistemic pressure says this intervention is plausibly solid, so it'll be an understandable mistake if we're wrong to fund it. I don't have to double-check my work, because a bunch of high-status people agree with my first pass.
  • I've found a vulnerability in my grantmaker's epistemics (halo effect, groupthink, deference, etc.) which I can exploit to lock down a salary for research that I know isn't going to work, while the thing I think is likelier to work doesn't tell as plausible a story to grantmakers.

Some spicier "citation needed" and "big if true" extensions I can make (one is a thinly veiled reference to real life; the other is made up wholesale):

  • I wanted to work at StatusAI because they're nominally about safety, but I'm starting to think they're up to nothing but capabilities. Working there remains a top career move.
  • A high-status org staked its reputation and spent the goodwill of a billionaire on an intervention they would later discover had deep research methodology issues and was entirely mistaken. It would be deeply costly to walk it back.

Let's consider "trying to try" or Not Doing Actual Thing in the non-xrisk case and the xrisk case. 

In the non-xrisk case, recall fuzzies, utilons, and status. If I set out to convert my USD into utilons for the poor and the animals, and the plan doesn't work (the intervention's metrics were based on faulty reporting, for instance), I can always tell myself the money came out of my fuzzies or status budget. Or if I'm a salaried direct worker and I know my project isn't working, there might be a variety of reasons to swallow the sunk cost and chalk it up to "this job I'm currently working" rather than "this project I'm really excited about" or "this strategy I have to win".

In the xrisk case, either you die along with literally everyone else within your lifetime, or literally everyone dies after your lifetime. In the former case, having a job, status in your community, and friends who respect you is a bit of a consolation prize and a nice way to go out. In the latter case, you don't really have skin in the game, so how hard you try to win while you're alive is the razor between stated preferences ("I care about others") and revealed preferences ("...but as long as literally everyone dies after I'm dead of old age, it's enough to feel like I tried").

Doing Actual Thing is about winning. It's about increasing welfare for animals, making the poor less poor, preventing literally everyone from dying; heck, it might be about preventing a few people from dying. This means believing the truest things you possibly can, thinking clearly about your values, and inferring your goals from your beliefs and values with respect to levers, opportunities, and your personal hubris budget. The point is that social reality should be downstream of this beliefs-values-goals calculus. Yes, communities and institutions are instrumentally useful; yes, rent and food add noise, bias, and friction that you have to outsmart; but these are the social reality part, not the physical reality of Doing Actual Thing. If you're not careful, the social reality parts can become upstream! This is a difficult failure mode to build up antibodies against.

Organizations vs. Getting Stuff Done

In my sordid past as a soup kitchen and infoshop denizen, there was a zine called "Organizations vs. Getting Sh!t Done" that I have treasured and cited to many EAs on Discord every time the topic of community building comes up. In it, author Gillis contrasts "organization" with "an organization": the former is a generic site of intelligent coordination, while the latter is a simplification and contextualization of myriad individuals' flavors of a set of goals, owned by a discrete set of people and legitimized by formal processes. Gillis identifies four failure modes of organizations (in the "an organization" sense). Two of them I would mostly file under information bottlenecks of consensus processes: most group decision-making protocols are not great social choice functions, and they fudge or smear over disagreements and contradictions internal to the group. A third concerns informal power, which I will not get into here. I think the fourth, mental laziness, sheds the most light on the current post.

momentum and peer pressure are not particularly strong compared to true motivation, they’re often driven by loose biological instincts and can be randomly overridden by other base instincts. Worse, at core momentum and peer pressure are ethically corrosive tools in that they appeal to and build habits rather than active vigilance.

In other words, Gillis thinks a structural property of having institutions at all is that one habituates substandard practices by growing reliant on them for motivation, and that the externalities of this dependence include reduced vigilance, which I would interpret as vulnerable epistemics and degraded focus/resolve on winning.

A further, rather obvious critique of organizations, one I don't think appears in the essay, is that people focus on organizational upkeep rather than the goal the organization was formed to win at. An intuition pump from my sordid past is the police brutality cause area in the US. Stated-wise, the goal was the three Ds: disempower, disarm, and disband. Revealed-wise, you would observe that the goal looked like having meetings, writing newsletters, having more meetings, and trying to poach members from the minimum wage scene or soup kitchens. What's more, the acts of organizational upkeep quickly became a kind of salon/church, regardless of how genuinely or even rigorously the members believed the three Ds were the optimal policy. In a way I think we were all trying - forming an org was just an intuitive step - but our attention was consumed entirely by "how do we keep the org afloat?", with little spared for "what leverage can we find or invent?" (or, for that matter, "do we need to iterate on our policy proposal?").

Some basic effective altruist examples

  • I'm a student at Miskatonic University; I start EA Miskatonic and build up my reputation in the dining hall as someone who wants to help others believe true things and think clearly about their values.
  • I'm in IT; I apply to work at StatusAI and save up a runway so I can quit my job and do Eleuther projects.
  • I'm a five-figure donor and I dump a bunch of cash into known, managed funds and I ask people in my network if they're cash-bottlenecked on an idea of theirs, and I smash that bottleneck.

Gillis: 

As with the world we’d like to see, we need to build a movement where the overall focus is on discrete projects of limited lifespan–only sometimes augmented or assisted in small, defined ways by persistent groups, themselves with starkly limited license. With people fluidly overlapping and transitioning between such projects as need be rather than building identities and/or territories in relation to them. 

Zvi again: 

The more EA funds are giving to other EA funds and those funds are about expanding EA, the more one should worry it’s a giant circle of nothing.

Comments

The "Organizations vs. Getting Stuff Done" post is about anarchist political activism. This is a rather unusual area -- under normal circumstances organizations are a relevant tool to aid in getting things done, not an obstacle to it.

To partially rehash what was on Discord and partially add more:

  • I don't think saying that institutions have benefits and are effective is at all an argument against specific drawbacks and failure modes. Things that have pros can also have cons; pros and cons can coexist, etc.
  • I agree that a portion of the criticism is moot if you don't, on priors, think hierarchy and power are intrinsically risky or disvaluable, but I think having those priors directs one's attention to problems and failure modes that people without those priors would be wise to learn from. Moreover, if you look at the four points in the article, I don't think those priors are critical for any of them.
    • Specifically, I think a variety of organizations are interested in trading off the inefficiency problems of bottom-up structures against the information bottleneck problems of top-down ones. People who are motivated by values to reject the top-down side would intuitively have learned lessons about how to make the bottom-up side function.
  • If I find the name of the individual, I'll return to the thread to make my point about the German scientist who may have prevented the Nazis from getting nukes by going around and talking to people (not by going through institutional channels).

The self-perpetuation issue has been discussed before, e.g. here.

great link btw, thanks! 

Another class of pressures requires me to recapitulate the great covid lesson of social reality vs. physical reality (I forget which rationalist citation goes here; I'm building off of a discussion at NYC Solstice):

The citation you're looking for is https://putanumonit.com/2021/04/03/monastery-and-throne/ particularly the section titled "Coordinating Social Reality"
