
Grue_Slinky

237 karma · Joined Jul 2017

Bio

Zbetna Fvapynver [rot13]

Comments (10)

Seconded: great post with good questions, but soliciting anonymous recommendations (even if half-baked) also seems valuable. To piggyback on John_Maxwell's comment above: the EA leaders sound like they might have contradictory opinions, but it's possible they collectively agree on some more nuanced position. This could be clarified if we heard what they would actually have the movement do differently.

When I read Movement Collapse Scenarios, it struck me that EA is already pretty much on a Pareto frontier, in the sense that I don't think we can improve anything about the movement without negatively affecting something else (or risking doing so). From that post, it seemed to me that most steps we could take to reduce the risk of Dilution increase the risk of Sequestration, and vice versa.

And of course, being on a Pareto frontier doesn't by itself mean the movement is anywhere near the right point(s), because the tradeoffs we make certainly matter. It's just that when we say "EA should be more X" or "EA should have less Y", this is often meaningless if taken literally, unless we're willing to make the tradeoff(s) it entails.

Nice! I like these kinds of synthesis posts, especially when they try to be comprehensive. One could also add:

EA as a "gap-filling gel" within the context of existing society and its altruistic tendencies (I think I heard this general idea (not the name) at Macaskill's EAG London closing remarks, but the video isn't up yet so I'm not sure and don't want to put words in his mouth). The idea is that there's already lots of work in:

  • Making people healthier
  • Reducing poverty
  • Animal welfare
  • National/international security and diplomacy (incl. nukes, bioweapons)

And if none of these existed, "doing the most good in the world" would be an even more massive undertaking than it might already seem, e.g. we'd likely "start" with inventing the field of medicine from scratch.

But a large amount of altruistic effort does exist, it's just that it's not optimally directed when viewed globally, because it's mostly shaped by people who only think about their local region of it. Consequently, altruism as a whole has several blind spots:

  • Making people healthier and/or reducing poverty in the developing world through certain interventions (e.g. bednets, direct cash transfers) that turn out to work really well
  • Animal welfare for factory-farmed and/or wild animals
  • Global security from technologies whose long-term risks are neglected (e.g. AI)

And the role of EA is to fill those gaps within the altruistic portfolio.


As an antithesis to that mode of thinking, we could also view:

EA as a foundational rethinking of our altruistic priorities, to the extent we view those priorities as misdirected. Examples:

  • Some interventions which were proposed with altruistic goals in mind turn out to be useless or even net-negative when scrutinized (e.g. Scared Straight)
  • Many broader trends which seem "obviously good", such as economic growth or technological progress, look neutral, uncertain, or even net-negative in light of certain longtermist thinking

The stated reasoning for the 2nd-place prize doesn't say anything about the actual substance of the paper. Surely it didn't win that prize based on style alone?

Not sure Greg officially approves of this, but there's also an octagon-shaped common room which we typically call "The Octagon". If you want to help financially and also troll all of us to no end, you could stipulate that we rename it to some other shape, e.g. "The Triangle".

For reference, some other lists of AI safety problems that can be tackled by non-AI people:

Luke Muehlhauser's big (but somewhat old) list: "How to study superintelligence strategy"

AI Impacts has made several lists of research problems

Wei Dai's "Problems in AI Alignment that philosophers could potentially contribute to"

Kaj Sotala's case for the relevance of psychology/cog sci to AI safety (I would add that Ought is currently testing the feasibility of IDA/Debate by doing psychological research)

As one of the people you mentioned (I'm flattered!), I've also been curious about this.

As for my own anecdata, I basically haven't applied yet. Technically I did apply and get declined last round, but a) it was a fairly low-effort application, since I didn't really need the money at the time, b) I said as much on the application, c) I didn't have any public posts until 2 months ago, so I wasn't in the demographic you describe, and d) I didn't have any references, because I don't really know many people in the research community.

I'm about to submit a serious application for this round, where of those only (d) is still true. At least, I haven't interacted extensively with any high-status researchers, so it wouldn't make sense to ask anyone for references. And I think maybe there's a correlation there that explains part of your question: I post/comment online when I'm up to it because it's one of the best ways for me to get good feedback (this being a great example), even though I'm a slow writer and it's a laborious process for me to get from "this seems like a coherent, nontrivial idea probably worth writing up" to feeling like I've covered all the inferential gaps, noted all the caveats, taken relevant prior writings into account, and thought of enough possible objections to feel ready to hit the submit button.

Anyway, I would guess that online people slightly skew towards being isolated (otherwise they'd get feedback or spread their ideas by just talking to e.g. coworkers), hence not having references. But I don't think this is a large effect (and I defer to Habryka's comment). Of the people you mentioned, I believe Evan is currently working with Christiano at OpenAI and has been "clued-in" for a while, and I have no idea about the first 3.

Also, I often wonder how much Alignment research is going on that I'm just not clued into from "merely" reading the Alignment Forum, the Alignment Newsletter, papers by OpenAI/DeepMind/CHAI, etc. I know that MIRI is nondisclosed-by-default now, and I get that. But they laid out their reasons for that in detail, and that's on top of the trust they've earned from me as an institution through their past research. When I hear about people who are doing their own research but not posting anything, I get pretty skeptical unless they've produced good Alignment research in the past (producing other technical research counts for something, but my own intuition is that the pre-paradigmatic nature of Alignment research is different enough that the tails come apart), and my system 1 says (especially if they're getting funded):

Oh come on! I would love to sit around and do my own private "research" uninterrupted without the hard work of writing things up, but that's what you have to do if you want to be part of a research community collectively working toward solving a problem. If everyone just lounged around in their own thoughts and notes without distilling that information for others to build on, there just wouldn't be any intellectual progress. That's the whole point of academic publication, and forum posting is actually a step down from that norm; even that only works because the community of < 100 is small, young, and non-specialized enough that medium-effort ways of distilling ideas still suffice (fewer inferential gaps to cross, etc.).

(My system 2 would obviously use a different tone than that, but it largely agrees with the substance.)

Also, to echo points made by Jan: LW is not the best place to get a broad impression of current research; the Alignment Forum is strictly better. But even the latter is somewhat skewed towards MIRI-esque things over CHAI, OpenAI, and DeepMind's work; here's another decent comment thread discussing that.

Is there some taxonomy somewhere of the ways different social/intellectual movements have collapsed (or fizzled)? Given that information, we'd certainly have to adjust for the fact that EA:

  • Exists in the 21st century specifically, with all the idiosyncrasies of the present time
  • Is kind of a hybrid between a social movement and an intellectual movement: it's based on rather nuanced ideas, is aimed at the highly-educated, and has a definite academic component (compare mainstream conservatism/socialism with postmodernism/neoliberalism)

But still, I'd guess there's potentially a lot of value in looking at the outside view.

And it looks like the prize goes to PeterMcCluskey's comment, which currently has 33 votes, with the next highest being a tie at 21.

FYI, people are allowed/encouraged to defend the Hotel here, but I'm mainly interested in seeing critiques, so that is what I'm financially incentivizing. I don't personally intend to get into the object level any more than I did above (unless asked to clarify something).
