
On social media (especially Twitter), the debate over AI extinction risk is strongly influenced by a smallish group of 'e/acc' people ('Effective Accelerationists'), who seem to dismiss X risks and 'AI Doomers' (including many EAs), encourage AGI development at maximum speed (including a fast takeoff towards ASI), reject any regulation of the AI industry, and look forward to a 'post-human' future of mostly machine intelligence. The e/acc movement seems closely associated with Singularity enthusiasts and transhumanists (although plenty of people in those subcultures aren't e/acc). 

What are the best medium-length critiques of this e/acc movement -- ideally ones that are intellectually & morally serious?

Answers

This doesn't directly address your question, but I think in general EAs spend too much time engaging with e/acc* folks online, and I think we should generally just ignore them more. The arguments put out by e/acc seem to be unusually bad (and from what I can tell, likely made with an unusually high amount of motivated reasoning), and they're also not a politically or socially powerful or persuasive group.

* note – I'm talking specifically about e/acc. There are other people who are AI accelerationists more generally, whom I think it is important for us to engage with, for various reasons.

I was leaning in this direction, but recently they seem to have had some success with their outreach efforts. We now have the CEO of Notion and the President of YCombinator identifying as e/acc.

So the strategy of not engaging may slow their growth, but it may also allow them to persuade people who would not have been won over had they been exposed to robust counter-arguments.

I've listened to Beff Jezos on Twitter Spaces, and he (plus some other e/accs in the space) supports a philosophy that, for all intents and purposes, consists of worshipping entropy. Obviously, they would deny that their philosophy involves any kind of worship, and they wouldn't be lying when they say this, because from their perspective I expect it just feels like they're doing nothing more than observing the truth.

However, it makes sense to say that descriptively they worship entropy as they unironically argue that the purpose of the universe is to increase entropy as fast as possible and that it should therefore be our purpose as well.

If you point out that this is the naturalistic fallacy -- that just because this is what the universe does, it doesn't automatically follow that it is good -- then instead of addressing this argument, they start arguing that it is pointless to fight against the universe because you'll lose.

I think it's interesting to observe a fundamental contradiction in the movement. The movement presents itself as strong and brave and masculine and driving humanity forward, but when you reflect on the defeatism it becomes apparent that at least the Beff Jezos version of e/acc philosophy is weak and cowardly and definitely not masculine and completely anti-human.

I did have some hesitation in typing the previous paragraph as I know that some commentators here might feel that I'm being overly harsh/uncharitable. At the same time, when you have a movement that is mostly driven by vibes rather than by arguments, it seems important to actually talk about and consider to what extent the movement's image actually corresponds to reality.

Chris -- well said. I think this is accurate.

The e/acc movement has a lot of flagrantly macho rhetoric, and they tend to portray people concerned about AI safety as weak, effeminate, neurotic, and fearful.

On the other hand, they seem to be eager to dive into a kind of fetishistic submission to AI, which isn't actually very macho.

I know it may be considered unseemly to psycho-analyze their movement on EA Forum, but, as you say, it's a movement driven by vibes, image, and rhetoric, rather than by rational/empirical arguments, and they need to be confronted on that basis.

https://www.lesswrong.com/tag/complexity-of-value 

I'm roughly comfortable leaving it at that, though it's not obvious how different people actually get convinced of it. They're right to question speciesism or whatever, and I hope it becomes salient to them that their mistakes aren't simply a matter of disloyalty.

quinn -- thanks for sharing that 'complexity of value' post; I hadn't seen it before. Sounds persuasive, at least to this psychologist.

Epistemic status: just a 5-minute collation of some useful sources, with a little explanatory text off the top of my head. 

Stampy's answers to "Why is AI dangerous?" and "Why might we expect a superintelligence to be hostile by default?" seem pretty good to me.

To elaborate a little:

Alignment seems hard. Humans value very complex things, which seem both A) difficult to tell an AI to preserve and B) unlikely for an AI to preserve by default.

A number of things seem to follow pretty directly from the idea of 'creating an agent which is much more intelligent than humans':

  • Non-human goals: we have a strong prior that its goals will not line up with human goals (see: orthogonality thesis).
  • Optimising is destructive: optimising for one value system will by default destroy value according to other value systems (see: instrumental convergence).
  • Intelligence is dangerous: as it's much smarter than humans, predicting its behaviour will be very difficult, as will containing or controlling it (see: AI boxing).

When you combine these things, you get an expectation that the default outcome of unaligned AGI is very bad for humans -- and an idea of why AI alignment may be difficult.

 

To take a different approach:

Humans have a pretty bad track record of refraining from using massively destructive technology. It seems at least plausible that COVID-19 was a lab leak (and its plausibility is enough for this argument). The other key example to me is the nuclear bomb.

What's important is that both of these technologies are relatively difficult to get access to. At least right now, it's relatively easy to get access to state-of-the-art AI. 

Why is this important? It's related to the unilateralist's curse. If we think that AI has the potential to be very harmful (which deserves its own debate), then the more people who have access to it, the more likely that harm becomes. Given our track record with lower-access technologies, it seems likely from this frame that accelerationism will lead to non-general artificial intelligence being used by humans to do massive harm.
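To make the "more people, more risk" point concrete, here is a toy version of the unilateralist's-curse arithmetic. The numbers and the misuse probability $p$ are illustrative assumptions of mine, not figures from the argument above: if each actor with access independently misuses the technology with probability $p$, then with $N$ actors

$$P(\text{at least one misuse}) = 1 - (1 - p)^N$$

With $p = 0.01$, this is roughly 10% for $N = 10$ but over 99.99% for $N = 1000$. On this framing, the breadth of access matters at least as much as the per-actor risk.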

I would argue that, while it is a 'forked' group, it unfortunately retains a number of the basic assumptions one can find in EA. While their presentation is explicitly bullish on AI, this is only superficially different from the implicitly bullish EA community. For example, Bostrom recently said in an interview that people (presumably the alignment group) have overrated the need to regulate and that we now need less, not more, regulation. This aligns with the view that a lot of current 'negative' AI discourse is nothing more than criti-hype (https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5). I won't even discuss the huge religious themes in transhumanism and the 'church of the singularity' that are so pervasive in both communities.

What have e/accs achieved? What compelling arguments have they brought?

They haven't achieved much concretely, but they seem to be somewhat influential on Twitter, at least around their little subculture. And their arguments do seem to be similar to those made by quite influential pro-AI accelerationists such as Yann LeCun and Marc Andreessen.

Sure, but that's my criticism: what have they achieved concretely?
Comments

I thought this concise set of arguments presented on Dan Hendrycks' Twitter was quite good.

Linch - thanks very much. That's super helpful; it's just the kind of thing I'm looking for!

Wait, I thought e/acc was ironic

No, they mostly seem earnest, authentic, and serious in their intent. They think they're the good guys, and that 'AI Doomers' (AI safety advocates) are standing in the way of longevity, prosperity, and galactic colonization.
