
In no particular order, here's a collection of Twitter screenshots of people attacking AI Safety. Many of them are poorly reasoned, and some are simply ad hominem. Still, these types of tweets are influential, and are widely circulated among AI capabilities researchers.

1. [screenshot]

2. [screenshot]

3. [screenshot]

4. [screenshot]

5. [screenshot]

(That one wasn't actually a critique, but it did convey useful information about the state of AI Safety's optics.)

6. [screenshot]

7. [screenshot]

8. [screenshot]

9. [screenshot]

10. [screenshot]

11. [screenshot]

12. [screenshot]

13. [screenshot]

14. [screenshot]

15. [screenshot]

16. [screenshot]

17. [screenshot]

18. [screenshot]

19. [screenshot]

20. [screenshot]

Conclusions

I originally intended to end this post with a call to action, but we mustn't propose solutions immediately. So, in lieu of a specific proposal, I'll simply ask: can the optics of AI safety be improved?

Comments

"Still, these types of tweets are influential, and are widely circulated among AI capabilities researchers." I'm kind of skeptical of this.

Outside of Giada Pistilli and Talia Ringer, I don't think these tweets would appear on the typical ML researcher's timeline; they seem closer to niche rationality/EA shitposting.

Whether the typical ML person would think alignment/AI x-risk is really dumb is a different question, and I don't really know the answer to that one!

Although, to be clear, it's still nice to have a bunch of different critical perspectives! This post exposed me to some people I didn't know of.

As someone who is not really on Twitter, I found this an interesting thread to read through, thanks! :)

I'd enjoy reading periodic digests like this of "here's a survey of what people are saying about us, and some cultural context on who these people are". I do feel a bit lost as to who all of these people are; knowing that would help me parse what is going on here a bit better.

Strongly agree.

I think it's essential to ask some questions first:

  • Why do people hold these views? (Is it just their personality, or did somebody in this community do something wrong?)
  • Is there any truth to these views? (As can be seen here, anti-AI-safety views are quite varied. For example, many are attacks on the communities that care about AI safety rather than on the object-level issues.)
  • Does it even matter what these particular people think? (If not, then leave them be.)

Only then should one even consider engaging in outreach or efforts to improve optics.

Could someone explain the “e/acc” in some of these? I haven’t seen it before.

Neither have I, but judging by one of the tweets it stands for "effective accelerationist"? Which I guess means trying to get as much tech as possible and trusting [society? markets? individual users?] to deal effectively with any problem that comes up?

It's something that was recently invented on Twitter; here is the manifesto they wrote: https://swarthy.substack.com/p/effective-accelerationism-eacc?s=w
It's only believed by a couple of people afaict, and unironically maybe by no one (although this doesn't make it unimportant!)

We expect e/acc to compile as “scary” for many EAs, although that’s not the goal. We think EA has a lack of focus and is missing an element of willingness to accept the terms of the deal in front of humanity — i.e. to be good stewards of a consciousness-friendly technocapital singularity or die trying.

Unlike EA, e/acc

  • Doesn’t advocate for modernist technocratic solutions to problems
  • Isn’t passively risk-averse in the same way as EAs that “wish everything would just slow down”
  • Isn’t human-centric — as long as it’s flourishing, consciousness is good
  • Isn’t in denial about how fast the future is coming
  • Rejects the desire for a panopticon implied by longtermist EA beliefs

Like EA, e/acc:

  • Is prescriptive
  • Values more positive valence consciousness as good
  • Values zero recognizable consciousness in the universe as the absolute worst outcome.

I agree with some of these allegedly non-EA ideas and disagree with some of the allegedly EA ones ("more positive valence consciousness = good"). But I'm not sure the actual manifesto has anything to do with any of these.

Abridged version of #10, as I understand it, after looking it up on Twitter: Pistilli was aware of the threat of superintelligence, but eventually chose to work on other important AI ethics problems unrelated to X-risk. She was repeatedly told that this was negligent and irresponsible, and she felt very alienated by people in her field. Now she refuses to delve into sentience/AGI problems altogether, which seems like a loss.

The lesson is that the X-risk crowd needs to learn to play better with the other kids, who work on problems that will pop up in a world where we aren't all dead.

I like the format.

The interesting ones IMO (meaning "the ones that may convey some kind of important truth") are 1, 6, 8, 17, 19. And maybe 10 but I can't see the thread so I don't know what it's saying.

Okay, this is very off-topic, but I just really want more EAs to know about a browser extension that has massively improved my Twitter experience.

https://github.com/insin/tweak-new-twitter/

Tweak New Twitter is a browser extension which removes algorithmic content from Twitter, hides news and trends, lets you control which shared tweets appear on your timeline, and adds other UI improvements
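
For anyone curious how extensions like this work under the hood: they're mostly content scripts that hide DOM elements as Twitter renders them. Below is a minimal sketch of that idea (not Tweak New Twitter's actual code); the selectors are my own guesses and would break whenever Twitter changes its markup.

```ts
// Minimal content-script sketch of hiding parts of Twitter's UI.
// NOT Tweak New Twitter's actual implementation; the selectors below are
// assumptions and will likely break as Twitter changes its markup.
const HIDE_SELECTORS = [
  'div[aria-label="Timeline: Trending now"]', // assumed label for the Trends module
  'aside[aria-label="Who to follow"]',        // assumed label for follow suggestions
];

function hideMatches(root: ParentNode): void {
  for (const selector of HIDE_SELECTORS) {
    root.querySelectorAll<HTMLElement>(selector).forEach((el) => {
      el.style.display = "none";
    });
  }
}

// Twitter renders client-side, so re-check whenever new nodes appear.
const observer = new MutationObserver(() => hideMatches(document));
observer.observe(document.documentElement, { childList: true, subtree: true });
hideMatches(document);
```

The real extension does far more than this, but the general approach (a content script watching the page and pruning unwanted elements) is how this class of extension typically works.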

I also find the screenshotted post in #7 problematic: "Once AGI is so close to being developed that there's no longer sufficient time for movement building or public education to help with AI safety, I guess I can go on holiday and just enjoy the final few months of my life."

I'd be doubtful that official AI safety organisations or their representatives would communicate similarly. But a good takeaway in general is to not promote content on global priorities that insinuates a sense of powerlessness.
