
Over a year ago, Rohin Shah wrote this, about people trying to slow or stop AGI development through mass public outreach about the dangers of AGI:

But it really doesn't seem great that my case for wide-scale outreach being good is "maybe if we create a mass delusion of incorrect beliefs that implies that AGI is risky, then we'll slow down, and the extra years of time will help". So overall my guess is that this is net negative.

(On my beliefs, which I acknowledge not everyone shares, expecting something better than "mass delusion of incorrect beliefs that implies that AGI is risky" if you do wide-scale outreach now is assuming your way out of reality.)

I agree much more with the second paragraph than the first one.

I think there's still an angle here that few have tried in a really public way: namely, ignorance and asymmetry. (There is definitely a better term or two for what I'm about to describe, but I've forgotten it. It's probably from Taleb, or from one of the SSC posts about people being cautious in seemingly odd ways because of their own boundedness.)

One Idea

A high percentage of voting-eligible people in the US... don't vote. An even higher percentage vote only in presidential elections, or only in some presidential elections. I'd bet a lot of money that most of these people aren't operating on Caplan-style non-voting logic, but rather on something like "I'm too busy" or "it doesn't matter to me / either way / from just my vote".

Many of these people, being politically disengaged, would not be well-informed about political issues (or even have strong or coherent values related to those issues). What I want to see is an empirical study that asks these people whether they're aware of this about themselves, and whether that awareness, in turn, factors into their not voting.

I think there's a world, which we might live in, where lots of non-voters believe something akin to "Why should I vote, if I'm clueless about it? Let the others handle this lmao, just like how the nice smart people somewhere make my bills come in."

In a relevant sense, I think there's an epistemically-legitimate and persuasive way to communicate "AGI labs are trying to build something smarter than humans, and you don't have to be an expert (or have much of a gears-level view of what's going on) to think this is scary. If our smartest experts still disagree on this, and the mistake-asymmetry is 'unnecessary slowdown VS human extinction', then it's perfectly fine to say 'shut it down until [someone/some group] figures out what's going on'".

To be clear, there are still a ton of ways to get this wrong, and those who think otherwise are deluding themselves out of reality. I'm claiming that real-human-doable advocacy can get this right, and that it has been mostly left untried.

Extreme Care Still Advised If You Do This

Most persuasion, including digital persuasion, is one-to-many "broadcast"-style; "going viral" usually just means a broadcast happened that nobody recognized as one, like an algorithm suggesting a video to a lot of people at once. Given this, plus anchoring bias, you should expect, and be very paranoid about, the "first thing people hear sets the conversation" effect. (Think of how many people's opinions are copypasted from the first classy mass-market video essay they saw on a subject, John Oliver style, or from the first Fox News commentary on it.)

Not only does the case for X-risk need to be made first, it needs to be right (even in a restricted form like my suggestion above) the first time. That's another reason to prioritize my restricted-version suggestion: it's more explicitly robust to small errors.

(If somebody does this in real life, you need to clearly end on something like "Even if a minor detail like [name a specific X] or [name a specific Y] is wrong, it doesn't change the underlying danger, because the labs are still working towards Earth's next intelligent species, and there's nothing remotely strong about the 'safety' currently in place.")

In closing... am I wrong? Can we do this better?

I'm highly interested in better ideas for mass outreach about AGI X-risks, whether or not they're in the vein of my suggestion. I think alignment and EA people are too quick to jump to "mass persuasion will lead to wrong actions, or be too Dark Arts for us, or both". Even if that's true 90% of the time, the other 10% still seems worth aiming for!

(Few people have communications-imagination in general, and I don't think I personally have that much more of it than others here, but it seems like something that someone could have an unusually high amount of.)

And, of course, I'm (historically speaking) likely to be missing one or more steps of logic that, if I knew them, would change my mind on the feasibility of this project. If you (a media person) want to try any of this, wait a while for contrary comments to come in, and try to engage with them.

This post is mostly copied from my own comment here. 


Comments

Executive summary: The author believes there may be an opportunity for principled, ethical mass outreach to raise awareness about existential risks from advanced AI systems. This could involve appealing to non-voters' epistemic asymmetry and bounded rationality. However, extreme care is still advised given the high stakes.

Key points:

  1. Many non-voters don't participate due to feeling uninformed or that their vote doesn't matter. The author hypothesizes some may feel similarly about advanced AI.
  2. There may be an opening to ethically communicate the stakes and asymmetry involved with advanced AI to such groups. This could encourage broader societal deliberation without necessitating technical expertise.
  3. Any mass outreach faces severe challenges and risks, so proposals must be extremely careful, robust, and lead with the right framing. Going viral amplifies any issues.
  4. The author welcomes suggestions for better ideas about mass outreach on existential risk, provided they meet high evidentiary standards. Most alignment researchers wrongly dismiss such efforts as inevitably unethical or ineffective.
  5. The author admits likely gaps in their own logic and invites critical feedback, especially for anyone attempting real-world campaigns. Misstep risks could be catastrophic.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Reading this quickly on my lunch break, seems accurate to most of my core points. Not how I'd phrase them, but maybe that's to be expected(?)
