
I have been a member of my local EA chapter since it was founded by a group of my friends. At the beginning I participated semi-regularly in meetings and events, but I have long since stopped participating, even though my friends have gone on to change their entire lives to align with the EA movement. I admit that a major reason I have become alienated is the belief, held by some of my friends, that AI is an existential risk. Sometimes I think that they have lost the core idea of EA, making the most effective change in the world possible, to incoherent science fiction stories. Their actions pattern-match to a cult, making me think of Scientology more than a charity.

I recognize that some might find this opinion insulting. I want to make clear that insulting anyone is not my intention. I'm fully aware that I might be wrong and that their focus on AI might be perfectly justified. However, I have had these thoughts and I want to be honest about them. Based on my discussions, I believe many others have similar feelings, and that these feelings may affect the public image of EA, so it's important to discuss them.

Issues I have with the idea of AI risk

In this section I will outline the main issues I have with the concept of AI risk.

My intuition about AI is in conflict with AI risk scenarios

I have some experience in AI: I have worked with NLP models in various projects, both at university and at my workplace, a language technology company. AI at work is very different from AI in the context of EA: the former hardly even works, while the latter is an incorporeal being independent of humans. Ada-Maaria Hyvärinen recently wrote a great post about similar feelings which I think describes them excellently.

With this background, it's natural that when I first heard about the idea of AI as an existential risk, I was very sceptical. I have since been closely following the development of AI and noticed that while every now and then a new model comes out and does something incredible that no one could have imagined before, none of the new models are progressing towards the level of agency and awareness that an ASI would require.

Based on my experience and the studies I have read, the current AI systems pose no existential threat, nor does it seem that such scenarios will become likely in the near future.

“AI is an existential risk” is not a falsifiable statement

When I discuss AI with people and reveal that I don't believe it poses a significant risk, they often demand that I prove my position. When I explain that the current technology doesn't have the potential for these risks, they counter with the statement “It's only a matter of time before the new technology is developed.”

The problem with this statement is, of course, that it's not possible for me to prove that something will not exist in the future. I can only say that it doesn't exist now and doesn't seem likely to exist in the near future. We know that it's physically possible for an ASI to exist, so, in theory, one could be developed tomorrow.

However, is it rational to pour money into AI risk research based on this? AI is just one of many possible dangers of the future. We cannot really know which of them are relevant and which are not. The principles of EA say that we should focus on areas that are neglected and have effective interventions. AI safety is not neglected: many universities and the companies that develop AI systems already do safety research and ethics work. Nor are there effective interventions: since ASIs do not exist, it's impossible to show that the research done now has any effect on future technology, which might be based on entirely different principles than the ones being studied today. So while dangerously advanced AIs are not impossible, the uncertainty around them prevents doing anything that is known to be effective.

“AI is an existential risk” resembles non-falsifiable statements made by religions and conspiracy theories. I cannot disprove the existence of God, and in the same way I cannot disprove the future existence of ASI. But I cannot choose which god to believe in based on this knowledge, and I cannot know whether my interventions will actually reduce AI risk.

Lack of proper scientific study

What would change my opinion on this matter is proper scientific research on the topic. It's surprising how few peer-reviewed studies exist. This lack of academic involvement takes away a lot of credibility from the EA community.

When I recently asked an active EA member who works on AI safety research why their company doesn't publish its research scientifically, I got the following explanations:

  1. There are no suitable journals
  2. Peer-review is a too slow process
  3. The research is already conducted and evaluated by experts
  4. The scientific community would not understand the research
  5. It's easier to conduct research with a small group
  6. It would be dangerous to publish the results
  7. Credibility is not important

These explanations, especially points 4–6, are again cult-like: as if AI risk were secret knowledge that only the enlightened understand and only high-level members may even discuss. Even if these are the opinions of just a small group of EA people, most people still accept the lack of scientific study. I think it's a harmful attitude.

Among the most cited studies are the AI expert surveys by Grace et al. In the latest survey, 20% of respondents gave a probability of 0% to extinction due to AI, while another 20% gave a risk greater than 25% (the median being 5%). Since this question does not limit the time period of the extinction, and thus invites speculation about very far-future events, it's not useful for predicting near-future events, which are the ones we can reliably influence with effective interventions. Those surveys aside, there is very little research on the evaluation of existential risks.

Most other cited works seem highly speculative, with no widespread acceptance in academia. In fact, in my experience, most researchers I have met at university are hostile towards the concept of AI risk. I remember that when I first started working on my Bachelor's thesis, in one of the first lectures the teacher explained how absurd the fear of AI was. This has been repeated throughout the courses I took. See, for example, this web course material provided by my university.

It seems weird not to care about the credibility of the claims in the eyes of the wider academic community. Some people view AI risk like a kind of alternative medicine: pseudo-scientific fiction, a way to scare people with an imagined illness and make them pay for an ineffective treatment, laughed at by all real scientists. Why should I trust my EA friends about this when the researchers I respect tell me to stay as far away from them as possible?

Conclusions

I have outlined the most important reasons for my negative feelings towards the AI risk scene. First, it doesn't seem likely that these risks will materialize in the near future. Second, the discussion about these risks often revolves around speculative and non-falsifiable statements reminiscent of claims made by religions and conspiracy theories. Third, the lack of scientific study, and of interest in it, bothers me and eats away at the credibility of the claims.

I think it's sad that EA is so involved with AI risk (and long-termism in general), since I believe in many of its core ideas, like effective charities. This cognitive dissonance between the aspects of EA I perceive as rational and those I perceive as irrational alienates me from the whole movement. I think it would be beneficial to separate the near-termist and long-termist branches as clearly different ideologies with different basic beliefs, instead of labeling them both under the EA umbrella.

Comments

I also used to be pretty skeptical about the credibility of the field. I was surprised to learn how much mainstream, credible support AI safety concerns have received:

  • Multiple leading AI labs have large (e.g. 30-person) teams of researchers dedicated to AI alignment.
    • They sometimes publish statements like, "Unaligned AGI could pose substantial risks to humanity and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together."
  • Key findings that are central to concerns over AI risk have been accepted (with peer review) into top ML conferences.
  • A top ML conference is hosting a workshop on ML safety (with a description that emphasizes "long-term and long-tail safety risks").
  • Reports and declarations from some major governments have endorsed AI risk worries.
    • The UK's National AI Strategy states, "The government takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for the UK and the world, seriously."
  • There are AI faculty at universities including MIT, UC Berkeley, and Cambridge who endorse AI risk worries.

To be fair, AI risk worries are far from a consensus view. But in light of the above, the idea that all respected AI researchers find AI risk laughable seems plainly mistaken. Instead, it seems clear that a significant fraction of respected AI researchers and institutions are worried. Maybe these concerns are misguided, but probably not for any reason that's obvious to anyone with basic knowledge of AI; otherwise these worried AI experts would have noticed.

(Also, in case you haven't seen it yet, you might find this discussion on whether there are any experts on these questions interesting.)

Thank you for these references, I'll take a close look at them. I'll write a new comment if I have any thoughts after going through them.

Before having read them, I want to say that I'm interested in research on risk estimation and AI progress forecasting. General research about possible AI risks that does not assign them any probabilities is not very useful for determining whether a threat is relevant. If anyone has papers specifically on that topic, I'm very interested in reading them too.

IMO by far the most thorough estimation of AI x-risk so far is Carlsmith's Is Power-Seeking AI an Existential Risk? (see also the summary presentation and reviews).

(edited to add: as you might guess from my previous post, I think some level of AI skepticism is healthy and I appreciate you sharing your thoughts. I've become more convinced of the seriousness of AI x-risk over time; feel free to DM me if you're interested in chatting sometime.)

I would be curious to know if your beliefs have been updated in light of recent developments.

I can understand many of these points, though I disagree with most of them. The speculativeness point worries me most, and I see it pretty frequently. I totally agree that AI risks are currently very uncertain and speculative, but I think the relevance of this comes down to a few points:

  1. Is it highly plausible that when AI as smart as or smarter than humans arrives, this will be a huge, world changing threat?

  2. Around how long do we need to address this threat properly?

  3. How soon before this threat materializes do we think our understanding of the risks will cross your threshold of rigor?

You might disagree with any of this, but for my own part I think it is fairly intuitive, when you think about it, that the answers are “yes”, “decades at least”, and “years at most” respectively. Taken together, this means that the speculativeness objection will by default sleepwalk us into the worst outcomes of this risk, and that we should start taking this risk as seriously as we ever plan to while it is still uncertain and speculative.

I think this on its own doesn’t answer whether it is a good cause area right now: alien invasion, the expansion of the sun, and the heat death of the universe all look like similarly big and hard problems, but they are arguably less urgent, since we expect them much further in the future. A final assumption needed to worry about AI risks now, which you seem to disagree on, is that this is coming pretty darn soon.

I want to emphasize this as much as possible: this is super unclear, and all of the arguments about when this is coming are sort of pretty terrible, but all of the most systematic, least pretty terrible ones I’m aware of converge on “around a century or sooner, probably sooner, possibly much sooner”. These include the partially informative priors study, Ajeya Cotra’s biological anchors report (which Cotra herself thinks estimates too late an arrival date), expert surveys, and Metaculus.

Again, all of this could very easily be wrong, but I don’t see a good enough reason to default to that assumption. So I think it just is the case that not only should we take this risk as seriously as we ever plan to while it’s still speculative, we should do so as soon as possible. I would recommend reading Holden Karnofsky’s most important century series for a more spelled-out version of similar points, especially about timelines, if you’re interested, but that’s my basic view on this issue and how to react to the speculativeness.

I do agree that there is some risk, and it's certainly worth some thought and research. However, in the EA context, cause areas should have effective interventions. Due to all this uncertainty, AI risk seems a very low-priority cause, since we cannot be sure whether the research and other projects being funded have any real impact. It would seem more beneficial to use the money for interventions that have been proven effective. That is why I think EA is the wrong platform for AI risk discussion.

On the standard "importance, tractability, neglectedness" framework, I agree that tractability is AI risk's worst feature, if that's what you mean. I think there is some consensus on this amongst people worried about it, as stated in 80k's recently updated profile on the issue:

"Making progress on preventing an AI-related catastrophe seems hard, but there are a lot of avenues for more research and the field is very young. So we think it’s moderately tractable, though we’re highly uncertain — again, assessments of the tractability of making AI safe vary enormously."

I think the other two aspects, importance and neglectedness, just matter a great deal, and it would be a bad idea to disqualify cause areas just for moderately weak tractability. In terms of importance, transformative AI seems like it could easily be the most powerful technology we've ever made, for roughly the same reasons that humans are the most transformative "technology" on Earth right now. But even if you think this is overrated, consider the relatively meager funds and the tiny field as it exists today. I think many people who find the risk a bit out there would at least agree with you that it's "worth some thought and research", but because the kind of thinking about doing good on the margin, and the willingness to take weird-sounding ideas seriously, found in EA are rare, practically no one else is ensuring that there is some thought and research. The field would, arguably, almost entirely dry up if EA stopped routing resources and people towards it.

Again though, I think maybe some of the disagreement is bound up in the "some risk" idea. My vague impression, and correct me if this doesn't describe you, is that people who are weirded out by EA working on this as a cause area think it's a bit like EA getting people, right now, to work on risks from alien invasions (and then a big question is: why isn't it?), whereas people like me who are worried about it think it is closer to working on risks from alien invasions if NASA discovered an alien spaceship parked five light-years away from us. The risks would still be very uncertain: the timelines, what we might be able to do to help, what sorts of things these aliens would be able or want to do. But I think it would still look crazy if almost no one was looking into it, and I would be very wary of telling one of the only groups that was trying to look into it that they should let someone else handle it.

If you would like, I would be happy to chat more about this, by DM, email, or voice/video call. I'm probably not the most qualified person since I'm not in the field, but in a way that might give you a better sense of why the typical EA who is worried about this is worried. I guess I would like to make this an open invitation for anyone this post resonates with. Feel absolutely no pressure to, though, and if you prefer I could just link some resources I think are helpful.

I'm just in the awkward position of being very worried about this risk and also very worried about how EA talking about this risk might put potential EAs off. I think it would be a real shame if you felt unwelcome or uncomfortable in the movement because you disagree about this risk, and if there's something I can do to persuade you that those of us who are worried are at least worth sharing the movement with, I would like to try to do that.

Hang in there. I really hope that one day EA will be able to break out of its AI obsession, and realize how flimsy and full of half-baked assumptions the case for AI x-risk actually is. I think a problem is that a lot of people like you are understandably gonna get discouraged and just leave the movement or not join in the first place, further ossifying the subtle groupthink going on here.

Thankfully EA is very open to criticism, so I'm hoping to slowly chip away at the bad reasoning. For example, relying on a survey where you ask people to give a chance of destruction as a percentage, which will obviously anchor people to the 1–99% range.

Interesting, I hadn't thought of the anchoring effect you mention. One way to test this might be to poll the same audience about other, more outlandish claims, something like the probability of x-risk from an alien invasion, or from CERN accidentally creating a black hole.
