
First of all, I am currently studying Philosophy, Neuroscience and Cognition in my Bachelor's degree in Magdeburg, Germany. After attending EAGxOxford last week, and with plans to attend EAG London, I am thinking hard about my career planning process.

In doing so, I am considering switching to a more technical Bachelor's programme: Artificial Intelligence and Cognitive Science.

What would heavily influence this decision is whether a philosophical/neuroscience/cognitive science/psychology (conceptual) perspective on AI safety would benefit the field more than yet another technical perspective from somebody who studied a technical subject like Artificial Intelligence.

80,000 Hours says in its technical AI safety career review that it is possible to contribute to AI safety research from a neuroscience perspective (specifically, computational neuroscience). But beyond being possible, is it a good idea?

I've seen an EA Forum post about this topic from 5 years ago supporting the view that it is. I wonder whether anything in the field has changed since then - are there any updates? What do people in the field think?

What would help me the most:

  1. What is your personal take on this? Do you believe a philosophical/neuroscience/cognitive science perspective would be useful to the field of AI safety?
  2. Do you know of any people or arguments that strongly suggest this is not the case?
  3. Do you know of anyone who has tried to connect neuroscience to AI safety and succeeded or failed?

If you know somebody who might have an interesting perspective on this, please let me know!


Answers

To make up a number, I'd expect an AI degree to be ~10x more valuable for technical AI alignment research, all else equal (i.e. assuming your personal fit for both is the same). This is primarily because there are lots of existing philosophy/neuroscience/cognitive science people who want to contribute to AI safety, and my impression is that they have trouble finding useful things to do; I personally see many more opportunities for people with AI expertise than for people with any of those other degrees.

(Note that for other areas like AI governance I would not make the same claim.)

Comments

You might browse Intro to Brain-Like-AGI Safety, or check back in a few weeks once it's all published. Towards the end of the sequence, Steve intends to include "a list of open questions and advice for getting involved in the field."

DeepMind takes a fair amount of inspiration from neuroscience.
Diving into their related papers might be worthwhile, though the emphasis is often on capabilities rather than safety.


Your personal fit is a huge consideration when evaluating the two paths (80,000 Hours might be able to help you think through this). But if you're on the fence, I'd lean towards the more technical degree.

Neuroscience seems likely to be useful for figuring out human values. My position is that if we attempt to figure out human values purely from the outside, without looking at internal workings, we're discarding most of the available information.
