From my understanding, AI alignment is difficult because AI models would struggle to understand human values.

Could progress in technologies like brain-computer interfaces (such as those Neuralink is working on) and brain organoids (essentially lab-grown human brain tissue) allow us to connect AI models to human brains, and thereby help them understand human values?

Is anyone working on this as a research direction for AI alignment?

Comments (2)

I am personally not convinced of their usefulness; Robert Long has an alternative take here.

The fundamental problems, as I see them, are that

  1. Giving unaligned AI systems access to your neural state is bad™, and
  2. "Merging" with AI systems is under-defined.

I'd love to see an actual explanation for how brain-computer interfaces would be useful for alignment.

Additionally, I object to "AI alignment is difficult because AI models would struggle to understand human values". Under my best understanding, AI alignment is about making cognition aimable at all; a sufficiently capable model could understand human values perfectly well without being aimed at them.

Brain organoids are a way to quickly get readouts of the functional and morphological significance of intelligence-related genes (or genes related to other functions), so they are useful for studying intelligence.

https://www.frontiersin.org/journals/science/articles/10.3389/fsci.2023.1148127/full

As a huge, huge moonshot, one could investigate avian brain organoids as an alternative substrate for intelligence (avian brains are way more space-efficient than mammalian brains, and could potentially do way more compute in a small, manageable volume if appropriately cultured and unbounded)...
