
Okay, hear me out: we already have billions of human-level, generally intelligent, human-aligned agents. And making new ones artificially seems to be really hard, according to lots of alignment researchers. So why haven't I heard or read about projects trying to improve human intelligence? I'm not saying we should do it to solve the alignment of purely artificial intelligences (although that could be a possibility too), but to the point of getting a hybrid (natural + artificial) superintelligence, or at least something more useful than AIs.

I know there is then the question of what a superintelligence would do, or how having an aligned one would help us not make misaligned ones, and although there are possible good answers to that, those are questions that already exist outside "normal" alignment.

The only similar thing I've heard of was Elon Musk talking about Neuralink, but there is a huge difference between the things Elon mentions as aspirations (in particular, merging with an AI) and the product Neuralink is actually making right now. I don't see how brain-computer interfaces would improve our decision-making that much. A lot of people bring them up when they talk about augmenting intelligence, and of course they could be useful, but again: I'm not talking about using them to solve AI alignment, but to get around it. I'm wondering whether we can find a way of scaling human intelligence the same way we scale artificial intelligence.

I found a post that briefly mentions ideas similar to mine, but under the term BCI. I don't know whether that's meant as a broader term than "a device that lets you use other devices with your mind", because, as I said, I don't know of any device that would improve our decision-making that much just because we could use it with our minds.

The clearest strategy that comes to mind is to make artificial neurons that can communicate with biological ones and then integrate them into whole human neural networks. Could that be possible? I know it might sound crazy, but I guess I'm talking to people who think aligning an AI is really difficult and that having superintelligences on humanity's side sooner or later seems like the only path forward.

Intelligence augmentation is generally regarded as relatively safe (and thus good to come before AI) but relatively difficult (and thus unlikely to come before AI). See Nick Bostrom's "Paths to superintelligence" in Superintelligence (2014).

The clearest strategy that comes to mind is to make artificial neurons that can communicate with biological ones and then integrate them into whole human neural networks.

This doesn't make much sense to me; I'm not aware of relevant work or reasons to believe this is promising. (Disclaimer: I'm not familiar with intelligence augmentation.)

I didn't know that Bostrom discussed other paths to superintelligence in Superintelligence; I need to read it ASAP.

This doesn't make much sense to me; I'm not aware of relevant work or reasons to believe this is promising.

Yeah, you're probably right. I guess what I was trying to say is that the thing that pops into my mind when I think about possible paths to making us superintelligent is a hybrid between BCIs and brain emulations.

And I was imagining that maybe neuron emulations might not be that difficult, or that signals from AI "neurons" (something similar to present-day neural networks) could be enough for the brain to recognize them as neurons.
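To make "neuron emulation" slightly more concrete: here is a minimal sketch of a leaky integrate-and-fire neuron, roughly the simplest standard spiking-neuron model (the parameter values below are illustrative assumptions, not fitted to real biology):

```python
import numpy as np

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks
    toward rest, integrates input current, and emits a spike whenever
    it crosses threshold. Units: ms, mV, nA (illustrative values)."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Euler step of dv/dt = (-(v - v_rest) + R*I) / tau
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:                 # threshold crossed: spike
            spike_times.append(step * dt)
            v = v_reset                   # reset after the spike
    return spike_times

# A constant 2 nA input for 100 ms yields a regular spike train.
times = simulate_lif(np.full(1000, 2.0))
print(f"{len(times)} spikes, first at {times[0]:.1f} ms")
```

Real neurons are vastly richer than this, which is part of why whole-brain emulation is considered hard; but models in roughly this family are what neuromorphic hardware tends to implement.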

Maybe that doesn't sound promising, but without much knowledge of AI alignment, outer alignment already sounds to me like aligning human neural networks with an optimizer, and then for inner alignment you have to align the optimizer with an artificial neural network. My proposal sounds simpler: aligning one type of neural network with another.

But maybe it's wrong to think about the problem like that, and the actual problem is easier.

I think how important human cognitive enhancement is depends on how quickly people think AI is coming and how transformative that AI will be. If we need aligned AI very quickly because we may all be wiped out, then that takes precedence. But if we have time, accelerating advances in human cognitive enhancement may be an extremely worthwhile endeavor. Morally and cognitively enhanced humans may be extremely motivated to do research in areas that EAs are interested in and to create technology to mitigate disasters.

There are discussions of improving intelligence through genetic enhancement technology. Superhuman intelligence inside humans would have a better shot at being aligned with human values. I'm not sure about BCIs, though. Here are some examples of discussions of genetic enhancement of cognitive ability if you want to research further:

[1] As the other commenter Zach Stein-Perlman noted, there is a section in Nick Bostrom's book Superintelligence where he describes enhancing human cognition.

[2] Nick Bostrom and Carl Shulman have an article entitled "Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?"

[3] Steve Hsu, physicist and co-founder of Genomic Prediction, discusses cognitive enhancement. You can see his article "Super-Intelligent Humans Are Coming."

[4] Polymath Gwern Branwen has a very comprehensive article about genetic enhancement entitled "Embryo Selection for Intelligence." He evaluates the costs and benefits of different kinds of enhancement technology.

[5] A group of 13 researchers published "Screening Human Embryos for Polygenic Traits Has Limited Utility" in 2019. It discusses some of the current limitations, chiefly the limited ability to predict IQ; prediction of height is further along. More research is needed to improve selection for intelligence. (A rough sketch of the selection arithmetic follows this list.)

[6] The other limitation is the number of embryos. The number of embryos that can be used would increase greatly if in vitro gametogenesis becomes possible in humans; it has already been achieved in mice. You can see Metaculus estimates for this.
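To put rough numbers on [5] and [6]: the expected gain from picking the best of n embryos is approximately the usable standard deviation of the polygenic predictor times the expected maximum of n standard normal draws. Here is a back-of-the-envelope Monte Carlo sketch; the predictive power (r² ≈ 5% of IQ variance) and the assumption that only about half of that variance varies among sibling embryos are illustrative, not exact figures from the papers above:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_gain(n_embryos, r2=0.05, trials=100_000):
    """Monte Carlo estimate of the IQ gain from implanting the
    embryo with the highest polygenic score out of n siblings.

    Assumes the score explains r2 of population IQ variance and
    that only ~half of that variance varies within a family.
    All numbers are illustrative assumptions."""
    sd_points = 15 * np.sqrt(r2 / 2)              # usable SD in IQ points
    scores = rng.standard_normal((trials, n_embryos))
    return sd_points * scores.max(axis=1).mean()  # SD * E[max of n]

for n in (2, 5, 10):
    print(f"best of {n:2d} embryos: ~{expected_gain(n):.1f} IQ points")
# roughly 1.3, 2.8, 3.7 points -- "limited utility" with today's predictors
```

Under these assumptions the gain is only a few IQ points per generation, consistent with the conclusion in [5], and it shows why [6]'s embryo counts matter: the expected maximum grows very slowly with n, so you need much better predictors or far more embryos.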

The extent to which you are excited about this might depend on what you believe will happen with artificial intelligence and how quickly. If we are all gone in 10 years, it might not matter much. If we have hundreds of millions of aligned superhuman brain emulations doing research, then maybe it doesn't matter as much how quickly we achieve these things. However, if you have longer AI timelines, then having many superhuman geniuses, especially morally enhanced ones, might be very useful for creating aligned artificial intelligence or mitigating other disasters.

I have an article that discusses how the political and social environment might change depending on how various genetic enhancement scenarios play out. I have a few other articles defending the practice. I'm really interested in this stuff, so you can message me if you want to discuss more.

Genetic enhancement could be really useful, but I feel there are different levels of "superintelligence" being discussed here. One level can help you do research; the other takes over the world (aligned or not). People should pursue this type of intelligence augmentation, but the level of intelligence that takes over the world is probably quite far away in IQ points, and I would guess impossible to reach with just human brains.

Question Mark:
For alignment to be solved at all, it must be solvable with human-level intelligence. Even though IQ-augmented humans wouldn't be "superintelligent", they would have additional intelligence they could use to solve alignment. Additionally, it probably takes more intelligence to build an aligned superintelligence than to create a random superintelligence. Without alignment, chances are that the first superintelligence to exist will be whichever one is easiest to build.
Pato:
I don't agree with the first statement, nor do I understand what you are arguing for or against.

On another note, I disagree that there aren't projects specifically targeted at enhancing human intelligence. I wouldn't discount more mainstream, broader-target projects, including subsets of cognitive psychology (especially heuristics and biases), developmental psychology, metascience, social epistemology, economics, educational technology, and more.[1]

Even more promising, I think, are projects like prediction markets, QURI, the Sequences (and LessWrong more generally), note-taking/knowledge-network software, and spaced-repetition software; I predict they'll have immense impact even if society continues on a gradual path indefinitely. (See the scheduling sketch at the end of this comment for how the last of these works.)

1. During the development of the Internet, there were many projects designed to improve collective intelligence, and I predict a lot of that is in the water already. We just can't see most of it because we take it for granted. Wikipedia is perhaps the most obvious example.
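As one concrete example of the tools above: most spaced-repetition software descends from the SM-2 scheduling algorithm. Here is a minimal sketch of one review step (the constants are SM-2's published defaults; treat this as illustrative rather than any particular app's exact implementation):

```python
def sm2_update(quality, repetitions, interval, ease=2.5):
    """One SM-2 review step: given self-graded recall quality
    (0 = blackout .. 5 = perfect), return the next
    (repetitions, interval_in_days, ease_factor)."""
    if quality < 3:                # failed recall: start the card over
        return 0, 1, ease
    if repetitions == 0:           # successful intervals grow:
        interval = 1               #   1 day, then 6 days,
    elif repetitions == 1:
        interval = 6
    else:
        interval = round(interval * ease)  # then multiply by ease
    # Ease drifts with answer quality, floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return repetitions + 1, interval, ease

# A card answered well three times gets pushed out ~two weeks.
state = (0, 0, 2.5)
for q in (5, 4, 4):
    state = sm2_update(q, *state)
print(state)  # -> (3, 16, ~2.6)
```

The exponentially growing review intervals are why these tools scale: the maintenance cost per remembered fact keeps falling over time.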

Yeah, you're right. I guess I was trying to say that I haven't heard of projects that try to do it from a "hardware" standpoint, considering the limitations the human brain has compared with scalable computers and AIs.

(Update: I think I disagree with what I'm saying here, but I think it's worth saying.)

There are several things to say here, but I think the most important one is: superintelligent humans are not aligned.

If you take a random human, or even my neighbour, and you magically give them the power to do any optimisation task faster than any collection of humans and machines presently available, I would be very scared, for the obvious reason that most humans kinda suck. This is sufficient as a counterargument, imo.

But the more fundamental problem is that, depending on how this magical intelligence boost took place, I wouldn't even trust a superintelligent version of myself. Having that high of an intelligence changes my umwelt and the set of abstractions I can assign utility over.

Presently, I care that others are happy and that they have their wishes fulfilled. For myself, I care about what kind of "story" my life ends up being. I want to be a good book, as judged by my own quaint sensibilities. Perhaps most philosophically annoying is the idea that I want to be able to determine my own story as myself, via the exertion of "my own power". 

But what happens when I discover that my notion of "a wish" is so confused relative to underlying physical reality that, given a much more precise grasp of reality, I have to make some arbitrary decisions about what my original notion supposedly refers to? How would I rescue my values from one umwelt to another?

"Making someone superintelligent" isn't as straightforward as locating a variable in their program, multiplying it by 1000, and leaving everything unchanged. There are degrees of freedom in how you'd implement the transformation. And for most persons, I'm not sure there even are ways of doing it without what basically amounts to killing the underlying person.

Intelligence augmentation probably wouldn't result in particular humans becoming overwhelmingly powerful. (But even if it did, I'm substantially more optimistic about what smart humans would do with the universe than you are; it would be weird if a much more capable version of someone did worse according to their own values.)
