
Do you think it might be a good idea to build brain simulations of the subcortical areas (like the ventral tegmental area, periaqueductal gray, and others) that "produce" affects, in order to make aligned AIs? By affects I mean good or bad experiences: the seven basic emotional systems described by Jaak Panksepp, homeostatic affects like hunger, sleepiness, or food satiation, and sensory affects like physical pain or disgust.

We want AIs that share human values and goals. But humans often have different goals and values from one another; we can differ so much that we go to war. So it seems very hard to make AIs that are perfectly aligned with every human's goals. And those goals would also need to be intelligible to AIs (e.g., expressible as reward functions that output specific numbers). How would we do that?

I am very unsure, but I don't think it is possible to make AGIs that are completely aligned with all humans' goals and values. The best we may be able to manage is AGIs that have affects of their own, since our values and goals ultimately grow out of these basic affects. And I think we could build such AIs through brain simulation, using model neurons similar in function to human neurons.
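To make the idea a bit more concrete, here is a minimal toy sketch of what "an affect producing a reward signal" could mean computationally. It assumes a single leaky integrate-and-fire neuron with made-up parameters; it is not a model of the VTA or periaqueductal gray, and all names here are hypothetical. An aversive "hunger" unit fires more as a homeostatic drive rises, and the agent's reward is just the negative of that firing rate.

```python
# Toy sketch only: one leaky integrate-and-fire "affect" unit whose firing
# rate is read out as a scalar reward. Parameters and names are hypothetical
# illustrations, not a real model of any subcortical circuit.
import numpy as np

def simulate_affect_unit(drive, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron driven by a homeostatic 'drive'
    signal (e.g., hunger). Returns a 0/1 spike train over time."""
    v = 0.0
    spikes = np.zeros_like(drive)
    for t, inp in enumerate(drive):
        # Leaky integration of the input drive toward its current value.
        v += dt / tau * (-v + inp)
        if v >= v_thresh:      # threshold crossing -> emit a spike and reset
            spikes[t] = 1.0
            v = v_reset
    return spikes

def affect_to_reward(spikes, window=50):
    """Read out the recent firing rate as a (negative) reward: the more the
    aversive 'hunger' unit fires, the worse the agent's current state."""
    rate = np.convolve(spikes, np.ones(window) / window, mode="same")
    return -rate  # the agent is rewarded for keeping the aversive affect quiet

# Usage: hunger drive ramps up, then drops after a simulated "meal" at t=600.
drive = np.concatenate([np.linspace(0.5, 3.0, 600), np.full(400, 0.2)])
spikes = simulate_affect_unit(drive)
reward = affect_to_reward(spikes)
print(f"mean reward while hungry: {reward[:600].mean():.3f}")
print(f"mean reward after eating: {reward[600:].mean():.3f}")
```

Here the negative firing rate plays the role of a reward function, i.e., exactly the kind of "specific number" mentioned above, except that it is generated by a simulated circuit rather than hand-written by a designer.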

Do you think this is a terrible idea?
