Comment author: WillPearson 08 July 2017 09:26:55AM *  2 points [-]

My criticism of the HRAD research project is that it has no empirical feedback mechanisms, and that it ignores the physical aspect of computation, which can have a large impact on the type of systems you think about and design.

I think having people think highly formally about AI systems might be useful, as long as the real world can be used to constrain their thinking.

Comment author: WillPearson 05 July 2017 07:26:03PM *  0 points [-]

I think I broadly agree that take-off is likely to be slow and that there is no slam-dunk argument for trying to make safe superintelligent agents.

However, I think there is room for all sorts of work: anything that can reduce the uncertainty about where AGI is going.

I think AI, as it is, is on slightly the wrong track. If we get on the right track, we will get somewhere a lot quicker than the decades referenced above.

Computers as they stand are designed with the idea that a human looks after them and understands their inner workings, at least somewhat. Animals, from the lowly nematode to humans, carry no such assumption. Current deep learning assumes a human will create the input and output spaces and assign resources to the learning process.
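As a concrete illustration of that last point, here is a minimal sketch using PyTorch (one common framework; the dimensions, capacity, and device choice are all invented for illustration). Every structural decision is made by a human before learning starts:

    # Every dimension and resource decision here is made by a human,
    # not by the learning system itself.
    import torch
    import torch.nn as nn

    IN_FEATURES = 784    # human decides the input space (e.g. 28x28 images, flattened)
    OUT_CLASSES = 10     # human decides the output space (e.g. 10 digit labels)
    HIDDEN = 128         # human decides how much capacity to allocate
    DEVICE = "cpu"       # human decides which hardware the model gets

    model = nn.Sequential(
        nn.Linear(IN_FEATURES, HIDDEN),
        nn.ReLU(),
        nn.Linear(HIDDEN, OUT_CLASSES),
    ).to(DEVICE)

    # The learner only fills in the weights; it cannot renegotiate its own
    # input space, output space, or resource budget.
    x = torch.randn(1, IN_FEATURES)
    print(model(x).shape)  # torch.Size([1, 10])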

If we can offload the administration of a computer to the computer itself, administration becomes cheaper and the systems themselves can become more complex: computer systems are limited in complexity by the thing that debugs them.
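To make "administration" concrete, here is a minimal sketch of one of the simplest administrative jobs: restarting a crashed component with back-off. The worker.py being supervised is hypothetical and the parameters are invented; the point is that today a human has to write and debug this supervisor, rather than the system growing it itself.

    # A narrow slice of "the computer administering itself": a supervisor
    # that notices a component has died and restarts it, with exponential
    # back-off so a broken component cannot eat all the resources.
    import subprocess
    import time

    COMMAND = ["python", "worker.py"]  # hypothetical component to keep alive
    MAX_RESTARTS = 5

    def supervise(command, max_restarts):
        delay = 1.0
        for attempt in range(max_restarts):
            proc = subprocess.Popen(command)
            proc.wait()  # block until the component exits
            if proc.returncode == 0:
                return  # clean shutdown; nothing to administer
            # Component crashed: report, wait, retry with a longer delay.
            print(f"component exited with {proc.returncode}; restart {attempt + 1}")
            time.sleep(delay)
            delay *= 2
        print("giving up: a human administrator is still needed here")

    if __name__ == "__main__":
        supervise(COMMAND, MAX_RESTARTS)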

I have an idea of what this might look like, and if my current paradigm plays out, I think humanity will get the choice of creating separate agents or creating external lobes for our brains. Most likely humanity will pick creating external lobes. The external lobes may act in a more economic fashion, but I think they might still have the capability of going bad. Minimising the probability of this is very important.

I think there is also probably a network effect: if we could get altruistically minded people to be the first to have the external brains, then we might influence the future by preferentially helping other altruists to get external brains. This could create social norms among people with external brains.
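As a toy illustration of that network effect (every number here is made up, and the dynamics are far simpler than any real social process), seeding adoption with altruists who preferentially help other altruists keeps their share among adopters well above their share in the population:

    import random

    random.seed(0)
    POP = 10_000
    ALTRUIST_FRACTION = 0.1   # assumed share of altruists in the population
    SEED_ADOPTERS = 50        # first adopters, all altruists by design
    PREFERENCE = 0.9          # chance an altruist adopter helps a fellow altruist

    population = ["altruist" if random.random() < ALTRUIST_FRACTION else "other"
                  for _ in range(POP)]
    adopters = ["altruist"] * SEED_ADOPTERS

    for _ in range(2000):
        helper = random.choice(adopters)
        if helper == "altruist" and random.random() < PREFERENCE:
            adopters.append("altruist")      # preferential recruitment
        else:
            adopters.append(random.choice(population))  # random recruitment

    share = adopters.count("altruist") / len(adopters)
    print(f"altruist share among adopters: {share:.0%} "
          f"(vs {ALTRUIST_FRACTION:.0%} in the population)")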

So I think technical work towards understanding administratively autonomous computers (no matter how intelligent they are) can reduce uncertainty and allow us to understand what choices face us.

Comment author: WillPearson 28 June 2017 11:08:48PM 0 points [-]

Have there been any models of other activities that might reduce existential risk?

E.g. convincing prospective AGI researchers that AGI is dangerous and should be handled carefully? It would seem that this might increase the pool of potential safety researchers and also give more time for a safe approach to be developed.
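One could at least sketch such a model. Here is a toy back-of-envelope version in which every parameter is invented, just to show the shape of the comparison:

    # A toy model, not a serious estimate: every number is made up.
    # It compares a baseline against an outreach scenario that both adds
    # safety researchers and buys a little time before AGI arrives.
    def p_safe(safety_researchers, years_until_agi, productivity=0.0001):
        # Assumed: each researcher-year slightly reduces P(unsafe AGI).
        p_unsafe = (1 - productivity) ** (safety_researchers * years_until_agi)
        return 1 - p_unsafe

    baseline = p_safe(safety_researchers=100, years_until_agi=30)
    # Outreach: more safety researchers AND a slightly later arrival,
    # because some prospective capabilities work is redirected or slowed.
    with_outreach = p_safe(safety_researchers=150, years_until_agi=33)

    print(f"baseline       P(safe): {baseline:.2f}")
    print(f"with outreach  P(safe): {with_outreach:.2f}")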

Comment author: WillPearson 17 June 2017 11:32:11AM 3 points [-]

An effort has recently been started to improve the pipeline for getting people up to speed with AGI safety. I'm trying to champion a broad view of AGI safety, including psychology.

Would anyone be interested in providing digested content? It would also be good to have an exit from the pipeline for psychology people interested in AGI. Would that be FHI? Who else would be good to talk to about what is required?
