
Kieron George

1 karma · Joined

Comments (2)

Potential Test Case for AGI

Attempt to simulate an artificial general intelligence using Ouijably

Low odds it works, but I thought that if you could put enough people on a spirit board, it might exhibit behaviour similar to an oracle-type AGI. This implementation (https://github.com/ably-labs/ouija) means it wouldn't take much organising to attempt. Maybe tweak it so participants predict the direction the planchette will move rather than relying on the ideomotor effect. I thought the idea would fall outside the rationalist's window of consideration, being something with a spiritualist bent. If it did work, it would probably be the safest form of AGI, since something made of humans should have the best chance of being human-friendly.

It feels super suspicious that the smallest possible source of violent death ("Individual homicides") and the largest possible source of violent death ("Mass Atrocities") would both contribute significantly to the violent death rate, while everything in between is excluded as insignificant.

Are there other examples like this, where the smallest and largest sources of something are both significant while the middle is excluded as negligible?