We used to have Open Threads on this forum. I was hoping someone would bring them back. No one did. So now I'm the change I want to see in the world.
Use this thread to post things that are awesome, but not awesome enough to be full posts. This is also a great place to post if you don't have enough karma to post on the main forum.
Also Happy New Year!
In Bostromian AI safety, people often talk about human-level intelligence, defined roughly as mental performance at least as efficient as humans' on all tasks humans care about. Has anyone tried to sketch out subsets of human abilities that would still be sufficient to make a software system highly disruptive? This could develop into a stronger and more specific claim than Bostrom's. For example, a system with 'just' a superhuman ability to observe and model economies, or to find new optimization algorithms, might be almost as concerning as a more vaguely defined 'human-level intelligence' AGI.