Introducing Improving autonomy

Following on from an attempt to define a measure for autonomy, I've decided to try to build a community around exploring the idea of improving autonomy for humanity.

I've created a main site and a subreddit for discussions.


My plan is to write more about intelligence augmentation and how it might be approached so as to minimise the associated risks. I will then continue my work on market-based resource allocation within computers, blogging about it to see whether it is a suitable foundation for intelligence augmentation.

If you think it is important that humans have more autonomy in the future, please get in touch.

Comments (5)

Comment author: purplepeople 10 August 2017 06:16:40PM 1 point [-]

Nitpick: On the "How" tab of the site, it should be "Humanity's autonomy", not "Humanities autonomy".

Comment author: WillPearson 10 August 2017 09:48:12PM 0 points [-]

Thanks, fixed. I should put some money towards a copy editor at some point, or some time towards figuring out an automated solution.

Comment author: MichaelPlant 10 August 2017 02:33:07PM 0 points [-]

I don't know who downvoted this, but I think it's rude and unhelpful to downvote a post without leaving an explanation of why you did so, unless the post is blatant spam, which this is not. EAs should be upholding norms of considerateness and encouraging intellectual debate.

Have upvoted to balance out. I may make a substantive comment on autonomy later.

Comment author: Telofy  (EA Profile) 14 August 2017 08:21:02PM 0 points [-]

Someone downvoted your comment without explaining it, so I upvoted to balance out. (But I suspect it was just a practical joke.)

Comment author: WillPearson 10 August 2017 05:51:39PM 0 points [-]

I suspect it might be because I've not couched the website enough in terms of EA? I like EA and would love to work with people from the community, and perhaps make the website more EA-friendly. I've not been massively encouraged by the EA community yet.

There is a lot to say about autonomy, so I look forward to any forthcoming comments.