Impact Academy is a non-profit organization that enables people to become world-class leaders, thinkers, and doers who are using their careers and character to solve our most pressing problems and create the best possible future.
I also work as an impact-driven and truth-seeking coach for people who are trying to do the most good.
I'm also a medical doctor, author, and former visiting researcher (biosecurity) at Stanford.
Thanks for sharing. This is a very insightful piece. I'm surprised that folks were more concerned about larger-scale, abstract risks than about more well-defined, smaller-scale risks (like bias). I'm also surprised that they are this pro-regulation (including a six-month pause). Given this, I feel a bit confused that they mostly support the development of AI, and I wonder what has most shaped their view.
Overall, I mildly worry that the survey led people to express more concern than they actually feel, because the results seem surprisingly close to my perception of the views of many existential risk "experts". What do you think?
Would love to see this for other countries too. How feasible do you think that would be?
Thanks for writing this up. We're running an incubation pilot at Impact Academy and found this post very helpful, both as a reference class (for benchmarking success) and for providing strategic clarity.
I'm curious: what were the best initiatives (inside and outside of EA) you came across in your search (e.g., Y Combinator, Charity Entrepreneurship)?
I am honored to be part of enabling more people from around the world to contribute to the safe and responsible development of AI.