Many people here, myself included, are very concerned about the risks from rapidly improving artificial general intelligence (AGI). A significant fraction of people in that camp give to the Machine Intelligence Research Institute, or recommend others do so.
Unfortunately, for those who lack the necessary technical expertise, this is partly an act of faith. I am in some position to evaluate the arguments about whether safe AGI is an important cause. I'm also in some position to evaluate the general competence and trustworthiness of the people working at MIRI. On those counts I am satisfied, though I know not everyone is.
However, I am in a poor position to evaluate:
- The quality of MIRI's past research output.
- Whether their priorities are sensible or clearly dominated by alternatives.
The natural remedy is to ask people who are better placed to judge. My suggestion is a confidential survey of well-informed people, organised by people who:
- Have an existing reputation for trustworthiness and confidentiality.
- Think that AI risk is an important cause, but have no particular convictions about the best approach or organisation for dealing with it. They shouldn't have worked for MIRI in the past, but will presumably have some association with the general rationality or AI community.
The survey itself should:
- Involve 10-20 people, including a sample of present and past MIRI staff, people at organisations working on related problems (CFAR, FHI, FLI, AI Impacts, CSER, OpenPhil, etc.), and largely unconnected math/AI/CS researchers.
- Results should be compiled by two or three people - ideally with different perspectives - who will summarise them in such a way that nothing in the final report could identify what any individual wrote (unless they are happy to be named). Their goal should be purely to represent the findings faithfully, given the constraints of brevity and confidentiality. (A minimal sketch of this compilation step follows the list.)
- The survey should ask about:
  - Quality of past output.
  - Suitability of staff for their roles.
  - Quality of current strategy/priorities.
  - Quality of operations and other non-research aspects of implementation, etc.
  - How useful more funding/staff would be.
  - Comparison with the value of work done by other related organisations.
  - Suggestions for how the work or strategy could be improved.
- Obviously participants should only comment on what they know about. The survey should link to MIRI's strategy and recent publications.
- MIRI should be able to suggest people to be contacted, but so should the general public through an announcement. MIRI should also have a chance to comment on the survey itself before it goes out. Ideally it would be checked by someone who understands good survey design, as subtle aspects of wording can be important.
- It should be impressed on participants how valuable open and thoughtful answers are for maximising the chances of solving the problem of AI risk in the long run.
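To make the compilation and anonymisation step concrete, here is a minimal sketch in Python. Everything in it - the topic list, the participant categories, the field names - is an illustrative assumption drawn from the proposal above, not a real instrument:

```python
# A minimal sketch of the proposed survey pipeline; purely illustrative.
from dataclasses import dataclass, field
import random

# Topic areas taken from the proposal above.
TOPICS = [
    "quality of past output",
    "suitability of staff for their roles",
    "quality of current strategy/priorities",
    "quality of operations and other non-research implementation",
    "usefulness of additional funding/staff",
    "comparison with related organisations",
    "suggestions for improvement",
]

@dataclass
class Response:
    participant_id: str   # pseudonymous; real identities held only by the compilers
    category: str         # e.g. "past MIRI staff", "unconnected researcher"
    answers: dict = field(default_factory=dict)  # topic -> free-text comment
    happy_to_be_named: bool = False
    name: str | None = None

def compile_report(responses: list[Response]) -> dict:
    """Pool comments by topic, attributing only opt-in respondents."""
    report = {topic: [] for topic in TOPICS}
    for r in responses:
        for topic, comment in r.answers.items():
            who = r.name if (r.happy_to_be_named and r.name) else "anonymous"
            report.setdefault(topic, []).append(f"[{who}] {comment}")
    # Shuffle within each topic so ordering can't identify respondents.
    for comments in report.values():
        random.shuffle(comments)
    return report

# Usage:
r = Response("p01", "past MIRI staff",
             answers={"quality of past output": "Strong on logic, thinner on ML."})
print(compile_report([r])["quality of past output"])
```

Shuffling within each topic decouples comment order from response order, and pseudonymous IDs would let the compilers ask follow-up questions without any identity appearing in the final report.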
Are there alternatives to a person like this? It doesn't seem to me that there are.
"Is broadly positive towards AI risk as a cause area" could mean "believes that there should exist effective organizations working on mitigating AI risk", or could mean "automatically gives more credence to the effectiveness of organizations that are attempting to mitigate AI risk."
It might be helpful if you elaborated more on what you mean by 'aim for neutrality'. What actions would that entail, if you did that, in the real world, yourself? What does hiring the ideal survey supervisor look like in your mind if you can't use the words "neutral" or "neutrality" or any clever rephrasings thereof?
I meant picking someone with no stake whatsoever in the outcome. Someone who, though exposed to arguments about AI risk, has no strong opinions one way or another. In other words, someone without a strong prior on AI risk as a cause area. Naturally, we all have biases, even if they are not explicit, so I am not proposing this as a disqualifying standard, just a goal worth shooting for.