This was originally posted as a comment on an old thread. However, I think the topic is important enough to deserve a discussion of its own. I would be very interested in hearing your opinion on this matter. I am an academic working in the field of philosophy of science, and I am interested in the criteria used by funding institutions to allocate their funds to research projects.
A recent trend of awarding relatively large research grants to projects on AI risks and safety (large relative to some of the most prestigious research grants in the EU, such as ERC Starting Grants of ~1.5 million EUR) made me curious, so I looked into this topic a bit more. What struck me as especially curious is the lack of transparency regarding the criteria used to evaluate the projects and to decide how to allocate the funds.
Now, for the sake of this article, I will assume that the research topic of AI risks and safety is important and should be funded (to what extent it actually is, is beside the point and deserves a discussion of its own; so let's just say it is among the most pursuit-worthy problems in view of both epistemic and non-epistemic criteria).
Particularly surprising was a sudden grant of 3.75 million USD by the Open Philanthropy Project (OPP) to MIRI. Note that this funding is more than double the amount given to ERC Starting Grantees. Previously, OPP awarded MIRI 500,000 USD and provided an extensive explanation of that decision. So one would expect that for a grant more than seven times larger, we'd find at least as much. But what we actually find is an extremely brief explanation saying that an anonymous expert reviewer evaluated MIRI's work as highly promising in view of their paper "Logical Induction".
Note that in the two years since I first saw this paper online, it has not been published in any peer-reviewed journal. Moreover, if you check MIRI's publications, you find not a single journal article since 2015 (or an article published in prestigious AI conference proceedings, for that matter -- *correction:* there are five papers published as conference proceedings in 2016, some of which seem to be technical reports rather than actual publications, so I am not sure how their quality should be assessed; I see no such proceedings publications in 2017). Suffice it to say that I was surprised. So I decided to contact both MIRI, asking whether perhaps the publications on their website hadn't been updated, and OPP, asking for the evaluative criteria used when awarding this grant.
MIRI never replied (my email was sent on February 8). OPP took a while to reply, and last week I received the following email:
"Hi Dunja,
Thanks for your patience. Our assessment of this grant was based largely on the expert reviewer's reasoning in reviewing MIRI's work. Unfortunately, we don't have permission to share the reviewer's identity or reasoning. I'm sorry not to be more helpful with this, and do wish you the best of luck with your research.
Best,
[name blinded in this public post; I explained in my email that my question was motivated by my research topic]"
All this is very surprising given that OPP prides itself on transparency. As stated on their website:
"We work hard to make it easy for new philanthropists and outsiders to learn about our work. We do that by:
- Blogging about major decisions and the reasoning behind them, as well as what we’re learning about how to be an effective funder.
- Creating detailed reports on the causes we’re investigating.
- Sharing notes from our information-gathering conversations.
- Publishing writeups and updates on a number of our grants, including our reasoning and reservations before making a grant, and any setbacks and challenges we encounter." (emphasis added)
However, the main problem here is not the mere lack of transparency, but the lack of an effective and efficient funding policy.
The question of how to decide which projects to fund in order to achieve effective and efficient knowledge acquisition has been researched within philosophy of science and science policy for decades. Yet some of the basic criteria seem absent from cases such as the one mentioned above. For instance, establishing that a given research project is worthy of pursuit cannot be done merely in view of the pursuit-worthiness of the research topic. Instead, the project has to show a viable methodology and objectives, assessed as apt for the given task by a panel of experts in the given domain (rather than by a single expert reviewer). Next, the project initiator has to show expertise in the given domain (where one's publication record is an important criterion). Finally, if the funding agency has a certain topic in mind, it is much more effective to make an open call for project submissions, from which the expert panel selects the most promising one(s).
This is not to say that young scholars, or simply scholars without an impressive track record wouldn't be able to pursue the given project. However, the important question here is not "Who could pursue this project?" but "Who could pursue this project in the most effective and efficient way?".
To sum up: transparent markers of reliability, over the course of research, are extremely important if we want to advance effective and efficient research. A panel of experts (rather than a single expert) is extremely important in assuring the procedural objectivity of the given assessment.
Altogether, this is not just surprising, but disturbing. Perhaps the biggest danger is that this falls into the hands of the press and ends up as an argument for the claim that organizations close to effective altruism are not effective at all.
Minor points: (1) I think it is standard practice for peer review to be kept anonymous; (2) some of the things you mention seem like norms about grants and writeups that will reasonably vary by context; (3) you're looking at just one grant out of all that Open Phil has made; (4) while you are looking at computer science, their first FDT paper was accepted at the Formal Epistemology Workshop, and a professional philosopher of decision theory who attended spoke positively about it.
More importantly, once MIRI's publication record is treated with the appropriate nuance, your post doesn't show how they should be viewed as inferior to any unfunded alternatives. Open Phil has funded other AI safety projects besides MIRI, and there is not much being done in this field, so the grants don't commit them to the claim that MIRI is better than most AI safety projects. So we don't have an empirical basis for doubting their loose, hits-based-giving approach. We can presume that formal, traditional institutional funding policies would do better, but it is difficult to argue that point with the level of certainty that would tell us the situation is "disturbing". Those policies are costly: they take more time and people to implement.
The problem wasn't the reviewer being anonymous, but the lack of access to the reviewer's report.
Sure, but that doesn't mean no criteria should be available.
Indeed, I am concerned with one extremely large grant. I find the sum large enough to warrant concern, especially since the same ca...