This was originally posted as a comment on an old thread. However, I think the topic is important enough to deserve a discussion of its own. I would be very interested in hearing your opinion on this matter. I am an academic working in the field of philosophy of science, and I am interested in the criteria used by funding institutions to allocate their funds to research projects.

A recent trend of providing relatively high research grants (relative to some of the most prestigious research grants across the EU, such as ERC starting grants at ~1.5 mil EUR) to projects on AI risks and safety made me curious, and so I looked a bit more into this topic. What struck me as especially curious is the lack of transparency when it comes to the criteria used to evaluate the projects and to decide how to allocate the funds.

Now, for the sake of this article, I will assume that the research topic of AI risks and safety is important and should be funded (to what extent it actually is, is beside the point and deserves a discussion of its own; so let's just say it is among the most pursuit-worthy problems in view of both epistemic and non-epistemic criteria).

Particularly surprising was a sudden grant of 3.75 mil USD by the Open Philanthropy Project (OPP) to MIRI. Note that the funding is more than double the amount given to ERC starting grantees. Previously, OPP awarded MIRI 500,000 USD and provided an extensive explanation of that decision. So one would expect that for a grant more than 7 times higher, we'd find at least as much explanation. But what we do find is an extremely brief explanation saying that an anonymous expert reviewer has evaluated MIRI's work as highly promising in view of their paper "Logical Induction".

Note that in the 2 years since I first saw this paper online, the very same paper has not been published in any peer-reviewed journal. Moreover, if you check MIRI's publications you find not a single journal article since 2015 (or an article published in prestigious AI conference proceedings, for that matter -- *correction:* there are five papers published as conference proceedings in 2016, some of which seem to be technical reports rather than actual publications, so I am not sure how their quality should be assessed; I see no such proceedings publications in 2017). Suffice it to say that I was surprised. So I decided to contact both MIRI, asking if perhaps their publications hadn't been updated on their website, and OPP, asking for the evaluative criteria used when awarding this grant.

MIRI has never replied (email sent on February 8). OPP took a while to reply, and last week I received the following email:

"Hi Dunja,

Thanks for your patience. Our assessment of this grant was based largely on the expert reviewer's reasoning in reviewing MIRI's work. Unfortunately, we don't have permission to share the reviewer's identity or reasoning. I'm sorry not to be more helpful with this, and do wish you the best of luck with your research.

Best,

[name blinded in this public post; I explained in my email that my question was motivated by my research topic]"

All this is very surprising given that OPP prides itself on transparency. As stated on their website:

"We work hard to make it easy for new philanthropists and outsiders to learn about our work. We do that by:

  • Blogging about major decisions and the reasoning behind them, as well as what we’re learning about how to be an effective funder.
  • Creating detailed reports on the causes we’re investigating.
  • Sharing notes from our information-gathering conversations.
  • Publishing writeups and updates on a number of our grants, including our reasoning and reservations before making a grant, and any setbacks and challenges we encounter." (emphasis added)

However, the main problem here is not the mere lack of transparency, but the lack of an effective and efficient funding policy.

The question of how to decide which projects to fund in order to achieve effective and efficient knowledge acquisition has been researched within philosophy of science and science policy for decades. Yet some of the basic criteria seem absent from cases such as the one mentioned above. For instance, establishing that a given research project is worthy of pursuit cannot be done merely in view of the pursuit-worthiness of the research topic. Instead, the project has to show a viable methodology and objectives, which have been assessed as apt for the given task by a panel of experts in the given domain (rather than by a single expert reviewer). Next, the project initiator has to show expertise in the given domain (where one's publication record is an important criterion). Finally, if the funding agency has a certain topic in mind, it is much more effective to make an open call for project submissions, where the expert panel selects the most promising one(s).

This is not to say that young scholars, or simply scholars without an impressive track record, wouldn't be able to pursue the given project. However, the important question here is not "Who could pursue this project?" but "Who could pursue this project in the most effective and efficient way?".

To sum up: transparent markers of reliability, over the course of research, are extremely important if we want to advance effective and efficient research. A panel of experts (rather than a single expert) is extremely important in assuring the procedural objectivity of the given assessment.

Altogether, this is not just surprising, but disturbing. Perhaps the biggest danger is that this falls into the hands of the press and ends up being used as an argument that organizations close to effective altruism are not effective at all.

 

Comments

I think this stems from a confusion about how OpenPhil works. In their essay Hits-Based Giving, written in early 2016, they list some of the ways they go about philanthropy in order to maximise their chance of a big hit (even while many of their grants may look unlikely to work). Here are two principles most relevant to your post above:

We don’t: expect to be able to fully justify ourselves in writing. Explaining our opinions in writing is fundamental to the Open Philanthropy Project’s DNA, but we need to be careful to stop this from distorting our decision-making. I fear that when considering a grant, our staff are likely to think ahead to how they’ll justify the grant in our public writeup and shy away if it seems like too tall an order — in particular, when the case seems too complex and reliant on diffuse, hard-to-summarize information. This is a bias we don’t want to have. If we focused on issues that were easy to explain to outsiders with little background knowledge, we’d be focusing on issues that likely have broad appeal, and we’d have more trouble focusing on neglected areas.

A good example is our work on macroeconomic stabilization policy: the issues here are very complex, and we’ve formed our views through years of discussion and engagement with relevant experts and the large body of public argumentation. The difficulty of understanding and summarizing the issue is related, in my view, to why it is such an attractive cause from our perspective: macroeconomic stabilization policy is enormously important but quite esoteric, which I believe explains why certain approaches to it (in particular, approaches that focus on the political environment as opposed to economic research) remain neglected.

[...]

A core value of ours is to be open about our work. But “open” is distinct from “documenting everything exhaustively” or “arguing everything convincingly.” More on this below.

And

We don’t: avoid the superficial appearance — accompanied by some real risk — of being overconfident and underinformed.

When I picture the ideal philanthropic “hit,” it takes the form of supporting some extremely important idea, where we see potential while most of the world does not. We would then provide support beyond what any other major funder could in order to pursue the idea and eventually find success and change minds.

In such situations, I’d expect the idea initially to be met with skepticism, perhaps even strong opposition, from most people who encounter it. I’d expect that it would not have strong, clear evidence behind it (or to the extent it did, this evidence would be extremely hard to explain and summarize), and betting on it therefore would be a low-probability play. Taking all of this into account, I’d expect outsiders looking at our work to often perceive us as making a poor decision, grounded primarily in speculation, thin evidence and self-reinforcing intellectual bubbles. I’d therefore expect us to appear to many as overconfident and underinformed. And in fact, by the nature of supporting an unpopular idea, we would be at risk of this being true, no matter how hard we tried (and we should try hard) to seek out and consider alternative perspectives.

In your post, you argue that OpenPhil should follow a grant algorithm that includes

  • Considerations not just of a project's importance, but also its tractability
  • A panel of experts to confirm tractability
  • Only grantees with a strong publication record
  • You also seem to claim that this methodology is the expert consensus of the field of philanthropic funding, a claim for which you do not give any link/citation (?).

Responding in order:

  • The framework in EA of 'scope, tractability and neglectedness' was in fact developed by Holden Karnofsky (the earliest place I know of it being written down is in this GiveWell blogpost) so it was very likely in the grant-maker's mind.
  • This actually is contrary to how OpenPhil works: they attempt to give single individuals a lot of grant-making judgement. This fits in with my general expectation of how good decision making works; do not have a panel, but have a single individual who is rewarded based on their output (unfortunately OpenPhil's work is sufficiently long-term that it's hard to have local incentives, though an interesting financial setup for the project managers would be one where, should they get a win of sufficient magnitude in the next 10 years (e.g. avert a global catastrophic risk), then they get a $10 million bonus). But yeah, I believe in general a panel cannot create common knowledge of the deep models they have, and can in many cases be worse than an individual.
  • A strong publication record seems like a great thing. Given the above anti-principles, it's not inconsistent that they should fund someone without it, and so I assume the grant-maker felt they had sufficiently strong evidence in this situation.
  • I've seen OpenPhil put a lot of work into studying the history of philanthropy, and funding research about it. I don't think the expert consensus is as strong as you make it out to be, and would want to see more engagement with the arguments OpenPhil has made before I would believe such a conclusion.

OpenPhil does have, as one of its goals, improving the global conversation about philanthropy, which is one of the reasons the staff spend so much time writing down their models and reasons (example, meta-example). In general it seems to me that 'panels' are the sorts of thing an organisation develops when it's trying to make defensible decisions, like in politics. I tend to see OpenPhil's primary goal here as optimising for communicating its core beliefs to those interested in (a) helping OpenPhil understand things better or (b) using the info to inform their own decisions, rather than broadcasting every possible detail in a defensible way (especially if that's costly in terms of time).

Thanks for the comment! I think, however, your comment doesn't address my main concerns: the effectiveness and efficiency of research within the OpenPhil funding policy. Before I explain why, and reply to each of your points, let me clarify what I mean by effectiveness and efficiency.

By effective I mean research that achieves intended goals and makes an impact in the given domain, thus serving as the basis for (communal) knowledge acquisition. The idea that knowledge is essentially social is well known from the literature in social epistemology, and I think it'd be pretty hard to defend the opposite, at least with respect to scientific research.

By efficient I mean producing as much knowledge with as few resources (including time) as possible (i.e. epistemic success/time & costs of research).
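Put as a rough ratio (just a schematic restatement of the definition above, not a precise metric):

$$\text{efficiency} \;\approx\; \frac{\text{epistemic success achieved}}{\text{time and resources invested}}$$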

Now, understanding how OpenPhil works doesn't necessarily show that such a policy results in effective and efficient research output:

  • not justifying their decisions in writing: this indeed doesn't suggest their policy is ineffective or inefficient, though it goes against the idea of transparency and contributes to the difficulty of assessing the effectiveness and efficiency of their projects;

  • not avoiding the "superficial appearance of being overconfident and underinformed": again, this hardly shows why we should consider them effective and efficient; their decision may very well be effective/efficient, but all that is stated here is that we may never know why.

Compare this with the assessment of effective charities: while a certain charity may state the very same principles on their website, we may agree that we understand how they work; but this will in no way help us to assess whether they should count as an effective charity or not.

In the same vein, all I am asking is: should we, and if so why, consider the funding policy of OpenPhil effective and efficient? Why is this important? Well, I take it to be important if we value effective and efficient research as an important ingredient of funding allocation within EA. If effective altruism is supposed to be compatible with ineffectiveness and inefficiency in philanthropic research, the burden of proof is on the side that would hold this stance (similarly to the idea that EA would be compatible with ineffective and inefficient charity work).

Now to your points on the grant algorithm:

1. Effectiveness and efficiency

The framework in EA of 'scope, tractability and neglectedness' was in fact developed by Holden Karnofsky (the earliest place I know of it being written down is in this GiveWell blogpost) so it was very likely in the grant-maker's mind.

In the particular case I discuss above, it may have been likely, but unfortunately it is entirely unclear why that was so. That's all I am saying. I see no argument except for "trust a single anonymous reviewer". Note that the reasoning of the reviewer could easily be blinded for public presentation to preserve their anonymity. However, none of that is accessible. As a result, it is impossible to judge why the funding policy should be considered effective or efficient, which is precisely my point.

2. A panel of expert reviewers

This actually is contrary to how OpenPhil works: they attempt to give single individuals a lot of grant-making judgement. This fits in with my general expectation of how good decision making works; do not have a panel, but have a single individual who is rewarded based on their output (unfortunately OpenPhil's work is sufficiently long-term that it's hard to have local incentives, though an interesting financial setup for the project managers would be one where, should they get a win of sufficient magnitude in the next 10 years (e.g. avert a global catastrophic risk), then they get a $10 million bonus). But yeah, I believe in general a panel cannot create common knowledge of the deep models they have, and can in many cases be worse than an individual.

I beg to differ: a board of reviewers may very well consist of individuals who do precisely what you assign to a single reviewer: "a lot of grant-making judgment". As is well known from journal publication procedures, a single reviewer may easily be biased in a certain way, or have a blind spot concerning some points of research. Introducing at least two reviewers is done in order to keep biases in check and avoid blind spots. Defending the opposite goes against basic standards of social epistemology (from Millian views on scientific inquiry, through the critical rationalists' stance, to the points raised by contemporary feminist epistemologists). Finally, if this is how OpenPhil works, that doesn't tell us anything concerning the effectiveness/efficiency of such a policy.

3. One's track record (including one's publication record)

A strong publication record seems like a great thing. Given the above anti-principles, it's not inconsistent that they should fund someone without it, and so I assume the grant-maker felt they had sufficiently strong evidence in this situation.

But why should we take that to be effective and efficient funding policy? That the grant-maker felt so is hardly an argument. I am sure many ineffective charities feel they are doing the right thing, yet we wouldn't call them effective for that, would we?

4. The applicability of the above methodology to philanthropic funding

I've seen OpenPhil put a lot of work into studying the history of philanthropy, and funding research about it. I don't think the expert consensus is as strong as you make it out to be, and would want to see more engagement with the arguments OpenPhil has made before I would believe such a conclusion.

Again, they may have done so up to now, but my question is really: why is this effective or efficient? Philanthropic research that falls within the scope of a scientific domain is essentially scientific research. The basic ideas behind the notion of pursuit worthiness have been discussed e.g. by Anne-Whitt and Nickles, but see also the work by Kitcher, Longino, Douglas, and Lacey - to name just a few authors who have emphasized the importance of the social aspects of scientific knowledge and the danger of biases. Now, if you wish to argue that philanthropic funding of scientific research does not and (more importantly) should not fall under the scope of criteria that cover the effectiveness and efficiency of scientific research in general, the burden of proof will again be on you (I honestly can't imagine why this would be the case, especially since all of the above-mentioned scholars pay close attention to the role of non-epistemic (ethical, social, political, etc.) values in the assessment of scientific research).

Ah, I see. Thanks for responding.

I notice until now I’ve been conflating whether the OpenPhil grant-makers themselves should be a committee, versus whether they should bring in a committee to assess the researchers they fund. I realise you’re talking about the latter, while I was talking about the former. Regarding the latter (in this situation) here is what my model of a senior staff member at OpenPhil thinks in this particular case of AI.

If they were attempting to make grants in a fairly mainstream area of research (e.g. transfer learning on racing games) then they would have absolutely wanted to use a panel when considering some research. However, OpenPhil is attempting to build a novel research field, one that is not very similar to existing fields. One of the big things that OpenPhil has changed their mind about in the past few years is going from believing there was expert consensus in AI that AGI would not be a big problem, to believing that there is no relevant expert class on the topic of forecasting AGI capabilities and timelines; the expert class most people think about (ML researchers) is much better at assessing the near-term practicality of ML research.

As such, there was not a relevant expert class in this case, and OpenPhil picked an unusual method of determining whether to give the grant (that heavily included variables such as the fact that MIRI has a strong track record of thinking carefully about long-term AGI related issues). I daresay MIRI and OpenPhil would not expect MIRI to pass the test you are proposing, because they are trying to do something qualitatively different than anything currently going on in the field.

Does that feel like it hits the core point you care about?


If that does resolve your confusion about OpenPhil’s decision, I will further add:

If your goal is to try to identify good funding opportunities, then we are in agreement: the fact that OpenPhil has funded an organisation (plus the associated write-up about why) is commonly not sufficient information to persuade me that it's sufficiently cost-effective that I should donate to it over, say, a GiveWell top charity.

If your goal however is to figure out whether OpenPhil's organisation in general is epistemically sound, I would look to other variables than the specific grants where the reasoning is least transparent and looks the most wrong. The main reason I have an unusually high amount of trust in OpenPhil's decisions comes from seeing other positive epistemic signs from its leadership and key research staff, not from assessing a single grant data point. My model of OpenPhil's competence instead weights more:

  • Their hiring process
  • Their cause selection process
  • The research I’ve seen from their key researchers (e.g. Moral Patienthood, Crime Stats Replication)
  • Significant epistemic signs from the leadership (e.g. Three Key Things I've Changed My Mind About, building GiveWell)
  • When assessing the grant making in a particular cause, I'd look to the particular program manager and see what their output has been like.

Personally, in the first four cases I've seen remarkably strong positive evidence. Regarding the last, I actually haven't got much evidence; the individual program managers do not tend to publish much. Overall I'm very impressed with OpenPhil as an org.

(I'm about to fly on a plane, can find more links to back up some claims later.)

Thanks, Benito, there are quite some issues we agree on, I think. Let me give names to some points in this discussion :)

General work of OpenPhil. First, let me state clearly that my post in no way challenges (nor aimed to challenge) OpenPhil as an organization overall. To the contrary: I thought this one hiccup is a rather bad example and poses a danger to the otherwise great stuff. Why? Because the explanation is extremely poor and the money extremely large. So this is my general worry concerning their PR (taking into account their notes on not needing to justify their decisions, etc.) - in this case I think this should have been done, just as they did it in the case of their previous (much smaller) grant to MIRI.

Funding a novel research field. I do understand their idea was to fund a novel approach to this topic or even a novel research field. Nevertheless, I still don't see why this was a good way to go about it, since less risky paths are easily available. Consider the following:

  • OpenPhil makes an open call for research projects targeting the novel domain: the call specifies precisely which questions the projects should tackle;

  • OpenPhil selects a panel of experts who can evaluate both the given projects and the competence of the applicants to carry them out;

  • OpenPhil provides milestone criteria, in view of which the grant would be extended: e.g. the grant may initially be for a period of 5 years (e.g. 1.5 mil EUR is usually considered sufficient to fund a team of 5 members over the course of 5 years), after which the project participants have to show the effectiveness of their project and apply for additional funding.

The benefits of such a procedure would be numerous:

  1. avoiding confirmation bias: as we all here very well know, confirmation bias can easily be present when it comes to controversial topics, which is why a second opinion is extremely important. This doesn't mean we shouldn't allow hard-headed researchers to pursue their provocative ideas, nor that only dominant-theory-compatible ideas should be considered worthy of pursuit. Instead, what needs to be assured is that prospective values, suggesting the promising character of the project, are satisfied. Take for instance Wegener's hypothesis of continental drift, which he proposed in 1912. Wegener was way too confident of the acceptability of his theory, which is why many prematurely rejected his ideas (my coauthor and I argue that such a rejection was clearly unjustified). Nevertheless, his ideas were indeed worthy of pursuit, and the whole research program had clear paths that could have been pursued (despite the numerous problems and anomalies). So a novel, surprising idea challenging an established one isn't the same as a junk-scientific hypothesis which shows no prospective values whatsoever. We can assess its promise, no matter how risky it is. For that we need experts who can check its methodology. And since MIRI's current work concerns decision theory and ML, it's not as if their methodology can't be checked in this way, and in view of the goals of the project set in advance by OpenPhil (so the check-up would have to concern the question: how well does this method satisfy the required goals?).

  2. Another benefit of the above procedure is assuring that the most competent scholars lead the given project. MIRI may have good intentions, but how do we know that some other scholars wouldn't perform the same job even better? There must be some kind of competence check-up, and a time-sensitive effectiveness measure. Number of publications is one possible measure, but not the most fortunate one (I agree on this with others here). But then we need something else, for example: a single publication with a decent impact. Or a few publications over the course of a few years, each of which exhibits a strong impact. Otherwise, how do we know there'll be anything effective done within the project? How do we know these scholars rather than some others will do the job? Even if we like their enthusiasm, unless they reach the scientific community (or the community of science policy makers), how will they be effective? And unless they manage to publish in high-impact venues (say, conference proceedings), how will they reach these communities?

  3. Financing more than one project and thus hedging one's bets: why give all 3.75 mil USD to one project instead of awarding them to, say, two different groups (or as suggested above, to one group, but in phases)?

While I agree that funding risky, potentially ground-breaking research is important and may follow different standards than the regular academic paths, we still need some standards, and those I just suggested seem to me strictly better than the ones employed by OpenPhil in the case of this particular grant. Right now, it all just seems like a buddy system: my buddies are working on this ground-breaking stuff and I trust them, so I'll give them cash for that. Doesn't sound very effective to me :p

Gotcha. I’ll probably wrap up with this comment, here’s my few last thoughts (all on the topic of building a research field):

(I’m commenting on phone, sorry if paragraphs are unusually long, if they are I’ll try to add more breaks later.)

  • Your list of things that OpenPhil could do (e.g. specify the exact questions this new field is trying to solve, or describe what a successful project should accomplish in this field in five years) sounds really excellent. I do not think they’re at all easy in this case however.
  • I think one of the things that makes Alignment a difficult problem (and is the sort of thing you might predict if something were correctly in the reference class of ‘biggest problem for humanity’) is that there is not agreement on what research in the field should look like, or even formal specification of the questions - it is in a pre-paradigmatic stage. It took Eliezer 3 years of writing to convey some of the core intuitions, and even then that only worked for a small set of people. I believe Paul Christiano has not written a broadly understandable description of his research plans for similar reasons.
  • However, I’m strongly in agreement that this would be awesome for the field. I recently realised how much effort MIRI themselves have put into trying to set up the basic questions of the field, even though it’s not been successful so far. I can imagine that doing so would be a significant success marker for any AI Alignment researcher group that OpenPhil funds, and it’s something I think about working on myself from time to time.
  • I have a different feeling to you regarding the funding/writing ratio. I feel that OpenPhil’s reasons for funding MIRI are basically all in the first write-up, and the subsequent (short) write-up contains just the variables that are now different.
  • In particular, they do say this typically wouldn’t be sufficient for funding a research org, but given the many other positive signs in the first write-up, it was sufficient to 2.5x the grant amount (500k/year to 1.25mil/year). I think this is similar to grant amounts to various other grantees in this area, and also much smaller than the total amount OpenPhil is interested in funding this area with (so it doesn’t seem a surprising amount to me).
  • I see this as a similar problem for the other grants to more ‘mainstream’ AI Alignment researchers OpenPhil funds; it’s not clear to me that they’re working on the correct technical problems either, because the technical problems have not been well specified, because they’re difficult to articulate.
  • My broad strokes thoughts again are that, when you choose to make grants that your models say have the chance of being massive hits, you will just look like you’re occasionally making silly mistakes, even once people take into account that this is how they should expect you to look. Given that I’ve personally spent a bunch of time thinking about MIRI’s work, I have an idea of what models OpenPhil has built that are hard to convey, but it seems reasonable to me that in your epistemic position this looks like a blunder. I think that OpenPhil probably knew it would look like this to some, and decided to make the call anyway.

Final note: of your initial list of three things, the open call for research is the one I think is least useful for OpenPhil. When you’re funding at this scale in any field, the thought is not “what current ideas do people have that I should fund”, but “what new incentives can I add to this field”? And when you’re adding new incentives that are not those that already exist, it’s useful to spend time initially talking a lot with the grantees to make sure they truly understand your models (and you theirs) so that the correct models and incentives are propagated.

For example, I think if OpenPhil had announced a $100 grant scheme for Alignment research, many existing teams would’ve explained why their research already is this, and started using these terms, and it would’ve impeded the ability to build the intended field. I think this is why, even in cause areas like criminal justice and farm animal welfare, OpenPhil has chosen to advertise less and instead open 1-1 lines of communication with orgs they think are promising.

Letting e.g. a criminal justice org truly understand what you care about, and what sorts of projects you are and aren’t willing to fund, helps them plan accordingly for the future (as opposed to going along as usual and then suddenly finding out you aren’t interested in funding them any more). I think the notion that they’d be able to succeed by announcing a call for grants to solve a problem X is too simplistic a view of how models propagate; in general, to cross significant inferential gaps you need (on the short end) several extensive 1-1 conversations, and (on the longer end) textbooks with exercises.

Added: More generally, how many people you can fund quickly to do work is a function of how inferentially far you are away from the work that the people you hope to fund are already doing.

(On the other hand, you want to fund them well to signal to the rest of a field that there is real funding here if they provide what you’re looking for. I’m not sure exactly how to make that tradeoff.)

Re: Pre-paradigmatic science: see the above example of Wegener. If you want to discuss pre-paradigmatic research, let's discuss it seriously. Let's go into historical examples (or contemporary ones, all the same to me) and analyze the relevant evaluative criteria. You haven't given me a single reason why my proposed criteria wouldn't work in the case of such research. Just because there is a scientific disagreement in the given field doesn't imply that no experts can be consulted (except for a single one) to evaluate the promise of the given innovative idea. Moreover, you haven't shown at all why MIRI should be taken as effective in this domain. Again, my question is very simple: in view of which criteria? Check again the explanation given by OpenPhil: they call upon the old explanation, from when they were hardly certain of giving them 0.5 mil USD, and the reviewer's conviction that a non-peer-reviewed paper is great. And then they give them more than 7 times that amount of money.

All that you're telling me in your post is that we should trust them. Not a single standard has been offered for why this should count as effective/efficient research funding.

But, let me go through your points in order:

Your list of things that OpenPhil could do (e.g. specify the exact questions this new field is trying to solve, or describe what a successful project should accomplish in this field in five years) sounds really excellent. I do not think they’re at all easy in this case however.

Sorry, this is no argument. Do explain why. If the next point is why, see the response to it below.

I think one of the things that makes Alignment a difficult problem (and is the sort of thing you might predict if something were correctly in the reference class of ‘biggest problem for humanity’) is that there is not agreement on what research in the field should look like, or even formal specification of the questions - it is in a pre-paradigmatic stage. It took Eliezer 3 years of writing to convey some of the core intuitions, and even then that only worked for a small set of people. I believe Paul Christiano has not written a broadly understandable description of his research plans for similar reasons.

So are you saying that because we have a pre-paradigmatic stage there are no epistemic standards we can call upon? So, anything goes? Sorry, but not even Kuhn would agree with that. We still have shared epistemic values even though we may interpret them differently. Again: communication breakdown is not necessary despite potential incommensurabilities between the approaches. The least that can be done is that within the given novel proposal, the epistemic standards are explicated and justified. Otherwise, you are equating novel scientific research with any nonsense approaches. No assessment means anything goes, and I don't think you wanna go that path (or next we'll have pseudo-scientific crackpots running wild, arguing their research agenda is simply in a "pre-paradigmatic state").

However, I’m strongly in agreement that this would be awesome for the field. I recently realised how much effort MIRI themselves have put into trying to set up the basic questions of the field, even though it’s not been successful so far. I can imagine that doing so would be a significant success marker for any AI Alignment researcher group that OpenPhil funds, and it’s something I think about working on myself from time to time.

This is just your personal opinion, hardly an argument (unless you're an expert in the field of AI, in which case it could count as higher-order evidence, but then please provide some explanation as to why their research is promising, and why we can expect it to be effective).

I have a different feeling to you regarding the funding/writing ratio. I feel that OpenPhil’s reasons for funding MIRI are basically all in the first write-up, and the subsequent (short) write-up contains just the variables that are now different. In particular, they do say this typically wouldn’t be sufficient for funding a research org, but given the many other positive signs in the first write-up, it was sufficient to 2.5x the grant amount (500k/year to 1.25mil/year). I think this is similar to grant amounts to various other grantees in this area, and also much smaller than the total amount OpenPhil is interested in funding this area with (so it doesn’t seem a surprising amount to me).

Their grant is way higher than the most prestigious ERC grants, so no... it's not a usual amount of money. And the justification given for their initial grant can hardly count for this one with no added explication.

I see this as a similar problem for the other grants to more ‘mainstream’ AI Alignment researchers OpenPhil funds; it’s not clear to me that they’re working on the correct technical problems either, because the technical problems have not been well specified, because they’re difficult to articulate.

Precisely: which is why it may very well be the case that at this point there is hardly anything that can be done (the research program has no positive and negative heuristics, to use Lakatosian terms), which is why I wonder why it is worthy of pursuit to begin with. Again, we need criteria, and currently there is nothing. Just hope that some research will result in something. And why assume others couldn't do the same job? This is an extremely poor view of an extremely broad scientific community. It almost sounds as if you're saying "the scientific community thinks X, but my buddies think X is not the case, so we need to fund my buddies." I don't think you wanna take that road or we'll again slip into junk science.

I agree Open Phil ought to stick with their current approach rather than the panel-based approach Dunja suggests. But this response still doesn't address the problem Dunja was originally noting: the lack of transparency in Open Phil's recent $3.75 million grant to MIRI. Highlighting that Open Phil has transitioned to a policy of generally not avoiding appearing underinformed and overconfident, or of not expecting to always be able to justify themselves in writing, doesn't bear on the claim that in this particular case Open Phil wasn't transparent enough. This is especially so in light of how much justification was given by Open Phil for their much smaller grant to MIRI the prior year, and with so little apparently having changed between 2016 and 2017.

I agree that doing grant-making using the methodology and style used in the public sphere doesn't make sense when our goals don't necessarily entail using the standards reserved for funding things in the public interest. I don't think it's in the interest of the EA community itself to hold Open Phil specifically to these standards as a private organization. However, Open Phil as an organization still identifies as part of the effective altruism movement, which entails holding them to the standards of the movement.

That Open Phil may not expect to be able to fully justify themselves in writing, and won't avoid superficially appearing overconfident and underinformed, doesn't mean the rest of EA can't evaluate them differently. Effective altruists are free to criticize Open Phil for taking too much risk in being actually uninformed and overconfident, based in part on a judgement that they aren't being transparent enough. So while Open Phil may form its policies with not only the effective altruism movement but the whole public in mind, EA can still function as a special interest which demands more from Open Phil regardless. If other effective altruists believe Open Phil isn't being transparent enough, regardless of Open Phil's own self-evaluation, they should bring that up.

Ah, thanks for these! I have to get some sleep now, but these seem to be relevant posts, so I'll try to read them tomorrow :) On a first scan, this topic overlaps with my reply below to Benito, and I disagree with the idea that just because standards are hard to employ, it's impossible to find them. My impression is that this tends to stem from conflating two types of assessments of scientific hypotheses: assessment of their acceptability (how much is the theory confirmed in view of the evidence?) and assessment of their promising character (how promising is this theory/hypothesis?). A problem (e.g. in scientific debates) appears when the criteria of the former assessment are (inadequately) applied to the latter. And as a result, it may seem as if there are no standards we can apply in the latter case.

Anyway, I'll get back to this when I read these posts in more detail.

Disclosure: I'm both a direct and indirect beneficiary of Open Phil funding. I am also a donor to MIRI, albeit an unorthodox one.

[I]f you check MIRI's publications you find not a single journal article since 2015 (or an article published in prestigious AI conference proceedings, for that matter).

I have a 2-year-out-of-date rough draft on bibliometrics re. MIRI, which likely won't get updated due to being superseded by Lark's excellent work and other constraints on my time. That said:

My impression of computer science academia was that (unlike most other fields) conference presentations are significantly more important than journal publications. Further, when one looks at work on MIRI's page from 2016-2018, I see 2 papers at Uncertainty in AI, which this site suggests is a 'top-tier' conference. (Granted, for one of these neither of the authors has a MIRI institutional affiliation, although 'many people at MIRI' are acknowledged.)

Also, parts of their logical induction paper were published/presented at TARK-2017, which is a reasonable fit for the paper, and a respectable though not a top conference.

Oh, I haven't seen that publication on their website. If it was a peer-reviewed publication, that would indeed be something (and the kind of thing I've been looking for). Could you please link to the publication?

Thanks for the comment, Gregory! I must say though that I don't agree with you that conference presentations are significantly more important than journal publications in the field of AI (or humanities for that matter). We could discuss this in terms of personal experiences, but I'd go for a more objective criterion: effectiveness in terms of citations. Only once your research results are published in a peer-reviewed journal (including peer-reviewed conference proceedings) can other scholars in the field take them as a (minimally) reliable source for further research that would build on it. By the way, many prestigious AI conferences actually come with a peer-reviewed proceedings (take e.g. AAAI or IJCAI), so you can't even present at the conference without submitting a paper.

Again, MIRI might be doing excellent work. All I am asking is: in view of which criteria can we judge this to be the case? What are the criteria of assessment, which the EA community finds extremely important when it comes to the assessment of charities, and which I think we should find just as important when it comes to the funding of scientific research?

I must say though that I don't agree with you that conference presentations are significantly more important than journal publications in the field of AI (or humanities for that matter). We could discuss this in terms of personal experiences, but I'd go for a more objective criterion: effectiveness in terms of citations.

Technical research on AI generally (although not exclusively) falls under the heading of computer science. In this field, it is not only the prevailing (but not universal) view of practitioners that conference presentations are academically 'better' (here, here, etc.), but that they tend to have similar citation counts too.

Oh, but you are confusing conference presentations with conference publications. Check the links you've just sent me: they discuss the latter, not the former. You cannot cite a conference presentation (or that's not what's usually understood under "citations", and definitely not in the links from your post), but only a publication. Conference publications in the field of AI are usually indeed peer-reviewed and yes, indeed, they are often even more relevant than journal publications, at least if published in prestigious conference proceedings (as I stated above).

Now, on MIRI's publication page there are no conference publications in 2017, and for 2016 there are mainly technical reports, which is fine, but these should again not be confused with regular (conference) publications, at least according to the information provided by the publisher. Note that this doesn't mean technical reports are of no value! To the contrary. I am just making an overall analysis of the current state of MIRI's publications, trying to figure out what they've published, and then how this compares with the publication record of similarly sized research groups in a similar domain. If I am wrong on any of these points, I'll be happy to revise my opinion!

This paper was in 2016, and is included in the proceedings of the UAI conference that year. Does this not count?

Sure :) I saw that one on their website as well. But a few papers over the course of 2-3 years isn't very representative of an effective research group, is it? If you look at groups led by scholars who do get (way smaller) grants in the field of AI, their output is way more effective. But even if we don't count publications and instead speak in terms of the effectiveness of a few publications, I am not seeing anything. If you are, maybe you can explain it to me?

I regret I don't have much insight to offer on the general point. When I was looking into the bibliometrics myself, a very broad comparison to (e.g.) Norwegian computer scientists gave figures like '~0.5 to 1 paper per person-year', with which MIRI's track record seemed about on par if we look at peer-reviewed technical work. I wouldn't be surprised to find better-performing research groups (in terms of papers/highly cited papers), but slightly more so if these groups were doing AI safety work.

I think the piece missing from your understanding is part of MIRI's intellectual DNA.

In it you can find a lot of Eliezer Yudkowsky's thought - I would recommend reading his latest book "Inadequate Equilibria", where he explains some of the reasons why the normal research environment may be inadequate in some respects.

MIRI is explicitly founded on the premise of freeing some people from the "publish or perish" pressure, which severely limits what people in normal academia work on and care about. If you give enough probability to this being an approach worth taking, it does make sense to base decisions on funding MIRI on different criteria.

Hi Jan, I am aware of the fact that the "publish or perish" environment may be problematic (and that MIRI isn't very fond of it), but we should distinguish between publishing as many papers as possible and publishing at least some papers in high-impact journals.

Now, if we don't want to base our assessment of effectiveness and efficiency on any publications, then we need something else. So what would be these different criteria you mention? How do we assess the research project as effective? And how do we assess that the project has shown to be effective over the course of time?

What I would do when evaluating a potentially high-impact, high-uncertainty "moonshot type" research project would be to ask some trusted, highly knowledgeable researcher to assess the thing. I would not evaluate publication output, but whether the effort looks sensible, whether the people working on it are good, and whether some progress is being made (even if in discovering things which do not work).

OK, but then, why not the following:

  1. why not ask at least 2-3 experts? Surely, one of them could be (unintentionally) biased or misinformed, or she/he may simply omit an important point in the project and assess it too negatively or too positively?
  2. if we don't assess the publication output of the project initiator(s), how do we assure that these very people, rather than some other scholars, would pursue the given project most effectively and efficiently? Surely, some criteria will matter: for example, if I have a PhD in philosophy, I will be quite unqualified to conduct a project in the domain of experimental physics. So some competence seems necessary. How do we assure it, and why not care about effectiveness and efficiency in this step?
  3. I agree that negative results are valuable, and that some progress should be made. So what is the progress MIRI has shown over the course of the last 3 years, such that it can be identified as efficient and effective research?
  4. Finally, don't you think making an open call for projects on the given topic, and awarding the one(s) that seem most promising is a method that would be more reliable in view of possible errors in judgment than just evaluating whoever is the first to apply for the grant?

I agree with Benito and others that this post would benefit from a deeper engagement with already-stated OPP policy (see, for instance, this recent interview with Holden Karnofsky: https://soundcloud.com/80000-hours/21-holden-karnofsky-open-philanthropy), but I do think it is good to have this conversation.

There are definitely arguments for OPP's positions on the problems with academia, and I think taking a different approach may be worthwhile. At the same time, I am a bit confused about the lack of written explanations or the opposition to panels. There are ways to try to create more individualized incentives within panels. Re: written explanations, while it does make sense to avoid being overly pressured by public opinion, having to make some defense of a decision is probably helpful to a decision's efficacy. An organization can just choose to ignore public responses to its written grant justification and to listen only to experts' responses to a grant. I would think that some critical engagement would be beneficial.

Re: justification, I agree, especially when the grant is controversial (a lot of money, an unusual choice of group, etc.), and thanks for the link!

Minor points, (1) I think it is standard practice for peer review to be kept anonymous, (2) some of the things you are mentioning seem like norms about grants and writeups that will reasonably vary based on context, (3) you're just looking at one grant out of all that Open Phil has done, (4) while you are looking at computer science, their first FDT paper was accepted at Formal Epistemology Workshop, and a professional philosopher of decision theory who went there spoke positively about it.

More importantly, once MIRI's publication record is treated with the appropriate nuance, your post doesn't show how they should be viewed as inferior to any unfunded alternatives. Open Phil has funded other AI safety projects besides MIRI, and there is not much being done in this field, so the grants don't commit them to the claim that MIRI is better than most AI safety projects. So we don't have an empirical basis for doubting their loose, hits-based-giving approach. We can presume that formal, traditional institutional funding policies would do better, but it is difficult to argue that point to the level of certainty that tells us that the situation is "disturbing". Those policies are costly - they take more time and people to implement.

(1) I think it is standard practice for peer review to be kept anonymous,

The problem wasn't the reviewer being anonymous, but the lack of access to the report.

(2) some of the things you are mentioning seem like norms about grants and writeups that will reasonably vary based on context,

Sure, but that doesn't mean no criteria should be available.

(3) you're just looking at one grant out of all that Open Phil has done,

Indeed, I am concerned with one extremely huge grant. I find the sum large enough to warrant concerns, especially since the same can happen with future funding strategies.

(4) while you are looking at computer science, their first FDT paper was accepted at Formal Epistemology Workshop, and a professional philosopher of decision theory who went there spoke positively about it.

I was raising an issue concerning journal articles, which are nonetheless important even in computer science to solidify the research results. Proceedings are important for novel results, but the actual rigor of reviews comes through in journal publications (otherwise, journals would be pointless in this domain).

As for the rest of your post, I advise comparing the output of groups of smaller or similar size that have been funded via prestigious grants; you'll notice a difference.

Open Phil gave $5.6MM to Berkeley for AI, even though Russell's group is new and its staff/faculty are still fewer than the staff of MIRI. They gave $30MM to OpenAI. And $1-2MM to many other groups. Of course EAs can give more to particular groups; that's because we're EAs, and we're willing to give a lot of money to wherever it will do the most good in expectation.

Again, you are missing the point: my argument concerns the criteria in view of which projects are assessed as worthy of funding. These criteria exist and are employed by various funding institutions across academia. I haven't seen any such criteria (and the justification thereof, such that they are conducive to effective and efficient research) in this case, which is why I've raised the issue.

we're willing to give a lot of money to wherever it will do the most good in expectation.

And my focus is on: which criteria are used/should be used in order to decide which research projects will do the most good in expectation. Currently such criteria are lacking, including their justification in terms of effectiveness and efficiency.

Open Phil has a more subjective approach; others have talked about their philosophy here. That means it's not easily verifiable to outsiders, but that's of no concern to Open Phil, because it is their own money.

Again: you are missing my point :) I don't care if it's their money or not, that's beside my point.

What I care about is: are their funding strategies rooted in the standards that are conducive to effective and efficient scientific research?

Otherwise, it makes no sense to label them as an organization that conforms to the standards of EA, at least in the case of such practices.

Subjective, unverifiable, etc. has nothing to do with such standards (= conducive to effective & efficient scientific research).

are their funding strategies rooted in the standards that are conducive to effective and efficient scientific research?

As I stated already, "We can presume that formal, traditional institutional funding policies would do better, but it is difficult to argue that point to the level of certainty that tells us that the situation is "disturbing". Those policies are costly - they take more time and people to implement." It is, in short, your conceptual argument about how to do EA. So, people disagree. Welcome to EA.

Subjective, unverifiable, etc. has nothing to do with such standards

It has something to do with the difficulty of showing that a group is not conforming to the standards of EA.

Oh no, this is not just a matter of opinion. There are numerous articles written in the field of philosophy of science aimed precisely to determine which criteria help us to evaluate promising scientific research. So there is actually quite some scholarly work on this (and it is a topic of my research, as a matter of fact).

So yes, I'd argue that the situation is disturbing, since an immense amount of money is going into research for which there is no good reason to suppose that it is effective or efficient.

kbog

Oh no, this is not just a matter of opinion.

Part of being in an intellectual community is being able to accept that you will think that other people are very wrong about things. It's not a matter of opinion, but it is a matter of debate.

There are numerous articles written in the field of philosophy of science aimed precisely to determine which criteria help us to evaluate promising scientific research

Oh, there have been numerous articles, in your field, claimed by you. That's all well and good, but it should be clear why people will have reasons for doubts on the topic.

Part of being in an intellectual community is being able to accept that you will think that other people are very wrong about things. It's not a matter of opinion, but it is a matter of debate.

Sure! Which is why I've been exchanging arguments with you.

Oh, there have been numerous articles, in your field, claimed by you.

Now what on earth is that supposed to mean? What are you trying to say with this? You want references, is that it? I have no idea what this claim is supposed to stand for :-/

That's all well and good, but it should be clear why people will have reasons for doubts on the topic.

Sure, and so far you haven't given me a single good reason. The only thing you've done is reiterate the lack of transparency on the side of OpenPhil.

Sure! Which is why I've been exchanging arguments with you.

And, therefore, you would be wise to treat Open Phil in the same manner, i.e. something to disagree with, not something to attack as not being Good Enough for EA.

Now what on earth is that supposed to mean? What are you trying to say with this? You want references, is that it? I have no idea what this claim is supposed to stand for :-/

It means that you haven't argued your point with the sufficient rigor and comprehensiveness that is required for you to convince every reasonable person. (no, stating "experts in my field agree with me" does not count here, even though it's a big part of it)

Sure, and so far you haven't given me a single good reason.

Other people have discussed and linked Open Phil's philosophy, I see no point in rehashing it.

I don't have the time to join the debate, but I'm pretty sure Dunja's point isn't "I know that OpenPhil's strategy is bad" but "Why does everyone around here act as though it is knowable that their strategy is good, given their lack of transparency?" It seems like people act as though OpenPhil's strategy is good, and aren't massively confused / explicitly clear that they don't have the info that is required to assess the strategy.

Dunja, is that accurate?

(Small note: I'd been meaning to try to read the two papers you linked me to above a couple months ago about continental drift and whatnot, but I couldn't get non-paywalled versions. If you have them, or could send them to me at gmail.com preceded by 'benitopace', I'd appreciate that.)

Thanks, Benito, that sums it up nicely!

It's really about the transparency of the criteria, and that's all I'm arguing for. I am also open to changing my views on the standard criteria etc. - I just care that we start the discussion with some rigor concerning how best to assess effective research.

As for my papers - crap, that's embarrassing that I've linked paywalled versions. I have them on my academia page too, but I guess those can also only be accessed within that website... I have to think of some proper free solution here. But in any case: please don't feel obliged to read my papers, there's for sure lots of other more interesting stuff out there! If you are interested in the topic, it's enough to scan them to check the criteria I use in these assessments :) I'll email them in any case.

kbog

Yeah, that's a worthy point, but people are not really making decisions on this basis. It's not like GiveWell, which recommends where other people should give. Open Phil has always ultimately been Holden doing what he wants and not caring about what other people think. It's like those "where I donated this year" blogs from the GiveWell staff. Yeah, people might well be giving too much credence to their views, but that's a rather secondary thing to worry about.