Dunja comments on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? - Effective Altruism Forum

Comment author: Gregory_Lewis 28 February 2018 06:06:00AM *  4 points

Disclosure: I'm both a direct and indirect beneficiary of Open Phil funding. I am also a donor to MIRI, albeit an unorthodox one.

[I]f you check MIRI's publications you find not a single journal article since 2015 (or an article published in prestigious AI conference proceedings, for that matter).

I have a rough draft on bibliometrics re. MIRI, now two years out of date, which likely won't get updated due to being superseded by Lark's excellent work and other constraints on my time. That said:

My impression of computer science academia was that (unlike in most other fields) conference presentations are significantly more important than journal publications. Further, when one looks at the work on MIRI's page from 2016-2018, I see 2 papers at Uncertainty in Artificial Intelligence (UAI), which this site suggests is a 'top-tier' conference. (Granted, for one of these neither of the authors has a MIRI institutional affiliation, although 'many people at MIRI' are acknowledged.)

Comment author: Dunja 28 February 2018 11:17:05AM *  1 point

Thanks for the comment, Gregory! I must say, though, that I don't agree with you that conference presentations are significantly more important than journal publications in the field of AI (or the humanities, for that matter). We could discuss this in terms of personal experiences, but I'd go for a more objective criterion: effectiveness in terms of citations. Only once your research results are published in a peer-reviewed journal (including peer-reviewed conference proceedings) can other scholars in the field take them as a (minimally) reliable source for further research that builds on them. By the way, many prestigious AI conferences actually come with peer-reviewed proceedings (take e.g. AAAI or IJCAI), so you can't even present at the conference without submitting a paper.

Again, MIRI might be doing excellent work. All I am asking is: in view of which criteria can we judge this to be the case? What are the criteria of assessment, which the EA community finds extremely important when it comes to the assessment of charities, and which I think we should find just as important when it comes to the funding of scientific research?

Comment author: Gregory_Lewis 01 March 2018 05:54:42AM 3 points

I must say though that I don't agree with you that conference presentations are significantly more important than journal publications in the field of AI (or humanities for that matter). We could discuss this in terms of personal experiences, but I'd go for a more objective criterion: effectiveness in terms of citations.

Technical research on AI generally (although not exclusively) falls under the heading of computer science. In this field, it is not only the prevailing (but not universal) view of practitioners that conference presentations are academically 'better' (here, here, etc.), but that they tend to have similar citation counts too.

Comment author: Dunja 01 March 2018 09:17:16AM *  2 points

Oh, but you are confusing conference presentations with conference publications. Check the links you've just sent me: they discuss the latter, not the former. You cannot cite a conference presentation (or at least that's not what's usually understood by "citations", and definitely not in the links from your post), only a publication. Conference publications in the field of AI are indeed usually peer-reviewed, and yes, they are often even more relevant than journal publications, at least if published in prestigious conference proceedings (as I stated above).

Now, on MIRI's publication page there are no conference publications in 2017, and for 2016 there are mainly technical reports, which is fine, but these should again not be confused with regular (conference) publications, at least according to the information provided by the publisher. Note that this doesn't mean technical reports are of no value! On the contrary. I am just making an overall analysis of MIRI's publication record, trying to figure out what they've published and how this compares with the publication records of similarly sized research groups in a similar domain. If I am wrong on any of these points, I'll be happy to revise my opinion!

Comment author: Gregory_Lewis 01 March 2018 10:18:30AM *  1 point

This paper was in 2016, and is included in the proceedings of the UAI conference that year. Does this not count?

Comment author: Dunja 01 March 2018 10:31:27AM 2 points

Sure :) I saw that one on their website as well. But a few papers over the course of 2-3 years isn't very representative of an effective research group, is it? If you look at groups led by scholars who do get (way smaller) grants in the field of AI, their output is way more effective. But even if we don't count publications, and instead speak in terms of the effectiveness of a few publications, I am not seeing anything. If you are, maybe you can explain it to me?

Comment author: Gregory_Lewis 04 March 2018 12:20:40PM 0 points

I regret I don't have much insight to offer on the general point. When I was looking into the bibliometrics myself, a very broad comparison to (e.g.) Norwegian computer scientists gave figures like '~0.5 to 1 paper per person-year', with which MIRI's track record seemed about on par if we look at peer-reviewed technical work. I wouldn't be surprised to find better-performing research groups (in terms of papers/highly cited papers), though slightly more surprised if these groups were doing AI safety work.

Comment author: Jan_Kulveit 28 February 2018 01:11:40PM 1 point

I think the part that is lacking in your understanding is MIRI's intellectual DNA.

In it you can find a lot of Eliezer Yudkowsky's thought. I would recommend reading his latest book, "Inadequate Equilibria", where he explains some of the reasons why the normal research environment may be inadequate in some respects.

MIRI is explicitly founded on the premise of freeing some people from the "publish or perish" pressure, which severely limits what people in normal academia work on and care about. If you assign enough probability to this approach being worth taking, it does make sense to base decisions on funding MIRI on different criteria.

Comment author: Dunja 28 February 2018 02:04:25PM *  3 points

Hi Jan, I am aware that the "publish or perish" environment may be problematic (and that MIRI isn't very fond of it), but we should distinguish between publishing as many papers as possible and publishing at least some papers in high-impact journals.

Now, if we don't want to base our assessment of effectiveness and efficiency on any publications, then we need something else. So what would these different criteria you mention be? How do we assess the research project as effective? And how do we assess that the project has proven effective over the course of time?

Comment author: Jan_Kulveit 28 February 2018 02:54:25PM 1 point

What I would do when evaluating a potentially high-impact, high-uncertainty "moonshot type" research project would be to ask some trusted, highly knowledgeable researcher to assess the thing. I would not evaluate publication output, but whether the effort looks sensible, whether the people working on it are good, and whether some progress is made (even if in discovering things which do not work).

Comment author: Dunja 28 February 2018 03:33:35PM *  3 points

OK, but then, why not the following:

  1. Why not ask at least 2-3 experts? Surely, one of them could be (unintentionally) biased or misinformed, or may simply omit an important point in the project and assess it too negatively or too positively?
  2. If we don't assess the publication output of the project initiator(s), how do we assure that these very people, rather than some other scholars, would pursue the given project most effectively and efficiently? Surely, some criteria will matter: for example, if I have a PhD in philosophy, I will be quite unqualified to conduct a project in the domain of experimental physics. So some competence seems necessary. How do we assure it, and why not care about effectiveness and efficiency at this step?
  3. I agree that negative results are valuable, and that some progress should be made. So what is the progress MIRI has shown over the course of the last 3 years, such that this can be identified as efficient and effective research?
  4. Finally, don't you think that making an open call for projects on the given topic, and awarding the one(s) that seem most promising, would be a method more reliable in view of possible errors in judgment than just evaluating whoever is the first to apply for the grant?