Comment author: Benito 24 April 2018 11:16:09AM 6 points [-]

I’m such a big fan of “outreach is an offer, not persuasion”.

In general, my personal attitude to outreach in student groups is not to ‘get’ the best people via attraction and sales, but to just do something awesome that seems to produce value (e.g. build a research group around a question, organise workshops around a thinking tool, write a talk on a topic you’re confused about and want to discuss), and then the best people will join you on your quest. (Think quests, not sales.)

If your quest involves sales as a side-effect (e.g. you’re running an EAGx) then that’s okay, as long as the core of what you’re doing is trying to solve a real problem and make progress on an open question you have. Run EAGxes around a goal of moving the needle forward on certain questions, on making projects happen, solving some coordination problem in the community, or some other concrete problem-based metric. Not just “get more EAs”.

I think the reason this post (and all other writing on the topic) has had difficulty suggesting particular quests is that they tend to be deeply tied up in someone's psyche. Nonetheless I think this is what's necessary.

Comment author: itaibn 05 April 2018 12:13:20PM 13 points [-]

On this very website, clicking the link "New to Effective Altruism?" and a little browsing quickly leads to recommendations to give to EA funds. If EA funds really is intended to be a high-trust option, CEA should change that recommendation.

Comment author: Benito 05 April 2018 07:22:51PM *  1 point [-]

Yup. I suppose I wrote down my assessment of the information available about the funds and the sort of things that would cause me to donate to them, not the marketing used to advertise them - which does indeed feel disconnected. It seems there's a confusing attempt to make the funds look reasonable to everyone whilst not in fact offering the sort of evidence that would make them so.

The evidence about it is not the 'evidence-backed charities' that made GiveWell famous and trustworthy, but rather "here is a high-status person in a related field who has a strong connection to EA", which is not that different from the way other communities ask their members for funding - it's based on trust in the community's leaders, not on metrics objectively verifiable to outsiders. So you should ask yourself what causes you to trust CEA and then use that, as opposed to the objective metrics associated with the EA Funds (of which there are far fewer than with GiveWell). For example, if CEA has generally made good philosophical progress in this area and also made good hiring decisions, that would make you trust the grant managers more.

Comment author: Ervin 04 April 2018 10:58:49PM 18 points [-]

Looking at the EA Community Fund as an especially tractable example (due to the limited field of charities it could fund):

  • Since its launch in early 2017 it appears to have collected $289,968, and not to have regranted any of it until an $83k grant to EA Sweden currently in progress. I am basing this on https://app.effectivealtruism.org/funds/ea-community - it may not be precisely right.

  • On the one hand, it's good that some money is being disbursed. On the other hand, the only info we have is https://app.effectivealtruism.org/funds/ea-community/payouts/1EjFHdfk3GmIeIaqquWgQI . All we're told about the idea and why it was funded is that it's an "EA community building organization in Sweden" and Will MacAskill recommended Nick Beckstead fund it "on the basis of (i) Markus's track record in EA community building at Cambridge and in Sweden and (ii) a conversation he had with Markus." Putting it piquantly (and over-strongly I'm sure, for effect), this sounds concerningly like an old boys' network: Markus > Will > Nick. (For those who don't know, Will and Nick were both involved in creating CEA.) It might not be, but the paucity of information doesn't let us reassure ourselves that it's not.

  • With $200k still unallocated, one would hope that the larger and more reputable EA movement-building projects out there would have been funded, or that we could at least see that they've been diligently considered. I may be leaving some out, but these would at least include the non-CEA movement-building charities: EA Foundation (for their EA outreach projects), Rethink Charity and EA London. As best I could get an answer from Rethink Charity at http://effective-altruism.com/ea/1ld/announcing_rethink_priorities/dir?context=3 this is not true in their case at least.

  • Meanwhile these charities can't make their case directly to movement-building donors whose money has gone to the fund since its creation.

This is concerning, and sounds like it may have done harm.

Comment author: Benito 05 April 2018 12:52:10AM *  5 points [-]

Note: EA is totally a trust network - I don't think the funds are trying to be anything like GiveWell, which you're supposed to trust based on the publicly verifiable rigour of its research. EA Funds is much further toward the end of the spectrum of "have you personally seen CEA make good decisions in this area" or "do you specifically trust one of the re-granters". Which is fine; trust is how tightly-knit teams and communities often get made. But if you gave to it thinking "this will be like giving to Oxfam, with the same accountability structure", then you'll rightly be surprised to find out it works significantly via personal connections.

The same way you'd only fund a startup if you knew the founders and how they worked, you should probably only fund EA Funds for similar reasons - and if a startup tried to make its business plan such that anyone would have reason to fund it, the business plan probably wouldn't be very good. I think that EA should continue to be a trust-based network, and so on the margin I'd guess people should give less to EA Funds rather than EA Funds make grants that are more defensible.

Comment author: Jan_Kulveit 04 April 2018 10:27:46AM *  24 points [-]

From my observations, the biggest problem in the current EA funding ecosystem is structural bottlenecks.

It seems difficult to get relatively modest funding for a promising project if you are not well connected in the network, and for early-stage projects in general.

Why?

While OpenPhil has an abundance of resources, they are at the moment staff-limited, unlikely to grant to subjects they don't know directly, and unlikely to grant to small projects (e.g. $10k).

EA Funds also seems to be staff-limited and not capable of giving small grants.

In theory, EA Grants should fill this gap, but the program also seems staff-limited (I'm familiar with one grant application where, since Nov 2017, the date when the grant program will open has been pushed into the future at a rate of one month per month).

Part of the problem with supporting early-stage projects is that it generally means investing in people. Investing in people requires either trust or a lot of resources to evaluate them (which is in some respects more difficult than evaluating projects which are already up and running).

Trust in our setting usually comes via links in the social network, which is quite a limited resource.

So my conclusion is that efficient allocation is structurally limited by 1] a lack of staff in grant-making organizations and 2] the insufficient size of the "trust network" that allows investment in promising projects based on their founders.

Individual EAs have good opportunities to get more impact from their donations than by donating to EA Funds if their funding overcomes these structural bottlenecks. That may mean:

a] donating to projects which are under the radar of OpenPhil and EA Funds

b] using their personal knowledge of people to support early stage efforts

Comment author: Benito 04 April 2018 06:46:26PM *  12 points [-]

On trust networks: These are very powerful and effective. YCombinator, for example, say they get most of their best companies via personal recommendation, and the top VCs say that the best way to get funded by them is an introduction by someone they trust.

(Btw, I got an EA Grant last year, I expect in large part because CEA knew me from my successfully running an EAGx conference. I think the above argument is strong on its own, but my guess is many folks around here would like me to mention this fact.)

On things you can do with your money that are better than EA funds: personally I don’t have that much money, but with my excess I tend to do things like buy flights and give money to people I’ve made friends with who seem like they could get a lot of value from it (e.g. buy a flight to a CFAR workshop, fund them living somewhere to work on a project for 3 months, etc). This is the sort of thing only a small donor with personal connections can do, at least currently.

On EA grants:

Part of the problem with supporting early-stage projects is that it generally means investing in people. Investing in people requires either trust or a lot of resources to evaluate them (which is in some respects more difficult than evaluating projects which are already up and running).

Yes. If I were running EA Grants I would continually be in contact with the community, finding out people's project ideas, discussing them with people for 5 hours, getting to know them and how much I could trust them, and then handing out money as I saw fit. This is one of the biggest funding bottlenecks in the community. The people who seem to have addressed it best have actually been the winners of the donor lotteries, who seemed to take it seriously and use the personal information they had.

I haven’t even heard about EA grants this time around, which seems like a failure on all the obvious axes (including the one of letting grantees know that the EA community is a reliable source of funding that you can make multi-year plans around - this makes me mostly update toward EA grants being a one-off thing that I shouldn’t rely on).

Comment author: HoldenKarnofsky 26 March 2018 06:58:19PM *  7 points [-]

The role does include all three of those things, and I think all three things are well served by the job qualifications listed in the posting. A common thread is that all involve trying to deliver an informative, well-calibrated answer to an action-relevant question, largely via discussion with knowledgeable parties and critical assessment of evidence and arguments.

In general, we have a list of the projects that we consider most important to complete, and we look for good matches between high-ranked projects and employees who seem well suited to them. I expect that most entry-level Research Analysts will try their hand at both cause prioritization and grant investigation work, and we'll develop a picture of what they're best at that we can then use to assign them more of one or the other (or something else, such as the work listed at https://www.openphilanthropy.org/get-involved/jobs/analyst-specializing-potential-risks-advanced-artificial-intelligence) over time.

Comment author: Benito 26 March 2018 07:09:04PM 0 points [-]

Thanks Holden!

Comment author: Benito 26 March 2018 06:24:01PM *  8 points [-]

I’m pretty confused about the work of the RA role - it seems to include everything from epidemiological literature reviews to philosophical work on population ethics to following up on individual organisations you’ve funded.

Could you give some concrete info about how you and the RA determine what the RA works on?

Comment author: Dunja 02 March 2018 12:34:28AM *  1 point [-]

Thanks, Benito, there are quite a few issues we agree on, I think. Let me give names to some points in this discussion :)

General work of OpenPhil. First, let me state clearly that my post in no way challenges (nor aimed to challenge) OpenPhil overall as an organization. To the contrary: I thought this one hiccup is a rather bad example and poses a danger to the otherwise great stuff they do. Why? Because the explication is extremely poor and the money extremely large. So this is my general worry concerning their PR (taking into account their notes on not needing to justify their decisions, etc.) - in this case I think a justification should have been given, just as they did in the case of their previous (much smaller) grant to MIRI.

Funding a novel research field. I do understand their idea was to fund a new approach to this topic, or even a novel research field. Nevertheless, I still don't see why this was a good way to go about it, since less risky paths are easily available. Consider the following:

  • OpenPhil makes an open call for research projects targeting the novel domain: the call specifies precisely which questions the projects should tackle;

  • OpenPhil selects a panel of experts who can evaluate both the given projects and the competence of the applicants to carry out the project;

  • OpenPhil provides milestone criteria, in view of which the grant would be extended: e.g. the grant may initially be for a period of 5 years (1.5 mil EUR is usually considered sufficient to fund a team of 5 members over the course of 5 years), after which the project participants have to show the effectiveness of their project and apply for additional funding.
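Spelled out, that parenthetical budget figure works out to:

$$ \frac{1{,}500{,}000\ \text{EUR}}{5\ \text{researchers} \times 5\ \text{years}} = 60{,}000\ \text{EUR per researcher-year.} $$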

The benefits of such a procedure would be numerous:

  1. Avoiding confirmation bias: as we all here know very well, confirmation bias can easily be present when it comes to controversial topics, which is why a second opinion is extremely important. This doesn't mean we shouldn't allow hard-headed researchers to pursue their provocative ideas, nor that only dominant-theory-compatible ideas should be considered worthy of pursuit. Instead, what needs to be assured is that prospective values, suggesting the promising character of the project, are satisfied. Take for instance Wegener's hypothesis of continental drift, which he proposed in 1912. Wegener was way too confident of the acceptability of his theory, which is why many prematurely rejected his ideas (my coauthor and I argue that such a rejection was clearly unjustified). Nevertheless, his ideas were indeed worthy of pursuit, and the whole research program had clear paths that could have been pursued (despite the numerous problems and anomalies). So a novel, surprising idea challenging an established one isn't the same as a junk-scientific hypothesis which shows no prospective values whatsoever. We can assess its promise, no matter how risky it is. For that we need experts who can check its methodology. And since MIRI's current work concerns decision theory and ML, it's not as if their methodology can't be checked in this way, in view of the goals of the project set in advance by OpenPhil (so the check-up would have to concern the question: how well does this method satisfy the required goals?).

  2. Another benefit of the above procedure is assuring that the most competent scholars lead the given project. MIRI may have good intentions, but how do we know that some other scholars wouldn't perform the same job even better? There must be some kind of competence check-up, and a time-sensitive effectiveness measure. Number of publications is one possible measure, but not the most fortunate one (I agree on this with others here). But then we need something else, for example: a single publication with a decent impact. Or a few publications over the course of a few years, each of which exhibits a strong impact. Otherwise, how do we know there'll be anything effective done within the project? How do we know these scholars rather than some others will do the job? Even if we like their enthusiasm, unless they reach the scientific community (or the community of science policy makers), how will they be effective? And unless they manage to publish in high-impact venues (say, conference proceedings), how will they reach these communities?

  3. Financing more than one project and thus hedging one's bets: why give all 3.75 mil USD to one project instead of awarding it to, say, two different groups (or, as suggested above, to one group, but in phases)?

While I agree that funding risky, potentially ground-breaking research is important and may follow different standards than the regular academic paths, we still need some standards, and those I just suggested seem to me strictly better than the ones employed by OpenPhil in the case of this particular grant. Right now, it all just seems like a buddy system: my buddies are working on this ground-breaking stuff and I trust them, so I'll give them cash for that. Doesn't sound very effective to me :p

Comment author: Benito 02 March 2018 01:39:52PM *  2 points [-]

Gotcha. I’ll probably wrap up with this comment, here’s my few last thoughts (all on the topic of building a research field):

(I’m commenting on phone, sorry if paragraphs are unusually long, if they are I’ll try to add more breaks later.)

  • Your list of things that OpenPhil could do (e.g. specify the exact questions this new field is trying to solve, or describe what a successful project should accomplish in this field in five years) sounds really excellent. I do not think they're at all easy in this case however.
  • I think one of the things that makes Alignment a difficult problem (and is the sort of thing you might predict if something were correctly in the reference class of ‘biggest problem for humanity’) is that there is not agreement on what research in the field should look like, or even formal specification of the questions - it is in a pre-paradigmatic stage. It took Eliezer 3 years of writing to convey some of the core intuitions, and even then that only worked for a small set of people. I believe Paul Christiano has not written a broadly understandable description of his research plans for similar reasons.
  • However, I’m strongly in agreement that this would be awesome for the field. I recently realised how much effort MIRI themselves have put into trying to set up the basic questions of the field, even though it’s not been successful so far. I can imagine that doing so would be a significant success marker for any AI Alignment researcher group that OpenPhil funds, and it’s something I think about working on myself from time to time.
  • I have a different feeling to you regarding the funding/writing ratio. I feel that OpenPhil's reasons for funding MIRI are basically all in the first write-up, and the subsequent (short) write-up contains just the variables that are now different.
  • In particular, they do say this typically wouldn't be sufficient for funding a research org, but given the many other positive signs in the first write-up, it was sufficient to 2.5x the grant amount (500k/year to 1.25mil/year). I think this is similar to grant amounts to various other grantees in this area, and also much smaller than the total amount OpenPhil is interested in funding this area with (so it doesn't seem a surprising amount to me).
  • I see this as a similar problem for the other grants to more 'mainstream' AI Alignment researchers that OpenPhil funds; it's not clear to me that they're working on the correct technical problems either, because the technical problems have not been well specified, because they're difficult to articulate.
  • My broad-strokes thoughts, again, are that when you choose to make grants that your models say have a chance of being massive hits, you just will look like you're occasionally making silly mistakes, even once people take into account that this is how you should expect such a funder to look. Having personally spent a bunch of time thinking about MIRI's work, I have an idea of what models OpenPhil has built that are hard to convey, but it seems reasonable to me that from your epistemic position this looks like a blunder. I think that OpenPhil probably knew it would look like this to some, and decided to make the call anyway.

Final note: of your initial list of three things, the open call for research is the one I think is least useful for OpenPhil. When you're funding at this scale in any field, the thought is not "what current ideas do people have that I should fund?", but "what new incentives can I add to this field?" And when you're adding new incentives that are not those that already exist, it's useful to spend time initially talking a lot with the grantees to make sure they truly understand your models (and you theirs) so that the correct models and incentives are propagated.

For example, I think if OpenPhil had announced a $100 grant scheme for Alignment research, many existing teams would've explained why their research already fits, and started using these terms, and it would've impeded the ability to build the intended field. I think this is why, even in cause areas like criminal justice and farm animal welfare, OpenPhil has chosen to advertise less and instead open 1-1 lines of communication with orgs they think are promising.

Letting e.g. a criminal justice org truly understand what you care about, and what sorts of projects you are and aren't willing to fund, helps them plan accordingly for the future (as opposed to going along as usual and then suddenly finding out you aren't interested in funding them any more). I think the notion that they'd be able to succeed by announcing a call for grants to solve a problem X is too simplistic a view of how models propagate; in general, to cross significant inferential gaps you need (on the short end) several extensive 1-1 conversations, and (on the longer end) textbooks with exercises.

Added: More generally, how many people you can fund quickly to do work is a function of how inferentially far you are away from the work that the people you hope to fund are already doing.

(On the other hand, you want to fund them well to signal to the rest of a field that there is real funding here if they provide what you're looking for. I'm not sure exactly how to make that tradeoff.)

Comment author: Dunja 28 February 2018 10:54:39AM *  3 points [-]

Thanks for the comment! I think, however, your comment doesn't address my main concerns: the effectiveness and efficiency of research within the OpenPhil funding policy. Before I explain why, and reply to each of your points, let me clarify what I mean by effectiveness and efficiency.

By effective I mean research that achieves intended goals and makes an impact in the given domain, thus serving as the basis for (communal) knowledge acquisition. The idea that knowledge is essentially social is well known from the literature in social epistemology, and I think it'd be pretty hard to defend the opposite, at least with respect to scientific research.

By efficient I mean producing as much knowledge with as few resources (including time) as possible (i.e. epistemic success / time & costs of research).

Now, understanding how OpenPhil works doesn't necessarily show that such a policy results in effective and efficient research output:

  • not justifying their decisions in writing: this indeed doesn't suggest their policy is ineffective or inefficient, though it goes against the idea of transparency and it contributes to the difficulty of assessing the effectiveness and efficiency of their projects;

  • not avoiding the "superficial appearance of being overconfident and underinformed": again, this hardly shows why we should consider them effective and efficient; their decision may very well be effective/efficient, but all that is stated here is that we may never know why.

Compare this with the assessment of effective charities: while a certain charity may state the very same principles on their website, we may agree that we understand how they work; but this will in no way help us to assess whether they should count as an effective charity or not.

In the same vein, all I am asking is: should we, and if so why, consider the funding policy of OpenPhil effective and efficient? Why is this important? Well, I take it to be important insofar as we value effective and efficient research as an important ingredient of funding allocation within EA. If effective altruism is supposed to be compatible with ineffective and inefficient philanthropic research, the burden of proof is on the side that would hold this stance (similarly to the idea that EA would be compatible with ineffective and inefficient charity work).

Now to your points on the grant algorithm:

1. Effectiveness and efficiency

The framework in EA of 'scope, tractability and neglectedness' was in fact developed by Holden Karnofsky (the earliest place I know of it being written down is in this GiveWell blogpost) so it was very likely in the grant-maker's mind.

In the particular case I discuss above, it may well have been, but unfortunately it is entirely unclear whether that was so. That's all I am saying. I see no argument except for "trust a single anonymous reviewer". Note that the reasoning of the reviewer could easily be blinded for public presentation to preserve their anonymity. However, none of that is accessible. As a result, it is impossible to judge why the funding policy should be considered effective or efficient, which is precisely my point.

2. A panel of expert reviewers

This actually is contrary to how OpenPhil works: they attempt to give single individuals a lot of grant-making judgement. This fits in with my general expectation of how good decision-making works; do not have a panel, but have a single individual who is rewarded based on their output (unfortunately OpenPhil's work is sufficiently long-term that it's hard to have local incentives, though an interesting financial setup for the project managers would be one where, should they get a win of sufficient magnitude in the next 10 years (e.g. avert a global catastrophic risk), then they get a $10 million bonus). But yeah, I believe in general a panel cannot create common knowledge of the deep models they have, and can in many cases be worse than an individual.

I beg to differ: a board of reviewers may very well consist of individuals who do precisely what you assign to a single reviewer: "a lot of grant-making judgment". As it is well known from journal publication procedures, a single reviewer may easily be biased in a certain way, or have a blind spot concerning some points of research. Introducing at least two reviewers is done in order to keep biases in check and avoid blind spots. Defending the opposite goes against basic standards of social epistemology (starting already from Millian views on scientific inquiry, to critical rationalists' stance, to the points raised by contemporary feminist epistemologists). Finally, if this is how OpenPhil works, that doesn't tell us anything concerning the effectiveness/efficiency of such a policy.

3. One's track record (including one's publication record)

A strong publication record seems like a great thing. Given the above anti-principles, it's not inconsistent that they should fund someone without it, and so I assume the grant-maker felt they had sufficiently strong evidence in this situation.

But why should we take that to be effective and efficient funding policy? That the grant-maker felt so is hardly an argument. I am sure many ineffective charities feel they are doing the right thing, yet we wouldn't call them effective for that, would we?

4. The applicability of the above methodology to philanthropic funding

I've seen OpenPhil put a lot of work into studying the history of philanthropy, and funding research about it. I don't think the expert consensus is as strong as you make it out to be, and would want to see more engagement with the arguments OpenPhil has made before I would believe such a conclusion.

Again, they may have done so up to now, but my question is really: why is this effective or efficient? Philanthropic research that falls into the scope of scientific domain is essentially scientific research. The basic ideas behind the notion of pursuit worthiness have been discussed e.g. by Anne-Whitt and Nickles, but see also the work by Kitcher, Longino, Douglas, Lacey - to name just a few authors who have emphasized the importance of social aspects of scientific knowledge and the danger of biases. Now if you wish to argue that philanthropic funding of scientific research does not and (more importantly) should not fall under the scope of criteria that cover the effectiveness and efficiency of scientific research in general, the burden of proof will again be on you (I honestly can't imagine why this would be the case, especially since all of the above mentioned scholars pay close attention to the role of non-epistemic (ethical, social, political, etc.) values in the assessment of scientific research).

Comment author: Benito 01 March 2018 04:07:03PM *  2 points [-]

Ah, I see. Thanks for responding.

I notice that until now I've been conflating whether the OpenPhil grant-makers themselves should be a committee, versus whether they should bring in a committee to assess the researchers they fund. I realise you're talking about the latter, while I was talking about the former. Regarding the latter, here is what my model of a senior staff member at OpenPhil thinks in this particular case of AI.

If they were attempting to make grants in a fairly mainstream area of research (e.g. transfer learning on racing games) then they would absolutely have wanted to use a panel when considering some piece of research. However, OpenPhil is attempting to build a novel research field that is not very similar to existing fields. One of the big things OpenPhil has changed its mind about in the past few years is going from believing there was expert consensus in AI that AGI would not be a big problem, to believing that there is no relevant expert class on the topic of forecasting AGI capabilities and timelines; the expert class most people think of (ML researchers) is much better at assessing the near-term practicality of ML research.

As such, there was no relevant expert class in this case, and OpenPhil picked an unusual method of determining whether to give the grant (one that heavily weighted variables such as MIRI's strong track record of thinking carefully about long-term AGI-related issues). I daresay MIRI and OpenPhil would not expect MIRI to pass the test you are proposing, because they are trying to do something qualitatively different from anything currently going on in the field.

Does that feel like it hits the core point you care about?


If that does resolve your confusion about OpenPhil’s decision, I will further add:

If your goal is to try to identify good funding opportunities, then we are in agreement: the fact that OpenPhil has funded an organisation (plus the associated write-up about why) is commonly not sufficient information to persuade me that it's sufficiently cost-effective that I should donate to it over, say, a GiveWell top charity.

If your goal however is to figure out whether OpenPhil as an organisation is epistemically sound in general, I would look to variables other than the specific grants where the reasoning is least transparent and looks the most wrong. The main reason I have an unusually high amount of trust in OpenPhil's decisions is from seeing other positive epistemic signs from its leadership and key research staff, not from assessing a single grant datapoint. My model of OpenPhil's competence instead weights more:

  • Their hiring process
  • Their cause selection process
  • The research I’ve seen from their key researchers (e.g. Moral Patienthood, Crime Stats Replication)
  • Significant epistemic signs from the leadership (e.g. Three Key Things I've Changed My Mind About, building GiveWell)
  • When assessing the grant-making in a particular cause, I would look to the particular program manager and see what their output has been like.

Personally, in the first four cases I’ve seen remarkably strong positive evidence. Regarding the last, I actually haven’t got much evidence, since the individual program managers do not tend to publish much. Overall I’m very impressed with OpenPhil as an org.

(I'm about to fly on a plane, can find more links to back up some claims later.)

Comment author: Benito 28 February 2018 01:56:39AM *  15 points [-]

I think this stems from a confusion about how OpenPhil works. In their essay Hits-Based Giving, written in early 2016, they list some of the ways they go about philanthropy in order to maximise their chance of a big hit (even while many of their grants may look unlikely to work). Here are two principles most relevant to your post above:

We don’t: expect to be able to fully justify ourselves in writing. Explaining our opinions in writing is fundamental to the Open Philanthropy Project’s DNA, but we need to be careful to stop this from distorting our decision-making. I fear that when considering a grant, our staff are likely to think ahead to how they’ll justify the grant in our public writeup and shy away if it seems like too tall an order — in particular, when the case seems too complex and reliant on diffuse, hard-to-summarize information. This is a bias we don’t want to have. If we focused on issues that were easy to explain to outsiders with little background knowledge, we’d be focusing on issues that likely have broad appeal, and we’d have more trouble focusing on neglected areas.

A good example is our work on macroeconomic stabilization policy: the issues here are very complex, and we’ve formed our views through years of discussion and engagement with relevant experts and the large body of public argumentation. The difficulty of understanding and summarizing the issue is related, in my view, to why it is such an attractive cause from our perspective: macroeconomic stabilization policy is enormously important but quite esoteric, which I believe explains why certain approaches to it (in particular, approaches that focus on the political environment as opposed to economic research) remain neglected.

[...]

A core value of ours is to be open about our work. But “open” is distinct from “documenting everything exhaustively” or “arguing everything convincingly.” More on this below.

And

We don’t: avoid the superficial appearance — accompanied by some real risk — of being overconfident and underinformed.

When I picture the ideal philanthropic “hit,” it takes the form of supporting some extremely important idea, where we see potential while most of the world does not. We would then provide support beyond what any other major funder could in order to pursue the idea and eventually find success and change minds.

In such situations, I’d expect the idea initially to be met with skepticism, perhaps even strong opposition, from most people who encounter it. I’d expect that it would not have strong, clear evidence behind it (or to the extent it did, this evidence would be extremely hard to explain and summarize), and betting on it therefore would be a low-probability play. Taking all of this into account, I’d expect outsiders looking at our work to often perceive us as making a poor decision, grounded primarily in speculation, thin evidence and self-reinforcing intellectual bubbles. I’d therefore expect us to appear to many as overconfident and underinformed. And in fact, by the nature of supporting an unpopular idea, we would be at risk of this being true, no matter how hard we tried (and we should try hard) to seek out and consider alternative perspectives.

In your post, you argue that OpenPhil should follow a grant algorithm that includes

  • Considerations not just of a project's importance, but also its tractability
  • A panel of experts to confirm tractability
  • Only grantees with a strong publication record
  • You also seem to claim that this methodology is the expert consensus of the field of philanthropic funding, a claim for which you do not give any link/citation (?).

Responding in order:

  • The framework in EA of 'scope, tractability and neglectedness' was in fact developed by Holden Karnofsky (the earliest place I know of it being written down is in this GiveWell blogpost) so it was very likely in the grant-maker's mind.
  • This actually is contrary to how OpenPhil works: they attempt to give single individuals a lot of grant-making judgement. This fits in with my general expectation of how good decision-making works; do not have a panel, but have a single individual who is rewarded based on their output (unfortunately OpenPhil's work is sufficiently long-term that it's hard to have local incentives, though an interesting financial setup for the project managers would be one where, should they get a win of sufficient magnitude in the next 10 years (e.g. avert a global catastrophic risk), then they get a $10 million bonus). But yeah, I believe in general a panel cannot create common knowledge of the deep models they have, and can in many cases be worse than an individual.
  • A strong publication record seems like a great thing. Given the above anti-principles, it's not inconsistent that they should fund someone without it, and so I assume the grant-maker felt they had sufficiently strong evidence in this situation.
  • I've seen OpenPhil put a lot of work into studying the history of philanthropy, and funding research about it. I don't think the expert consensus is as strong as you make it out to be, and would want to see more engagement with the arguments OpenPhil has made before I would believe such a conclusion.

OpenPhil does have improving the global conversation about philanthropy as one of its goals, which is one of the reasons the staff spend so much time writing down their models and reasons (example, meta-example). In general it seems to me that 'panels' are the sort of thing an organisation develops when it's trying to make defensible decisions, like in politics. I tend to see OpenPhil's primary goal here as optimising more for communicating its core beliefs to those interested in (a) helping OpenPhil understand things better or (b) using the info to inform their own decisions, rather than for broadcasting every possible detail in a defensible way (especially if that's costly in terms of time).

Comment author: Jan_Kulveit 28 December 2017 11:58:17AM 1 point [-]

For scientific publishing, I looked into the latest available paper[1], and apparently the data are best fitted by a model where the impact of scientific papers is predicted by Q·p, where p is the "intrinsic value" of the project and Q is a parameter capturing the cognitive ability of the researcher. Notably, Q is independent of the total number of papers written by the scientist, and Q and p are also independent. Translating into the language of digging for gold, the prospectors differ in their speed and ability to extract gold from the deposits (Q). The gold in the deposits actually is randomly distributed. To extract exceptional value, you have to have both high Q and be very lucky. What is encouraging for selecting talent is that Q seems relatively stable over a career and can be usefully estimated after ~20 publications. I would guess you can predict even with less data, but the correct "formula" would be to try to disentangle the interestingness of the problems the person is working on from the interestingness of the results.

(As a side note, I was wrong in guessing this is strongly field-dependent, as the model seems stable across several disciplines, time periods, and many other parameters.)
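A minimal simulation sketch of the Q·p model described above may make it concrete. The distributions, parameter values, and the simple estimator below are illustrative assumptions for this sketch, not the fitting procedure actually used in the Sinatra et al. paper:

```python
import numpy as np

# Illustrative simulation of the Q-model: each scientist has a fixed ability Q,
# each paper draws a random "luck" factor p, and a paper's impact is Q * p.
rng = np.random.default_rng(seed=0)

n_scientists = 1000
n_papers = 20  # roughly the career length at which Q becomes usefully estimable

Q = rng.lognormal(mean=0.0, sigma=0.5, size=n_scientists)              # ability, per scientist
p = rng.lognormal(mean=0.0, sigma=1.0, size=(n_scientists, n_papers))  # luck, per paper

impact = Q[:, None] * p  # impact of each paper

# Crude estimate of Q: the geometric mean of a scientist's paper impacts,
# since log(impact) = log(Q) + log(p) and the mean of log(p) is a constant.
Q_hat = np.exp(np.log(impact).mean(axis=1))

print("Correlation of log(Q) with log(Q_hat):",
      round(np.corrcoef(np.log(Q), np.log(Q_hat))[0, 1], 3))
print("Top single-paper impact:", round(impact.max(), 1),
      "vs. median impact:", round(float(np.median(impact)), 2))
```

Under these assumptions the geometric-mean estimate tracks the true Q reasonably well after ~20 papers, while the single highest-impact paper goes to whoever combines high Q with a lucky draw of p - matching the digging-for-gold picture above.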

Interesting heuristics about people :)

I agree the problem is somewhat different in areas that are not that established/institutionalized, where you don't have clear dimensions of competition, or where the well-measurable dimensions are not that well aligned with what is important. Looks like another understudied area.

[1] Sinatra et al., "Quantifying the evolution of individual scientific impact", Science (2016), http://www.sciencesuccess.org/uploads/1/5/5/4/15543620/science_quantifying_aaf5239_sinatra.pdf

Comment author: Benito 31 December 2017 12:26:05AM 0 points [-]

I copied this exchange to my blog, and there were an additional bunch of interesting comments there.
