
Sometimes, there is a reason other grantmakers aren't funding a fairly well-known EA (-adjacent) project. 

This post is written in a professional capacity, as a volunteer/sometimes contractor for EA Funds' Long-Term Future Fund (LTFF), which is a fiscally sponsored project of Effective Ventures Foundation (UK) and Effective Ventures Foundation USA Inc. I am not and have never been an employee at either Effective Ventures entity. Opinions are my own and do not necessarily represent those of any of my employers or of either Effective Ventures entity. I originally wanted to make this post a personal shortform, but Caleb Parikh encouraged me to make it a top-level post instead.

An increasing number of new grantmakers are popping up, and some fairly rich donors in longtermist EA are thinking of playing a more active role in their own giving (instead of deferring). I am broadly excited about the diversification of funding in longtermist EA. There are many advantages to having a diverse pool of funding:

  • Potentially increases financial stability of projects and charities
  • Allows for a diversification of worldviews
  • Encourages accountability, particularly of donors and grantmakers – if there’s only one or a few funders, people might be scared of offering justified criticisms
  • Access to more or better networks – more diverse grantmakers might mean access to a greater diversity of networks, allowing otherwise overlooked and potentially extremely high-impact projects to be funded
  • Greater competition and race to excellence and speed among grantmakers – I’ve personally been on both sides of being faster and much slower than other grantmakers, and it’s helpful to have a competitive ecosystem to improve grantee and/or donor experience 

However, this post will mostly talk about the disadvantages. In particular, I want to address adverse selection: if a project that you’ve heard of through normal EA channels[1] hasn’t been funded by existing grantmakers like the LTFF, there is a decently high likelihood that other grantmakers have already evaluated the grant and (sometimes for sensitive private reasons) have decided it is not worth funding.

Reasons against broadly sharing reasons for rejection

From my perspective as an LTFF grantmaker, it is frequently imprudent, impractical, or straightforwardly unethical to directly make public our reasons for rejection. For example:

  • Our assessments may include private information that we are not able to share with other funders.
  • Writing up our reasons for rejection of specific projects may be time-consuming, politically unwise, and/or encourage additional ire (“punching down”). 
  • We don’t want to reify our highly subjective choices too much, and public writeups of rejections can cause informational cascades.
  • Often other funders don’t even think to ask about whether the project has already been rejected by us, and why (and/or rejected grantees don’t pass on that they’ve been rejected by another funder).
  • Sharing negative information about applicants would make applying to EA Funds more costly and could discourage promising applicants.

Select examples

Here are some (highly) anonymized examples of grants I have personally observed being rejected by a centralized grantmaker. For further anonymization, in some cases I’ve switched details around or collapsed multiple examples into one. Most, although not all, of the examples are personal experiences from working on the LTFF. Many of these examples are grants that have later been funded by other grantmakers or private donors.

  • An academic wants funding for a promising-sounding existential safety research intervention in an area of study that none of the LTFF grantmakers are familiar with. I asked around until I found a respected scientist in an adjacent field. The scientist told me that the work the applicant wanted funding for is a common line of inquiry in that subfield, not a new line of research as the application claimed.
  • Someone’s application has a lot of buzzwords and a few endorsements from community members, but after reading carefully about the work, talking it over, and thinking it through, my colleagues and I cannot tell how the work is meaningfully different from other forms of ML capabilities research.
  • An applicant wants to re-evaluate AI safety from an academic perspective that’s extremely under-represented in AI safety, longtermism, and/or EA. I asked around until I found an acquaintance in that general field whom colleagues could vouch for. The acquaintance told me that they’re not familiar with the applicant’s specific subfield, but that by standard metrics of their field, the applicant’s published work was lacking in rigor.
  • An applicant has a promising-sounding application and sounds smart, but we’ve funded them before for a research grant and gotten no results (including negative results) and received no explanation for the lack of results. 
  • An application sounds promising but we’ve funded them before for a research grant and we thought the results were sufficiently mediocre that scarce resources are better used elsewhere (this is maybe the most common reason for rejection on this list).
  • An application was flagged as being rejected by a different longtermist grantmaker. I asked the other grantmaker for assistance and they mentioned serious issues with the project lead’s professional competency, which is a problem as the field they (want to) work in is quite sensitive.
  • An application sounds promising but one of the other LTFF fund managers flagged rumors about the applicant. I conducted my own investigation and concluded that the applicant has enough integrity or character issues or red flags that I’m not comfortable with recommending funding to them (for example, credible evidence of plagiarism, faking data, interpersonal harm in a professional setting, or not fulfilling contractual obligations).
  • An application sounds promising but I was a bit concerned about a few yellow flags in the grantee’s history and portfolio. I attempted to investigate further, but learned soon after that a different grantmaker has already funded them, without (as far as I can tell) doing the same due diligence.
    • I’ve since followed up and am reasonably sure that none of my worries materialized. This is a good example of how an abundance of caution can be excessively costly or net negative.
  • Another LTFF fund manager talked to multiple donors who said things like "I funded this because I was confident that the LTFF would fund it, but I could do it more quickly." The fund manager investigated the grants in question, and found that in several cases the LTFF had already rejected some of the projects, and in some other cases, the fund manager was quite skeptical they’d be above the LTFF’s funding bar.

Some tradeoffs and other considerations

Note that I’ve selected these examples partly for their relevance to downside risks, and otherwise for being interesting. However, the primary reason projects get rejected by the LTFF and other funders is the perception that the expected outcomes don't justify the expense. We can, of course, make mistakes in these evaluations, and I welcome differing opinions regarding our evaluations and funding choices. Assuming projects are always adversely selected is also quite risky, as the EA funding landscape is far from efficient.

Broadly speaking, in the current climate it is hard for new grantmakers to know whether a grant application was a) not looked at by other grantmakers, b) rejected for bad reasons, c) rejected for reasons orthogonal to the new grantmakers’ interests, or d) rejected for good reasons. Leaning towards always funding projects that appear, on the object level, to have high positive impact runs into unilateralist's curse considerations, as well as straightforwardly wasting money. On the other hand, grantmakers are far from perfect and do make errors; well-coordinated grantmakers might be more likely to make correlated errors. So you might expect a network of independent funders to increase the odds that unusual-but-great projects won’t be overlooked.

I’m not exactly sure how to navigate these tradeoffs. I’ve mentioned salient costs above, but of course centralization also has serious dangers. Comments are welcome.

  1. ^

    As opposed to, e.g., a project you heard of through very private networks, which makes it less likely to have applied to any of the existing funds.


I really appreciated this list of examples and it's updated me a bit towards checking in with LTFF & others a bit more. That said, I'm not sure adverse selection is a problem that Manifund would want to dedicate significant resources towards solving.

One frame: is longtermist funding more like "admitting a Harvard class/YC batch" or more like "pre-seed/seed-stage funding"? In the former case, it's more important for funders to avoid bad grants; the prestige of the program and its peer effects are based on high average quality in each cohort. In the latter case, you are "black swan farming"; the important thing is to not miss out on the one Facebook that 1000xs, and you're happy to fund 99 duds in the meantime.

I currently think the latter is a better representation of longtermist impact, but 1) impact is much harder to measure than startup financial results, and 2) having high average quality/few bad grants might be better for fundraising...

In the latter case, you are "black swan farming"; the important thing is to not miss out on the one Facebook that 1000xs, and you're happy to fund 99 duds in the meantime.

One risk of this framing is that as a seed funder your downside is pretty much capped at "you don't get any money" while with longtermist grantmaking your downside could be much larger. For example, you could fund someone to do outreach who is combative and unconvincing or someone who will use poor and unilateral judgement around information hazards. The article has an example of avoiding a grant that could potentially have had this kind of significant downside risk with "concluded that the applicant has enough integrity or character issues or red flags that I’m not comfortable with recommending funding to them".

I've heard this argument a lot (eg in the context of impact markets) and I agree that this consideration is real, but I'm not sure that it should be weighted heavily. I think it depends a lot on what the distribution of impact looks like: the size of the best positive outcomes vs the worst negative ones, their relative frequency, how different interventions (eg adding screening steps) reduces negative projects but also discourages positive ones.

For example, if in 100 projects, you have [1x +1000, 4x -100, 95x ~0], then I think black swan farming still does a lot better than some process where you try to select the top 10 or something. Meanwhile if your outcomes look more like [2x +1000, 3x -1000, 95x ~0] then careful filtering starts to matter a lot.
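To make the arithmetic concrete, here's a minimal sketch (Python) using the hypothetical payoff distributions above. The filter's hit rates (it funds 50% of the good projects and 20% of the bad ones) are my own illustrative assumptions, not estimates of how real grantmakers perform:

```python
# Toy expected-value comparison for the two hypothetical portfolios above.
# Payoffs/frequencies come from the comment; the filter "hit rates" are assumptions.

portfolio_a = [(+1000, 1), (-100, 4), (0, 95)]   # best project dwarfs the worst ones
portfolio_b = [(+1000, 2), (-1000, 3), (0, 95)]  # worst projects as bad as the best are good

def expected_value(portfolio, keep_positive=1.0, keep_negative=1.0):
    """EV per 100 applications if a filter funds a fraction of positive/negative projects."""
    return sum(
        payoff * count * (keep_positive if payoff > 0 else keep_negative)
        for payoff, count in portfolio
    )

for name, p in [("A", portfolio_a), ("B", portfolio_b)]:
    fund_all = expected_value(p)                                        # "black swan farming"
    filtered = expected_value(p, keep_positive=0.5, keep_negative=0.2)  # assumed careful filter
    print(f"Portfolio {name}: fund everything = {fund_all:+.0f}, filtered = {filtered:+.0f}")

# Portfolio A: fund everything = +600, filtered = +420  (filtering costs more than it saves)
# Portfolio B: fund everything = -1000, filtered = +400 (filtering flips the sign)
```

The interesting lever is that 50% figure: the less legible the best projects are ex ante, the more a careful filter costs you in portfolio A, which is basically the point about great projects not looking great at the outset.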

My intuition is that the best projects are much better than the worst projects are bad, and also that the best projects don't necessarily look that good at the outset. (To use the example I'm most familiar with, Manifold looked pretty sketchy when we applied for ACX Grants, and got turned down by YC and EA Bahamas; I'm still pretty impressed that Scott figured we were worth funding :P)

I’ve since followed up and am reasonably sure that none of my worries materialized. This is a good example of how an abundance of caution can be excessively costly or net negative.

I'm really glad you included this. I know this post is focusing on a specific type of error but I was very worried about a potential vibe of "if LTFF didn't fund there's always a good reason that you'd agree with". Acknowledging a specific time you were wrong goes a long way to allaying that fear. 

Thanks for writing this.

I understand why you can't go public with applicant-related information, but is there a reason grantmakers shouldn't have a private Slack channel where they can ask things like "Please PM me if any of you have any thoughts on John Smith, I'm evaluating a grant request for him now"?

Yeah we're working on something like this! There are a few logistical and legal details, but I think we can at least make something like this work between legible-to-us grantmakers (from my lights, LTFF, EAIF, OP longtermism, Lightspeed, Manifund, and maybe a few of the European groups like Longview and Effective Giving). Obviously there are still limitations (eg we can't systematically coordinate with academic groups, government bodies, and individual rich donors), but I think an expectation that longtermist nonprofit grantmakers talk to each other by default would be an improvement over the status quo.

(Note that weaker versions of this already happens, just not very systematically)

(Note that weaker versions of this already happens, just not very systematically)

The LTFF and one team at Open Phil have done this semi-systematically for about a year. I think that it's still hard for data protection reasons (and general comms sensitivity reasons) to do this for the majority of applications we receive.

I think this is worth doing for large grants (eg >$50k); for smaller grants, coordination can get to be costly in terms of grantmaker time. Each additional step of the review process adds to the time until the applicant gets their response and their money.

Background checks with grantmakers are relatively easier with an application system that works in rounds (eg SFF is twice a year, Lightspeed and ACX also do open/closed rounds) -- you can batch them up, "here's 40 potential grantees, let us know if you have red flags on any". But if you have a continuous system like LTFF or Manifund, then every coordination request between two funders adds an interruption point/context switch. I think out of ~20 grants we've made on Manifund, we checked in with LTFF/Lightspeed on 2 of them, mostly not wanting to bother them too much.

Background checks also take longer the more people you're checking with; you can ask in parallel but you'll be bottlenecked by the time of the slowest respondent. Reliability can get especially hard (what if a grantmaker is sick or on vacation?). You can also try setting a fixed timeline ("we're going to approve this in 48h"), I guess, and try to find a tradeoff between "enough time for checks to come back" and "not delaying the process overmuch".

it is frequently imprudent, impractical, or straightforwardly unethical to directly make public our reasons for rejection.

I think these are all sensible reasons; the trouble is that all of these considerations also apply to the private communication networks proposed as solutions, not in the body of the post but in the comment section (such as a common Slack channel that only funders are on, a norm of checking in with LTFF, etc).

It seems like a rare scenario that something is, by professional standards, too "private" or too "punching down" for a public statement, but sufficiently public and free of power disparities to be fair game for spreading around the rumor network. And concerns about reifying your subjective choices and fears by applicants that you would share negative information about them arguably become worse when the reification and spread occurs in private, rather than in public. 

I think an expectation that longtermist nonprofit grantmakers talk to each other by default would be an improvement over the status quo.

Talking to each other sounds obviously good in general, but I get the impression that in this context we're talking not about the latest research and best practices, but specifically about the communication of sensitive applicant info which would ordinarily be a bit private... If negative evaluations of people that are too time-consuming to bother with and are not made public, not even to the applicant themselves, tend to just disappear or remain privately held, maybe that's basically fine as a status quo? Does negative-tinged information that is too trivial for public statements and formal channels being spread through private informal channels that only the inner grantmaker circle can access really constitute an improvement?

I thought I'd work through how my reasoning goes for the provided examples. 

Many of these examples are grants that have later been funded by other grantmakers or private donors.

In my judgement, most of these (very helpful) concrete examples fall under either a) this deserves a public statement, or b) this represents a subjective judgement call where other funders should make that call independently. It's not that I think a private communication is never the right way to handle it, it's just that it seems to me like they usually aren't, even in the examples that are picked out.

The first three examples all involve subjective judgement calls, by a scientist, by yourself, and by an acquaintance, and it would be bad if these judgement calls (especially by just an acquaintance!) propagated via a whisper network instead of other people making an independent decision.

The next two examples, which involve grantees not delivering on promises, if they involve sufficiently large grants...well, I think a grantmaker ought to state what the impact of their grants was, and if a grant didn't have impact then that should be noted publicly. This should not be an attack on the grantee; this is transparency by the grantmaker about what impact their grant had, and bad grants should be acknowledged. However, in the scenario where the grantee is intended to remain anonymous, I guess it is fair to propagate that info via whisper network but not public statement, though I would question the practice of giving large grants to anonymous grantees. For small grants to individuals, if someone failed to deliver once, isn't it best to let it go and let them try again elsewhere with someone else, the way it would be in any other professional realm? If they failed to deliver multiple times, a whisper network is justified. If they seem to be running a scam, then it's time for a public statement.

The rest of the examples save the last, which involve concerns about character...I mean, outright plagiarism and faking data absolutely should be called out publicly. When it's about less substantial and more vague reputational concerns, I can see the case for private comms a bit more, although it's a goldilocks scenario even then because if the concerns aren't substantially verified then shouldn't others independently make their judgement calls?

(The final example is valid for a private check, but tautologically so - yes of course, if the rationale for a grantmaker is "LTFF would probably fund it" they ought to check if LTFF did in fact evaluate and reject it.)

In summary, for the majority of these examples I think either the public statement should be made or the issue should be dropped, and it's only the very rare borderline case where private communications, with all grantmakers actually secretly talking to each other and deferring to each other, are the way to go.

In general, I think the bar for sharing (alleged) competency judgements should be a lot higher than the bar for sharing potential character issues.

And just so we're on the same page, I consider the first example a character issue, not a competency issue. The second and third examples are kind of borderline between character and competency issues; I think my anonymized description does not make the assessment clear to onlookers and more details are necessary.

You make good points. Initial thoughts:

Rejection reasons that are fine/good to spread through whisper network:

  • Poor performance on a previous grant project that you directly evaluated
  • Reasonably verified character issues

 

Rejection reasons that should not be shared through whisper network:

  • Grant proposal content did not meet granting threshold
  • Someone in a related field told you of weak job performance

(The idea being other granters should find out those reasons for themselves. Seek out the field experts, maybe talk to new ones).

Interesting, I think I share some of your intuitions and disagree with others. From my perspective, when Funder A is considering whether to share information to Funder B, the following should not be shared through whisper network:

  • Grant proposal content did not meet granting threshold of Funder A. 
  • Poor performance on a previous grant project with public outputs, especially in an area of the other grantmaker's presumed expertise (Eg for a technical AI safety project or longtermist philosophy project, other AI safety etc grantmakers can presumably make up their own minds about whether the research is sufficiently high quality; they're operating on the same information as Funder A is).
  • Any information told in implicit or explicit confidence to Funder A, eg "did not complete project Y due to a family tragedy"

Whereas I think the following should be shared:

  • Whether the grantee previously received a grant from Funder A (unless there were implicit or explicit promises to keep it anonymous even to other funders)
    • The most trivial example is if it's public information that someone received a grant from Funder A but Funder B didn't notice.
  • Grantee previously applied to Funder A for a grant with a private output; relevant judgements (since Funder B can't evaluate the same private outputs for themselves). 
  • Information Funder A received when investigating a potential grantee's character issues; if there's consent to pass the information on. 
  • Judgements Funder A formed when investigating a potential grantee's character issues, if there's no consent to directly pass the information on.
  • Information Funder A received from an expert in a related field (outside of Funder B's area of expertise) about whether a potential grantee's work is novel, considered high-quality within the field, etc.
    • My reasoning is that the following is worth passing on (very fictionalized examples) 
      • a grantseeker claims to be investigating a novel approach to asteroid risks, but an advisor tells me their approach is standard in NEO astrophysics
      • a grantseeker claims to be an expert in a field, but the evidence they present is something experts in that field know isn't very good evidence, though external people might be misled
        • eg they published several papers in top journals of academic field Y, but it's well-known within field Y that journal publications in Y are easy, and most high-quality work is identified through other venues (conference publications or books or blog posts or w/e).
    • I think this type of information is worth passing on partially because I have some cynicism about the time constraints/investigative abilities of others in this ecosystem, plausibly (without my prompting) others could miss key info, especially outside of their presumed areas of expertise.
      • I will be sad but not too surprised if I later learn that one of my own past grants has fallen into this category.
    • In the case of severely misrepresenting your work to funders, I consider this to be a subset of character issues, especially if there's reasonable doubt as to whether the funder is expected to know the ground truth. 

These short summary reasons in this post for why grants are not made are great and very interesting to see.

I was wondering: do unsuccessful grant applicants tend to receive this feedback (of the paragraph-summary kind in this post) or do they just get told "sorry, no funding"?

I wonder if this could help the situation. I think if applicants have this feedback, and if other granters know that applicants get feedback they can ask for it. I've definitely been asked "where else did you apply and what happened" and been like "I applied for x grant and got feedback xyz of which I agree with this bit but not that bit". (Or maybe that doesn't help for some of the reasons in your " against sharing reasons for rejection" section)

(Also FWIW if there is a private behind-the-scenes grantmaker feedback channel, I'm not sure I would be comfortable with the idea of grant makers sharing information with each other that they weren't also willing to share with the applicants.)

I can't speak about all cases, but I think for most cases in the rough cluster of situations like the above, we do not currently give reasons for rejection at the level of granularity of the above. I'm a bit sad about this but I think it's probably the right call. I remember a specific situation some months ago where I wrote fairly elaborate feedback for an applicant but I was dissuaded from sending it, in retrospect for probably the right reasons. 

If we have something like 3x the current grantmaker capacity, I'd love for us to give more feedback, but this is not a priority now and I think it won't be in the near future, as I think the following are all more important:

  1. Finding a chair to take over Asya's role to formalize separation between LTFF and Open Phil 
  2. Evaluating grants faster and improving our turnaround times
  3. Have more eyeballs per grant evaluation
  4. More transparency and public communication with donors/applicants/the broader community (like this article and future ones)
  5. Feedback for the most promising applicants/grantees
  6. Donor engagement (especially with high-net-worth individuals) and fundraising
  7. Retroactive evaluations of LTFF grants, and comparisons with other grantmakers like Open Phil.
  8. More experiments with active grantmaking and trying to find more of a product-market fit in other areas adjacent to LTFF's interests (eg a fund specifically for AI Safety)

(Also FWIW if there is a private behind-the-scenes grantmaker feedback channel, I'm not sure I would be comfortable with the idea of grant makers sharing information with each other that they weren't also willing to share with the applicants.)

Thanks, this is helpful feedback.

I'm not sure I would be comfortable with the idea of grant makers sharing information with each other that they weren't also willing to share with the applicants


One of my pet ideas is to set up a grantmaker coordination channel (eg Discord) where only grantmakers may post, but anyone may read. I think siloed communication channels are important for keeping the signal to noise ratio high, but 97% of the time I'd be happy to share whatever thoughts we have with the applicant & the rest of the world too.

Thanks Linch. Agree feedback is time consuming and often not a top priority compared to other goals.

Thanks for sharing this Linch, I found it a useful complement to the marginal grant thresholds post, which I recommend for those who enjoyed this post.

Thanks Joel for your thoughtful comment, which I'd like to build on.

I was thinking about how we can get funders to make calculated bets on projects that have been discarded elsewhere, and get rewarded when those bets prove right. Isn't AI Safety Impact Markets trying to solve some of the issues with adverse selection through that kind of mechanism? Sorry for the lack of depth, but I think others can weigh in better.

Yeah, agreed! I haven’t thought about impact markets through Linch’s particular lens. (I’m cofounder of AI Safety Impact Markets.) 

Distinguishing different meanings of costly: Impact markets make applying for funding more costly in terms of reputation, in the sense that people might write public critiques of proposals. But they make applying less costly in terms of time, in the sense that you can post one standardized application rather than one bespoke one per funder.

But most people I’ve talked to don’t consider costly in terms of reputation to be a cost at all because they’re eager to get feedback on their plans to improve them, and rejections from funders rarely include feedback. (Critical feedback would then reflect badly on the previous draft but not on the latest one.)

Conversely, I’ve also heard of funders reasoning like “This project falls into the purview of the LTFF, so if it hasn’t gotten funded by them, there’s probably something wrong with it, and I shouldn’t fund it either.” Public feedback like, “We decided not to fund this project because we couldn’t find an expert in the field to assess its merits” would actually be “negatively costly” or beneficial in terms of reputation. It could also help with unwarranted yellow flags because impact markets are all about aggregating and amplifying specialized local knowledge. If, for example, the rumor mill claims that someone is a drug addict, a longterm flatmate could make a symbolic donation and clarify that the person in question only microdoses LSD, no hard drugs. That could silence the incorrect rumor. The flatmate could thus become an early donor to the project and reap an outsized (compared to the size of the donation) reward in terms of their score or later Impact Marks for adding this information to the market.

Our scores will be based on evaluations of the outputs, so all the issues that have to do with lacking rigor or not publishing anything in the first place are priced in. The issue with plagiarism, low integrity, and interpersonal harm is more concerning for me. I’ll consider adding a “whistle-blowing” tab to the comment section where users can post anonymously to deter low-integrity actors from using the platform. We (GoodX) can also manually intervene if we become aware of bad actors.

Generally, my “bias” is to keep things public by default. The funding ecosystem can then make exceptions in cases where that is not possible. The current default seems to be secret by default, which seems unnecessarily costly to me in multiple ways (reapplying to multiple funders in different formats, no feedback, bad coordination between funders, few funding gaps for small donors).

A small thought that occurred to me while reading this post:

In fields where most people do a lot of independent diligence, you should defer to other evaluators more. (Maybe EA grantmaking is an example of this.)

In fields where people mostly defer to each other, you're better off doing more diligence. (My impression is VC is like this—most VCs don't want to fund your startup unless you already got funding from someone else.)

And presumably there's some equilibrium where everyone defers N% of their decisionmaking and does (100-N)% independent diligence, and you should also defer N%.
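As a toy illustration of that equilibrium (all numbers here are assumptions, just to show the shape of the argument): suppose the private value of doing your own diligence falls as more of the ecosystem's evaluation is done independently, because others' conclusions already reflect most of the information.

```python
# Toy Grossman-Stiglitz-style sketch; the cost and the value function are assumptions.

COST_OF_DILIGENCE = 1.0  # assumed fixed cost of independently evaluating a project

def value_of_diligence(share_doing_diligence: float) -> float:
    """Assumed decreasing returns: your own diligence is worth most when few others do it."""
    return 4.0 * (1.0 - share_doing_diligence)

# Find the share at which the marginal evaluator is indifferent between
# doing independent diligence and simply deferring to others.
equilibrium_share = next(
    i / 100 for i in range(101)
    if value_of_diligence(i / 100) <= COST_OF_DILIGENCE
)

print(f"Equilibrium share of independent diligence: {equilibrium_share:.2f}")
# With these toy numbers, ~75% of evaluation is independent and ~25% is deferred;
# below that share diligence pays, above it deferring pays, which is the point about
# deferring more where others investigate and investigating more where others defer.
```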

The first two points sound like a valid application of Grossman-Stiglitz (1980).

I'm not familiar with that paper, can you elaborate? :)

My guess is that it's this paper. The abstract/introduction:

If competitive equilibrium is defined as a situation in which prices are such that all arbitrage profits are eliminated, is it possible that a competitive economy always be in equilibrium? Clearly not, for then those who arbitrage make no (private) return from their (privately) costly activity. Hence the assumptions that all markets, including that for information, are always in equilibrium and always perfectly arbitraged are inconsistent when arbitrage is costly.

We propose here a model in which there is an equilibrium degree of disequilibrium: prices reflect the information of informed individuals (arbitrageurs) but only partially, so that those who expend resources to obtain information do receive compensation. How informative the price system is depends on the number of individuals who are informed; but the number of individuals who are informed is itself an endogenous variable in the model.

The model is the simplest one in which prices perform a well-articulated role in conveying information from the informed to the uninformed. When informed individuals observe information that the return to a security is going to be high, they bid its price up, and conversely when they observe information that the return is going to be low. Thus the price system makes publicly available the information obtained by informed individuals to the uniformed [sic]. In general, however, it does this imperfectly; this is perhaps lucky, for were it to do it perfectly, an equilibrium would not exist.

Why not include a standard application question asking "have you been rejected from other grantors for this project (or something sufficiently close to it), and if so, which?".

The onus is on the applicant to be honest, but given we already operate in a high trust community I think there is reason to believe applicants would be willing to be straightforward here.

It adds more time, but not much, and gives whoever is evaluating it down the line a chance to reach out if they think it'd be worth it.

I think this is already done. The application asks if you are receiving OpenPhil funding for said project or have done so in the past. It also asks if you've applied. I think people also generally disclose because the payoff of not disclosing is pretty low compared to the costs. EA is a pretty small community; I don't think non-disclosure ever helps.

Yep, it'd be good if this were standardized more with a set list, though deciding which funders are relevant or not (eg not getting NSF funding isn't very relevant to LTFF, but getting rejected from Lightspeed might be) might be hard.

Great post, this seems like a very important topic that has not previously been explained to the general public in anything like this level of detail. Thanks very much for sharing.

I appreciate this post a lot. Very thought provoking and the examples make it very concrete and useful.

I wonder to what extent one should prefer a proposal that hasn't been evaluated by other grantmakers to one that has been and was rejected (ceteris paribus). This would depend on what fraction of rejections are due to:

  1. proposer was bad (everyone should reject)
  2. proposal was bad (likely it was improved after previous rejection)
  3. project not a good fit for grantmaker (likely better fit next time)
  4. everything great but grantmaker ran out of money

For example, if all are due to 1, a previous rejection is a very strong negative signal, while if all are due to 4, a previous rejection is not a negative signal at all. Maybe 3 can be a positive signal, but I am not sure.

Does anyone have a sense of the answer? 

Thanks for this post! I found it helpful to read.

This post is helpful and appropriately cautious! Thanks Linch.

It feels like adverse selection is a common enough phenomenon that there must be helpful case studies to learn from. I explored this with GPT, and got the following solutions for philanthropic grantmaking:

  1. Third-party assessments: An independent body can evaluate projects or grantees and provide a certification.
  2. Open feedback mechanisms: Existing and past donors can leave reviews or feedback on projects, helping to inform potential future donors.
  3. Tiered grants: Offer different levels of funding based on the risk or novelty of the project. Riskier projects might get smaller, initial amounts with the possibility of more significant funding later if they show promise.
  4. Pilot funding rounds: Similar to probationary periods, fund a project for a short time or with limited funds to assess its viability before committing more.
  5. Collaborative funding: Multiple grantmakers can come together to fund a project, thereby sharing the risk.
  6. Transparency in rejections: While specific details might remain confidential, grantmakers can provide general reasons for rejection, helping to guide potential donors.
  7. Mentorship or guidance: Instead of just providing funds, offer mentorship or guidance to projects, helping them to develop in areas where they might be lacking.

I'm pleased with (2) -- I've been putting time into open feedback on Manifund. And (5) is suggestive of something helpful: when it is ok for projects to receive only partial funding and each project applies to the same set of funders, then funders funding only "their part" reduces possible damage without the need to share private information. (Not putting in their part might be helpful information itself.) 

Otherwise, these suggestions seem obvious or unhelpful. But I expect that a couple-of-hours dive into how philanthropists or science funders have dealt with these dynamics would be better. Nice project for someone! (@alex lawsen (previously alexrjl)?)

Thanks for the suggestions :) 

I don't quite understand how Mentorship or guidance, Pilot funding rounds or Tiered grants would help with adverse selection effects. Could you expand?

No problem, thanks for engaging!

I wrote that most of GPT-4’s suggestions were “obvious or unhelpful.” I would include the ones you pointed to in this box. Pilot funding and tiered grants are presumably things you already do implicitly -- e.g. by not committing funding for multiple years in one go, or by not giving huge resources to grantees that you don't think highly of -- and where you wouldn't benefit much from making this more explicit. And mentorship or guidance seems unhelpful because it's much too time-costly.

I'm guessing that GPT-4 is trying to point to 'ways to lower the information asymmetry' characteristic of adverse selection. All three of these methods give money-cheap ways of gaining more information before making money-expensive decisions.
