Background
Shortly after the first EA Tech Initiatives video call this past weekend, I started reflecting on observations I’ve made over the years about useful EA projects going unfunded or facing funding constraints. Several years ago, these observations led me to believe the EA community as a whole was highly funding constrained. My updated model is that large organizations have significant funding available (for instance, via large Open Philanthropy Project grants), but that individual projects, grants, and startups remain highly funding constrained, because there are no centralized efforts to fund them and no promotion of, or culture around, funding smaller initiatives.
In thinking about funding for EA causes, I created a 2x2 funding grid with “organizations” and “individuals” on one axis, and “existing” and “new” on the other. It looks like “existing organizations” and “new individuals” joining the movement get a lot of funding, while “new organizations” and improving the efficacy of “existing individuals” in EA get significantly less. These two neglected quadrants could be worth exploring further.
I generated a lot of ideas before going online and realizing that other EAs have already discussed the need for better project/grant funding in considerable detail; one of the more recent posts appeared in response to this Facebook thread on the main EA Facebook group.
The following are three ideas for tackling EA project funding, ordered from most to least decentralized, followed by a proposed structure for scaling small grants to EAs.
Idea 1: Kickstarter for EA Grant Opportunities
This idea aims to fix the discoverability problem by publicly listing all EA grant opportunities currently requesting funding, so that donors can view, communicate with, and fund grant proposals that would otherwise be unknown to most donors. The EA Forum does not appear to be a place for hundreds of people to post funding requests en masse, and I’m not sure what other medium could be used. My understanding is that most small-scale funding is currently arranged on an informal, interpersonal basis, which severely limits the ease and availability of funding.
This platform may run into the same problems Kickstarter faces: lowered project discoverability as a large number of projects are added, quality control issues and poorly executed projects, and problems arising from the lack of centralized due diligence (anything from outright scams to projects that simply lacked the ability to execute, even after meeting their funding goals many times over). There is likely a good reason why Indiegogo’s equity crowdfunding system uses MicroVentures to screen all offerings before they are posted online, with an acceptance rate under 5%; otherwise, many scams and low-execution-capacity projects would be posted, making the platform unusable for making good investments unless investors committed substantial time to their own due diligence.
Idea 2: Distributed Grantmakers
I met someone at EA Global 2018 who is heavily involved in the AI safety space. They mentioned being familiar with opportunities to make small, high-impact grants in AI safety, and were considering using their own money to make them, but the capital available for self-financed grants is obviously very limited relative to the number and needs of grantees. This made me realize that as a donor, I would be very poorly equipped to judge project areas like AI safety that interest me but are outside my expertise. If I were to make grants in AI safety, I would want someone like this person guiding my donation, not only because they are a subject matter expert but also because they are familiar with small grant opportunities I would never hear about, since I neither do direct work in the field nor am networked with hundreds of people in it.
Two problems with EA Grants are apparent in this September 2017 post.
Problem 1: Evaluating any project is challenging without domain expertise in the project's area(s)
We found it hard to make decisions on first-round applications that looked potentially promising but were outside of our in-house expertise. Many applicants had proposals for studies and charities we felt under-qualified to assess. Most of those applicants we turned down; some we deferred to the relevant Open Phil program manager. We are in the process of establishing relationships with domain experts who can help us do this in the future.
Problem 2: A consolidated project evaluation process takes a lot of time and effort
We then went through the list by rank and chose applicants to interview, discussing applicants about which there was large divergence in scores or general opinion. Given our £500,000 budget and most of three staff members’ time for two weeks, we decided to interview 63 candidates.
Both of these problems could be eliminated by moving away from a model in which a very small number of people attempt to evaluate a very large number of projects outside their expertise, toward a model that empowers people who already have subject matter expertise and knowledge of grant opportunities in their space to fund those opportunities directly, rather than not acting at all.
This proposal could run into difficulties with the quality of distributed grantmakers, although evaluating grantmakers would be at least an order of magnitude less work than evaluating all of the projects themselves. It could also bias funding toward projects the grantmakers are already familiar with, excluding many projects from consideration. And of course there are issues like conflicts of interest between grantmakers and grantees (or with a grantmaker's own projects), feeling socially obligated to fund one’s contacts even when an idea is poor, and the overhead of facilitating the distributed grantmaking system itself.
Idea 3: Large-Scale Centralized Grantmaking
This idea is most similar to EA Grants, but in the form of a separate organization with a robust grantmaking team.
At its current scale, EA Grants is not catering optimally to grant applicants, and this could be a strong reason to fund a separate organization dedicated exclusively to facilitating grants to smaller initiatives. Rather than having CEA team members work part-time on grants throughout the year and then switch back to their main areas of work, dedicated, specialized grantmaking staff should be hired so that disparate grants can be reviewed by someone with an appropriate background, grantmaking can happen year-round, a high volume of grants can be assessed, and the project can scale quickly and effectively if needed. Having dedicated grantmaking staff should also mean better grants over time as the staff gains experience. Edit: CEA is hiring a full-time grants evaluator, so they appear to be moving in this direction.
Right now, EA Grants is not equipped to fund for-profit EA initiatives or cover expenses such as education. Setting up a new organization with the capacity to make tax-deductible grants, non-tax-deductible grants, and equity/bond impact investments could greatly help get different types of projects and activities off the ground.
This proposal raises concerns about concentrating grantmaking among very few people, the possibility of insufficient transparency when making grants, and the resources it would take to run such an organization compared to less centralized solutions.
Concept Proposal: Fusing All Three Ideas to Maximize Information Sharing, Grantmaking Efficiency, and Ease of Funding
I think that elements of all three ideas are beneficial and could be included in a more optimal solution for scaling grantmaking. My proposed concept is inspired by how real world venture funds operate, with a few key differences.
The concept starts with a website offering a fully digital grant application process. Applicants create user accounts that let them edit applications and choose from a variety of options, such as keeping a grant hidden or displaying it publicly on the website, and posting under their real name or a pseudonym. Grants have discussion sections where the public can give feedback. Anonymous project submissions help people get feedback without reputational risk and gauge a project's funding potential before committing significant time and resources to it.
If the applicant opts to make an application public, it is displayed for everyone to see and comment on. Anyone can contact the project creator, have a public or private discussion on the grant website, and even fund a project directly.
The website is backed by a centralized organization that decides which proposals to fund via distributed grantmaking. Several part-time or full-time team members run the organization and assess the quality and performance of grantmakers. EAs in different cause areas can apply to be grantmakers. After an initial evaluation process, beginner grantmakers are given a role like “grant advisor” and given a small grantmaking budget. As grantmakers prove themselves effective, they are given higher roles and a larger grantmaking budget.
While powered by decentralized grantmakers, the organization offers centralized funding options for donors who do not want to evaluate grants themselves. Donations can be tax-deductible, non-tax-deductible, or even structured as impact investments into EA initiatives. Donors can choose cause areas to fund, and can perhaps even fund individual grantmakers.
This model greatly increases awareness of grant opportunities across all areas of effective altruism. It lets grantees seek funding from many sources in one centralized location, and lets donors choose their own grants if they want, rather than relying on something like EA Grants to make the right grant decisions. Through decentralized grantmaking, large amounts of funding can be channeled into small grants, with higher average subject matter expertise and lower evaluator time commitment per grant compared to EA Grants.
Conclusion
Hopefully this post inspires additional thoughts, and more importantly, actions in the area of facilitating additional funding towards grants to EAs, EA projects, and new nonprofit and for-profit EA organizations.
If feedback for this grantmaking idea is sufficiently positive, I am interested in spending time making this idea a reality. Otherwise, hopefully this post provides useful ideas for future grantmaking organizations and programs to consider.
Update
Based on this post's comments, it seems like further discussion and coordination around improving grantmaking within EA would be beneficial. I've created the #ti-funding channel within Rethink Charity's Slack team to promote greater discussion and coordination around this topic. Further updates to come.
Hi byanyothername,
I'm not sure who you are, but I appreciate the candid feedback. I would like to point out, however, that giving anonymous, discrediting feedback in a public setting is discouraging to the receiver and quite possibly harmful. I am not sure whether anonymous, discrediting feedback is a useful community norm; I haven't thought about this in much detail. In prior community examples, individuals gave public, non-anonymous feedback only in very extreme cases and with a tremendous amount of supporting information. Perhaps you can share additional thoughts about your choice to provide anonymous discrediting feedback with minimal information, as opposed to pursuing another course of action, such as privately discussing your concerns with me and updating your opinion based on what I share, before going around offering to share negative opinions about me without my say in the matter, as someone who barely knows me.
Your post makes me feel obligated to defend myself in order to prevent possible misconceptions from spreading. I get the sense that you are judging me based on a highly limited number of data points, and that you do not have a good sense of me as a person or what I've done. I believe judging people too quickly is generally considered a bad practice.
I will respond to your examples individually.
I have attempted to launch many early-stage projects before. To make sure a project is useful, a large amount of feedback must be obtained, and by virtue of being early-stage, projects must be presented in their worst possible state: without much validation and with shoddy, minimum-viable-product execution. Additionally, entrepreneurs have to express high optimism to the public and to themselves, to maintain their own motivation and that of team members, funders, etc., while simultaneously trying to poke holes in the idea from every possible direction to ensure that what is being built is actually of value. Interactions with people working on early-stage ventures can create an impression of overconfidence, incompetence, or other negative traits when viewed without additional data points (for instance, "risks" are not part of standard pitch decks to VCs, and founders are instructed to act highly optimistic, though not to the point of deception, of course).
You mention that you have found me wildly overconfident, but I do not think most of the people who know me would agree, so perhaps you are basing this on a single data point, maybe from hearing me speak about an early-stage project in a promotional context. I have expressed very high uncertainty about cause areas, donating now versus later, the value of projects I'm working on, the value of projects other people are working on, AI timelines, and many other EA topics, as well as pretty much every other belief (to pick a completely random example I've thought about recently, "the value of taking alpha lipoic acid daily"). My very limited experiments with self-calibration in both casual and formal settings (such as PredictionBook.com) don't indicate anything amiss. However, it is possible that I am overconfident in some ways, or come across that way in certain contexts, so I can certainly seek additional feedback on this from EAs who know me well.
Regarding your comments about Antigravity Investments, I would imagine this person's opinion is rather old, perhaps from over a year ago when I solicited very early feedback from various EAs. Many of them thought some parts of the idea were not useful. This is a normal part of validating whether an idea and its features are worthwhile; I would be shocked if 100% of people expressed high enthusiasm about 100% of any idea. There is almost always very high variance in feedback on any idea, startup or otherwise (although curing cancer at $1 per treatment would probably earn near-universal enthusiasm). I hope you are not implying or thinking that I ignore feedback and pursue projects blindly. As someone who has personally shut down many things I've started, I take outside feedback very seriously. As entrepreneurs know, market feedback and market traction are everything, not the theoretical optimality of an idea. In fact, I terminated an idea similar to EA Funds solely on the basis of one EA's opinion that a web-based donor-advised fund would see nearly zero traction due to a lack of broad market demand and was thus not worth pursuing. There are now many startups and projects doing something very similar, maybe over five separate teams in the EA community alone. Weighing an internal model against external feedback on parts of that model can be challenging in both directions.
Finally, I spent literally one week at Leverage Research many years ago, as a 13–14 year old high school freshman, doing an "internship" learning to create strategic plans with the yEd graphing software. I have not had any contact with Leverage since then, have not worked on any Leverage Research projects at any point in my life, etc. Hopefully this is obvious, but I think claiming an association between a one-week internship spent learning yEd as a young teenager and my current level of confidence is a bit of a stretch.
No offense, but based on the quality of your analysis and your judgment in posting this, I would personally feel uncomfortable having you in the community warning about others in it, although I am again open to changing my opinion if people have thought about this and think this sort of thing is a good norm. And my opinion of you is also based on a very limited data point, which may not say much about your interpersonal demeanor, personality traits, overall judgment, etc.
Ah this is awesome. Thank you. And sorry.
The anonymous thing is mostly me realising that I hardly ever criticise people, wanting to practice, but knowing I'm going to make a ton of mistakes as I'm kinda new to this! I refrain from criticising people out of fear, so I thought I'd hide a bit under a cloak of anonymity until I get more skilled at this (also criticism is a particularly emotional thing so I don't want to unfairly tarnish my reputation after a few early mistakes).
Sorry again for the initial upset this probably caused. Fortunately, I'm pretty sure the community's on your side (I mean, I am, for starters!).