# Modelling the Good Food Institute - Oxford Prioritisation Project

*By Dominik Peters*

*Cross-posted from the Oxford Prioritisation Project blog.*

*Created 2017-04-18. Revised 2017-05-19.* **We're centralising all discussion on the Effective Altruism forum. To discuss this post, please comment here.**

We have attempted to build a quantitative model to estimate the impact of the Good Food Institute (GFI). We have found this exceptionally difficult due to the diversity of GFI’s activities and the particularly unclear counterfactuals. In this post, I explain some of the modelling approaches we tried, and why we are not satisfied with them. This post assumes good background knowledge about GFI; you can read more at Animal Charity Evaluators.

# Approach 0: Direct estimation

Our first model of GFI involved directly estimating by how many years a fully funded GFI would accelerate the arrival of chicken product substitutes. Our first intuition was to put this at 5 years, but we realised that we had next to no intuitive grasp of this figure. So we attempted to find approaches that involved estimating quantities we have a better intuitive grasp of.

# Approach 1: Multiplier on investment into animal substitutes

A donation of $1 to GFI increases the amount of investment (by VCs, government research councils, and other grant-making institutions) by $X, through creating new investment opportunities (like start-ups) and by making the field more attractive in general.

For example, New Harvest was instrumental in starting companies working on yeast-based replacements for dairy and egg products, by introducing future founders to each other and giving them a small start-up grant (of about $30–50k each). These companies subsequently sourced additional investment of about $3m. However, we do not believe that this is very informative for estimating future multipliers, since New Harvest might have picked very low-hanging fruit in these instances.
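Using the figures above, one can compute the multiplier implied by the New Harvest example. A minimal sketch; note that the post does not say how many companies received grants, so the company count here is a free parameter, not a fact from the post:

```python
# Back-of-the-envelope multiplier implied by the New Harvest example.
# Figures from the post: seed grants of roughly $30-50k per company,
# about $3m of follow-on investment in total. The number of companies
# is not stated, so we treat it as a parameter.

def implied_multiplier(total_follow_on, grant_per_company, n_companies):
    """Follow-on investment dollars per seed-grant dollar."""
    return total_follow_on / (grant_per_company * n_companies)

for n in (1, 2, 3):
    low = implied_multiplier(3e6, 50_000, n)   # pessimistic: larger grants
    high = implied_multiplier(3e6, 30_000, n)  # optimistic: smaller grants
    print(f"{n} companies: implied multiplier ~{low:.0f}x to {high:.0f}x")
```

Even under these generous assumptions the implied multiplier spans a wide range, which illustrates why the historical example pins down so little.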

Next, we would need some account of how this increased investment would have accelerated the development of meat alternatives (so as to produce value). For this, we would need to estimate when this investment of $X would otherwise have occurred, but it is unclear how to figure this out.

We did not come up with good strategies for breaking these difficulties down into smaller chunks that would be easier to model.

# Approach 2: Direct Investment

A very simple modelling strategy involves estimating the rough amount of research effort (capital investments and research hours) that will eventually be required in total to reach a “solution”, i.e., availability of attractive substitutes for animal products. One could obtain such an estimate by enumerating the list of animal products that need to be replaced, and then looking at how much effort was needed to develop products such as the Impossible Burger. Next, one could assume that no-one else would ever invest in these opportunities. Then, by estimating the value of having substitutes and multiplying by the fraction of the total required effort that our donation financed, we would get an estimate of the impact of our donation.
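The arithmetic of this model is trivially simple, which is part of its appeal. A sketch, where every input is a placeholder for illustration rather than an estimate from the post:

```python
# Approach 2 as arithmetic: impact = (value of having substitutes)
#   * (fraction of the total required effort that our donation finances).
# All numbers below are made-up placeholders, not estimates.

def naive_impact(value_of_solution, total_effort_required, donation):
    """Impact credited to a donation under the fraction-of-effort model."""
    return value_of_solution * (donation / total_effort_required)

# e.g. a 10,000 donation against a (made-up) 500m total required effort
# captures 1/50,000 of the (made-up) value of the solution
impact = naive_impact(value_of_solution=1e9,
                      total_effort_required=5e8,
                      donation=1e4)
print(impact)  # 20000.0
```

The flaw discussed next is visible in the formula itself: the donation's credited impact is a fixed pro-rata share, regardless of whether anyone else ever supplies the remaining effort.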

However, this approach is a bit silly because it does not model the acceleration of research: If there are no other donors in the field, then our donation is futile because £10,000 will not fund the entire effort required.

# Approach 3: Acceleration Dynamics

How are we going to reach the stage at which attractive meat substitutes are widely available? Well, companies and other research groups will have to expend some amount of effort on the problem, and the more cumulative effort has been expended, the closer we are to a good solution. Our donation to GFI could be modelled as an external “shock” to the amount of effort invested in the field from that point in time onwards; graphically, it shifts the cumulative-effort curve upwards from the time of the donation.

Whether the unperturbed curve is linear is unclear; it could be convex.

Now, with additional effort invested in the problem, we get closer to a solution; in particular, the quality of the available meat substitutes increases. Again, it is not obvious how the quality of these products is functionally related to the amount of cumulative effort expended. Possible shapes include:

- an S-curve, which increases rapidly after some initial breakthroughs have been achieved and flattens out when perfecting things;
- a curve with diminishing returns throughout, if we think that increasing quality becomes harder and harder;
- a step-like curve consisting of many separate discoveries;
- an exponential curve, as in Moore’s law.

Different choices of shape imply different magnitudes of impact, and we found no good way of figuring out which shape fits this particular situation.
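The sensitivity to the assumed shape can be made concrete. In the sketch below, the same effort shock is applied under three candidate quality curves; the functional forms and parameters are arbitrary illustrations, not estimates:

```python
import math

# How the quality gain from an effort "shock" depends on the assumed
# shape of quality as a function of cumulative effort E. Effort is
# normalised to roughly [0, 1]; all shapes and parameters are arbitrary
# illustrations, not estimates.

def s_curve(E):
    """Logistic: slow start, rapid middle, flattens out near the end."""
    return 1 / (1 + math.exp(-10 * (E - 0.5)))

def diminishing(E):
    """Concave: returns diminish throughout."""
    return 1 - math.exp(-3 * E)

def exponential(E):
    """Convex, Moore's-law-like growth (unnormalised)."""
    return math.expm1(2 * E)

effort, shock = 0.3, 0.05   # current cumulative effort, donation-induced shock
for name, quality in [("S-curve", s_curve),
                      ("diminishing", diminishing),
                      ("exponential", exponential)]:
    gain = quality(effort + shock) - quality(effort)
    print(f"{name:12s} quality gain from shock: {gain:.3f}")
```

The same shock yields materially different gains under each shape, which is exactly why the choice of curve dominates the model's output.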

# Conclusion

We quickly became dissatisfied with each of the modelling approaches we tried. They either had major flaws (like failing to model acceleration dynamics) or did not succeed in actually breaking down our uncertainty into smaller, more manageable components.

## Comments (3)

My thoughts; apologies if I am just reiterating what you already know.

It seems like there are three very difficult things to get a ballpark estimate of:

1. The likelihood of developing a successful fake chicken as a function of the number of investment dollars. This seems like a scientific/technical question. The impact-driven EA investor will want to know the impact of his/her $1 on this probability, i.e., the slope of this curve given the likely level of others' investment. I.e., if I expect others will invest $1 million, I consider how the probability of a chicken differs when investment increases from $1 million to $1 million + 1 (or to $1 million + 1 × leverage multiplier; see below).
2. The multiplier effect of a (donated) investment dollar, including both:
   a. the leverage of a dollar (how much more you could borrow at a reasonable interest rate with an additional dollar of equity collateral); I think this could be estimated under some reasonable assumptions;
   b. the effect of an additional investment dollar on subsequent investors'/altruists' willingness to invest. This seems like the hardest thing to calculate, and it is not completely clear whether that effect should even be positive. This is the 'seed money' question. It might be that when altruists see a greater amount of investment already, they see their own contribution as less vital, and invest less.
3. The likely overall distribution of the total amount invested; the $1 million in the example in part 1.
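Point 1 above can be phrased as a small computation: evaluate the slope of the success-probability curve at the expected level of others' investment. In this sketch the probability curve and the leverage figure are arbitrary placeholders, not estimates:

```python
import math

# The marginal effect of one additional (leveraged) donated dollar on the
# probability of success, evaluated at the expected level of others'
# investment. The curve p_success and the leverage value are arbitrary
# placeholders, not estimates.

def p_success(total_investment):
    """Placeholder diminishing-returns curve, saturating towards 1."""
    return 1 - math.exp(-total_investment / 5_000_000)

others = 1_000_000   # expected investment by others ($)
leverage = 3.0       # dollars mobilised per donated dollar (assumed)

marginal = p_success(others + 1 * leverage) - p_success(others)
print(f"marginal probability per donated dollar: {marginal:.2e}")
```

The hard part, as the comment notes, is not this arithmetic but estimating the curve, the leverage, and the distribution of others' investment.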

Perhaps I am not understanding, but isn't it possible to simplify your model by homing in on one particular thing GFI is doing and pretending that a donation goes towards only that? Oxfam's impact is notoriously difficult to model (too big, too many counterfactuals), but as soon as you only look at their disaster management programs (where they've done RCTs to showcase effectiveness), then suddenly we have far better cost-effectiveness assurance. This approach wouldn't grant a cost-effectiveness figure for all of GFI, but for one of their initiatives at least. Doing this should also drastically simplify your counterfactuals.

I've read the full report on GFI by ACE. Both it and this post suggest to me that a broad capture-everything approach is being undertaken by both ACE and OPP. I don't understand. Why do I not see a systematic list of all of GFI's projects and activities, both on ACE's website and here, and then an incremental systematic review of each one in isolation? I realise I likely sound like an obnoxious physicist encountering a new subject, so do note that I am just confused. This is far from my area of expertise.

Could you explain this more clearly to me please? With some stats as an example, it'll likely be much clearer. Looking at the development of the Impossible Burger seems a fair phenomenon to base GFI's model on, at least for now, and at least insofar as it is being used to model a GFI donation's counterfactual impact in supporting similar products GFI is trying to push to market. I don't understand why the approach is silly because $10,000 wouldn't support the entire effort, or how this is tied to the acceleration of research.

Regarding acceleration dynamics, then, isn't it best to just model based on the most pessimistic, conservative curve? It makes sense to me to think this would be the diminishing-returns one. This also falls in line with what I know about clean meat. If we eventually do need to simulate all elements of meat (we might as well assume we do, for the sake of being conservative), we'll also have to go beyond merely the scaffolding and growth-medium problems and include an artificial blood circulation system for the meat being grown. No such system yet exists, and it seems reasonable to suspect that the more precisely we want to simulate meat, the more our scientific problems grow exponentially. So a diminishing-returns curve is expected for GFI's impact, at least insofar as its work on clean meat is concerned.

There are two ways donations to GFI could be beneficial: speeding up a paradigm-change that would have happened anyway, and increasing the odds that the change happens at all. I think it's not unreasonable to focus on the former, since there aren't fundamental barriers to developing vat meat and there are some long-term drivers for it (energy/land efficiency, demand).

However, in that case, it helps to have some kind of model for the dynamics of the process. Say you think it'll take $100 million and 10 years to develop affordable vat burgers; $1 million now probably represents more than 0.1 years of speedup, since investors will pile on as the technology gets closer to being viable. But how much does it represent? (And, also, how much is that worth?) Plus, in practice we might want to decide between different methods and target meats, but then we need to have a decent sense of the responses for each of those.
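One way to see how much the answer depends on the assumed dynamics is a toy displacement model of the $100m / 10-year example: completion happens when cumulative funding hits the target, and a $1m injection now moves the completion date. The exponential funding-rate path and growth rates below are assumptions for illustration only:

```python
import math

# Toy displacement model for the "$100m over 10 years" example.
# Cumulative funding F(t) = C * (exp(g*t) - 1), with C calibrated so that
# F(10) = 100 (in $m). Completion happens when cumulative funding reaches
# the target; the growth rate g (how strongly investors pile on) is an
# assumption, and the implied speedup depends heavily on it.

def completion_time(target, g, horizon=10.0, total=100.0):
    """Time (years) at which cumulative funding reaches `target` ($m)."""
    C = total / math.expm1(g * horizon)   # calibrate F(horizon) = total
    return math.log(target / C + 1) / g   # solve F(t) = target

for g in (0.01, 0.2, 0.5):  # near-linear funding vs. strong pile-on
    speedup = completion_time(100, g) - completion_time(99, g)
    print(f"g={g}: speedup from $1m now is about {speedup:.3f} years")
```

Under this particular model the speedup from displacing the last $1m shrinks as pile-on strengthens; a model in which the early $1m also induces extra later investment would behave differently, which is the commenter's point that the dynamics need to be modelled explicitly.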

I agree that this is possible. I'd say the way to go is generating a few possible development paths (paired $/time and progress/$ curves) based on historical tech's development and domain-experts' prognostications, and then looking at marginal effects for each path.

Not having looked into this more, it seems doable but not-straightforward. Note that the Impossible Burger isn't a great model for full-on synthetic meat. Their burgers are mostly plant-based, and they use yeast to synthesize hemoglobin, a single protein--something that's very much within the purview of existing biotech. This contrasts with New Harvest and Memphis Meats' efforts synthesizing muscle fibers to make ground beef, to say nothing of the eventual goal of synthesizing large-scale muscle structure to replicate steak, etc.

And we have a lot less to go on there. Mark Post at Maastricht University made a $325,000 burger in 2013. Memphis Meats claimed to be making meat at $40,000/kg in 2016.* Mark Post also claims scaling up his current methods could get to ~$80/kg (~$10/burger) in a few years. That's still about an order of magnitude off from the mainstream, and I think you'd need someone unbiased with domain expertise to give you a better sense of how much tougher that would be.

*Note: according to Sentience Politics' report on vat meat. I haven't listened to the interview yet.