
Vasco Grilo

5609 karma · Joined · Working (0-5 years) · Lisbon, Portugal
sites.google.com/view/vascogrilo?usp=sharing

Bio

Participation: 4

How others can help me

You can give me feedback here (anonymous or not). You are welcome to answer any of the following:

  • Do you have any thoughts on the value (or lack thereof) of my posts?
  • Do you have any ideas for posts you think I would like to write?
  • Are there any opportunities you think would be a good fit for me which are either not listed on 80,000 Hours' job board, or are listed there, but you guess I might be underrating them?

How I can help others

Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering, and part-time or full-time paid work. For paid work, I typically ask for 20 $/h, which is roughly 2 times the global real GDP per capita.

Comments: 1287

Topic contributions: 25

Thanks for the update, Ben!

Whether a nuclear war could become an existential catastrophe is highly uncertain — but it remains a possibility. What’s more, we think it’s unclear whether the world after a nuclear conflict would retain what resilience we currently have to other existential risks, such as potentially catastrophic pandemics or risks from currently unknown future technology. If we’re hit with a pandemic in the middle of a nuclear winter, it might be the complete end of the human story.

I think nuclear war becoming an existential catastrophe is an extremely remote possibility:

  • I estimated an annual extinction risk from nuclear war of 5.93*10^-12.
  • I believe the chance of an existential catastrophe conditional on a nuclear war causing human extinction is quite low.
    • I calculated there would only be a 0.0513 % chance of a repetition of the last mass extinction 66 M years ago, the Cretaceous–Paleogene extinction event, being existential.
    • The most severe nuclear winters are way less severe than the above event. Xia 2022 considers a maximum soot injection into the stratosphere of 150 Tg. I believe this is pessimistic, but it is still only 1 % of the 15 kTg estimated by Bardeen 2017 for the above event.
  • I would say accounting for indirect effects should not change your overall assessment of the pressingness of nuclear risk much:
    • In 80,000 Hours' nuclear war profile, you say "the indirect existential risk [from nuclear war] seems around 10 times higher [than the direct one]". So, if you agreed (I am not saying you do!) with my points above suggesting direct existential risk from nuclear war is astronomically low, then the indirect one would also be astronomically low.
    • My sense is that 80,000 Hours puts significant weight on Toby Ord's views, and he commented the following yesterday (I am quoting the 2 paragraphs of his comment below):
      • "For what it's worth, my working assumption [in The Precipice] for many risks (e.g. nuclear, supervolcanic eruption) was that their contribution to existential risk via 'direct' extinction was of a similar level to their contribution via civilisation collapse. e.g. that a civilisation collapse event was something like 10 times as likely, but that there was also a 90% chance of recovery. So in total, the consideration of non-direct pathways roughly doubled my estimates for a number of risks".
      • "One thing I didn't do was to include their roles as risk factors. e.g. the effect that being on the brink of nuclear war has on overall existential risk even if the nuclear war doesn't occur".

Thanks for sharing, Garrison. I have read Yoshua's How Rogue AIs may Arise and FAQ on Catastrophic AI Risks, but I still think the annual extinction risk from AI over the next 10 years is less than 10^-6. Do you know Yoshua's thoughts on the possibility of AI risk being quite low due to the continuity of potential harms? If deaths in an AI catastrophe follow a Pareto distribution (power law), which is a common assumption for tail risk, there is less than a 10 % chance of such a catastrophe becoming 10 times as deadly (sketched below), which severely limits the probability of extreme outcomes. I also believe the tail distribution would decay faster than that of a Pareto distribution for very severe catastrophes, which makes my point stronger.
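
A minimal sketch of that tail argument, assuming a Pareto tail index alpha of at least 1 (the specific values of alpha below are illustrative assumptions, not estimates):

```python
# Pareto (power-law) tail: P(X > x) = (x_min / x)**alpha for x >= x_min.
# The chance a catastrophe with x deaths becomes 10 times as deadly is
#   P(X > 10*x | X > x) = (x / (10*x))**alpha = 10**(-alpha),
# which is at most 10 % whenever alpha >= 1.
for alpha in (1.0, 1.5, 2.0):
    print(f"alpha = {alpha}: P(10 times as deadly) = {10**(-alpha):.3f}")
```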

Thanks for sharing. It looks like this position is not on 80,000 Hours' job board.

I have asked them whether they had considered adding your position to the board, by clicking on "Give us feedback" above, but you may want to reach out to them too (in case you have not done so yet).

@Don Efficace, they replied that your position will go up this week.

Thanks, Michael! For readers' reference, I have also estimated the scale of the welfare of various animal populations[1]:

| Population | Intensity of the mean experience as a fraction of the median welfare range | Median welfare range | Intensity of the mean experience as a fraction of that of humans | Population size | Absolute value of ETHU as a fraction of that of humans |
| --- | --- | --- | --- | --- | --- |
| Farmed insects raised for food and feed | 12.9 μ | 2.00 m | 3.87 m | 8.65E10 | 0.0423 |
| Farmed pigs | 12.9 μ | 0.515 | 1.00 | 9.86E8 | 0.124 |
| Farmed crayfish, crabs and lobsters | 12.9 μ | 0.0305 | 0.0590 | 2.21E10 | 0.165 |
| Humans | 6.67 μ | 1.00 | 1.00 | 7.91E9 | 1.00 |
| Farmed shrimps and prawns | 12.9 μ | 0.0310 | 0.0599 | 1.39E11 | 1.05 |
| Farmed fish | 12.9 μ | 0.0560 | 0.108 | 1.11E11 | 1.52 |
| Farmed chickens | 12.9 μ | 0.332 | 0.642 | 2.14E10 | 1.74 |
| Farmed animals analysed here | 12.9 μ | 0.0362 | 0.0700 | 1.36E12 | 4.64 |
| Wild mammals | 6.67 μ | 0.515 | 0.515 | 6.75E11 | 43.9 |
| Wild fish | 6.67 μ | 0.0560 | 0.0560 | 6.20E14 | 4.39 k |
| Wild terrestrial arthropods | 6.67 μ | 2.00 m | 2.00 m | 1.00E18 | 253 k |
| Wild marine arthropods | 6.67 μ | 2.00 m | 2.00 m | 1.00E20 | 25.3 M |
| Nematodes | 6.67 μ | 0.200 m | 0.200 m | 1.00E21 | 25.3 M |
| Wild animals analysed here | 6.67 μ | 0.365 m | 0.365 m | 1.10E21 | 50.8 M |

I think it makes sense that Open Philanthropy prioritises chickens, fish and shrimp, as these are the 3 populations of farmed animals with the most suffering according to the above (1.74, 1.52 and 1.05 times as much suffering as the happiness of all humans).

  1. ^

    ETHU in the header of the last column means expected total hedonistic utility.
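
As a consistency check, the last 2 columns of the table can be reproduced from the first 3 plus the population sizes. Here is a minimal sketch for 2 rows (the column relations below are inferred from the table, not taken from the original calculations):

```python
# Inferred relations:
#   intensity vs humans = intensity fraction * median welfare range
#                         / (humans' intensity fraction * humans' welfare range)
#   |ETHU| vs humans    = intensity vs humans * population size / human population
HUMAN_INTENSITY, HUMAN_POP = 6.67e-6, 7.91e9

rows = {  # population: (intensity fraction, median welfare range, population size)
    "Farmed chickens": (12.9e-6, 0.332, 2.14e10),
    "Wild fish": (6.67e-6, 0.0560, 6.20e14),
}
for name, (intensity, welfare_range, population) in rows.items():
    vs_humans = intensity * welfare_range / (HUMAN_INTENSITY * 1.00)
    ethu = vs_humans * population / HUMAN_POP
    print(f"{name}: vs humans = {vs_humans:.3f}, |ETHU| vs humans = {ethu:.3g}")
# Farmed chickens: vs humans = 0.642, |ETHU| vs humans = 1.74 (matches the table)
# Wild fish: vs humans = 0.056, |ETHU| vs humans = 4.39e+03 (matches "4.39 k")
```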

nuclear security is getting almost no funding from the community

For reference, I collected some data on this:

Supposedly cause-neutral grantmakers aligned with effective altruism have influenced 15.3 M$[17] (= 0.03 + 5*10^-4 + 2.70 + 3.56 + 0.0488 + 0.087 + 5.98 + 2.88) towards efforts aiming to decrease nuclear risk[18].
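
As a quick check, the parenthetical sum adds up to the quoted total (amounts in millions of dollars):

```python
grants_musd = [0.03, 5e-4, 2.70, 3.56, 0.0488, 0.087, 5.98, 2.88]
print(round(sum(grants_musd), 1))  # 15.3, matching the quoted 15.3 M$
```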

Thanks for the context, Toby!

For what it's worth, my working assumption for many risks (e.g. nuclear, supervolcanic eruption) was that their contribution to existential risk via 'direct' extinction was of a similar level to their contribution via civilisation collapse

I was guessing you agreed the direct extinction risk from nuclear war and volcanoes was astronomically low, so I am very surprised by the above. As sketched in code after the list, I think it implies your annual extinction risk from:

  • Nuclear war is around 5*10^-6 (= 0.5*10^-3/100), which is 843 k (= 5*10^-6/(5.93*10^-12)) times mine.
  • Volcanoes is around 5*10^-7 (= 0.5*10^-4/100), which is 14.8 M (= 5*10^-7/(3.38*10^-14)) times mine.
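
A minimal sketch of that arithmetic; the per-century existential risk baselines of 10^-3 for nuclear war and 10^-4 for volcanoes are my reading of The Precipice, halved for the direct-extinction share per the quote above (this reading is an assumption on my part):

```python
# Annual direct extinction risk implied by per-century existential risk
# estimates of 1e-3 (nuclear war) and 1e-4 (volcanoes), halved for the
# direct share and spread over 100 years.
toby_nuclear_annual = 0.5 * 1e-3 / 100  # 5e-6
toby_volcano_annual = 0.5 * 1e-4 / 100  # 5e-7
print(toby_nuclear_annual / 5.93e-12)   # ~8.43e5, i.e. 843 k times my estimate
print(toby_volcano_annual / 3.38e-14)   # ~1.48e7, i.e. 14.8 M times my estimate
```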

I would be curious to know your thoughts on my estimates. Feel free to follow up in the comments on their posts (which I had also emailed to you around 3 and 2 months ago). In general, I think it would be great if you could explain how you got all your existential risk estimates shared in The Precipice (e.g. decomposing them into various factors as I did in my analyses, if that is how you got them).

Your comment above seems to imply that direct extinction would be an existential risk, but I actually think human extinction would be very unlikely to be an existential catastrophe if it were caused by nuclear war or volcanoes. For example, I think there would only be a 0.0513 % (= e^(-10^9/(132*10^6))) chance of a repetition of the last mass extinction 66 M years ago, the Cretaceous–Paleogene extinction event, being existential. I got my estimate assuming the following (checked in the sketch after this list):

  • An exponential distribution with a mean of 132 M years (= 66*10^6*2) represents the time between i) human extinction in such a catastrophe and ii) the evolution of an intelligent sentient species after such a catastrophe. I supposed this on the basis that:
    • An exponential distribution with a mean of 66 M years describes the time between:
      • 2 consecutive such catastrophes.
      • i) and ii) if there are no such catastrophes.
    • Given the above, following i), event ii) and another such catastrophe are equally likely to come first. So the probability of an intelligent sentient species evolving after human extinction in such a catastrophe is 50 % (= 1/2).
    • Consequently, one should expect the time between i) and ii) to be 2 times (= 1/0.50) as long as it would be if there were no such catastrophes.
  • An intelligent sentient species has 1 billion years to evolve before the Earth becomes uninhabitable.
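
A minimal check of the headline number under the assumptions above:

```python
import math

MEAN_REEVOLUTION_YEARS = 132e6  # 66 M years, doubled as argued above
HABITABLE_WINDOW_YEARS = 1e9    # time left before the Earth becomes uninhabitable

# Chance that no intelligent sentient species evolves within the window,
# i.e. that the extinction is existential:
p_existential = math.exp(-HABITABLE_WINDOW_YEARS / MEAN_REEVOLUTION_YEARS)
print(f"{p_existential:.4%}")  # 0.0513 %, matching the estimate above
```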

Thanks for clarifying, Ben!

I'd add that if there's almost no EA-inspired funding in a space, there's likely to be some promising gaps for someone applying that mindset.

Agreed, although my understanding is that you think the gains are often exaggerated. You said:

Overall, my guess is that, in an at least somewhat data-rich area, using data to identify the best interventions can perhaps boost your impact in the area by 3–10 times compared to picking randomly, depending on the quality of your data.

Again, if the gain is just a factor of 3 to 10, then it makes total sense to me to focus on cost-effectiveness analyses rather than on funding levels.

In general, it's a useful approximation to think of neglectedness as a single number, but the ultimate goal is to find good grants, and to do that it's also useful to break down neglectedness into different types of resources, and consider related heuristics (e.g. that there was a recent drop).

Agreed. However, deciding how much weight to give a given relative drop in one fraction of funding (e.g. philanthropic funding) requires understanding its cost-effectiveness relative to other sources of funding. In this case, it seems more helpful to assess the cost-effectiveness of e.g. doubling philanthropic nuclear risk reduction spending instead of just quantifying that spending.

Causes vs. interventions more broadly is a big topic. The very short version is that I agree doing cost-effectiveness estimates of specific interventions is a useful input into cause selection. However, I also think the INT framework is very useful. One reason is it seems more robust.

The product of the 3 factors in the importance, neglectedness and tractability framework is the cost-effectiveness of the area, so I think the increased robustness comes from considering many interventions. However, one could also (qualitatively or quantitatively) aggregate the cost-effectiveness of multiple (decently scalable) representative promising interventions to estimate the overall marginal cost-effectiveness (promisingness) of the area.
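
For reference, the decomposition I have in mind (my rendering of 80,000 Hours' framework, in which the units telescope so the product is the marginal cost-effectiveness of the area):

```latex
\frac{\text{good done}}{\text{extra \$}}
= \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
\times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
\times \underbrace{\frac{\text{\% increase in resources}}{\text{extra \$}}}_{\text{neglectedness}}
```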

Another reason is that in many practical planning situations that involve accumulating expertise over years (e.g. choosing a career, building a large grantmaking programme) it seems better to focus on a broad cluster of related interventions.

I agree, but I did not mean to argue for deemphasising the concept of cause area. I just think the promisingness of areas is better assessed by doing cost-effectiveness analyses of representative (decently scalable) promising interventions.

E.g. you could do a cost-effectiveness estimate of corporate campaigns and determine ending factory farming is most cost-effective.

To clarify, the estimate for the cost-effectiveness of corporate campaigns I shared above refers to marginal cost-effectiveness, so it does not directly refer to the cost-effectiveness of ending factory farming (which is far from a marginal intervention).

But once you've spent 5 years building career capital in factory farming, the available interventions or your calculations about them will likely be very different.

My guess would be that the acquired career capital would still be quite useful in the context of the new top interventions, especially considering that welfare reforms have been top interventions for more than 5 years[1]. In addition, if Open Philanthropy is managing their funds well, (all things considered) marginal cost-effectiveness should not vary much across time. If the top interventions in 5 years were expected to be less cost-effective than the current top interventions, it would make sense to direct funds from the worst/later to the best/earlier years until marginal cost-effectiveness is equalised (in the same way that it makes sense to direct funds from the worst to best interventions in any given year).

  1. ^

    Open Phil granted 1 M$ to The Humane League's cage-free campaigns in 2016, 7 years ago. Saulius Šimčikas' analysis of corporate campaigns looks into ones which happened as early as 2005, 18 years ago.

Thanks for the comment, Zach. I upvoted it.

I fully endorse expected total hedonistic utilitarianism[1], but this does not imply that any reduction in extinction risk is way more valuable than a reduction in nearterm suffering. I guess you want to make this case via a comparison like the following:

  • If extinction risk is reduced in absolute terms by 10^-10, and the value of the future is 10^50 lives, then one saves 10^40 (= 10^(50 - 10)) lives in expectation.
  • However, animal welfare or global health and development interventions have an astronomically low impact compared with the above.

I do not think the above comparison makes sense, because it relies on 2 different methodologies. As constructed, the 2nd caps the impact of life-saving interventions at the global population of around 10^10, so it is bound to result in a lower impact than the 1st even if it is describing the exact same intervention. Interventions which aim to decrease the probability of a given population loss[2] achieve this via saving lives, so one could weight lives saved at lower population sizes more heavily, but still estimate their cost-effectiveness in terms of lives saved per $ (toy illustration below). I tried this, and with my assumptions interventions to save lives in normal times look more cost-effective than ones which save lives in severe catastrophes.
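
A toy illustration of that single-metric approach; the weight function and all numbers below are made up for exposition, and are not the ones from my analysis:

```python
# Weight lives saved more heavily when the surviving population is smaller,
# while keeping a single metric of weighted lives saved per dollar.
def weight(population: float, baseline: float = 8e9, eta: float = 0.5) -> float:
    """Hypothetical weight: lives count more at smaller population sizes."""
    return (baseline / population) ** eta

# Made-up interventions: (expected lives saved, population size at the time, cost in $).
interventions = {
    "normal times": (1.0, 8e9, 5e3),               # one life saved with certainty
    "severe catastrophe": (1e6 * 1e-7, 1e9, 5e3),  # 1e-7 chance of saving 1 M lives
}
for name, (lives, population, cost) in interventions.items():
    score = weight(population) * lives / cost
    print(f"{name}: {score:.2e} weighted lives per $")
# normal times: 2.00e-04; severe catastrophe: 5.66e-05, with these made-up numbers
```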

Less theoretically, decreasing measurable (nearterm) suffering (e.g. as assessed in standard cost-benefit analyses with estimates in DALY/$) has been a great heuristic for improving the welfare of the beings under consideration, both nearterm and longterm[3]. So I think it makes sense to a priori expect interventions which very cost-effectively decrease measurable suffering to be great from a longtermist perspective too.

  1. ^

    In principle, I am very happy to say that a 10^-100 chance of saving 10^100 lives is exactly as valuable as a 100 % chance of saving 1 life.

  2. ^

    For example, decreasing the probability of the population dropping below 1 k for extinction, or below 1 billion for global catastrophic risk.

  3. ^

    Animal suffering has been increasing, but animals have been neglected. There are efforts to account for animals in cost-benefit analyses.

Thanks for sharing, MvK!

Going forward, I will re-evaluate whether to include or even prioritize animal welfare in my giving (I had previously decided against that, but I'm now questioning my reasoning behind that decision).  

I would be curious to know what led you to that reevaluation.

Nice post, Keyvan!

I have Fermi estimated the scale of the suffering of the various populations of farmed animals, assuming the intensity of suffering relative to the welfare range for all populations matches that of broilers in reformed scenarios. In agreement with Open Philanthropy's prioritisation, I calculated farmed chickens and fish are the 2 populations with the most suffering, with scales equal to 1.74 and 1.52 times that of the happiness of all humans (which highlights the meat eater problem). Farmed shrimps and prawns came in 3rd, with suffering on a scale 1.05 times that of the happiness of all humans.
