Every post, comment, or Wiki edit I authored is hereby licensed under a Creative Commons Attribution 4.0 International License.
Why do you keep referring to Ives Parr as “Ives Parr”? I don’t know if that is his real name or a nom de plume, but in either case it is the name Ives Parr has chosen to use, and I think you should respect that choice; enclosing his name in scare quotes seems disrespectful.
There is now a dedicated FHI website with lists of selected publications and resources about the institute. (Thanks to @Stefan_Schubert for bringing this to my attention.)
the result of a PR-management strategy which seems antithetical to the principles of Effective Altruism to me
This view has been asserted many times before, but to my knowledge, it has never been explicitly defended. Why is it antithetical to the principles of effective altruism to be concerned with the reputational damage certain decisions can cause, when such damage can often severely impact one’s ability to do good?
In a comment explaining his decision to seek funding for the Wytham Abbey project, Owen Cotton-Barratt expresses a similar view. Owen writes that it is “better to let decisions be guided less by what we think looks good, and more by what we think is good.” But, to state the obvious, deciding based on what is good will sometimes require giving a lot of weight to how good the decision will look to others, because those perceptions are among the circumstances affecting the impact of our actions.
There may be a more sophisticated justification for the decision procedure to never, or almost never, allow PR concerns to influence one’s decision-making. Empirically, this doesn’t look true to me, though. For better or worse, we live in a world where PR “scandals” can harm people or movements involved in them to an extreme degree. I think we should take notice of this fact, and act accordingly.
In particular, superforecasters are generally more sceptical of such short AI timelines.
Note that this is true only of some subset of superforecasters. Samotsvety’s forecasters (many of whom are superforecasters) have shorter timelines than both domain experts and general x-risk experts:
| | P(AGI by 2030)[4] | P(AGI by 2050) | P(AGI by 2100) | P(AGI by this year) = 10% | P(AGI by this year) = 50% | P(AGI by this year) = 90% |
|---|---|---|---|---|---|---|
| mean | 0.31 | 0.63 | 0.81 | 2026 | 2041 | 2164 |
| stdev | 0.07 | 0.11 | 0.09 | 1.07 | 8.99 | 79.65 |
| 50% CI | [0.26, 0.35] | [0.55, 0.70] | [0.74, 0.87] | [2025.3, 2026.7] | [2035, 2047] | [2110, 2218] |
| 80% CI | [0.21, 0.40] | [0.48, 0.77] | [0.69, 0.93] | [2024.6, 2027.4] | [2030, 2053] | [2062, 2266] |
| 95% CI[5] | [0.16, 0.45] | [0.41, 0.84] | [0.62, 0.99] | [2023.9, 2028.1] | [2024, 2059] | [2008, 2320] |
| geomean | 0.30 | 0.62 | 0.80 | 2026.00 | 2041 | 2163 |
| geo odds[6] | 0.30 | 0.63 | 0.82 | | | |
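To make the table's aggregation rows concrete, here is a small sketch of the three pooling methods its row labels refer to: the arithmetic mean, the geometric mean of probabilities, and the geometric mean of odds. The forecast values below are hypothetical placeholders, not Samotsvety's actual individual forecasts.

```python
import math

def arithmetic_mean(ps):
    return sum(ps) / len(ps)

def geometric_mean(ps):
    return math.prod(ps) ** (1 / len(ps))

def geometric_mean_of_odds(ps):
    # Convert each probability to odds, pool in odds space,
    # then convert the pooled odds back to a probability.
    odds = [p / (1 - p) for p in ps]
    pooled_odds = math.prod(odds) ** (1 / len(odds))
    return pooled_odds / (1 + pooled_odds)

# Hypothetical individual forecasts of P(AGI by 2030).
forecasts = [0.20, 0.25, 0.30, 0.35, 0.45]

print(round(arithmetic_mean(forecasts), 3))         # ≈ 0.31
print(round(geometric_mean(forecasts), 3))          # ≈ 0.298
print(round(geometric_mean_of_odds(forecasts), 3))  # ≈ 0.304
```

As the table's "geo odds" row suggests, pooling in odds space gives slightly different answers from pooling probabilities directly, and the gap widens for more extreme probabilities.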
I guess most people sympathetic to existential risk reduction think the extinction risk from AI is much higher than those from other risks (as I do). In addition, existential risk as a fraction of extinction risk is arguably way higher for AI than for other risks, so the consideration you mentioned will tend to make AI existential risk even more pressing? If so, people may be more interested in either tackling AI risk, or assessing its interactions with other risks.
Yes, this seems right.
As a semi-tangential observation: your comment made me better appreciate an ambiguity in the concept of importance. When I said that this was an important consideration, I meant that it could cause us to significantly revise our estimates of impact. But by ‘important consideration’ one could also mean a consideration that could cause us to significantly alter our priorities.[1] “X-risks to all life v. to humans” may be important in the first sense but not in the second sense.
Perhaps one could distinguish between ‘axiological importance’ and ‘deontic importance’ to disambiguate these two notions.
I agree that in the absence of specific examples the criticism is hard to understand. But I would go further and argue that the NB at the beginning is fundamentally misguided and that well-meaning and constructive criticism of EA orgs or people should very rarely be obscured to make it seem less antagonistic.
I would like someone to write a post expanding on “X-risks to all life v. to humans”. Despite the importance of this consideration, it seems to have been almost completely neglected in EA discussion.
If I were to write on this, I’d reframe the issue somewhat differently than the author does in that post. Instead of a dichotomy between two types of risks, one could see it as a gradation of risks that push things back an increasing number of possible great filters. Risks to all life and risks to humans would then be two specific instances of this more general phenomenon.
See also Anders’s more personal reflections: