
Pablo

Director @ Tlön
10680 karma · Joined Aug 2014 · Working (6–15 years) · Buenos Aires, Argentina
www.stafforini.com/

Bio

Every post, comment, or Wiki edit I authored is hereby licensed under a Creative Commons Attribution 4.0 International License

Sequences: 1 (Future Matters)

Comments: 1185

Topic contributions: 4123

Pablo
2d

There is a list by Sandberg here. (The other items in that post may also be of interest.)

Pablo
2d

There is now a dedicated FHI website with lists of selected publications and resources about the institute. (Thanks to @Stefan_Schubert for bringing this to my attention.)

I have taken the liberty of reinstating the images and removing the notice. @Mark Xu, I assume you are okay with this?

the result of a PR-management strategy which seems antithetical to the principles of Effective Altruism to me

This view has been asserted many times before, but to my knowledge, it has never been explicitly defended. Why is it antithetical to the principles of effective altruism to be concerned with the reputational damage certain decisions can cause, when such damage can often severely impact one’s ability to do good?

In a comment explaining his decision to seek funding for the Wytham Abbey project, Owen Cotton-Barratt expresses a similar view. Owen writes that it is “better to let decisions be guided less by what we think looks good, and more by what we think is good.” But, to state the obvious, deciding based on what is good will sometimes require giving a lot of weight to how good the decision will look to others, because those perceptions are among the circumstances affecting the impact of our actions.

There may be a more sophisticated justification for the decision procedure of never, or almost never, allowing PR concerns to influence one’s decision-making. Empirically, though, this doesn’t look tenable to me. For better or worse, we live in a world where PR “scandals” can harm the people or movements involved in them to an extreme degree. I think we should take notice of this fact and act accordingly.

In particular, superforecasters are generally more sceptical of such short AI timelines.

Note that this is true only of some subset of superforecasters. Samotsvety’s forecasters (many of whom are superforecasters) have shorter timelines than both domain experts and general x-risk experts:

|  | P(AGI by 2030)[4] | P(AGI by 2050) | P(AGI by 2100) | P(AGI by this year) = 10% | P(AGI by this year) = 50% | P(AGI by this year) = 90% |
|---|---|---|---|---|---|---|
| mean | 0.31 | 0.63 | 0.81 | 2026 | 2041 | 2164 |
| stdev | 0.07 | 0.11 | 0.09 | 1.07 | 8.99 | 79.65 |
| 50% CI | [0.26, 0.35] | [0.55, 0.70] | [0.74, 0.87] | [2025.3, 2026.7] | [2035, 2047] | [2110, 2218] |
| 80% CI | [0.21, 0.40] | [0.48, 0.77] | [0.69, 0.93] | [2024.6, 2027.4] | [2030, 2053] | [2062, 2266] |
| 95% CI[5] | [0.16, 0.45] | [0.41, 0.84] | [0.62, 0.99] | [2023.9, 2028.1] | [2024, 2059] | [2008, 2320] |
| geomean | 0.30 | 0.62 | 0.80 | 2026.00 | 2041 | 2163 |
| geo odds[6] | 0.30 | 0.63 | 0.82 |  |  |  |
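As a side note on how pooled numbers like these are typically computed: the “mean”, “geomean”, and “geo odds” rows correspond to three standard ways of aggregating individual forecasts. Here is a minimal sketch in Python; the individual probabilities are made up for illustration and are not Samotsvety’s actual inputs.

```python
import numpy as np

# Hypothetical individual forecaster probabilities for one question,
# e.g. P(AGI by 2030). Illustrative values only.
p = np.array([0.20, 0.25, 0.30, 0.35, 0.45])

# Arithmetic mean of probabilities ("mean" row).
mean = p.mean()

# Geometric mean of probabilities ("geomean" row).
geomean = np.exp(np.log(p).mean())

# Geometric mean of odds ("geo odds" row): pool in odds space,
# then convert the pooled odds back to a probability.
odds = p / (1 - p)
pooled_odds = np.exp(np.log(odds).mean())
geo_odds = pooled_odds / (1 + pooled_odds)

print(f"mean={mean:.2f}, geomean={geomean:.2f}, geo odds={geo_odds:.2f}")
```

Unlike the plain geometric mean, the geometric mean of odds treats a forecast of p and its complement 1 − p symmetrically, which is one reason it is often preferred for pooling probabilities.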

I guess most people sympathetic to existential risk reduction think the extinction risk from AI is much higher than that from other risks (as I do). In addition, existential risk as a fraction of extinction risk is arguably way higher for AI than for other risks, so the consideration you mentioned will tend to make AI existential risk even more pressing? If so, people may be more interested in either tackling AI risk or assessing its interactions with other risks.

Yes, this seems right.

As a semi-tangential observation: your comment made me better appreciate an ambiguity in the concept of importance. When I said that this was an important consideration, I meant that it could cause us to significantly revise our estimates of impact. But by ‘important consideration’ one could also mean a consideration that could cause us to significantly alter our priorities.[1] "X-risks to all life v. to humans” may be important in the first sense but not in the second sense.

[1] Perhaps one could distinguish between ‘axiological importance’ and ‘deontic importance’ to disambiguate these two notions.

I agree that in the absence of specific examples the criticism is hard to understand. But I would go further and argue that the NB at the beginning is fundamentally misguided and that well-meaning and constructive criticism of EA orgs or people should very rarely be obscured to make it seem less antagonistic.

Answer by Pablo · Feb 27, 2024

I would like someone to write a post expanding on X-risks to all life v. to humans. Despite the importance of this consideration, it seems to have been almost completely neglected in EA discussion.

If I were to write on this, I’d reframe the issue somewhat differently than the author does in that post. Instead of a dichotomy between two types of risk, one could see a gradation of risks that push life back behind an increasing number of possible great filters. Risks to all life and risks to humans would then be two specific instances of this more general phenomenon.

Thanks for the clarification.

Yes, I agree that we should consider the long-term effects of each intervention when comparing them. I focused on the short-term effects of hastening AI progress because it is those effects that are normally cited as the relevant justification in EA/utilitarian discussions of that intervention. For instance, those are the effects that Bostrom considers in ‘Astronomical waste’. Conceivably, there is a separate argument that appeals to the beneficial long-term effects of AI capability acceleration. I haven’t considered this argument because I haven’t seen many people make it, so I assume that accelerationist types tend to believe that the short-term effects dominate.

I was trying to hint at prima facie plausible ways in which the present generation can increase the value of the long-term future by more than one part in billions, rather than “assume” that this is the case, though of course I never gave anything resembling a rigorous argument.

I do agree that the “washing out” hypothesis is a reasonable default and that one needs a positive reason for expecting our present actions to persist into the long-term. One seemingly plausible mechanism is influencing how a transformative technology unfolds: it seems that the first generation that creates AGI has significantly more influence on how much artificial sentience there is in the universe a trillion years from now than, say, the millionth generation. Do you disagree with this claim?

I’m not sure I understand the point you make in the second paragraph. What would be the predictable long-term effects of hastening the arrival of AGI in the short-term?
