
Pablo

Director @ Tlön
10726 karma · Joined Aug 2014 · Working (6-15 years) · Buenos Aires, Argentina
www.stafforini.com/

Bio

Every post, comment, or Wiki edit I authored is hereby licensed under a Creative Commons Attribution 4.0 International License.

Sequences: 1 (Future Matters)
Comments: 1187
Topic contributions: 4123

See also Anders’s more personal reflections:

I have reached the age when I have seen a few lifecycles of organizations and movements I have followed. One lesson is that they don’t last: even successful movements have their moment and then become something else, sclerotize into something stable but useless, or peter out. This is fine. Not in some fatalistic “death is natural” sense, but in the sense that social organizations are dynamic, ideas evolve, and there is an ecological succession of things. 1990s transhumanism begat rationalism that begat effective altruism, and to a large degree the later movements suck up many people who would otherwise have been recruited by the transhumanists.

FHI did close before its time, but it is nice to know it did not become something pointlessly self-perpetuating. As we noted when summing up, 19 years is not bad for a 3-year project. Indeed, a friend remarked that maybe all organisations should have a 20-year time limit. After that, they need to be closed down and recreated if they are still useful, shedding some of the accumulated dross.

The ecological succession of organizations and movements is not all driven by good forces. A fresh structure driven by interested and motivated people is often gradually invaded by poseurs, parasites and imitators, gradually pushing away the original people (or worse, they mutate into curators, gatekeepers and administrators). Many ideas develop, flourish, become explored and then forgotten once a hype peak is passed – even if they still have merit. People burn out, lose interest, form families and have to change priorities, or the surrounding context makes the movement change in nature. Dwindling activist movements may suffer “core collapse” as moderate members drift off while the hard core get more radical and pursue ever more extreme activism in order to impress each other rather than the world outside.

FHI did not do any of that. If we had a memetic failure, it was likely more along the lines of developing a shared model of the world and future that may have been in need of more constant challenge. That is one reason why I hope there will be more organizations like FHI but not thinking alike – places like CSER, Mimir, FLI, SERI, GCRI, and many others. We need the focus of a strongly connected organization to build thoughts and systems of substance but separate organizations to get mutual critique and diversity in approaches. Plus, hopefully, metapopulation resilience against individual organizational failures.

Why do you keep referring to Ives Parr as “Ives Parr”? I don’t know if that is his real name or a nom de plume, but in either case it is the name Ives Parr has chosen to use, and I think you should respect that choice; enclosing his name in scare quotes seems disrespectful.

Pablo · 3d

There is a list by Sandberg here. (The other items in that post may also be of interest.)

Pablo · 3d

There is now a dedicated FHI website with lists of selected publications and resources about the institute. (Thanks to @Stefan_Schubert for bringing this to my attention.)

I have taken the liberty of reinstating the images and removing the notice. @Mark Xu, I assume you are okay with this?

the result of a PR-management strategy which seems antithetical to the principles of Effective Altruism to me

This view has been asserted many times before, but to my knowledge, it has never been explicitly defended. Why is it antithetical to the principles of effective altruism to be concerned with the reputational damage certain decisions can cause, when such damage can often severely impact one’s ability to do good?

In a comment explaining his decision to seek funding for the Wytham Abbey project, Owen Cotton-Barratt expresses a similar view. Owen writes that it is “better to let decisions be guided less by what we think looks good, and more by what we think is good.” But, to state the obvious, deciding based on what is good will sometimes require giving a lot of weight to how good the decision will look to others, because those perceptions are among the circumstances affecting the impact of our actions.

There may be a more sophisticated justification for a decision procedure that never, or almost never, allows PR concerns to influence one’s decision-making, but empirically such a justification doesn’t look plausible to me. For better or worse, we live in a world where PR “scandals” can harm the people or movements involved in them to an extreme degree. I think we should take notice of this fact and act accordingly.

In particular, superforecasters are generally more sceptical of such short AI timelines.

Note that this is true only of a subset of superforecasters. Samotsvety’s forecasters (many of whom are superforecasters) have shorter timelines than both domain experts and general x-risk experts:

|              | P(AGI by 2030)[4] | P(AGI by 2050) | P(AGI by 2100) | P(AGI by this year) = 10% | P(AGI by this year) = 50% | P(AGI by this year) = 90% |
|--------------|-------------------|----------------|----------------|---------------------------|---------------------------|---------------------------|
| mean         | 0.31              | 0.63           | 0.81           | 2026                      | 2041                      | 2164                      |
| stdev        | 0.07              | 0.11           | 0.09           | 1.07                      | 8.99                      | 79.65                     |
| 50% CI       | [0.26, 0.35]      | [0.55, 0.70]   | [0.74, 0.87]   | [2025.3, 2026.7]          | [2035, 2047]              | [2110, 2218]              |
| 80% CI       | [0.21, 0.40]      | [0.48, 0.77]   | [0.69, 0.93]   | [2024.6, 2027.4]          | [2030, 2053]              | [2062, 2266]              |
| 95% CI[5]    | [0.16, 0.45]      | [0.41, 0.84]   | [0.62, 0.99]   | [2023.9, 2028.1]          | [2024, 2059]              | [2008, 2320]              |
| geomean      | 0.30              | 0.62           | 0.80           | 2026.00                   | 2041                      | 2163                      |
| geo odds[6]  | 0.30              | 0.63           | 0.82           |                           |                           |                           |
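For readers unfamiliar with the last two rows: “geomean” and “geo odds” are alternative ways of pooling the individual forecasts. Below is a minimal sketch of how such aggregations are conventionally computed; the forecaster probabilities are hypothetical, chosen only for illustration, and this is not Samotsvety’s actual aggregation code.

```python
import math

def arithmetic_mean(ps):
    """Plain average of the individual probabilities (cf. the 'mean' row)."""
    return sum(ps) / len(ps)

def geometric_mean(ps):
    """Geometric mean of the probabilities themselves (cf. the 'geomean' row)."""
    return math.prod(ps) ** (1 / len(ps))

def geometric_mean_of_odds(ps):
    """Geometric mean taken in odds space, then converted back to a
    probability (cf. the 'geo odds' row)."""
    odds = [p / (1 - p) for p in ps]
    pooled_odds = math.prod(odds) ** (1 / len(odds))
    return pooled_odds / (1 + pooled_odds)

# Hypothetical individual forecasts of P(AGI by 2030), for illustration only.
forecasts = [0.20, 0.25, 0.30, 0.35, 0.45]

print(round(arithmetic_mean(forecasts), 2))         # 0.31
print(round(geometric_mean(forecasts), 2))          # 0.30
print(round(geometric_mean_of_odds(forecasts), 2))  # 0.30
```

One reason the geometric mean of odds is often argued to be preferable to the geometric mean of probabilities is that it treats an event and its complement symmetrically: pooling the forecasts of P(AGI by 2030) and pooling the forecasts of P(no AGI by 2030) yield probabilities that sum to one.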

I guess most people sympathetic to existential risk reduction think the extinction risk from AI is much higher than that from other sources (as I do). In addition, existential risk as a fraction of extinction risk is arguably way higher for AI than for other risks, so the consideration you mentioned will tend to make AI existential risk even more pressing? If so, people may be more interested in either tackling AI risk, or assessing its interactions with other risks.

Yes, this seems right.

As a semi-tangential observation: your comment made me better appreciate an ambiguity in the concept of importance. When I said that this was an important consideration, I meant that it could cause us to significantly revise our estimates of impact. But by ‘important consideration’ one could also mean a consideration that could cause us to significantly alter our priorities.[1] “X-risks to all life v. to humans” may be important in the first sense but not in the second sense.

[1] Perhaps one could distinguish between ‘axiological importance’ and ‘deontic importance’ to disambiguate these two notions.

I agree that in the absence of specific examples the criticism is hard to understand. But I would go further and argue that the NB at the beginning is fundamentally misguided and that well-meaning and constructive criticism of EA orgs or people should very rarely be obscured to make it seem less antagonistic.

Answer by Pablo · Feb 27, 2024

I would like someone to write a post expanding on X-risks to all life v. to humans. Despite the importance of this consideration, it seems to have been almost completely neglected in EA discussion.

If I were to write on this, I’d reframe the issue somewhat differently than the author does in that post. Instead of a dichotomy between two types of risks, one could see it as a gradation of risks that push things back by an increasing number of possible great filters. Risks to all life and risks to humans would then be two specific instances of this more general phenomenon.
