
Tl;dr: This post introduces the term “low-key longtermism” - an alternative outreach strategy to “holy shit, X-risk”. The goal of low-key longtermism is to gather broad public support for longtermism under the slogan “creating a better future for our grandchildren”. I then discuss advantages (democratic legitimacy, a bigger talent pool, an easier pitch) and disadvantages (dilution, political backlash, ism-overload, and difficulty).

Imagine that you are a public opinion surveyor in an alternative world. Your task is to poll people on the three most important issues of the day. You ask a random person on the street, who replies: “reducing traffic, solving immigration, and creating a flourishing future for humanity”. You note it down without much thought. After all, this is a fairly typical response.

In our universe, this response would be quite surprising. In the broad population, caring about the long-term future of humanity is still a fairly fringe belief. Even parents of EAs are seldom longtermists themselves.

(Disclaimer: even though this post is part of the red-teaming challenge, I explicitly focus on longtermism and not other (important) EA activities).

One of the reasons for the lack of popular appeal might be that longtermism is simply hardcore. Committing to longtermism requires either grappling with tough questions of utilitarianism or being confronted with “holy shit, X-risk”-type arguments. Neither of these is a particularly easy pill to swallow. While many EAs are successfully motivated by these arguments, it is tough for most people to face existential dread on an everyday basis. Furthermore, “X-risk reduction” can get pretty abstract, with discussions centered on probabilities and hypotheticals.

There are other ways of framing longtermism that avoid these issues. I have dubbed one of them “Low-key Longtermism”. The main idea behind low-key longtermism is to create a concrete and positive framing of avoiding human extinction that people can subscribe to without being familiar with the philosophical aspects of longtermism (or even, necessarily, the term itself). The goal is to create broad democratic support for the fundamental ideas.

It is important to note that this is not an entirely novel idea. There exist a few EA organisations and initiatives that aim to popularise longtermist thinking, such as the Existential Risk Observatory, the recent Open Philanthropy/Kurzgesagt collaboration, and various podcasts. FTX also lists various ways of growing the longtermist movement on their idea list. However, I still think the specifics of low-key longtermism are sufficiently distinct to merit writing about.

To illustrate, I will compare low-key longtermism with the climate movement in Denmark, which was the primary inspiration for the polling example at the beginning of this post. Basically everyone there agrees that climate change is man-made and important - regardless of political stance. People still disagree on what should be done, with right-wing parties relying on markets and left-wing parties favoring state regulation. However, “climate change” would figure as one of the top issues for most people stopped on the street.

That is not to say that everyone is active in fighting climate change. If you ask the same person whether they are vegan, have stopped flying, or donate a significant portion of their income to the Clean Air Task Force, they will probably say no. But they will believe it is important that other smart people are working on these issues. And they will frown at politicians who are not green enough.

This kind of support goes somewhat against the EA ethos of “aim high, do good”. This ethos prefers one 100%-motivated person to ten 10%-motivated people. This way, the highly motivated person can become part of the heavy tail of impact and potentially change the world for the better. Nevertheless, low-key longtermism embraces the more modest but broader public support.

To create public support, the message has to be properly packaged. While “flourishing future of humanity” does have a nice alliterative ring to it, it is still abstract and difficult to relate to. Instead, the message needs to point towards a concrete future while being relatable for individuals. The concreteness makes the future easier to imagine (whereas “flourishing future” can contain all sorts of vagueness). The relatability makes it easier to care about (“humanity” might be too broad for many people to viscerally relate to).

At the same time, the slogan needs to encompass all the different longtermist cause areas. It should act as a launchpad for everything from AI safety to biorisk, while retaining the concreteness and relatability mentioned above.

To accomplish this, I propose the slogan “create a better future for our grandchildren”. While this slogan needn’t be adopted literally, I think it has several nice properties. For one, it is concrete and relatable. For most people, grandchildren occupy the 30-100 year time horizon. This is far enough in the future for many important longtermist cause areas (AI safety, biorisk, and other X-risks) to be relevant, while talking about grandchildren keeps it relatively concrete.

Secondly, the grandchildren framing has roots in existing projects and traditions, notably the Japanese Future Design movement and the Seventh Generation Principle. The Future Design movement focuses on involving (currently non-existing) future generations in democratic decision making. Relatedly, the Seventh Generation Principle is a Haudenosaunee code that weighs the effects of a decision on the next seven generations to promote sustainable choices. Both perspectives create accessible pictures that make the future concrete. They also resonate with common ideals like giving a voice to the voiceless (i.e. making children, or grandchildren in our case, heard in politics) or passing on to future generations the opportunities we ourselves have been given.

I think there are three main benefits of low-key longtermism:

  • Democratic legitimacy
  • Widen the talent pool
  • Make longtermist causes easier to explain (and transition into)

I will go through these in turn.

Democratic Legitimacy

Longtermist interventions are having an increasing impact on the world. As AI in particular expands its footprint, so too will our attempts to control it. It is essential that there is democratic support - especially for the governance solutions. This is easier when many people subscribe to low-key longtermism. Although most people won’t dabble in the details of policy agreements or technical solutions, popular support makes it easier to get politicians to care about implementing the solutions.

If low-key longtermism succeeds globally, it might also become easier to facilitate global coordination. This would help many longtermist agendas, from nuclear non-proliferation treaties to biorisk to AI governance.

There are also drawbacks to having longtermism become a popular issue. I will return to these in the “Political Backlash” section.

Widen the Talent Pool

Currently, EA targets highly motivated people from top universities. This is a relatively narrow group, excluding both most average people and many highly talented people on other paths. Today, if a highly talented individual wants to shape their career in a prosocial direction, they are more likely to work on climate change, global health, or some other socially accepted career path. I would guess that working on AI safety seems too weird to most people.

It is still a semi-open question whether broadening the talent pool is helpful. Some problems, like AI alignment, might feasibly be better solved by super geniuses. However, if (low-key) longtermism becomes a popular choice, more people of all talent levels (including super geniuses) might consider working on these kinds of issues.

Making Longtermist Causes Easier to Explain

I think many (most?) longtermists have had the experience of trying (and often failing) to pitch longtermism to their friends and family. It requires several large conceptual leaps to go from “I care about other people” to “I care about humanity’s long-term potential and don’t want it to be destroyed by misaligned AI / a bio-engineered pandemic / some other cause area”.

This might become easier if the starting point is instead “I care about creating a better future for our grandchildren”. It creates a more accessible entry point into conversations about longtermist causes. Instead of going into complex reasoning about the shortcomings of temporal discounting, one can answer questions like “Why are you working on inverse scaling laws?” with something like “because I believe understanding neural networks will create a better future for our grandchildren”. Justifying this might be a more helpful (and pleasant) experience than going straight to “holy shit, X-risk” and similar explanations.

Likewise, arguments for, say, improving early detection of possible pandemics can be recast through this slogan. The pitch becomes: “we are likely to experience more pandemics in the future. Having better early detection will help us contain them and create a safer future for our grandchildren”.

While I believe these benefits are worth striving for, there are also (fairly strong) counterarguments to low-key longtermism. I will present a (non-exhaustive) selection below.

Diluting Longtermism

The first is that widening public support might dilute the movement. Many important insights (such as longtermism itself) have come from being extraordinarily open to weird ideas. Furthermore, there are major benefits to having highly motivated and talented people throw all their energy at important problems like AI safety and reducing biorisk.

This concern is particularly common among community organisers, who have had bad experiences with inviting too many people too quickly into their university EA groups. It also clashes with the idea that in some areas (like AI alignment) it may be better to have a small group of super geniuses work on the problem, rather than a (much) larger group of adequately talented people.

While both concerns are valid, I think they are answered by viewing low-key longtermism more as a branding strategy than a reframing of longtermism itself. Most core longtermist organisations would continue following their current strategies, even if low-key longtermism were fully adopted as a branding strategy.

Returning to the climate change analogy, support for the cause exists on a continuum. One can casually join a “Fridays for Future” demonstration every once in a while, become a hardcore member of Extinction Rebellion, or just vote for green-ish politicians at national elections. All of these levels of engagement co-exist and can even reinforce each other, as people can gradually escalate their engagement rather than facing an all-or-nothing choice.

Political Backlash

Another concern is that broadening the appeal of longtermism might create political backlash. This is especially pertinent from a US perspective, where seemingly factual questions such as climate change and optimal pandemic response have been politicised, to the detriment of both causes. Might it also become harder to implement effective but perhaps unpopular counter-measures to malignant AI or biorisk if more people have to be appeased?

This is an important concern. Public relations work can definitely backfire, and causing political gridlock on existential risk reduction would not be great. Nevertheless, I think these issues are too important not to discuss democratically. Some of the work being done within EA might have important consequences for the future, like shaping the governance structures around AI. At some point, this might make them democratic questions. And at that point, having a broad democratic foundation might be better than having the solutions perceived as coming from a small elite group of longtermists.

Of course, it is still crucial to respect information hazards. I believe it is possible to instil care for the future without going into info-hazard territory - after all, many of the threats we need to safeguard humanity from are not info-hazardous.

One important risk of spreading low-key longtermism is that more people might take drastic actions “for the greater good”. It only takes one paranoid person doing something radical with fatal consequences to tarnish the reputation of longtermism for good. The way to mitigate this is to focus on the slogan “creating a better future for our grandchildren” instead of the term longtermism. It seems less likely (to me at least) that someone would commit an atrocity with the aim of helping grandchildren.

Do we need more isms?

Low-key longtermism might be just another ism to juggle, which could cause more confusion both within and outside the EA movement. After all, do we really want to add “low-key longtermism vs. hardcore longtermism” to the existing ism-schism?

The resolution to this objection is not to view low-key longtermism as an ism. The goal is not to convince people to call themselves low-key longtermists; the goal is to make them care about creating a better world for their grandchildren. Low-key longtermism is simply meant as shorthand for discussing the strategy and tactics required to reach this goal - and whether this goal should even be pursued in the first place.

Convincing people is hard

The final objection I want to cover is that convincing people is just really hard. Changing your own mind is difficult, and changing other people’s minds is even more difficult. Therefore, it might make more sense to spend those resources on high-potential people (like people with a track record of working on hard technical or societal problems).

I don’t think one should abstain from working on problems just because they are hard (and I think many EAs agree with me). After all, alignment is super difficult, and so are most other important problems. However, it is important to consider the effort-to-impact ratio. If other strategies produce higher expected value (like targeting undiscovered geniuses, perhaps), it might be better to pursue those.

Nevertheless, I think convincing a significant portion of the population to care about the long-term future is at least tractable. After all, the climate movement has broadly succeeded, although it took many years and lots of effort. We might be able to learn from them and deploy some of the same tactics.

Conclusion and next steps

In this post, I have explored low-key longtermism, a strategy with the goal of making everyone care about the semi-long-term future. By creating the simple and positive pitch “creating a better future for our grandchildren”, I hope to improve the democratic legitimacy of longtermism, widen the talent pool, and make it easier to explain cause areas to non-EAs. There are, of course, caveats like dilution, political backlash, and the sheer difficulty of the problem. On balance, I think the pros outweigh the cons, but I am open to changing my mind. I hope that this post provides a starting point for discussing this and similar popularisation strategies. I look forward to hearing your feedback!

Acknowledgements: This post would not have been possible without helpful discussions with people at EAG London and the participants at the Exploring Existential Risk summit in Oxford, 2022. Thanks in particular to Per Ivar Friborg and my parents for helpful feedback on drafts of this post.

Comments

All else equal, I definitely like the idea of popularizing some sort of longtermist sentiment. I'm still unsure about the usefulness - I have some doubts about the proposed paths to impact. Personally, I think that a world with a mass-appeal version of longtermism would be a lot more pleasant for me to live in, but not necessarily much better off on the metrics that matter.

  • Climate is a very democratically legitimate issue. It's discussed all the time, lots of people are very passionate about it, and it can probably move some pretty hefty voting blocs. Still, I think the return on investing the amount of energy it would take to get low-key longtermism to that level of democratic legitimacy - just to get the same returns from government that the climate folks are getting - would be pretty abysmal. That said, I don't really know what the counterfactual looks like, so it's hard to compare how worthwhile the mass attention really is.
  • Widening the talent pool seems most plausible, but the model here is a bit fuzzy to me. Very few people work on one of their top-three-world-issues, but EA is currently very small, so this would probably bring in a serious influx of people wanting to do direct work. But if this dominates the value of the proposal, is there a reason it wouldn't be better/cheaper/faster to do more targeted outreach instead of aiming for Mass Appeal? I guess it depends on how easy vs. how expensive it is to target the folks who really do want to work on one of their top-three-world-issues.
  • I think the benefit of "making longtermist causes easier to explain" is mostly subsumed by the other two arguments? I can't think of any path-to-impact for this that doesn't run through marginal pushes towards either government action or direct work.

Also, quick flag that the slogan "creating a better future for our grandchildren" reads a bit nationalist to me - maybe because of some unpleasant similarity to the 14 words.

Thank you for your excellent points, James! Before responding to your points in turn, I do agree that a significant part of the appeal of my proposal is to make it nicer for EAs. Whether that is worth investing in is not clear to me either - there are definitely more cost-effective ways of doing that. Now to your points:

  • I think I define democratic legitimacy slightly differently. Instead of viewing it as putting pressure on politicians through them knowing that everyone cares about the long term, I see it as moving long-term policies into the Overton window, so to speak, by making them legitimate. Thus, it acts as a multiplier for EA policy work.
  • Wrt the talent pool, I think it depends on how tractable it is to "predict" the impact of a given individual. I would guess that mass appeal works better if it is harder to predict the impact of a given person or group a priori - then it becomes more of a numbers game, where you want as many interested people as possible thinking about these issues. I am quite uncertain about whether this is the case, and I imagine there are many other constraints (e.g. the hiring capacity of EA orgs).
  • I fully agree that this is more of a "nice to have" than a huge value proposition. I'd never heard of the 14 words, but I do agree that the similarity is unfortunate. The slogan was also meant more as an illustration than a fully fledged proposal - luckily, it facilitates discussions like these!

Thanks for the post, Jonathan! I think this can be a good starting point for discussions around spreading longtermism. Personally, I like using "low-key longtermism" internally between people who are already familiar with longtermism, but I wouldn't use it for mass outreach purposes. This is because the mentioned info-hazard risk seems to outweigh the potential benefits of using the term longtermism. Also, since the term doesn't add any information value for people who don't already know what it is, I am even more certain that it's best to leave the term behind when doing mass outreach. This post also shows some great examples of how the message of longtermism can be warped and misunderstood as a secular cult, adding another element of concern for longtermism outreach: How EA is perceived is crucial to its future trajectory - EA Forum (effectivealtruism.org).
 

My point is that I favor low-key longtermism outreach as long as the term longtermism is excluded. 

I totally agree that it serves more as an internal strategic shorthand than as a part of outward-facing communication. Ideally, no one outside core EA would need to know what "low-key longtermism" even refers to.

Low-key longtermism seems like a superb framing to me. Existing framings carry significant risks of radicalization, elitism, and estrangement, which you also touch upon.

Framing it around the grandkids is a great idea, since it both avoids the term longtermism and appeals to basically everyone. There might be risks of non-specificity, so we'll probably need to experiment with different wordings, though this seems like an appealing starting point.

Especially when explaining longtermism to parents et al.

[disclaimer: I work with Jonathan]

Thanks for your kind words, Esben! If anything comes out of this post, I agree that it should be a renewed focus on better framings - though James does raise some excellent points about the cost-effectiveness of this approach :))
