
We think our civilization near its meridian, but we are yet only at the cock-crowing and the morning star.

— Ralph Waldo Emerson

 

Welcome to Future Matters, a newsletter about longtermism. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. Future Matters is crossposted to Substack and available as a podcast.


Research

We are typically confident that some things are conscious (humans), and that some things are not (rocks); other things we're very unsure about (insects). In this post, Amanda Askell shares her views about AI consciousness. It seems unlikely that current AI systems are conscious, but they are improving and there's no great reason to think we will never create conscious AI systems. This matters because consciousness is morally relevant: we tend to think, for example, that if something is conscious, we shouldn't harm it for no good reason. Since it's much worse to mistakenly deny something moral status than to mistakenly attribute it, we should take a cautious approach when it comes to AI: if we ever have reason to believe some AI system is conscious, we should start to treat it as a moral patient. This makes it important and urgent that we develop tools and techniques to assess whether AI systems are conscious, and to answer related questions, e.g. whether they are suffering.

The leadership of the Global Catastrophic Risk Institute issued a Statement on the Russian invasion of Ukraine. The authors consider the effects of the invasion on (1) risks of nuclear war and (2) other global catastrophic risks. They argue that the conflict increases the risk of both intentional and inadvertent nuclear war, and that it may increase other risks primarily via its effects on climate change, on China, and on international relations.

Earlier this year, Hunga Tonga-Hunga Ha'apai—a submarine volcano in the South Pacific—produced what appears to be the largest volcanic eruption of the last 30 years. In What can we learn from a short preview of a super-eruption and what are some tractable ways of mitigating, Mike Cassidy and Lara Mani point out that this event and its cascading impacts provide a glimpse into the possible effects of a much larger eruption, which could be comparable in intensity but much longer in duration. The main lessons the authors draw are that humanity was unprepared for the eruption and that its remote location dramatically limited its impacts. To better prepare for these risks, the authors propose identifying the volcanoes capable of sufficiently large eruptions and the regions they would most affect; building resilience by investigating the role technology could play in disaster response and by enhancing community-led resilience mechanisms; and mitigating the risks through research on removing aerosols from large explosive eruptions and on reducing the explosivity of eruptions by fracking or drilling.

The second part in a three-part series on great power conflict, Stephen Clare's How likely is World War III? attempts to estimate the probability of great power conflict this century as well as its severity, should it occur. Tentatively, Clare assigns a 45% chance to a confrontation between great powers by 2100, an 8% chance to a war much worse than World War II, and a 1% chance to a war causing human extinction. Note that some of the key sources in Clare's analysis rely on the Correlates of War dataset, which is less informative about long-run trends in global conflict than is generally assumed; see Ben Garfinkel's comment for discussion.

Holden Karnofsky emails Tyler Cowen to make a very concise case that there's at least a 1 in 3 chance we develop transformative AI this century (summarizing his earlier blogpost). Several very different approaches to AI forecasting all point to a significant probability of TAI this century: forecasting based on 'biological anchors'; forecasts by AI experts and Metaculus; analyses of long-run economic growth; and very outside-view arguments. On the other hand, there are few, if any, arguments that we should confidently expect TAI much later.

Malevolent nonstate actors with access to advanced technology can increase the probability of an existential catastrophe either by directly posing a risk of collapse or extinction, or indirectly, by creating instability and thereby undermining humanity's capacity to handle other risks. The magnitude of the risk posed by these actors is a function of both their ability and their willingness to cause harm. In How big are risks from non-state actors? Base rates for terrorist attacks, Rose Hadshar attempts to inform estimates of the second of these two factors by examining base rates of terrorist attacks. She finds that attacks occur at a rate of one per 700,000 people worldwide and one per 3,000,000 people in the West. Most attacks are not committed with omnicidal intent.

Population dynamics are an important force shaping humanity over decades and centuries. In Retrospective on Shall the Religious Inherit The Earth, Isabel Juniewicz evaluates predictions in a 2010 book which claimed that: (1) within religions, fundamentalist growth will outpace moderate growth; (2) within regions, fundamentalist growth will outpace growth among the non-religious and moderates; (3) globally, religious growth will outpace non-religious growth. She finds the strongest evidence for (1). Evidence for (2) is much weaker: in the US, the non-religious share of the population has increased over the last decade. Secularization and deconversion are more than counterbalancing the fertility advantage of religious groups, and the fertility gap between religious and non-religious populations has been narrowing. Haredi Jews are one notable exception, and continue to grow as a share of the population in the US and Israel. (3) seems true in the medium term, but primarily due to dynamics overlooked in the book: population decline in (more irreligious) East Asia, and population growth in (increasingly religious) Africa. Fertility rates in predominantly Muslim countries, on which the book's argument for (3) is largely based, have been declining substantially, to near-replacement levels in many cases. For the most part, religious populations are experiencing declining fertility in parallel with secular groups. Overall, it looks like the most significant trend in the coming decades and centuries will not be increasing global religiosity, but the continued convergence of global fertility rates to below-replacement levels.

We’re appalled by many of the attitudes and practices of past generations. How can we avoid making the same sort of mistakes? In Future-proof ethics (EA Forum discussion), Holden Karnofsky suggests three features of ethical systems capable of meeting this challenge: (1) Systematization: rather than relying on bespoke, intuition-based judgements, we should look for a small set of principles we are very confident in, and derive everything else from these; (2) Thin utilitarianism: our ethical system should be based on the needs and wants of others rather than our personal preferences, and therefore requires a system of consistent ethical weights for comparing any two harms and benefits; and (3) Sentientism: the key ingredient in determining how much to weigh someone’s interests should be the extent to which they have the capacity for pleasure and suffering. Combining these elements leads to the sort of ethical stance that has a good track record of being ‘ahead of the curve.’

Progress on shaping long-term prospects for humanity is to a significant degree constrained by insufficient high-quality research with the potential to answer important and actionable questions. Holden Karnofsky's Important, actionable research questions for the most important century offers a list of questions of this type that he finds most promising, in the following three areas: AI alignment, AI governance, and AI takeoff dynamics. Karnofsky also describes a process for assessing whether one is a good fit for conducting this type of research, and draws a contrast between this and two other types of research: research focused on identifying "cause X" candidates or uncovering new crucial considerations; and modest incremental research intended not to cause a significant update but rather to serve as a building block for other efforts.

Andreas Mogensen and David Thorstad's Tough enough? Robust satisficing as a decision norm for long-term policy analysis attempts to open a dialogue between philosophers working in decision theory and decision-making under deep uncertainty (DMDU), a field developed by operations researchers and engineers that has been mostly neglected by the philosophical community. The paper focuses specifically on robust satisficing as a decision norm, and discusses decision-theoretic and voting-theoretic motivations for it. It may be seen as an attempt to address complaints raised by some members of the effective altruism community that longtermist researchers routinely ignore the tools and insights developed by DMDU and other relevant fields.

Fin Moorhouse wrote an in-depth profile on space governance—the most comprehensive examination of this topic by the EA community so far. His key points may be summarized as follows:

  • Space, not Earth, is where almost everyone will likely live if our species survives the next centuries or millennia. Shaping how this future unfolds is potentially enormously important.
  • Historically, a significant determinant of quality of life has been quality of governance: people tend to be happy when countries are well-governed and unhappy when countries are poorly governed. It seems plausible that the kind of space governance that ultimately prevails will strongly influence the value of humanity's long-term future.
  • Besides shaping how the future unfolds, space governance could make a difference to whether there is a future at all: effective arms control in space has the potential to significantly reduce the risk of armed conflict back on Earth, especially of great power conflict (which plausibly constitutes an existential risk factor).
  • The present and near future appear to offer unusual opportunities for shaping space governance, because of the obsolescence of the current frameworks and the growing size of the private space sector.
  • There are identifiable areas to make progress in space governance, such as reducing the risk of premature lock-in; addressing worries about the weaponization of asteroid deflection technology; establishing rules for managing space debris; and exploring possible mechanisms for deciding questions of ownership.
  • Reasons against working on space governance include the possibility that early efforts might either be nullified by later developments or entrench worse forms of governance than later generations could otherwise have put in place; that it may be intractable, unnecessary, or even undesirable; and that there may be even more pressing problems, such as positively influencing the development of transformative AI.
  • The main opportunities for individuals interested in working on this problem are doing research in academia or at a think tank; shaping national or international decisions as a diplomat or civil servant; or working for a private space company.

Neel Nanda's Simplify EA pitches to “holy shit, x-risk” (EA Forum discussion) argues that one doesn't need to accept total utilitarianism, reject person-affecting views, have a zero rate of pure time preference or self-identify as a longtermist to prioritize existential risk reduction from artificial intelligence or biotechnology. It is enough to think that AI and bio have at least a 1% or 0.1% chance, respectively, of causing human extinction in our lifetimes. These are still "weird" claims, however, and Nanda proposes some framings for making them more plausible.

YouGov surveyed Britons on human extinction. 3% think humanity will go extinct in the next 100 years; 23% think humanity will never go extinct. Asked to choose the three most likely causes of extinction from a list, respondents most often selected nuclear war (43%), climate change (42%), and a pandemic (30%). One interesting change since the survey was last conducted: 43% now think the UK government should be doing more to prepare for AI risk, vs. 27% in 2016.

In The value of small donations from a longtermist perspective (EA Forum discussion), Michael Townsend argues that, despite recent substantial increases in funding, small longtermist donors can still have a significant impact. This conclusion follows from the simple argument that a donation to a GiveWell-recommended charity has a significant impact, and that the most promising longtermist donation opportunities are plausibly even more cost-effective than GiveWell-recommended charities. As Townsend notes, this conclusion by itself is silent on the value of earning to give, and can be accepted even by those who believe that longtermists should overwhelmingly focus on direct work.

Open Philanthropy's Longtermist EA Movement-Building team works to increase the amount of attention and resources put towards problems that threaten the future of sentient life. The team funds established projects such as 80,000 Hours, Lightcone Infrastructure, and the Centre for Effective Altruism, and directed approximately $60 million in giving in 2021. In Update from Open Philanthropy's longtermist EA movement-building team, Claire Zabel summarizes the team's activities and accomplishments so far and shares a number of updates, two of which seem especially important. One is that they will be spending less time evaluating opportunities and more time generating them. The other is that they are shifting from "cost-effectiveness" to "time-effectiveness" as their primary impact metric. This shift was prompted by the realization that grantee and grantmaker time, rather than money available for grantmaking, is the scarcer of the two resources.


News

Open Philanthropy's Longtermist EA Movement-Building team is hiring for several roles to scale up and expand their activities (EA Forum discussion). Applications close at 5 pm Pacific Time on March 25, 2022.

The Forethought Foundation is hiring for several roles: a Director of Special Projects, a Program Associate, and a Research Analyst supporting Philip Trammell. Applications for the first two roles are due by April 4, 2022.

The Policy Ideas Database is an effort to collect and categorize policy ideas within the field of existential risk studies. It currently features over 250 policies, and the authors estimate they have only mined 15% of the relevant sources. Read the EA Forum announcement for further details.

The Eon Essay Contest offers over $40,000 in prizes, including a top prize of $15,000, for outstanding essays on Toby Ord's The Precipice written by high school, undergraduate and graduate students from any country. Essays are due June 15, 2022.

The FTX Foundation’s Future Fund has officially launched. They hope to distribute at least $100m this year to ambitious projects aimed at improving humanity’s long-term prospects, roughly doubling the level of funding available for longtermist projects. Their team is made up of Nick Beckstead (CEO), Leopold Aschenbrenner, William MacAskill and Ketan Ramakrishnan. The FTX Foundation is funded primarily by Sam Bankman-Fried.

The Future Fund also announced a prize of $5,000 for any project idea that they like enough to add it to the project ideas section of their website. The EA Forum linkpost attracted over 700 comments and hundreds of submissions. Note that the deadline for submitting projects has now passed.

Finally, the Future Fund announced a regranting program offering discretionary budgets to independent part-time grantmakers. The budgets are to be spent in the next six months or so, and will typically range from $250k to a few million dollars. You can apply to be a regrantor, or recommend someone else for this role, here.

The UK government is soliciting feedback from biosecurity experts to update its biological security strategy, which seeks to protect the country from domestic and global biological risks, including emerging infectious diseases and potential biological attacks. There is some discussion on the EA Forum.

Effective Ideas is a project aiming to build an ecosystem of public-facing writing on effective altruism and longtermism. To this end, they are offering five $100,000 prizes for the very best new blogs on these themes, and further grants for promising young writers. Read more and apply on their website.


Conversation with Nick Beckstead

Nick Beckstead is an American philosopher and CEO of the FTX Foundation. Nick majored in mathematics and philosophy, and then completed a PhD in philosophy in 2013. During his studies, he co-founded the first US chapter of Giving What We Can, pledging to donate half of his post-tax income to the most cost-effective organizations fighting global poverty in the developing world. Prior to joining the FTX Foundation last year, Nick worked as a Program Officer for Open Philanthropy, where he oversaw much of that organization's research and grantmaking related to global catastrophic risk reduction. Before that, he was a Research Fellow at Oxford's Future of Humanity Institute.

Future Matters: Your doctoral dissertation, On the Overwhelming Importance of Shaping the Far Future, is one of the earliest attempts to articulate the case for longtermism—the view that "what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions, billions, and trillions of years" (p. ii). We are curious about the intellectual journey that finally led you to embrace this position.

Nick Beckstead: I guess I became sympathetic to utilitarianism in college through reading Peter Singer and John Stuart Mill, and taking a History of Moral Philosophy class. I had this sense that the utilitarian version was the only one that turns out sensible answers in a systematic fashion, as if turning a crank or something. And so I got interested in that. I was interested in farm animal welfare and global health due to Peter Singer's influence, but had a sense of open-mindedness: is this the best thing? I don’t know. My conclusion reading Famine, affluence, and morality was—well you can do at least this [much] good surprisingly easily; who knows what the best thing is.

I guess there were a number of influences there, but maybe most notably I remember I came across Nick Bostrom’s work, first on infinite ethics and then I read Astronomical waste and I had a sense "wow this seems kinda crazy but could be right; seems like maybe there’s a lot of claims here that seem iffy to me, about exactly how nanotechnology is going to go down or how many people you can fit onto a planet." But it seems like there’s an argument here that has some legs. And then I thought about exactly how that argument worked and what the best version of it [was] and exactly what it required, and ended up in a place that’s slightly different from what Bostrom said, but very much inspired by it—I think of it as a slight generalization.

Future Matters: How have your views on this topic evolved over the last decade or so since writing your dissertation?

Nick Beckstead: So there’s the baseline philosophy, and then there’s the speculative details about what matters for the long-term future. I think the main way my views have changed on the speculative details side is that I’ve got more confidence in the AI story and some of the radical claims about what is possible with technology, and maybe possible with technology this century, and how that affects the long-term future. In some sense, the main way that developed when I was writing this dissertation is that it was an extremely weird and niche thing to be talking about, and I had some sense of "well, I don’t know, this seems kinda crazy but could be right; I wonder if it will survive scrutiny from relevant experts?"

I think the expert engagement that has been gotten has not been totally ideal because there isn’t a particular field where you can have somebody [e.g.] read Superintelligence and say “this is the definitive evaluation of exactly what is true and false in it.” It’s not exactly computer science, it’s not exactly economics, it’s not exactly philosophy—it's a lot of judgment, no one is really authoritative. A thing that hasn’t happened, but could well have happened, is that it was strongly refuted by excellent arguments. I think that decisively has not happened. It’s a pretty interesting fact and it’s updated my belief in a bunch of that picture. So that’s the main way that has changed.

And then a little thing, principles-wise: when I started writing my dissertation I was very intensely utilitarian and I was like “this is the way”, and toward the end of writing the dissertation I became more like “well, this is the best systematic framework that I know of, but I think it has issues around infinities and I don’t know how to resolve them in any nice systematic fashion and maybe there is no nice systematic resolution of that.” And what does that mean for thinking about ethics? I think it’s made me a little bit more antirealist, a little bit more reluctant to give up common-sense (to me) things that are inconsistent with [utilitarianism] and deeply held, while still retaining it as a powerful generative framework for identifying important conclusions and actions—that’s kinda how I use it now.

Future Matters: Open Philanthropy has been the major longtermist funder for ten or so years. Now the Future Fund is on track to be the biggest funder this coming year. We are curious if you could say a bit about the differences in grantmaking philosophy between them and how you expect the Future Fund to differ in its approach to longtermist world-improvement.

Nick Beckstead: I don't know whether it will be larger in total grantmaking than Open Phil—I don't actually have the numbers to hand of exactly how much Open Phil spent this year, and maybe they will spend more than $100 million this year. I think it's a good question how they are going to be different. I loved working at Open Phil and have the utmost respect for the team, so a lot of things are going to be really similar, and I'm going to try to carry over a lot of the things that I thought were the best about what Open Phil is doing. And then it's to be determined what exactly is going to be different.

The Future Fund website is pretty straightforward about what it is and what it's trying to do right now. So if I just think about what's different, one thing is that we're experimenting with some very broad, decentralized grantmaking programs. And we're doing this "bold and decisive tests" philosophy for engaging with them. Our current plan is, we launched this regranting program for about 20 people, who are deputized as grantmakers who can make grants on our behalf. I think that's exciting, because if we are making big mistakes, these people may be able to fill in the gaps and bring us good ideas that we would have missed. It could be a thing that, if it goes well, could scale a lot with limited oversight, which is great if you are a major funder. So it's something it makes sense to experiment with.

Another one is open calls for proposals. And I don't know if this will be the way we will always do things. All of these things are experiments we are doing this year. The intention is to test them in a really big way, where there's a big enough test that if it doesn't work you are, like, "eh, I guess that thing doesn't work", and you are not, like, "oh, well, maybe if we had made it 50% larger it would have been good". So, yeah, trying to boldly and decisively test these things. Open Phil has had some open calls for proposals, but ours was very widely promoted, and has very wide scope in terms of what people can apply for and range of funding. So that's an interesting experiment and we'll see what comes with that.

We're interested in experimenting with prizes this year. Our thoughts on that are less developed in exactly what form it's going to take. And then the other big thing we want to do some of this year is just trying to get people to launch new projects. We've written down this list of 35–40 things that we are interested in seeing people do. As examples, and as a way to get people started. So we are trying to connect founders with those projects and see where that goes. Open Phil does some of that too, so I don't know if that's incredibly different. But it's a bit different in that we have a longer list, it's all in one place, and we’re doing a concentrated push around it.

So we'll see how these things go. And if one of these things is great, maybe we'll do a lot more of it. If none of them are great, then maybe we'll be a whole lot like Open Phil. [laughs]

Oh, I have one other way in which we are interestingly different. We don't have Program Officers that are dedicated to specific things right now. We may and probably will in the future, but right now we are testing mechanisms more than we are saying "we are hiring a Program Officer that does AI alignment, and we are going to make AI alignment grants under this following structure for a while". It's more like we are testing these mechanisms and people can participate through these mechanisms in many of our areas of interest, or anything else that they think is particularly relevant to the long-term future that we might have overlooked.

Future Matters: Where do you imagine the Future Fund being in ten years' time? Given what you have said, there's a lot of uncertainty here, contingent on the results of these experiments. But can you see the outlines of how things may look a decade from now?

Nick Beckstead: I think I can't, really. And it probably depends more on how these experiments go. We'll hopefully know a lot more about how it's gone for these different types of grantmaking. One answer is that if one of these things is great, we'll do a bunch more of it. But I think it's too early to say.

Future Matters: And this experimentation over the next year, is there a hard limit there or could you imagine keeping this exploratory phase for many years?

Nick Beckstead: Good question. I think I am mostly just thinking about the next year, and then we'll see where it goes. I think there will be some interesting tests. I'm excited about that: the intention is really to learn as much as possible, so we are optimizing around that. And for that reason, there's a lot of openness.

Future Matters: Thanks, Nick!


Thanks to Thomas Moynihan for discovering the epigram quote and to Tereza Flídrová for help with logo design.

Comments (20)

Thanks for making this a podcast as well as written! Do you expect to get it on other platforms than Spotify?

Thanks. Yes, we have already submitted it to Apple Podcasts and Google Podcasts, but it's currently pending approval.

Awesome! Also, the whole thing looks great (which was hopefully implied by my excitement at an audio form, but seems worth actually stating).

I couldn't find the podcast on CastBox (which I use). I see that it's the 15th most popular medium for listening to podcasts in the US in 2019-20.

According to the data I was using, it seems Pandora and Audible are the objectively best mediums to target next with the podcast, but I have a vested interest in you allowing me to listen to it on the app I use.

The podcast should be on CastBox now.

Thanks. I will take a look at this later today.

Incidentally, the podcast is now available on Google Podcasts. We are still trying to get Apple Podcasts (whose website seems kind of broken) to accept it.

Have you looked at submitting it to Youtube as well? Tbh I think that's the best podcast medium - seeing someone's expression as they talk is worth a lot.

Alas, for the time being at least the audio version of our podcast uses TTS, so there are no expressions to be seen.

I use Pocket Casts and I couldn't find it there. Apparently one can submit to their database here. Also: Is there an RSS feed for the podcast?

I just added it to Pocket Casts. You should now be able to find it.

You can find an RSS feed, as well as links to other podcast platforms, here. (Note that the podcast may not yet be available on some platforms, but will soon.)

Thanks for putting this together! This is a really helpful summary of recent research, projects and job openings related to longtermism - it can be tough to track all these things otherwise. Looking forward to future issues.

Some feedback on the audio version: I would try using multiple TTS voices, e.g. for different sections. I would especially recommend reading the interviewer and Nick Beckstead's lines in different voices in cases like the interview.

I love that you chose the "meme voice", but I also found it somewhat distracting at the beginning. (A TTS voice that sounds like this one is used in many internet memes.)

Thanks for the feedback. Our plan was to actually use two different voices, and alternate between the two for the research summaries, as well as using different voices for the interview questions and answers. The only reason that we didn't end up doing this is that, annoyingly, the app we use to generate the voices doesn't let you mix British and American voices, and there's only one male British voice and one male American voice we liked (our assumption has been that we should use male voices since we are male ourselves, although on reflection this doesn't look so obvious). How would you feel about mixing male and female voices? In particular, how would you feel about using a female voice for the questions we ask the person being interviewed, when that person is male?

I hadn't realized that our chosen voice was a "meme voice" (probably because I'm not a fan of memes). How do you like the voice used by the Nonlinear Library? We could use that one instead.

Very cool! I'm super happy that this exists, and I'm excited by this first issue. On the constructive criticism side, I think this is too long for a newsletter. I think it's unlikely that I fully read future editions, and if they're all this long, I might unsubscribe at some point. So consider this one vote for trying to make the newsletter shorter :)

We've decided to shorten the Substack version by excluding the interview. So if you subscribe on Substack, the issue will be approximately 2.5k words in length. The EA Forum version will include the interview and will be approximately 4k words in length.

That seems like a positive adjustment from my perspective! I think the interviews are valuable content, so I'd still encourage you to add a link to the interview, with the name and topic of the person you interviewed. That way interested Substack readers will still see it.

Thanks. Yes, that is exactly our plan.

+1. Fwiw, I was going to subscribe and then didn't when I saw how long it was.

Thanks, this is useful feedback.

One option is to publish the newsletter more often, rather than include less content. We haven't considered that but it's something we could think about more.

Looks awesome – just subscribed to the Substack and Spotify feeds! 😎
