
Update: The video with Toby's answers is available on CEA's YouTube channel.

Note: Aaron Gertler, a Forum moderator, is posting this with Toby's account. (That's why the post is written in the third person.)

 

This is a Virtual EA Global AMA: several people will be posting AMAs on the Forum, then recording their answers in videos that will be broadcast at the Virtual EA Global event this weekend.

Please post your questions by 10:00 am PDT on March 18th (Wednesday) if you can. That's when Toby plans to record his video. 

 

About Toby

Toby Ord is a moral philosopher focusing on the big picture questions facing humanity. What are the most important issues of our time? How can we best address them?

His earlier work explored the ethics of global health and global poverty, which led him to create Giving What We Can, whose members have pledged hundreds of millions of pounds to the most effective charities helping to improve the world. He also co-founded the wider effective altruism movement.

His current research is on avoiding the threat of human extinction, which he considers to be among the most pressing and neglected issues we face. He has advised the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, the UK Prime Minister’s Office, Cabinet Office, and Government Office for Science. His work has been featured more than a hundred times in the national and international media.

Toby's new book, The Precipice, is now available for purchase in the UK and pre-order in other countries. You can learn more about the book here.

Comments (82)
[anonymous] · 4y

How likely do you think we would be to recover from a catastrophe killing 50%/90%/99% of the world population respectively?

SiebeRozendal · 4y
Given the high uncertainty of this question, would you (Toby) consider giving imprecise credences?
[anonymous] · 4y

Does it worry you that there are very few published peer-reviewed treatments of why AGI risk should be taken seriously that are relevant to current machine learning technology?

What would convince you that preventing s-risks is a bigger priority than preventing x-risks?

Suppose that humanity unified to pursue a common goal, and you faced a gamble where that goal would be the most morally valuable goal with probability p, and the most morally disvaluable goal with probability 1-p. Given your current beliefs about those goals, at what value of p would you prefer this gamble over extinction?

NunoSempere · 4y
I like how you operationalized the second question.

The timing of this AMA is pretty awkward, since many people will presumably not have access to the book or will not have finished reading the book. For comparison, Stuart Russell's new book was published in October, and the AMA was in December, which seems like a much more comfortable length of time for people to process the book. Personally, I will probably have a lot of questions once I read the book, and I also don't want to waste Toby's time by asking questions that will be answered in the book. Is there any way to delay the AMA or hold a second one at a later date?

Thanks for the comment! Toby is going to do a written AMA on the Forum later in the year too. This one is timed so that we can have video answers during Virtual EA Global.

Linch · 4y
Strongly concur, as someone who preordered the book and is excited to read it.
[anonymous] · 4y

What is your solution to Pascal's Mugging?

What's a regular disagreement that you have with other researchers at FHI? What's your take on it and why do you think the other people are wrong? ;-)

We're currently in a time of global crisis, as the number of people infected by the coronavirus continues to grow exponentially in many countries. This is a bit of a hard question, but a time of crisis is often when governments substantially refactor things, because it's finally transparent that they're not working. So: can you name a feasible, concrete change in the UK government (or a broader policy for any developed government) that you think would put us in a far better position for future such situations, especially future pandemics that have a much more serious chance of being an existential catastrophe?

In an 80,000 Hours interview, Tyler Cowen states:

[44:06]
I don't think we'll ever leave the galaxy or maybe not even the solar system.
. . .
[44:27]
I see the recurrence of war in human history so frequently, and I’m not completely convinced by Steven Pinker [author of The Better Angels of Our Nature, which argues that human violence is declining]. I agree with Steven Pinker, that the chance of a very violent war indeed has gone down and is going down, maybe every year, but the tail risk is still there. And if you let the clock tick out for a long enough period of time, at some point it will happen.
Powerful abilities to manipulate energy also mean powerful weapons, eventually powerful weapons in decentralized hands. I don’t think we know how stable that process is, but again, let the clock tick out, and you should be very worried.

How likely do you think it is that humans (or post-humans) will get to a point where existential risk becomes extremely low? Have you looked into the question of whether interstellar colonization will be possible in the future, and if so, do you broadly agree with Nick Beckstead's conclusion in this piece? Do you think Cowen... (read more)

MichaelStJules · 4y
This math problem is relevant, although maybe the assumptions aren't realistic. Basically, under certain assumptions, either our population has to increase without bound, or we go extinct.

EDIT: The main assumption is effectively that extinction risk is bounded below by a constant that depends only on the current population size, and not on the time (when the generation happens). But you could imagine that even for a stable population size, this risk could be decreased asymptotically to 0 over time. I think that's basically the only other way out. So, either:

1. We go extinct,
2. Our population increases without bound, or
3. We decrease extinction risk towards 0 in the long run.

Of course, extinction could still take a long time, and a lot of (dis)value could happen before then. This result isn't so interesting if we think extinction is almost guaranteed anyway, due to heat death, etc.
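[Editor's note: a minimal sketch to make the dichotomy concrete. This is an illustration under assumed numbers, not anything from the comment or the textbook; the function name and the 0.1% figure are purely illustrative.]

```python
# Compare long-run survival probability when per-generation extinction risk has a
# constant floor versus when it is driven toward zero over time (case 1 vs. case 3).

def survival_probability(risks):
    """Probability of surviving every generation, given per-generation risks p_i."""
    prob = 1.0
    for p in risks:
        prob *= (1.0 - p)
    return prob

generations = 10_000

# Case 1: risk bounded below by a constant (0.1% per generation).
constant_floor = [0.001] * generations

# Case 3: risk pushed toward zero (p_i = 0.001 / i^2), so sum(p_i) converges.
decaying = [0.001 / (i * i) for i in range(1, generations + 1)]

print(survival_probability(constant_floor))  # ~4.5e-05, and heading to 0 as generations grow
print(survival_probability(decaying))        # ~0.998, bounded away from 0
```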
Pablo · 4y
Source for the screenshot: Samuel Karlin & Howard E. Taylor, A First Course in Stochastic Processes, 2nd ed., New York: Academic Press, 1975.
Misha_Yagudin · 4y
re: 3 — to be more precise, one can show that $\prod_i (1 - p_i) > 0$ iff $\sum_i p_i < \infty$, where $p_i \in [0, 1)$ is the probability of extinction in a given year.
MichaelStJules · 4y
Should that be $\sum_i \log(1 - p_i) > -\infty$? Just taking logarithms.
Misha_Yagudin · 4y
This is a valid convergence test. But I think it's easier to reason about $\sum_i p_i < \infty$. See math.SE for a proof.
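[Editor's note: for readers following along, a sketch of why the two tests agree; this spells out the standard argument rather than quoting the math.SE answer.]

$$\prod_i (1 - p_i) > 0 \;\iff\; \sum_i \log(1 - p_i) > -\infty \;\iff\; \sum_i p_i < \infty,$$

where the last equivalence uses limit comparison with $-\log(1 - p) \sim p$ as $p \to 0$ (and if $p_i \not\to 0$, both series diverge anyway). For instance, $p_i = \epsilon / i^2$ leaves a positive probability of never going extinct, while any constant floor $p_i \ge \epsilon > 0$ forces extinction with probability 1.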
ishi · 4y
I've seen and liked that book. But I don't think there really is enough information about these risks (e.g. Earth being hit by a comet or meteor that kills everything) to say much. Maybe if cosmology or other fields make major advances one could say something, but that might take centuries.

What do you think is the biggest mistake that the EA community is currently making?

[anonymous] · 4y

Is your view that:

(i) the main thing that matters for the long term is whether we get to the stars,

(ii) this could plausibly happen in the next few centuries, and

(iii) therefore the main long-termist relevance of our actions is whether we survive the next few centuries and can make it to the stars?

Or do you put some weight on the view that long-term human and post-human flourishing on Earth could also account for >1% of the total plausible potential of our actions?

Do you think that "a panel of superforecasters, after being exposed to all the arguments [about existential risk], would be closer to [MacAskill's] view [about the level of risk this century] than to the median FHI view"? If so, should we defer to such a panel out of epistemic modesty?

Davidmanheim · 4y
I personally, writing as a superforecaster, think that this isn't particularly useful. Superforecasters tend to be really good at evaluating and updating based on concrete evidence, but I'm far less sure about whether their ability to evaluate arguments is any better than that of a similarly educated / intelligent group. I do think that FHI is a weird test case, however, because it is selecting on the outcome variable - people who think existential risks are urgent are actively trying to work there. I'd prefer to look at, say, the views of a group of undergraduates after taking a course on existential risk. (And this seems like an easy thing to check, given that there are such courses ongoing.)
MichaelStJules · 4y
Do you have references/numbers for these views you can include here?

What have you changed your mind on recently?

There are many ways that technological development and economic growth could potentially affect the long-term future, including:

  • Hastening the development of technologies that create existential risk (see here)
  • Hastening the development of technologies that mitigate existential risk (see here)
  • Broadly empowering humanity (see here)
  • Improving human values (see here and here)
  • Reducing the chance of international armed conflict (see here)
  • Improving international cooperation (see the climate change mitigation debate)
  • Shifting the growth curve forward (see here)
  • Hastening the colonization of the accessible universe (see here and here)

What do you think is the overall sign of economic growth? Is it different for developing and developed countries?

Note: The fifth bullet point was added after Toby recorded his answers.

If you could only convey one idea from your new book to people who are already heavily involved in longtermism, what would it be?

Can you tell us a specific insight about AI that has made you positively update on the likelihood that we can align superintelligence? And a negative one?

What are the three most interesting ideas you've heard in the last three years? (They don't have to be the most important, just the most surprising/brilliant/unexpected/etc.)

[anonymous] · 4y

Do you think we will ever have a unified and satisfying theory of how to respond to moral uncertainty, given the huge structural and substantive differences between apparently plausible moral theories? Will MacAskill's thesis is one of the best treatments of this problem, and it seems like it would be hard to build an account of how one ought to respond to e.g. Rawlsianism, totalism, libertarianism, person-affecting views, absolutist rights-based theories, and so on, across most choice situations.

What do you think is the strongest argument against working to improve the long-term future? What do you think is the strongest argument against working to reduce existential risk?

Can you describe what you think it would look like 5 years from now if we were in a world that was making substantially good steps to deal with the existential threat of misaligned artificial general intelligence?

Should non-suffering focused altruists cooperate with suffering-focused altruists by giving more weight to suffering than they otherwise would given their worldview (or given their worldview adjusted for moral uncertainty)?

Do you think there are any actions that would obviously decrease existential risk? (I took this question from here.) If not, does this significantly reduce the expected value of work to reduce existential risk or is it just something that people have to be careful about (similar to limited feedback loops, information hazards, unilateralist's curse etc.)?

If you could convince a dozen of the world's best philosophers (who aren't already doing EA-aligned research) to work on topics of your choice, which questions would you ask them to investigate?

Are there any specific natural existential risks that are significant enough that more than 1% of EA resources should be devoted to them? 0.1%? 0.01%?

MichaelA · 4y
Good question! Just a thought: Assuming this question is intended to essentially be about natural vs anthropogenic risks, rather than also comparing against other things like animal welfare and global poverty, it might be simplest to ask instead: "Are there any specific natural existential risks that are significant enough that more than 1% of longtermist [or "existential risk focused"] resources should be devoted to them? 0.1%? 0.01%?"

Can you tell us something funny that Nick Bostrom once said that made you laugh? We know he used to do standup in London...

On balance, what do you think is the probability that we are at or close to a hinge of history (either right now, this decade, or this century)?

What are the most important new ideas in your book for someone who's already been in the EA movement for quite a while?

You break down a "grand strategy for humanity" into reaching existential security, the long reflection, and then actually achieving our potential. I like this, and think it would be a good strategy for most risks.

But do you worry that we might not get a chance for a long reflection before having to "lock in" certain things to reach existential security?

For example, perhaps to reach existential security given a vulnerable world, we put in place "greatly amplified capacities for preventive policing and global governance" (Bostr... (read more)

Rhyss · 4y
I had a similar question myself. It seems like believing in a "long reflection" period requires denying that there will be a human-aligned AGI. My understanding would have been that once a human-aligned AGI is developed, there would not be much need for human reflection—and whatever human reflection did take place could be accelerated through interactions with the superintelligence, and would therefore not be "long." I would have thought, then, that most of the reflection on our values would need to have been completed before the creation of an AGI. From what I've read of The Precipice, there is no explanation for how a long reflection is compatible with the creation of a human-aligned AGI.
[anonymous] · 4y

What are your top three productivity tips?

Do you think that climate change has been neglected in the EA movement? What are some options that seem great to you at the moment for having a very large impact in steering us in a better direction on climate change?

We have a lot of philosophers and philosophically-minded people in EA, but only a tiny number of them are working on philosophical issues related to AI safety. Yet from my perspective as an AI safety researcher, it feels like there are some crucial questions which we need good philosophy to answer (many listed here; I'm particularly thinking about philosophy of mind and agency as applied to AI, a la Dennett). How do you think this funnel could be improved?

What's a book you've read that has shaped how you think or who you are, and that you expect most people here won't have read?

Can you describe a typical day in your life with sufficient granularity that readers can have a sense of what "being a researcher at a place like FHI" is like?

What's up with Pascal's Mugging? Why hasn't this pesky problem just been authoritatively solved? (And if it has, what's the solution?) What is your preferred answer? Which bullets do you bite (e.g., a bounded utility function, assigning probability 0 to some events, a decision-theoretic cop-out, etc.)?

Which ethical views do you have non-negligible credence in and, if true, would substantially change what you think ought to be prioritized, and how? How much credence do you have in these views?

Suppose your life's work ended up having negative impact. What is the most likely scenario under which this could happen?

As a sharp mind, respected scholar, and prominent member of the EA community, you have a certain degree of agency, an ability to start new projects and make things happen, and no small amount of oomph and mojo. How are you planning to use this agency in the coming decades?

NunoSempere · 4y
This is a genuine question. The framing is that if Toby Ord wants to get in touch with a high ranking member of government, get an article published in a prominent newspaper, direct a large number of man hours to a project he finds worthy, etc. he probably can; just the association to Oxford will open doors in many cases. This is in opposition to a box in a basement which produces the same research he would, and some of these differences stem from him being endorsed by some prestigious organizations, and there being some social common knowledge around his person. The words "public intellectual" come to mind. I'm wondering how the powers-of-being-different-from-a-box-which-produces-research will pan out.

What's one book that you think most EAs have not yet read and you think that they should (other than your own, of course)?

What are some of your current challenges? (maybe someone in the audience can help!)

What are you looking for in a research / operations colleague?

How robust do you think the case is for any specific longtermist intervention? E.g. do new considerations constantly affect your belief in their cost-effectiveness, and by how much?

In your book, you define an existential catastrophe as "the destruction of humanity's longterm potential". Would defining it instead as "the destruction of the vast majority of the longterm potential for value in the universe" capture the concept you wish to refer to? Would it perhaps slightly more technically accurately/explicitly capture what you wish to refer to, just perhaps in a less accessible or emotionally resonating way?

I wonder this partly because you write:

It is not that I think only humans count. Instead, it is that hu
... (read more)
[anonymous] · 4y

Do you think the problems of infinite ethics give us reason to reject totalism or long-termism? If so, what is the alternative?

What are your thoughts on the argument that the track record of robustly good actions is much better than that of actions contingent on high uncertainty arguments? (See here and here at 34:38 for pushback.)

How confident are you that the solution to infinite ethics is not discounting? How confident are you that the solution to the possibility of an infinitely positive/infinitely negative world automatically taking priority is not capping the amount of value we care about at a level low enough to undermine longtermism? If you're pretty confident about both of these, do you think additional research on infinities is relatively low priority?

How much uncertainty is there in your case for existential risk? What would you put as the probability that, in 2100, the expected value of a substantial reduction in existential risk over the course of this century will be viewed by EA-minded people as highly positive? Do you think we can predict what direction future crucial considerations will point based on what direction past crucial considerations have pointed?

What do you think of applying Open Phil's outlier opportunities principle to an individual EA? Do you think that, even in the absence of instrumental considerations, an early career EA who thinks longtermism is probably correct but possibly wrong should choose a substantial chance of making a major contribution to increasing access to pain relief in the developing world over a small chance of making a major contribution to reducing GCBRs?

Is the cause area of reducing great power conflict still entirely in the research stage or is there anything that people can concretely do? (Brian Tse's EA Global talk seemed to mostly call for more research.) What do you think of greater transparency about military capabilities (click here and go to 24:13 for context) or promoting a more positive view of China (same link at 25:38 for context)? Do you think EAs should refrain from criticizing China on human rights issues (click here and search the transcript for "I noticed that over the last few ... (read more)

What are your thoughts on these questions from page 20 of the Global Priorities Institute research agenda?

How likely is it that civilisation will converge on the correct moral theory given enough time? What implications does this have for cause prioritisation in the nearer term?
How likely is it that the correct moral theory is a ‘Theory X’, a theory radically different from any yet proposed? If likely, how likely is it that civilisation will discover it, and converge on it, given enough time? While it remains unknown, how can we properly hedg
... (read more)

What are your views on the prioritization of extinction risks vs other longtermist interventions/causes?

Which interventions/causes do you think are best to support/work on according to views in which extra people with good or great lives not being born is not at all bad (or far outweighed by other considerations)? E.g. different person-affecting views, or the procreation asymmetry.

You seem fairly confident that we are at "the precipice", or "a uniquely important time in our story". This seems very plausible to me. But how long of a period are you imagining for the precipice?

The claim is much stronger if you mean something like a century than something like a few millennia. But even if the "hingey" period is a few millennia, then I imagine that us being somewhere in it could still be quite an important fact.

(This might be answered past chapter 1 of the book.)

Do you lean more towards a preferential account of value, a hedonistic one, or something else?

How do you think tradeoffs between pleasure and suffering are best grounded according to a hedonistic view? It seems like there's no objective one-size-fits-all trade-off rate, since different people could have different preferences about the same quantities of pleasure and suffering in themselves.

What new evidence would cause the biggest shifts in your priorities?

What are the three least interesting ideas you've heard in the last three years? (They don't have to be the least important, just the least surprising/brilliant/unexpected/etc.)

Ben Pace · 4y
This is such an odd question. Could produce surprising answers though, if it’s something like “the least interesting ideas that people still took seriously” or “the least interesting ideas that are still a little bit interesting”. Upvoted.
Peter Wildeford · 4y
Sometimes the obvious is still important to discuss.

Can you describe what you think it would look like 5 years from now if we were in a world that was making substantially good steps to deal with the existential threat of engineered pandemics?

What do you like to do during your free time?

There will be a lot for global society to learn from the current pandemic. Which lesson would be most useful to "push" from EA's side?

I assume this question sits in between the "best lesson to learn" and the "lesson most likely to be learned". We probably want to push a lesson that's useful to learn and that our push would actually help bring into policy.

What are your thoughts on how to evaluate or predict the impact of longtermist/x-risk interventions, or specifically of efforts to generate and spread insights on these matters? E.g., how do you think about decisions like which medium to write in, and whether to focus on generating ideas vs publicising ideas vs fundraising?

How would your views change (if at all) if you thought it was likely that there are intelligent beings elsewhere in the universe that "are responsive to moral reasons and moral argument" (quote from your book)? Or if you thought it's likely that, if humans suffer an existential catastrophe, other such beings would evolve on Earth later, with enough time to potentially colonise the stars?

Do your thoughts on these matters depend somewhat on your thoughts on moral realism vs antirealism/subjectivism?

What are some of your favourite theorems, proofs, algorithms, and data structures?

What are some directions you'd like the EA movement or some parts of the EA movement to take?

If you've read the book 'So Good They Can't Ignore You', what do you think are the most important skills to master to be a writer/philosopher like yourself?

Hi Toby! Thanks for being such a great source of inspiration for philosophy and EA. You're a great role model for me!

Some questions, feel free to pick:

1) What philosophers are your sources of inspiration and why?

(I've put my other questions in separate comments. Also, apologies for originally misspelling "Toby"!)

Ben Pace · 4y
I think your questions are great. I suggest that you leave 7 separate comments so that users can vote on the ones that they’re most interested in.
Caro · 4y
Thanks Ben! I've edited the message to have only one question per post. :-)