
Full question:

Where are the long-termist theories of victory / impact? If there are some loitering around behind private Google Docs but not shared more widely, what's driving this?

Why I'm asking this:

  • I think it's one thing to say "the long-run matters", quite another to say "the long-run matters, and we can probably affect it", and quite another again to say "the long-run matters, we have worked-up ways to impact it and - more importantly - to know if we're failing so we can divest / divert energy elsewhere".
  • I feel the above sums up a lot of the tension between long-termists and people more sceptical; they're driven by different assumptions about how easily the long-term can be positively affected and how to affect it, and also by different viewpoints on moral uncertainty in both the near- and long-term. (And other things too, but these are top-of-the-head examples.) For example...:
    • a long-termist might be more sceptical about the durability of many near-term interventions, thereby opting for much longer-term planning and action; 
    • whereas a near-term advocate will believe that near-term interventions and strategies both cash out and can last long enough to affect the long-term
  • I'm a fence-sitter on the near- vs. long-termism question; I think this is epistemically the most sensible place for me to be. Why? I'm frustrated by the number of near-term interventions, particularly in global health and development, which prove less durable in the long-run, and therefore I think there's value in taking a longer time-range. But I also think there are many epistemic and accountability risks endemic to long-termism, such as how easy it is to pitch for results you'll (probably) never be around to see or be accountable for; I notice this thinking flaw in myself when I think about long-term interventions.
  • I think it's even more morally incumbent on advocates of long-termism to put forward more concrete theories of impact / victory for a few reasons:
    • Near-term advocates' work will be tested / scrutinised by others simply by coming to fruition within their lifetimes / careers; therefore, there is a feedback loop, a holding to account, and - where necessary - epistemic updating and disinvestment in things that don't work.
    • With long-termism, there's no small risk of leaving value on the table - not just value right now, but value that could endure; in fact, I know this makes many long-termists question whether they've chosen the right path, and this is good!
    • I think if you're advocating for long-termism (or any cause area), you kind of owe it to the people being asked to change their careers / donations to give them weightier reasons and mechanisms for assessing whether / when they should change their minds.
      • I agree that criticisms that we have a culture of deference within EA are fair; particularly when contrasted with the rationalists, where there's more emphasis on developing intellectual skills so that you can understand and question what you're hearing from others in the tribe. And let's be honest - there are benefits to splitting responsibilities between those who set direction and those who row there. But I do think the rowers deserve a bit more assurance they ain't headed for the doldrums of negative utility.
        • I notice when the deference pull happens to me. I was accepting theoretical arguments for long-termism based on some notable (but self-selecting) examples, such as how cultural values around meat consumption have shaped animal suffering for thousands of years. But I was still not listening to the part of my brain saying "but how easy is it to apply these lessons in reality to make the purported impact?"
      • I can't be the only one feeling this tension, wanting to scrutinise things in more detail but not having time outside of my work and personal obligations to do so. But it feels like the community is drowning in chat of "this matters so much and we can do things about it, so you should do something about it", with a lot less of "here's how testable / scrutinisable these interventions are, so you can make informed decisions". Maybe this will change when Will's book is pumping on the airwaves a bit less, idk...
  • What could these ToI / ToV look like?
    • As someone who's done lots of ToI / ToV work before, I think it would be sensible to start with slim, narrow causes and build out from there - ideally selecting causes with some structural similarities to others and some existing real-world evidence, such as x-risk-focused pandemic preparedness. But I'd likely choose an even smaller sliver within that and work on it in detail; something like 1-3 specific biosecurity interventions which could be implemented now or in the near-term.
    • I think these ToI / ToV could be for narrow and broad long-termism, and individual long-termish cause areas within that, such as improving long-term institutional design and decision making. 
    • Arguably, broad long-termism should have even more considered ToV / ToI given how fuzzy an idea it is, and how liable our world is to unintended consequences / counterproductive backlash when it comes to things like inculcating cultural values.
  • Would I be willing to do some of the thinking on this?
    • Eh, maybe... I ain't a full-time advocate, and therefore less likely to be the person putting forward long-termist ToI / ToV, but I could be a red-teamer.

What we already have:

We've seen some theories of impact / victory written up in varying levels of detail...:

  • on farmed animal welfare here
  • improving institutional decision-making here
  • AI governance here and here, arguably with more of a near-term perspective insofar as when the proof of impact is expected to arrive
  • EA community building / meta-EA here
  • general ToV building across many cause areas, or inviting others to do the same here
  • and advice about world-view investigations, which reads a little like a 'how-to' guide, over here
  • "What We Owe the Future" (or at least the pre-published versions I read of it) is notably not about putting forward ToVs, but rather arguing that the long-run can be affected and putting forward certain plausible mechanisms on the macro level (e.g. influencing future values) and on the meso level (e.g. citizens assemblies and scaling up participatory democracy like done in Taiwan).

Answers:

I guess the AI governance piece you link would be considered very pressing under longtermism. So I think that's a big answer to your question. 

Comments:

To clarify, are you asking for a theory of victory around advocating for longtermism (i.e., what is the path to impact for shifting minds around longtermism) or for causes that are currently considered good from a longtermist perspective?

Three things:

1. I'm mostly asking for any theories of victory pertaining to causes which support a long-termist vision / end-goal, such as eliminating AI risk. 

2. But I'm also interested in a theory of victory / impact for long-termism itself, in which multiple causes interact. For example, if 

  • long-termism goal = reduce all x-risk and develop technology to end suffering, enable flourishing + colonise the stars

then the components of a theory of victory / impact could be...:

  • reduce x-risk pertaining to AI, bio, and others
  • research / understanding around enabling flourishing / reducing suffering
  • stimulate innovation
  • think through governance systems to ensure the technologies / research above are used for good / not evil

3. Definitely not 'advocating for longtermism' as an end in itself, but I can imagine that advocacy could be part of a wider theory of victory. For example, one could postulate that reducing x-risk would require mobilising considerable private / public sector resources, which in turn requires winning hearts and minds around both how scarily probable x-risk is and the bigger goal of giving our descendants beautiful futures / leaving a legacy.

I also would be curious to see some Google Docs about longtermist strategy, as I've only ever seen whiteboard drawings at various conferences/events (and maybe a few other written ideas in various places).

However, I'm a bit confused by this:

But I also think there are many epistemic and accountability risks endemic to long-termism, such as how easy it is to pitch for results you'll (probably) never be around to see or be accountable for; I notice this thinking flaw in myself when I think about long-term interventions.

I may not be fully understanding your hesitance here, but if your point is saying something like "you may not be held accountable for your long-term impacts and thus are more likely to make mistakes/be biased," I feel like you're potentially really overweighting this issue:

  1. How often do people get held directly/accurately accountable for their near-term actions even from a near-term lens? People can write bad-but-persuasive arguments for short-term actions and reap status or other rewards. This is especially problematic when the difficulty of verifying the quality of analysis is really high (e.g., when you can't run RCTs or rely on engineering models/simulations). Longtermism's problem here doesn't seem that unique.
  2. Near-term interventions can also prove to be relatively unimportant from a long-term lens (e.g., saving lives from malaria but those people or their children die anyway in 60 years due to some x-risk that you neglected). Who holds these near-term interventions accountable in the long-term?
  3. You can definitely be held accountable or feel guilty if it becomes apparent in the near-term that your arguments/proposals will actually be bad in the long-term. I don't particularly support their claims, but people like Kerry Vaughan have been attacking OpenPhil/EA for funding OpenAI due to Kerry's perception that it was a bad choice.
  4. Long-termism can probably still just bite the bullet here even if you mostly dismiss the previous points: sure, it might theoretically be harder to be confident that what you are doing is good for the long-term specifically due to this "lack of accountability" argument, but even if that cuts your expected value by 50%, 75%, or even 90%, the expected value of x-risk reduction is still incredibly massive. So, maybe you think "After accounting for my 'lack-of-accountability' biases, this action actually only has an expected net effect of 0.001% x-risk reduction (rather than 0.01% as I initially thought)," but that expected value would still be massive (see the rough numeric sketch after this list).
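
To make the arithmetic in point 4 concrete, here is a minimal back-of-the-envelope sketch. The 0.01% / 0.001% risk-reduction figures come from the point above; the future-value and near-term-baseline constants are purely illustrative assumptions, not claims from this thread:

```python
# Back-of-the-envelope EV comparison. All constants are illustrative assumptions.
FUTURE_VALUE = 1e16          # assumed value of the long-term future, in life-equivalents
NEAR_TERM_BASELINE_EV = 1e6  # assumed expected lives saved by a strong near-term portfolio

for risk_reduction in (0.01 / 100, 0.001 / 100):  # 0.01% and 0.001% absolute x-risk reduction
    ev = risk_reduction * FUTURE_VALUE
    print(f"x-risk reduction of {risk_reduction:.5%} -> EV ~ {ev:,.0f} life-equivalents")

# Even the 10x-discounted estimate (0.001%) yields ~1e11 life-equivalents,
# which still dwarfs the assumed near-term baseline of 1e6.
```

Under these (made-up) numbers, applying the bias discount moves the estimate from roughly 1e12 to roughly 1e11 life-equivalents, which is the sense in which the expected value "would still be massive".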

Agree there's something to your counterarguments 1-3, but I find the fourth less convincing, maybe more because of semantics than actual substantive disagreement. Why? A difference in net effect of 0.01% vs. 0.001% x-risk reduction is pretty massive. Such differences especially matter when the expected value is massive, because sometimes the same expected value holds across multiple areas. For example, preventing asteroid-related x-risk vs. AI vs. bio vs. runaway climate change: (by definition) all the same EV (if you take the arguments at face value), but the plausibility of each approach, and of the individual interventions within it, would be pretty high-variance.

A difference in net effect of 0.01% vs. 0.001% x-risk reduction is pretty massive. Such differences especially matter when the expected value is massive, because sometimes the same expected value holds across multiple areas. For example, preventing asteroid-related x-risk vs. AI vs. bio vs. runaway climate change: (by definition) all the same EV (if you take the arguments at face value), but the plausibility of each approach, and of the individual interventions within it, would be pretty high-variance.

I'm a bit uncertain as to what you are arguing/disputing here. To clarify on my end, my 4th point was mainly just saying "when comparing long-termist vs. near-termist causes, the concern over 'epistemic and accountability risks endemic to long-termism' seems relatively unimportant given [my previous 3 points and/or] the orders of magnitude of difference in expected value between near-termism and long-termism."

Your new comment seems to be saying that an order-of-magnitude uncertainty factor is important when comparing cause areas within long-termism, rather than when comparing between overall long-termism and overall near-termism. I will briefly respond to that claim in the next paragraph, but if your new comment is actually still arguing your original point that the potential for bias is concerning enough that it makes the expected value of long-termism less than or just roughly equal to that of near-termism, I'm confused how you came to that conclusion. Could you clarify which argument you are now trying to make?

Regarding the inter-long-termism comparisons, I'll just say one thing for now: some cause areas still seem significantly less important than other areas. For example, it might make sense for you to focus on x-risks from asteroids or other cosmic events if you have decades of experience in astrophysics and the field is currently undersaturated (although if you are a talented scientist it might make sense for you to offer some intellectual support to AI, bio, or even climate). However, the x-risk from asteroids is many orders of magnitude smaller than that from AI and probably even biological threats. Thus, even an uncertainty factor that for some reason reduces your estimate of expected x-risk reduction via AI or bio work by a factor of 10 (e.g., from 0.001% to 0.0001%), without also affecting your estimate of expected x-risk reduction via work on asteroid safety, will probably not have much effect on the direction of the inequality (i.e., if x exceeds y by orders of magnitude, then 0.1x > y still holds).
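
And a tiny sketch of the inequality in that last parenthetical. The structure (a 10x discount applied only to the larger term doesn't flip the comparison) is the point above; the specific risk-reduction magnitudes are assumptions chosen purely for illustration:

```python
# Inter-cause comparison; the magnitudes below are illustrative assumptions.
ai_reduction = 0.001 / 100            # x: assumed x-risk reduction from AI/bio work (0.001%)
asteroid_reduction = 0.000001 / 100   # y: assumed x-risk reduction from asteroid work, orders of magnitude smaller

discount = 0.1  # a 10x "lack-of-accountability" uncertainty factor applied only to the AI/bio estimate

print(ai_reduction > asteroid_reduction)             # x > y    -> True
print(discount * ai_reduction > asteroid_reduction)  # 0.1x > y -> still True
```

The direction of the inequality would only flip if the discount factor approached the full gap between the two estimates, which here is several orders of magnitude rather than 10x.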
