Arepo


This. I'm imagining some Abrodolph Lincoler-esque character - Abronard Willter, maybe - putting me in a brazen bull and cooing 'Don't worry, this will all be over soon. I'm going to create 10 billion more of you, also on a rack, and the fact that I continue to torture you personally will barely matter.'

most people intrinsically value diversity of experience, and see a large number of very similar lives as less of a good thing.

Especially in such a contentious argument, I think it's bad epistemics to link to a page with some random dude saying he personally believes x (and giving no argument for it) under the link text 'most people believe x'.

I hadn't even thought of that! Yeah, that's some pretty impressive hypocrisy.

This doesn’t seem so different from p-zombies, and probably some moral thought experiments.

I'm not sure what you mean here. That the simulation argument doesn't seem different from those? Or that the argument that 'we have no evidence of their existence and therefore shouldn't update on speculation about them' is comparable to what I'm saying about the simulation hypothesis? 

If the latter, fwiw, I feel the same way about p-zombies and (other) thought experiments. They are a terrible methodology for reasoning about anything: very occasionally they're the only option we can think of, but philosophers feel nowhere near enough urgency about finding alternatives to move to.

Our understanding of physics in our universe could still be about as reliable (depending on the simulation), and so would anything that follows from it. There's just more stuff outside our universe.

I don't see how this would allow us to update on anything based on speculation about the 'more stuff'. Yeah, we might choose to presume our pocket simulation will continue to behave as it has, but we don't then get to say 'there's some class of matter other than our own simulated matter which generates consciousness, therefore consciousness is substrate independent'.

As you say in your other comment, there's probably some minimal level of substrate independence that non-solipsists have to accept, but that turns it into an empirical question (as it should be) - so an imagined metaverse gives us no reason to change our view on how substrate independent consciousness is.

in doing so, you're throwing away too much of the causal structure that apparently explains our beliefs and makes them reliable

This seems like an argument from sadness. What we would lose by imagining some outcomes shouldn't affect our overall epistemics.


I think assuming that this is purely based on optics is unwarranted. As I argued at the time, talk of 'optics' is kind of insulting to the everyperson, carrying the implication that the irrational public will misunderstand the +EV of such a decision. Whereas I contend that there's a perfectly rational Bayesian update people should make towards an organisation being poorly run or even corrupt when that org spends large sums of money on vanity projects which they justify with a vague claim about having done some CBA that they don't want to share.
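To put rough numbers on that update (a purely illustrative sketch - the events and all probabilities here are my own assumptions, not figures from anywhere): let H be 'the org is poorly run or corrupt' and E be 'the org spends large sums on a vanity project justified only by an unshared CBA'. Bayes' theorem gives

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\]

With, say, P(H) = 0.1, P(E | H) = 0.5 and P(E | ¬H) = 0.1, that comes to 0.05 / (0.05 + 0.09) ≈ 0.36 - the everyperson more than triples their credence that something is wrong, with no irrationality anywhere in the picture.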

Meanwhile, there's no guarantee EA will have fresh billionaires any time soon, so even if it takes a couple of years to sell, it might be worth it, given that a) there are alternative, far cheaper-to-run venues like Lightcone and CEEALAR, and b) just recouping the sticker price would fund multiple cash-strapped EA orgs for several years.

  1. We may ourselves be simulated in a similar way without knowing it, if our entire reality is also simulated. We wouldn't necessarily have access to what the simulation is run on.

It seems weird to meaningfully update in favour of some concrete view on the basis that something might be true but that

  1. we have no evidence for it, and 
  2. if it is true, then everything we know about the universe is equally undermined.

Is there an online version of the case for the fading qualia argument? This feels a bit abstract without it...

Partly from a scepticism about the highly speculative arguments for 'direct' longtermist work - on which I think my prior is substantially lower than most of the longtermist community (though I strongly suspect selection effects, and that this scepticism would be relatively broadly shared further from the core of the movement).

Partly from something harder to pin down: that good outcomes do tend to cluster in a way that e.g. GiveWell seem to recognise, but AFAIK have never really tried to account for (in late 2022, they were still citing that post while saying 'we basically ignore these'). So if we're trying to imagine the whole picture, we need to have some kind of priors anyway.* Mine are some combination of considerations like

  • there are a huge number of ways in which people tend to behave more generously when they receive generosity, and it's possible the ripple effects of this are much bigger than we realise (small ripples over a wide group of people that are invisibly small per-person could still be momentous); 
  • having healthier, more economically developed people will tend to lead to more economically developed regions (I didn't find John's arguments against randomistas driving growth persuasive - e.g. IIRC it looked at the absolute effect size of randomista-driven growth without properly accounting for the relative budgets vs other interventions. Though if he is right, I might make the following arguments about short-term growth policies vs longtermism); 
    • having more economically developed countries seems better for global political stability than having fewer, so reduces the risk of global catastrophes; 
    • having more economically developed countries seems better for global resilience to catastrophe than having fewer, so reduces the magnitude of global catastrophes;
    • even 'minor' (i.e. non-extinction) global catastrophes can substantially reduce our long-term prospects, so reducing their risk and magnitude is a potentially big deal;
  • tighter feedback loops and better data mean we can learn more about incidental optimisations than we can with longtermist work, including ones we didn't know we wanted to optimise for at the time - we build up a corpus of real-world data that can be referred to whenever we think of a new consideration;
  • tighter feedback loops also mean I expect the people working on it to be more effective at what they do, and less susceptible to (being selected by or themselves being subject to) systemic biases/groupthink/motivated reasoning etc.
  • the combination of a greater evidence base and tighter feedback loops has countless other ineffable reinforcing-general-good benefits, like a greater probability of shutting down when having zero or negative effect; better signalling; greater reasoning transparency; easier measurement of Shapley values rather than counterfactuals; faster and better process refinement, etc.

Hey Johannes :)

To be clear, I think the original post is uncontroversially right that it's very unlikely that the best intervention for A is also the best intervention for B. My claim is that, when something is well evidenced to be optimal for A and perhaps well evidenced to be high tier for B, you should have a relatively high prior that it's going to be high tier or even optimal for some related concern C.

Where you have actual evidence available for how effective various interventions are for C, this prior is largely irrelevant - you look at the evidence in the normal way. But when all interventions targeting C are highly speculative (as they universally are for longtermism), that prior seems to have much more weight.
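The same point in odds form (again a sketch; H here stands for 'this intervention is high tier for C' - my notation, not anything from the original discussion):

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(E \mid H)}{P(E \mid \neg H)} \times \frac{P(H)}{P(\neg H)}
\]

When good evidence E about C exists, the likelihood ratio sits far from 1 and swamps the prior odds; when every intervention targeting C is speculative, the likelihood ratio hovers near 1 and the posterior is roughly the prior - which is why the track record on A and B does most of the work.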
