most people intrinsically value diversity of experience, and see a large number of very similar lives as less of a good thing.
Especially in such a contentious argument, I think it's bad epistemics to link to a page where some random dude says he personally believes x (giving no argument for it) with the link text 'most people believe x'.
This doesn’t seem so different from p-zombies, and probably some moral thought experiments.
I'm not sure what you mean here. That the simulation argument doesn't seem different from those? Or that the argument that 'we have no evidence of their existence and therefore shouldn't update on speculation about them' is comparable to what I'm saying about the simulation hypothesis?
If the latter, fwiw, I feel the same way about p-zombies and (other) thought experiments. They're a terrible methodology for reasoning about anything: very occasionally the only option we can think of, but one philosophers don't feel nearly enough urgency about finding alternatives to.
Our understanding of physics in our universe could still be about as reliable (depending on the simulation), and so would anything that follows from it. There's just more stuff outside our universe.
I don't see how this would allow us to update on anything based on speculation about the 'more stuff'. Yeah, we might choose to presume our pocket simulation will continue to behave as it has, but we don't then get to say 'there's some class of matter other than our own simulated matter which generates consciousness, therefore consciousness is substrate independent'.
As you say in your other comment, there's probably some minimal level of substrate independence that non-solipsists have to accept, but that turns it into an empirical question (as it should be) - so an imagined metaverse gives us no reason to change our view on how substrate independent consciousness is.
in doing so, you're throwing away too much of the causal structure that apparently explains our beliefs and makes them reliable
This seems like an argument from sadness. What we would lose by imagining some outcomes shouldn't affect our overall epistemics.
I think assuming that this is purely based on optics is unwarranted. As I argued at the time, talk of 'optics' is kind of insulting to the everyperson, carrying the implication that the irrational public will misunderstand the +EV of such a decision. Whereas I contend that there's a perfectly rational Bayesian update people should make towards an organisation being poorly run, or even corrupt, when that org spends large sums of money on vanity projects which it justifies with a vague claim about having done some CBA it doesn't want to share.
Meanwhile, there's no guarantee EA will have fresh billionaires any time soon, so even if it takes a couple of years to sell, it might be worth it, given that a) there are far cheaper-to-run alternative venues like Lightcone and CEEALAR, and b) just recouping the sticker price would fund multiple cash-strapped EA orgs for several years.
- We may ourselves be simulated in a similar way without knowing it, if our entire reality is also simulated. We wouldn't necessarily have access to what the simulation is run on.
It seems weird to meaningfully update in favour of some concrete view on the basis that something might be true when, by hypothesis, we could never access evidence about it.
Is there an online version of the case for the fading qualia argument? This feels a bit abstract without it...
Partly from a scepticism about the highly speculative arguments for 'direct' longtermist work - on which I think my prior is substantially lower than most of the longtermist community's (though I strongly suspect selection effects, and that this scepticism would be relatively broadly shared further from the core of the movement).
Partly from something harder to pin down: good outcomes do tend to cluster in a way that e.g. GiveWell seem to recognise, but AFAIK have never really tried to account for (in late 2022, they were still citing that post while saying 'we basically ignore these'). So if we're trying to imagine the whole picture, we need to have some kind of priors anyway.* Mine are some combination of considerations like
Hey Johannes :)
To be clear, I think the original post is uncontroversially right that it's very unlikely that the best intervention for A is also the best intervention for B. My claim is that, when something is well evidenced to be optimal for A and perhaps well evidenced to be high tier for B, you should have a relatively high prior that it's going to be high tier or even optimal for some related concern C.
Where you have actual evidence available for how effective various interventions are for C, this prior is largely irrelevant - you look at the evidence in the normal way. But when all interventions targeting C are highly speculative (as they universally are for longtermism), that prior seems to have much more weight.
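To make the point concrete, here's a toy odds-form Bayes calculation (the numbers are made up purely for illustration): when evidence for an intervention's effect on C is strong, the posterior is driven by the likelihood ratio and the prior washes out; when all we have is speculation, the likelihood ratio is near 1 and the posterior basically *is* the prior.

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Odds-form Bayes: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Good evidence available (likelihood ratio 20): priors of 0.5 vs 0.1
# both get pulled strongly towards the evidence.
print(posterior(0.5, 20))   # ~0.95
print(posterior(0.1, 20))   # ~0.69

# Purely speculative interventions (likelihood ratio ~1.2):
# the posterior barely moves from the prior, so the prior dominates.
print(posterior(0.5, 1.2))  # ~0.55
print(posterior(0.1, 1.2))  # ~0.12
```

So the practical upshot is just that the choice of prior matters far more in the speculative regime than in the well-evidenced one.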
This. I'm imagining some Abrodolph Lincoler-esque character - Abronard Willter, maybe - putting me in a brazen bull and cooing 'Don't worry, this will all be over soon. I'm going to create 10 billion more of you, also on racks, and the fact that I continue to torture you personally will barely matter.'