Arepo

4291 karma · Joined Sep 2014

Sequences (4): EA advertisements · Courting Virgo · EA Gather Town · Improving EA tech work

Comments: 606 · Topic contributions: 17

Space lasers don't seem as much of a threat as Jordan posits. They have to be fired from somewhere. If that's within the solar system they're targeting, then that system will still have plenty of time to see the object that's going to shoot them arriving. If they're much further out, it becomes much harder both to aim them correctly and to provide enough power to keep them focused, and the source needs to be commensurately more powerful (as in more expensive to run) and have a bigger lens, making it more visible while under construction and more vulnerable to conventional attack. Or you could just react to the huge lens by building a comparatively tiny mirror protecting the key targets in your system. Or you could build a Dyson swarm and not have any single target on which the rest of the settlement depends.

This guy estimates the maximum effective range of lasers against anything that can react (which, at a high enough tech level, includes planets) at about one light second.

Self-replicating robots don't seem like they have any particular advantage when used as a weapon over ones with more benign intent.

I don't think anyone's arguing that current technology would allow self-sufficiency. But part of the case for offworld settlements is that they very strongly incentivise technology that would.

In the medium term, an offworld colony doesn't have to be fully independent to afford a decent amount of security. If it can a) outlast a catastrophe that is global but Earth-bound (e.g. a nuclear winter or airborne pandemic) and b) get back to Earth once things are safer, it still makes your civilisation more robust.

I broadly agree with the arguments here. I also think space settlement has a robustness to its security that no other defence against GCRs does - it's trivially harder to kill everyone when more people are spread more widely than it is to kill off a handful on a single planet. Compare this to technologies designed to regulate a single atmosphere to protect against biorisk, AI safety mechanisms that operate on AGIs whose ultimate nature we still know very little about, global political institutions that could be subverted or overthrown, bunkers on a single planet, etc., all of which seem much less stable over more than a century or so.

It might be that AGI/vacuum decay/some other mechanism will always be lurking out there with the potential to destroy all life, and if so nothing will protect us - but if we're expected value maximisers (which seems to me a more reasonable strategy than any alternative), we should be fairly optimistic about scenarios where it's at least possible that we can stabilise civilisation.

If you haven't seen it, you should check out Christopher Lankhof's Security Among the Stars, which goes into depth on the case for space settlement.

You might also want to check out my recent project, which lets you explicitly model the level of security afforded by becoming multiplanetary.
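
For intuition, here's a minimal toy sketch in Python of the kind of dynamic such a model captures - the per-settlement survival probability and the independence assumption are illustrative placeholders, not outputs of the project:

```python
# Toy sketch: the probability that at least one of n settlements survives
# a given catastrophe, if each survives independently with probability s.
# Both s and the independence assumption are illustrative placeholders.
def p_at_least_one_survives(n_settlements: int, s: float) -> float:
    return 1 - (1 - s) ** n_settlements

for n in (1, 2, 5, 10):
    print(n, round(p_at_least_one_survives(n, s=0.5), 4))
# -> 0.5, 0.75, 0.9688, 0.999: spreading out rapidly shrinks the chance
#    that any single catastrophe gets everyone.
```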

I strongly agree with the first half of this post - bunkers and refuges are pretty bad as a defence against global catastrophes.

Your solution makes a lot less sense to me. It seems to have many of the same problems you're trying to avoid - it won't be pressure-tested until the world collapses. In particular, if it's an active part of a local community, that implies people will be leaving and re-entering regularly, which means any virus with a long incubation period could be inside before people know it's a problem.

Also, I feel like your whole list of questions still applies, and I have no sense of how you imagine it's going to answer them. In particular, I don't see how digging underground is going to make it better at water treatment, electricity generation etc. than the equivalent aboveground services.

Fwiw my take is that offworld bases have much better long-term prospects - they're pressure-tested every moment of every day; they perforce have meaningful isolation; the inhabitants are very strongly incentivised to develop the base to make it more sustainable as fast as possible; and once you have the technology for one, you have the technology for many, and are a long way towards developing the sort of technology necessary for a future in which we (per Nick Bostrom's Astronomical Waste essay) colonise the Virgo Supercluster.

Hey Corentin,

The calculators are intentionally silent on the welfare side, on the thought that in practice welfare is much easier to treat as a mostly independent question. That's not to say it actually is independent, and ideally I would like the output to include more information about the pathways to either extinction or an interstellar state, so that people can apply some further function to the output. I do think it's reasonable, even on a totalising view, to prioritise improving future welfare conditional on it existing while largely ignoring the question of whether it will exist - but that's not a question the calculators can help with, except inasmuch as you condition on the pathway.

Even if they gave pathways, they would be agnostic on whose welfare qualified. Personally I'm interested in maximising total valence (I have an old essay, still waiting for its conclusion, on the subject), so every sentient being's mental state 'counts', but you could use these with a different perspective in mind. Primarily empirical questions - e.g. the duration of factory farming, or animal suffering in terraformed systems - seem like they'd need their own research projects.
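
To make 'condition on the pathway' concrete, here's a minimal sketch in Python; every probability and welfare number below is a made-up placeholder, not an output of the calculators:

```python
# Hypothetical: combine pathway probabilities (the kind of output the
# calculators could give) with your own welfare estimate per pathway.
# All numbers are placeholders for illustration only.
pathway_probs = {"extinction": 0.2, "earthbound": 0.7, "interstellar": 0.1}
welfare_given_pathway = {"extinction": 0.0, "earthbound": 1e2, "interstellar": 1e6}

expected_welfare = sum(p * welfare_given_pathway[k] for k, p in pathway_probs.items())
print(expected_welfare)  # 0.2*0 + 0.7*100 + 0.1*1e6 = 100070.0
```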

Answer by Arepo · Apr 09, 2024

I don’t feel so comfortable talking to community health at the moment.

Can you say why? That seems like the obvious first step, so it would be easier to offer a useful alternative if you could share some part of your hesitation. I don't know if it would feel any safer to message a stranger, but feel free to DM me your concerns if you prefer (or you can email me if you don't want them stored on the EA forum). I'm not a support professional, but I maybe have enough detachment from, but also skin in, the EA community to help you figure out a next step.

Fwiw I've never meaningfully interacted with the community health team, but everyone I know who has interacted with them found the experience was handled extremely sensitively. You certainly won't be the first person to have had bad experiences with a community manager (and, not to place this on you as a burden, there's the extra upside that if other people report 'minor' concerns similar to yours, they might add up to someone taking action).

I would also echo what others have said about being willing to step away from the community. Early in your career, if you have reasonable alternatives, I suspect you'll do better personally, and possibly do the most good overall, by working for a regular company and building up some skills for a while.

Answer by Arepo · Apr 08, 2024

Triodos (the most ethical bank I could find)

Fwiw I have never been terribly impressed by Triodos' ethos. The last time I looked at the sort of projects they fund, they were e.g. investing in alternative medicine and divesting from nuclear energy, the former of which seems surreal to call 'ethical' and the latter of which is a disastrous strategy for the environment.

I would much rather invest in something with a higher interest rate and donate 50% of the difference (or whatever seems appropriate).

Yeah, it sounds like this might not be appropriate for someone with your credences, though I'm confused by what you say here:

I mentioned point/mean probability estimates, but my upper bounds (e.g. 90th percentile) are quite close, as they are strongly limited by the means. For example, if one's mean probability is 10^-10, the 90th percentile probability cannot be higher than 10^-9, otherwise the mean probability would be higher than 10^-10 (= (1 - 0.90)*10^-9), which is the mean. So my point remains as long as you think my point/mean estimates are reasonable.

I'm not sure what you mean by this. What are you taking the mean of, and which type of mean, and why? It sounds like maybe you're talking about the arithmetic mean? If so, that isn't how I would think about unknown probabilities, fwiw. IMO it seems more appropriate to use a geometric mean to express this kind of uncertainty, or to explicitly model the distribution of possible probabilities. I don't think either approach should limit your high-end credences.
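
As a sketch of what I mean (Python with scipy; the lognormal family and all parameter values are assumptions for illustration): fixing the arithmetic mean caps the 90th percentile at ten times that mean by Markov's inequality, whereas fixing the geometric mean leaves the high end free to grow with the spread.

```python
# Illustrative only: a lognormal distribution over an unknown probability.
# Holding the geometric mean (the lognormal's median) fixed while widening
# the spread pushes the 90th percentile arbitrarily high, while the
# arithmetic mean caps it at 10x its value (Markov's inequality).
import numpy as np
from scipy.stats import lognorm

geo_mean = 1e-10  # hypothetical geometric mean of the unknown probability
for sigma in (1.0, 3.0, 6.0):
    dist = lognorm(s=sigma, scale=geo_mean)  # scale = exp(mu) = geometric mean
    arith_mean = geo_mean * np.exp(sigma**2 / 2)  # lognormal mean formula
    q90 = dist.ppf(0.90)  # 90th percentile
    print(f"sigma={sigma}: arithmetic mean={arith_mean:.1e}, 90th pct={q90:.1e}")
    assert q90 <= 10 * arith_mean  # Markov bound: q90 <= mean / 0.1
```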

Makes sense. I liked that post. I think my comment was probably overly critical, and not related specifically to your series. I was not clear, but I meant to point to the greater value of using standard cost-effectiveness analyses (relative to using a model like yours) given my current empirical beliefs (astronomically low non-TAI extinction risk).

Yeah, fair enough :)

If one thinks the probability of extinction or permanent collapse without TAI is astronomically low (as I do)

Have you written somewhere about why you think permanent collapse is so unlikely? The more I think about it, the higher my credence seems to get :\

I have the impression there is often little data to validate them, and therefore think significant weight should be given to a prior simply informed by how long a given transition took.

I'm not saying the sexual selection theory is strongly likely to be correct. But it seems to be taken seriously by evolutionary psychologists, and if you're finding that other theories of human intelligence give an ultra-high credence to a new species evolving, it seems like that credence should be substantially lowered by even a modest belief in the plausibility of such theories.

Hm, the link works ok for me. What happens when you open it? It can be a bit shonky on mobile phones - maybe try using it on a laptop/desktop if you haven't.

It's called 'EA coworking and lounge', if that helps.

Thanks for the kind words, David. And apologies - I'd forgotten you'd published those explicit estimates. I'll edit them into the OP.

My memory of WWOtF is that Will talks about the process, but other than giving a quick estimate of '90% chance we recover without coal, 95% chance with', he doesn't do as much quantifying as you and Luisa.

Also Lewis Dartnell talked about the process extensively in The Knowledge, but I don't think he gives any estimate at all about probabilities (the closest I could find was in an essay for Aeon where he opined that 'an industrial revolution without coal would be, at a minimum, very difficult').
