Arepo


I hadn't seen this until now. I still hope you'll do a follow-up on the most recent round, since, as I've said (repeatedly) elsewhere, I think you guys are the gold standard in the EA movement for how to do this well :)

One not necessarily very helpful thought:

Our work trial was overly intense and stressful, and unrepresentative of working at GWWC.

is a noble goal, but somewhat in tension with this goal:

In retrospect, we could have ensured this was done on a time-limited basis, or provided a more reasonable estimate.

It's really hard to make a strictly timed test, especially a sub-one-day one, feel unstressful and low-intensity.

This isn't to say you shouldn't do the latter, just to recognise that there's a natural tradeoff between two imperatives here. 

Another problem with timing is that you don't get to equalise across all axes, so you can trade one bias for another. For example, you're going to bias towards people who happen to have access to an extra monitor or two at the time of taking the test, whose internet is faster, or who are just in a less distracting location.

I don't know that that's really a solvable problem, and if not, the timed test is probably the least of all evils - but again, it seems like a tradeoff worth being aware of.

The dream is maybe some kind of self-contained challenge where you ask them to showcase some relevant way of thinking in a way where time isn't super important, but I can't think of any good version of that.

Personally I think this is much less of a concern if a high time commitment involves a decently paid work trial. Since the initial application is never trivial, it could actually increase the expected value of applying if the next stage is (e.g.) a 3-day paid trial.
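
To make that expected-value claim concrete, here's a minimal sketch with entirely made-up numbers (none of the probabilities, pay rates, or time costs below come from any real process):

```python
# Toy expected-value model of applying to a role whose next stage
# is a paid multi-day work trial. All numbers are hypothetical.

APPLICATION_HOURS = 4          # unpaid time spent on the initial application
P_REACH_TRIAL = 0.15           # chance the application leads to a work trial
TRIAL_DAYS = 3
TRIAL_PAY_PER_DAY = 250        # the trial itself is paid
P_OFFER_GIVEN_TRIAL = 0.3      # chance the trial leads to an offer
VALUE_OF_OFFER = 20_000        # subjective value of landing the job
HOURLY_OPPORTUNITY_COST = 30   # value of the applicant's time elsewhere

trial_hours = TRIAL_DAYS * 8
ev = (
    -APPLICATION_HOURS * HOURLY_OPPORTUNITY_COST
    + P_REACH_TRIAL * (
        TRIAL_DAYS * TRIAL_PAY_PER_DAY
        - trial_hours * HOURLY_OPPORTUNITY_COST
        + P_OFFER_GIVEN_TRIAL * VALUE_OF_OFFER
    )
)
print(f"Expected value of applying: ${ev:,.0f}")
```

Because the trial is paid, the post-screening stages are roughly time-cost-neutral, so a long process doesn't eat into the expected value of applying the way an unpaid one would.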


Having been rejected from a few, including most recently by Giving What We Can, I'd say their feedback process was a country mile ahead of any other org I've applied to, and other organisations should consider treating them as the gold standard for such a process. I hope they'll write it up on the forum soon.

That's sad. For anyone interested in why they shut down (I'd thought they had an indefinitely sustainable endowment!), the archived version of their website gives some info:

Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute’s organizational home). Starting in 2020, the Faculty imposed a freeze on fundraising and hiring. In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down.

By inference, if you are one of those copies, the 'moral worth' of your own perceived torture will therefore be one ten-billionth of its normal level. So, selfishly, that's a huge upside - I might prefer being one of 10 billion identical torturees as long as I uniquely get a nice back scratch afterwards, for example.
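
To spell out the arithmetic with made-up utility numbers (only the 10-billion copy count comes from the scenario above):

```python
# Toy calculation of per-copy moral weight when value is split evenly
# across identical copies. The utility numbers are purely illustrative.

N_COPIES = 10_000_000_000        # 10 billion identical torturees
TORTURE_DISVALUE = -1_000_000    # disvalue of the torture at full moral weight
BACK_SCRATCH_VALUE = 1           # small benefit that only *you* receive

# Each copy's experience carries 1/N of the normal moral weight...
per_copy_torture = TORTURE_DISVALUE / N_COPIES   # -0.0001

# ...but a benefit unique to you still counts at full weight.
selfish_total = per_copy_torture + BACK_SCRATCH_VALUE
print(f"Selfish value of the deal: {selfish_total:+.4f}")   # +0.9999
```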

Space lasers don't seem as much of a threat as Jordan posits. They have to be fired from somewhere. If that's within the solar system they're targeting, then that system will still have plenty of time to see the object that's going to shoot them arriving. If they're much further out, it becomes much harder both to aim them correctly and to provide enough power to keep them focused, and the source needs to be commensurately more powerful (as in more expensive to run) and have a bigger lens - so more visible while under construction and more vulnerable to conventional attack. Or you could just react to the huge lens by building a comparatively tiny mirror protecting the key targets in your system. Or you could build a Dyson swarm and not have any single target on which the rest of the settlement depends.

This guy estimates the max effective range of lasers vs anything that can react (which, at a high enough tech level, includes planets) at about one light second.
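
To see why roughly one light second is where the physics starts to bite, here's a back-of-the-envelope diffraction-limit calculation (the wavelength and aperture are my own assumed values, not taken from the linked estimate):

```python
# Diffraction-limited spot size of a laser at range, using the standard
# Airy-disk approximation: spot diameter ≈ 2.44 * wavelength * range / aperture.
# Wavelength and aperture are assumed values for illustration.

WAVELENGTH_M = 1e-6        # ~1 micron, near-infrared
APERTURE_M = 10.0          # 10 m focusing mirror/lens
LIGHT_SECOND_M = 2.998e8   # one light second in metres

def spot_diameter(range_m: float) -> float:
    """Approximate beam spot diameter (m) at the given range (m)."""
    return 2.44 * WAVELENGTH_M * range_m / APERTURE_M

for light_seconds in (1, 10, 100):
    d = spot_diameter(light_seconds * LIGHT_SECOND_M)
    print(f"{light_seconds:>3} light second(s): spot ≈ {d:,.0f} m")
```

Even at one light second the beam has spread to tens of metres across, so intensity falls off with the square of range - and the target gets at least a second of light-lag in which to react.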

Self-replicating robots don't seem to have any particular advantage when used as a weapon over ones built with more benign intent.

I don't think anyone's arguing current technology would allow self-sufficiency. But part of the case for offworld settlements is that they very strongly incentivise technology that would.

In the medium term, an offworld colony doesn't have to be fully independent to afford a decent amount of security. If it can a) outlast some catastrophe that's global but Earth-bound (e.g. a nuclear winter or airborne pandemic) and b) get back to Earth once things are safer, it still makes your civilisation more robust.

I broadly agree with the arguments here. I also think space settlement has a robustness to its security that no other defence against GCRs does - it's trivially harder to kill off more people spread more widely than it is to kill off a handful on a single planet. Compare this to technologies designed to regulate a single atmosphere to protect against biorisk, AI safety mechanisms that operate on AGIs whose ultimate nature we still know very little of, global political institutions that could be subverted or overthrown, bunkers on a single planet, etc., all of which seem much less stable over more than a century or so.

It might be that AGI/vacuum decay/some other mechanism will always be lurking out there with the potential to destroy all life, and if so nothing will protect us - but if we're expected value maximisers (which seems to me a more reasonable strategy than any alternative), we should be fairly optimistic about scenarios where it's at least possible that we can stabilise civilisation.

If you haven't seen it, you should check out Christopher Lankhof's Security Among the Stars, which goes into depth on the case for space settlement.

You might also want to check out my recent project, which lets you explicitly model the level of security afforded by becoming multiplanetary.
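
For a flavour of the kind of calculation involved - this is a deliberately minimal sketch with made-up numbers, not the project's actual model - the simplest version treats settlements as independently vulnerable:

```python
# Minimal sketch: probability a civilisation survives a given century as a
# function of how many self-sustaining settlements it has. Assumes each
# settlement is independently destroyed with probability p per century, and
# ignores correlated risks (e.g. hostile AGI) - the big caveat for any
# model like this.

P_SETTLEMENT_DESTROYED = 0.1   # hypothetical per-settlement, per-century risk

def p_civilisation_survives(n_settlements: int) -> float:
    """Civilisation survives iff at least one settlement does."""
    return 1 - P_SETTLEMENT_DESTROYED ** n_settlements

for n in (1, 2, 3, 5, 10):
    print(f"{n:>2} settlement(s): P(survive the century) = "
          f"{p_civilisation_survives(n):.6f}")
```

Under independence, extinction risk falls geometrically with each extra settlement; the interesting modelling questions are how correlated the risks really are and how quickly settlements become self-sustaining.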

I strongly agree with the first half of this post - bunkers and refuges are pretty bad as a defence against global catastrophes.

Your solution makes a lot less sense to me. It seems like it has many of the same problems you're trying to avoid - it won't be pressure-tested until the world collapses. In particular, if it's an active part of a local community, that implies people will be leaving and re-entering regularly, which means any virus with a long incubation period could be in there before people know it's a problem.

Also, I feel like your whole list of questions still applies, and I have no sense of how you imagine it's going to answer them. In particular, I don't see how digging underground is going to make it better at water treatment, electricity generation, etc. than the equivalent aboveground services.

Fwiw my take is that offworld bases have much better long-term prospects - they're pressure-tested every moment of every day; they perforce have meaningful isolation; the inhabitants are very strongly incentivised to develop the base to make it more sustainable as fast as possible; and once you have the technology for one, you have the technology for many, and are a long way towards developing the sort of technology necessary for a future in which we (per Nick Bostrom's Astronomical Waste essay) colonise the Virgo supercluster.

Hey Corentin,

The calculators are intentionally silent on the welfare side, on the thought that in practice it's much easier to treat welfare as a mostly independent question. That's not to say it actually is independent, and ideally I would like the output to include more information about what the pathways to either extinction or an interstellar state look like, so that people can apply some further function to the output. I do think it's reasonable, even on a totalising view, to prioritise improving future welfare conditional on it existing, largely ignoring the question of whether it will - but that's not a question the calculators can help with, except inasmuch as you condition on the pathway.
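
In other words, the implicit decomposition is something like (my notation, not anything the calculators output):

$$\mathbb{E}[\text{value}] = \sum_{s \,\in\, \text{pathways}} P(s)\,\mathbb{E}[\text{welfare} \mid s]$$

where the calculators only speak to the $P(s)$ terms for survival-relevant pathways, and the welfare-conditional-on-pathway terms are left to the reader's own view.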

Even if they gave pathways, they would be agnostic on whose welfare qualified. Personally I'm interested in maximising total valence (I have an old essay, still waiting for its conclusion, on the subject), so every sentient being's mental state 'counts', but you could use these with a different perspective in mind. Primarily empirical questions - e.g. the duration of factory farming, or animal suffering in terraformed systems - seem like they'd need their own research projects.
