Anders Sandberg has written a “final report” released simultaneously with the announcement of FHI’s closure. The abstract and an excerpt follow.
...Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse.
People around me are very interested in AI taking over the world, so a big question is under what circumstances a system might be able to do that—what kind of capabilities could elevate an entity above the melange of inter-agent conflict and into solipsistic hegemony?
We...
Nice post!
We might then expect a lot of powerful attempts to change prevailing ‘human’ values, prior to the level of AI capabilities where we might have worried a lot about AI taking over the world. If we care about our values, this could be very bad.
This seems like a key point to me, that it is hard to get good evidence on. The red stripes are rather benign, so we are in luck in a world like that. But if the AI values something in a more totalising way (not just satisficing with a lot of x's and red stripes being enough, but striving to make all hum...
...U.S. Secretary of Commerce Gina Raimondo announced today additional members of the executive leadership team of the U.S. AI Safety Institute (AISI), which is housed at the National Institute of Standards and Technology (NIST). Raimondo named Paul Christiano as Head of AI
Nick Bostrom's website now lists him as "Principal Researcher, Macrostrategy Research Initiative."
Doesn't seem like they have a website yet.
Except they should maximize confusion by calling it the "Macrostrategy Interim Research Initiative" ;)
We should separate whether the view is well-motivated from whether it's compatible with "ethics being about affecting persons". It's based only on comparisons between counterparts, never between existence and nonexistence. That seems compatible with "ethics being about affecting persons".
We should also separate plausibility from whether it would follow on stricter interpretations of "ethics being about affecting persons". An even stricter interpretation would also tell us to give less weight to or ignore nonidentity differences using essentially the same a...
Thanks for writing this, Elijah. I agree that it’s really difficult to get an “EA job” (it took me five years). I wish this felt more normalized, and that there was better scoped advice on what EA jobseekers should do. I wrote about this last year and included a section on ways to contribute directly to EA projects even without an EA job. I'd also recommend Aaron Gertler's post on recovering from EA job rejection, probably my favorite ever EA Forum post.
On Aaron Bergman's comment about finding a higher paying role, certain tipped positions can be surprisin...
This was originally posted on Nathaniel's and Nuno's substacks (Pending Survival and Forecasting Newsletter, respectively). Subscribe here and here!
Discussion is also occurring on LessWrong here (couldn't link the posts properly for technical reasons).
When the...
I think the main reason that EA focuses relatively little effort on climate change is that so much money is going to it from outside of EA. So in order to be cost effective, you have to find very leveraged interventions, such as targeting policy, or addressing extreme versions of climate change, particularly resilience, e.g. ALLFED (disclosure, I'm a co-founder).
Carl Robichaud mentioned in his EAGxVirtual talk that the nuclear risk space is funding constrained. Dylan Matthews has also written about this at Vox.
There also seems to be a consensus that nuclear risk is higher than it has been in the recent past - with the Russia/Ukraine...
I would suggest the Back from the Brink campaign in the United States (www.preventnuclearwar.org) or the International Campaign to Abolish Nuclear Weapons (https://www.icanw.org/)
Both organizations are bringing a grassroots advocacy approach to push for multilateral efforts to prevent nuclear war. Grassroots advocacy is the most critically underfunded sector in the nuclear security space.
I'm awestruck, that is an incredible track record. Thanks for taking the time to write this out.
These are concepts and ideas I regularly use throughout my week, and they have significantly shaped my thinking. A deep thanks to everyone who has contributed to FHI; your work certainly had an influence on me.