People around me are very interested in AI taking over the world, so a big question is under what circumstances a system might be able to do that—what kind of capabilities could elevate an entity above the melange of inter-agent conflict and into solipsistic hegemony?
We...
...U.S. Secretary of Commerce Gina Raimondo announced today additional members of the executive leadership team of the U.S. AI Safety Institute (AISI), which is housed at the National Institute of Standards and Technology (NIST). Raimondo named Paul Christiano as Head of AI Safety...
Nick Bostrom's website now lists him as "Principal Researcher, Macrostrategy Research Initiative."
Doesn't seem like they have a website yet.
Except they should maximize confusion by calling it the "Macrostrategy Interim Research Initiative" ;)
We should separate whether the view is well-motivated from whether it's compatible with "ethics being about affecting persons". It's based only on comparisons between counterparts, never between existence and nonexistence. That seems compatible with "ethics being about affecting persons".
We should also separate plausibility from whether it would follow on stricter interpretations of "ethics being about affecting persons". An even stricter interpretation would also tell us to give less weight to or ignore nonidentity differences using essentially the same a...
Thanks for writing this, Elijah. I agree that it’s really difficult to get an “EA job” (it took me five years). I wish this felt more normalized, and that there was better-scoped advice on what EA jobseekers should do. I wrote about this last year and included a section on ways to contribute directly to EA projects even without an EA job. I'd also recommend Aaron Gertler's post on recovering from EA job rejection, probably my favorite ever EA Forum post.
On Aaron Bergman's comment about finding a higher paying role, certain tipped positions can be surprisin...
This was originally posted on Nathaniel's and Nuno's substacks (Pending Survival and Forecasting Newsletter, respectively). Subscribe here and here!
Discussion is also occurring on LessWrong here (couldn't link the posts properly for technical reasons).
When the...
I think the main reason EA focuses relatively little effort on climate change is that so much money is going to it from outside of EA. So in order to be cost-effective, you have to find very leveraged interventions, such as targeting policy or addressing extreme versions of climate change, particularly resilience work, e.g. ALLFED (disclosure: I'm a co-founder).
Carl Robichaud mentioned in his EAGxVirtual talk that the nuclear risk space is funding constrained. Dylan Matthews has also written about this at Vox.
There also seems to be a consensus that nuclear risk is higher than it has been in the recent past - with the Russia/Ukraine...
I would suggest the Back from the Brink campaign in the United States (www.preventnuclearwar.org) or the International Campaign to Abolish Nuclear Weapons (https://www.icanw.org/).
Both organizations are bringing a grassroots advocacy approach to push for multilateral efforts to prevent nuclear war. Grassroots advocacy is the most critically underfunded sector in the nuclear security space.
I am not confident that another FTX-level crisis is less likely to happen, beyond the fact that we might all say "oh, this feels a bit like FTX".
Changes:
- Little discussion of why or how the affiliation with SBF happened despite many well-connected EAs having a low opinion of him
- Little discussion of what led us to ignore the base rate of scamminess in crypto and how we'll avoid that in future
For both of these comments, I want a more explicit sense of what the alternative was. Many well-connected EAs had a low opinion of Sam. Some had a high opinion. Should we have stopped the high-opinion ones from affiliating with him? By what means? Equally, suppose he finds skepticism from (say) Will et al., instead of a warm welcome. He probably still starts the FTX Future Fund, and probably still tries to make a bunch of people regranters. He probably still talks up EA in public. What would it have taken to prevent any of the resultant harms?
Likewise, what does not ignoring the base rate of scamminess in crypto actually look like? Refusing to take any money made through crypto? Should we be shunning e.g. Vitalik Buterin now, or any of the community donors who made money speculating?
For both of these comments, I want a more explicit sense of what the alternative was.
Not a complete answer, but I would have expected communication and advice for FTXFF grantees to have been different. From many well-connected EAs having a low opinion of him, we can imagine that grantees might have been urged to set up corporations properly, not count their chickens before they hatched, document everything, and generally assume a lower-trust environment. From not ignoring the base rate of scamminess in crypto, you'd expect to have seen stronger and more developed contingency planning (remembering that crypto firms can and do collapse in the wake of scams not of their own doing!), more decisions to build organizational reserves rather than immediately ramping up spending, etc.
It's likely that no single factor is "the" answer. For instance, it's likely that people believed they could assume that trusted insiders were significantly more ethical than the average person. The insider-trusting bias has bitten any number of organizations and movements (e.g., churches, the Boy Scouts). However, it seems clear from Will's recent podcast that the downsides of being linked to crypto were appreciated at some level. It would take a lot for me to be convinced that all that $$ wasn't a major factor.
Anders Sandberg has written a “final report” released simultaneously with the announcement of FHI’s closure. The abstract and an excerpt follow.
...Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse.
For anyone wondering about the definition of macrostrategy, the EA Forum defines it as follows:
...Macrostrategy is the study of how present-day actions may influence the long-term future of humanity.[1]
Macrostrategy as a field of research was pioneered by Nick Bostrom, and it is a core focus area of the Future of Humanity Institute.[2] Some authors distinguish between "foundational" and "applied" global priorities research.[3] On this distinction, macrostrategy may be regarded as closely related to the former. It is concerned with the assessment of
Nice post!
This seems like a key point to me, and one that is hard to get good evidence on. The red stripes are rather benign, so we are in luck in a world like that. But if the AI values something in a more totalising way (not just satisficing, with a lot of x's and red stripes being enough, but striving to make all hum...