Anonymous_EA

Comments

Really glad to hear that! Stereotypes often have a grain of truth, and I don't want to sugarcoat things in my post (the comments on this post are definitely worth reading to get a more complete picture). But if you'd be open to a move, I really encourage planning a visit: even just a well-planned weekend with lots of one-on-ones could give you a lot of information.

Feel free to DM me if you decide to come for a visit and I might be able to help with making connections.

Love the question! Within AI policy:

  • How/whether R&D funding can be leveraged to fund AI safety research.
  • How/whether governments should regulate AI safety. We have people writing high-level academic papers on this, but potentially nobody, at least in the US, who’s aiming to become an expert in the details of implementation.
  • Antitrust as it relates to AI. There's a chance AI governance initiatives could run afoul of antitrust law by default, so it seems good to have people with deep experience at places like the FTC who could advise on how to navigate this.

Huh, I'm surprised you're planning to do further degrees after the Schwarzman: that undercuts my point above. If the Schwarzman isn't viewed by employers as a terminal degree, then I'd view that as a major downside of the program. The opportunity cost of a year of full-time work is high.

Thanks for this post! Schwarzman seems especially promising for folks interested in policy, where a grad degree is often needed and where China expertise is valued.

I think it's worth emphasizing that these degrees take only one year. This is a BIG advantage relative to e.g. law school, an MBA, and even many/most MPP programs. If you think education (particularly non-STEM grad school) is mostly about signaling rather than learning, then the opportunity cost of an extra one or two years of schooling is really significant. Schwarzman looks like a great way to get a shiny grad credential in a very reasonable amount of time.

Great post! 

From Scenario 1, in which alignment is easy:

The alignment problem turns out much easier than expected. Increasingly better AI models have a better understanding of human values, and they do not naturally develop strong influence-seeking tendencies. Moreover, in cases of malfunctions and for preventative measures, interpretability tools now allow us to understand important parts of large models on the most basic level and ELK-like tools allow us to honestly communicate with AI systems.

Here you seem to be imagining that technical AI alignment turns out to be easy, but you don't discuss the political/governance problem of making sure the AI (or AIs) are aligned with the right goals. 

E.g. what if the first aligned transformative AI systems are built by bad actors? What if they're built by well-intentioned actors who nevertheless have no idea what to do with the aligned TAI(s) they've developed? (My impression is that we don't currently have much idea of what a lab should be looking to do in the case where they succeed in technical alignment. Maybe the aligned system could help them decide what to do, but I'm pretty nervous about counting on that.)

From my perspective, a full success story should include answers to these questions.

Appreciate the anecdata! I agree that there are probably at least a good number of people like you who will go under the radar, and this probably biases many estimates of the number of non-community-building EAs downward (especially estimates that are also based on anecdata, as opposed to e.g. survey data).

Hi Vilhelm, thanks for these thoughts! Some quick responses to just a few points:

Fwiw, in Sweden, my 50% confidence interval of the share of highly-engaged longtermists under 25 doing movement-building is 20-35%. However, I don't think I am as concerned as you seem to be with that number.

20-35% isn't all that concerning to me. I'd be more concerned if it were in the ballpark of 40% or more. That said, even 20-35% does feel a bit high to me if we're talking about college graduates working full-time on community-building (a higher percentage might make sense if we're counting college students who are just spending a fraction of their time on community-building).

my experience as a community builder in Sweden trying to help young longtermists is that there aren't that many better opportunities out there right now. (Note that this might be very different in other contexts.)

Agreed that the counterfactual may be significantly worse for those based in Sweden (or most other countries besides the US and UK) who are unwilling to move to EA hubs. I should have flagged that I'm writing this as someone based in the US, where I see lots of alternatives to community building. With that said, it's not totally clear to me which direction this points in: maybe a lack of opportunities to do object-level work in Sweden suggests the need for more people to go out and create such opportunities, rather than doing further community-building.

Data suggest people leave their community building roles rather quickly, indicating that people do pivot when finding a better fit

Yeah, this matches my experience - I see a lot of young EAs doing community-building for a year or two post-grad and then moving on to object-level work. This seems great when it's a case of someone thinking community-building is their highest-upside option, testing their fit, and then moving on (presumably because it hasn't gone super well). I worry, though, that in some cases folks don't even view community-building as a career path they're committed to, and instead fall into it because it's the "path of least resistance."

To be clear, I'm incredibly grateful to community builders like you, and don't intend to devalue the work you do - I genuinely think community-building is one of the most impactful career paths, and a significant fraction of EAs should pursue it (particularly those who - like you, it sounds like - have great personal fit for the work and see it as their highest-upside long-term career path).

Thanks for pointing to these! I had forgotten about them or hadn't seen them in the first place — all are very relevant.

Thanks for this! I really appreciate how carefully 80K thinks these questions through, and I have updated toward this bottleneck having gotten worse fairly recently, as you suggest. With that said, if there was an ops bottleneck in 2018 and 2019, as reflected in previous surveys of skill needs, and the ops bottleneck is back now, I wonder whether early 2020 was more the exception than the rule.

To double check this, we would ideally run another survey of org leaders about skill needs, and there's some chance that happens in the next year.

I don't want to rush your process. At the same time, because I perceive this as a fairly urgent bottleneck (as seems to be at least somewhat confirmed in comments by CarolineJ, Scronfinkle, and Anya Hunt), I'll just note that I hope that survey does in fact happen this year. I doubt I can be helpful with this, but feel free to DM me if I could be - for example, I can think of at least one person who might be happy to run the survey this year and would likely do a good job.

Another reason why we dropped it is just because 'work at EA orgs' is already a priority path, and this is a subpath of that, and I'm not sure we should list both the broader path and subpath within the priority paths list (e.g. I also think 'research roles at EA orgs' is a big bottleneck but don't want to break that out as a separate category).

Again, I appreciate that you all are extremely thoughtful about these decisions. I will offer, from my outside perspective, that the Priority Paths already do a great job of conveying the value of research skills (e.g. five of the nine Priority Paths have the word "research" in the title), whereas they don't currently convey the value of operations skills. I'm not sure whether adding ops back to the Priority Paths is the best way to address this, or whether there's a better option, such as simply removing the blurb about how ops skills are less needed now. But I think a reader of 80K's site would currently come away with the impression that ops skills are much less urgently needed than they are (see, for example, Eli Kaufman's comment on this post).

Just confirming that "longtermist ED" is more the type of skillset I have in mind when I refer to a "severe" bottleneck. Though I think even "EA-aligned with significant ops skills and potential to grow into ED-type roles" is also highly valuable and in short supply.

I also like your advice!
