We've produced a lot of content tailored for people heavily involved in the effective altruism community, as they're our key target audience. Here's a recent list:

More generally, a major outlet for EA-facing content going forward will be our long-form interviews on the 80,000 Hours podcast. The podcast is focussed on "the world's most pressing problems and how you can use your career to solve them." To make sure you don't miss any episodes, hear them on your phone whenever is convenient, and listen to them sped up, search for '80,000 Hours' in whatever app you use to get podcasts.

You can see a list of the episodes so far on SoundCloud; new episodes should be weekly for the next few months.

Here's our post from 3 months ago with the last batch of EA-facing content.

We hope you find it useful - let us know what you think in the comments.

All the best,

The 80,000 Hours team.

Comments

Like me, I suspect many EAs do a lot of "micro-advising" of friends and younger colleagues. (In medicine, this happens almost daily.) I know I'm an amateur, and I do my best to direct people to the available resources, but it seems like some basic pointers on how to give casual advice could be helpful.

Alternatively, I see the value in a higher activation energy for potentially reachable advisees: if they're truly considering adjusting their careers, they'll take the time to look at the official EA material.

Nonetheless, even simple guidance for amateurs like myself could be helpful - something like: "Give your best casual advice. If things look promising, point them to the official EA content."

Great podcasts!

Glad you like them! Tell your friends. ;)

The article on machine learning doesn't discuss the possibility that more people pursuing machine learning jobs could have a net negative effect. It's true that your advice will generally reach people who are more considerate of the long-term and altruistic effects of their research, so they'll likely have a more positive effect than the average entrant to the field. But if accelerating the development of strong AI is a net negative, that could outweigh the benefit of the average researcher being more altruistic.

Accelerating the development of machine intelligence is not a net negative, since it can make the world better and safer at least as much as it poses a risk. The longer it takes for AGI algorithms to be developed, the more advanced the hardware and datasets available to support an uncontrolled takeoff. Also, the longer it takes for AI leaders to develop AGI, the more time there is for other nations and organizations to catch up, sparking more dangerous competitive dynamics. Finally, even if it were a net negative, the marginal impact of one additional AI researcher is tiny, whereas the marginal impact of one additional AI safety researcher is large, because the safety community is much smaller.