
Update #1: It’s a rite of passage to binge the top EA posts of all time, and now you can do it on your podcast app.

We (Nonlinear) made “top of all time” playlists for the EA Forum, LessWrong, and the Alignment Forum. Each contains roughly 400 of the most upvoted posts.

We think these could be useful to share with friends who are new to the community.

Update #2: The original Nonlinear Library feed includes top posts from the EA Forum, LessWrong, and the Alignment Forum. Now, by popular demand, you can also get forum-specific feeds.

Stay tuned for more features. We’ll soon be launching channels by tag, so you can listen to specific subjects, such as longtermism, animal welfare, or global health. Enter your email here to get notified as we add more channels.

Below is the original explanation of The Nonlinear Library and its theory of change.


We are excited to announce the launch of The Nonlinear Library, which allows you to easily listen to top EA content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs.

In the rest of this post, we’ll explain our reasoning for the audio library, why it’s useful, why it’s potentially high impact, its limitations, and our plans. You can read it here or listen to the post in podcast form here.

Goal: increase the number of people who read EA research

An EA koan: if your research is high quality, but nobody reads it, does it have an impact?

Generally speaking, the theory of change for research is that you investigate an area, come to better conclusions, people read those conclusions, they make better decisions, and all of this ultimately leads to a better world. So the answer is no: barring some edge cases (1), if nobody reads your research, you usually won’t have any impact.

Research → Better conclusion → People learn about conclusion → People make better decisions → The world is better

Nonlinear is working on the third step of this pipeline: increasing the number of people engaging with the research. By increasing the total number of EA articles read, we’re increasing the impact of all of that content.

This step is often relatively neglected, because researchers typically prefer doing more research to promoting their existing output. Some EAs seem to think that if their article was promoted once, in one location, such as the EA Forum, then surely most of the community saw and read it. In reality, it is rare for more than a small percentage of the community to read even the top posts. This is an expected-value tragedy: a researcher puts hundreds of hours of work into an important report that only a handful of people read, dramatically reducing its potential impact.

Here are some purely hypothetical numbers just to illustrate this way of thinking:

Imagine that you, a researcher, have spent 100 hours producing outstanding research that is relevant to 1,000 out of a total of 10,000 EAs.
Each relevant EA who reads your research will generate $1,000 of positive impact. So, if all 1,000 relevant EAs read your research, you will generate $1 million of impact.
You post it to the EA Forum, where posts receive 500 views on average. Let’s say, because your report is long, only 20% read the whole thing - that’s 100 readers. So you’ve created 100*1,000 = $100,000 of impact. Since you spent 100 hours and created $100,000 of impact, that’s $1,000 per hour - pretty good!
But if you were to spend, say, one hour promoting your report - for example, by posting links in EA-related Facebook groups - to generate another 100 readers, that would produce another $100,000 of impact. That’s $100,000 per marginal hour, or ~$2,000 per hour taking into account the fixed cost of doing the original research.
Likewise, if another 100 EAs were to listen to your report while commuting, that would generate an incremental $100,000 of impact - at virtually no cost, since it’s fully automated.
In this illustrative example, you’ve nearly tripled your cost-effectiveness and impact with one extra hour spent sharing your findings, plus a public system that turns your writing into audio for you (the calculation below works through these numbers).
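For concreteness, here is the same illustration as a quick calculation, using only the hypothetical numbers above:

```python
# Hypothetical numbers from the illustration above.
hours_research = 100
value_per_reader = 1_000  # dollars of impact per relevant reader

readers_forum = 100  # ~20% of 500 EA Forum views finish the report
readers_promo = 100  # from one hour of sharing in Facebook groups
readers_audio = 100  # from automated audio, at ~zero marginal cost

impact = (readers_forum + readers_promo + readers_audio) * value_per_reader
hours_total = hours_research + 1  # one extra hour of promotion

print(f"Total impact: ${impact:,}")  # $300,000
print(f"Cost-effectiveness: ${impact / hours_total:,.0f}/hour")  # ~$2,970/hour, vs the original $1,000/hour
```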

Another way the audio library is high expected value is that instead of acting as a multiplier on just one researcher or one organization, it acts as a multiplier on nearly the entire output of the EA research community. This allows for two benefits: long-tail capture and the power of large numbers and multipliers.

Long-tail capture. The value of research is extremely long-tailed, with a small fraction of the research having far more impact than the rest. Unfortunately, it’s not easy to do highly impactful research, or to predict in advance which topics will get the most traction. If you as a researcher want to do research that dramatically changes the landscape, your odds are low. However, if you increase the impact of most of the EA community’s research output, you also “capture” the impact of the long tails when they occur. Your probability of applying a multiplier to very impactful research is actually quite high.

Power of large numbers and multipliers. If you apply a multiplier to a bigger number, you have a proportionately larger impact, so even a small multiplier on a large base leads to outsized gains. For example, if a single researcher toiled away to increase their own readership by 50%, that would likely have a smaller impact than the Nonlinear Library increasing the readership of the entire EA Forum by even 1%: 50% of a small number is still very small, whereas 1% of a large number is actually quite large. And there’s reason to believe the library could have much larger effects on readership, which brings us to our next section.

Why it’s useful

EA needs more audio content

EA has a vibrant online community, and there is an amazing amount of well-researched, insightful, high-impact content. Unfortunately, it’s almost entirely in writing; very little is in audio format.

There are a handful of great podcasts, such as the 80,000 Hours and FLI podcasts, and some books are available on Audible. However, podcast episodes come out relatively infrequently, and books even less often. There are a few other EA-related podcasts, including one for the EA Forum, but a substantial percentage have gone dormant - as is far too common for such channels, given the considerable effort required to put out episodes.

There are a lot of listeners

The limited availability of audio is a shame, because many people love to listen to content. For example, ever since the 80,000 Hours podcast came out, a common way for people to become more fully engaged in EA has been to mainline all of its episodes. Many others got involved by binging the HPMOR audiobook, as Nick Lowry puts it in this meme. We are definitely a community of podcast listeners.

Why audio? Often you can’t read with your eyes but you can with your ears - for example, when you’re working out, commuting, or doing chores. Sometimes it’s just for a change of pace. In addition, some people find listening easier than reading, and because it feels easier, they choose to spend time learning that might otherwise go to lower-value things.

Regardless, if you like to listen to EA content, you’ll quickly run out of relevant podcasts - especially if you’re listening at 2-3x speed - and have to either use your own text-to-speech software or listen to topics that are less relevant to your interests.

Existing text-to-speech solutions are sub-optimal

We’ve experimented extensively with text-to-speech software over the years, and all of the dozens of programs we’ve tried have fairly substantial flaws. In fact, a huge inspiration for this project was our frustration with the existing solutions and thinking that there must be a better way. Here are some of the problems that often occur with these apps:

  • They're glitchy: frequently crashing, losing your spot, failing to handle formatting edge cases, etc.
  • Their playlists don’t work - or don’t exist - so you have to pause every 2-7 minutes to pick a new article, which makes them awkward to use during commutes, workouts, or chores. Or you can’t change the order, as with Pocket, which makes it unusable for many.
  • They’re platform specific, forcing you to download yet another app, instead of, say, the podcast app you already use.
  • Pause buttons on headphones don’t work, which is exasperating when you’re interrupted frequently.
  • Their UI is bad, requiring you to constantly fiddle around with the settings.
  • They don’t automatically add new posts. You have to do it manually, thus often missing important updates.
  • They use old, low-quality voices, instead of the newer, way better ones. Voices have improved a lot in the last year.
  • They cost money, creating yet another barrier to the content.
  • They limit you to 2x speed (at most), and since their voices’ base rate is slower than most human speech, that’s effectively more like 1.75x. This is irritating if you’re used to faster speeds.

In the end, this leads to only the most motivated people using the services, leaving out a huge percentage of the potential audience. 

How The Nonlinear Library fixes these problems

To make it as seamless as possible for EAs to use, we decided to release it as a podcast so you can use the podcast app you’re already familiar with. Additionally, podcast players tend to be reasonably well designed and offer great customizability of playlists and speeds.

We’re paying for some of the best AI voices because old voices suck. We’ve also spent a bunch of time fixing weird formatting errors and mispronunciations, and we have a system for fixing other recurring ones as they come up. If you spot any frequent mispronunciations or bugs, please report them in this form so we can keep improving the service.
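As a rough sketch of what such a recurring-fix pass can look like (the substitution table and function name below are illustrative assumptions, not our production code):

```python
import re

# Illustrative substitution table: strings TTS engines commonly mangle,
# mapped to spellings that sound right when read aloud.
PRONUNCIATION_FIXES = [
    (re.compile(r"\bEA\b"), "E.A."),         # read as letters, not "eeyah"
    (re.compile(r"\bHPMOR\b"), "H.P.M.O.R."),
    (re.compile(r"(\d+)x\b"), r"\1 times"),  # "2x speed" -> "2 times speed"
]

def clean_for_tts(text: str) -> str:
    """Apply recurring pronunciation/formatting fixes before synthesis."""
    for pattern, replacement in PRONUNCIATION_FIXES:
        text = pattern.sub(replacement, text)
    return text

print(clean_for_tts("EA content at 2x speed"))  # "E.A. content at 2 times speed"
```

Reported mispronunciations can then be fixed once, in the table, and the fix applies to every future episode.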

Initially, as an MVP, we’re just posting each day’s top upvoted articles from the EA Forum, Alignment Forum, and LessWrong. (2) We are planning on increasing the size and quality of the library over time to make it a more thorough and helpful resource.

Why not have a human read the content?

The Astral Codex Ten podcast and other rationalist podcasts do this. We seriously considered it, but it’s just too time-consuming, and there is a lot of written content. Given the value of EA time, both financially and counterfactually, this wasn’t a very appealing solution. We looked into hiring remote workers, but that would still have cost at least $30 an episode, compared to approximately $1 an episode with text-to-speech software.

Beyond the time costs leading to higher monetary costs, automation also lets us build a far more complete library. If we did this with humans, even investing a ton of time and management, we might be able to convert seven articles a week. At that rate, we’d never keep up with new posts, let alone include the historical posts that are so valuable. With text-to-speech software, we can potentially keep up with all new posts and convert the old ones, creating a much more complete repository of EA content. Just imagine being able to listen to over 80% of the EA writing you’re interested in, compared to less than 1%.

Additionally, the automaticity of text-to-speech fits with Nonlinear’s general strategy of looking for interventions that have “passive impact”. Passive impact is the altruistic equivalent of passive income, where you make an upfront investment and then generate income with little to no ongoing maintenance costs. If we used human readers, we’d have a constant ongoing cost of managing them and hiring replacements. With TTS, after setting it up, we can mostly let it run on its own, freeing up our time to do other high impact activities.

Finally, and least importantly, there is something delightfully ironic about having an AI talk to you about how to align future AI.

On a side note, if for whatever reason you would not like your content in The Nonlinear Library, just fill out this form. We can remove a particular article or add you to a list so we never add your content, whichever you prefer.

Future Playlists (“Bookshelves”)

There are a lot of sub-projects that we are considering doing or are currently working on. Here are some examples:

  • Top of all time playlists: a playlist of the top 300 upvoted posts of all time on the EA Forum, one for LessWrong, etc. This lets people binge all of the best content EA has put out over the years. Depending on their popularity, we’ll also consider top playlists by year or by topic, and as the library grows we’ll be able to offer even larger lists.
  • Playlists by topic (or tag): a playlist for biosecurity, one for animal welfare, one for community building, etc.
  • Playlists by forum: one for the EA Forum, one for LessWrong, etc.
  • Archives. Our current model focuses on turning new content into audio. However, there is a substantial backlog of posts that would be great to convert.
  • Org specific podcasts. We'd be happy to help EA organizations set up their own podcast version of their content. Just reach out to us.
  • Other? Let us know in the comments if there are other sources or topics you’d like covered.

Who we are

We're Nonlinear, a meta longtermist organization focused on reducing existential and suffering risks. More about us.

Footnotes

(1)  Sometimes the researcher is the same person as the person who puts the results into action, such as Charity Entrepreneurship’s model. Sometimes it’s a longer causal chain, where the research improves the conclusions of another researcher, which improves the conclusions of another researcher, and so forth, but eventually it ends in real world actions. Finally, there is often the intrinsic happiness of doing good research felt by the researcher themselves.

(2) The current upvote thresholds for which articles are converted are:
  • 25 for the EA Forum
  • 30 for LessWrong
  • No threshold for the Alignment Forum, due to low volume

This is based on the frequency of posts, relevance to EA, and quality at certain upvote levels.
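As a sketch, the conversion rule amounts to a per-forum threshold check (the names below are illustrative, not our actual code):

```python
# Per-forum karma thresholds from footnote (2).
THRESHOLDS = {
    "EA Forum": 25,
    "LessWrong": 30,
    "Alignment Forum": 0,  # no threshold, due to low volume
}

def should_convert(post: dict) -> bool:
    """Decide whether a post gets converted to audio."""
    return post["karma"] >= THRESHOLDS.get(post["forum"], float("inf"))

assert should_convert({"forum": "EA Forum", "karma": 30})
assert not should_convert({"forum": "LessWrong", "karma": 12})
```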

Comments

Hi Kat,

You said that people could submit a form in order to make sure their work was not uploaded. I said that wasn't good enough, but I submitted the form anyways.

My work has been uploaded without my permission anyways.

What the fuck?

Oh, I'm so sorry. Where is it? It probably was a mishap with our data entry person. We'll remove it ASAP.

All of the posts by khorton that have been uploaded to Spotify...

They should be removed now. It might take a while to update on all the platforms. We could only find ones on the EA Forum - let us know if you posted anything on the other forums.

I'd like to be super clear here that, while I genuinely appreciate you responding within minutes of me complaining, I'm still pretty upset that this happened in the first place. If there's any way you can reassure me that this won't happen again, I'd appreciate it. (Options I could take from my end are setting up a Google alert for my screenname being posted on your site and/or deleting my Forum posts, but I really don't want to!)

Don't feel pressure to respond to this immediately, feel free to take a few days and talk to the team about what might work.

Hey, overall, the object level issue seems not good, but not terrible. 

So a person's posts were distributed on an internet podcast:

  • In terms of legal or outside norms, it's unclear users have a right to prevent this podcasting. It may not be illegal to do this, and there may not be an expectation of control/privacy.
  • But certainly, legality isn't the bar here, and the dignity and choices of EA forum users is a thing.
  • There were promises made by an "official EA org" with an opt-out form[1].
     

In totality, having someone's choices then contravened doesn't seem good.

  1. ^

    By the way, it's worth noting that restricting posting isn't that hard to implement. Like, the code here is just a filter on username/user_id - about 3 lines of code (see the sketch below) - and the result is easy to test.

    Also, my guess is that the number of users who submitted the privacy request form is <5, so that seems like a manageable update. At that size, you can pretty much just hard-code usernames as a kludge, with even fewer SWE-type concerns.

    (I'm confused by the "channels" explanation here, but that's not my business, the point is there is at least one easy technical way of doing this, putting a pretty low floor of difficulty.)
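    To illustrate the kind of filter being described - a sketch, with hypothetical names:

```python
# Hypothetical opt-out set, populated from the privacy request form.
OPTED_OUT = {"example_user_1", "example_user_2"}

def eligible(posts):
    """Drop posts by opted-out authors before building any channel."""
    return [p for p in posts if p["author_id"] not in OPTED_OUT]
```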

The main issue and the reason why I'm commenting is that I'm concerned about the voting patterns. 

I'm not sure why this comment is downvoted:

I'm not sure why the top comment is sitting at +1 and has 5 votes.

I'm not sure why an EA CEO has strong upvoted themselves in a thread involving mis/inaction of their org.

 

So, the "smell" of this voting is sort of intense. 

Entirely setting aside this particular event, or the particular people involved, I think it's reasonable to be concerned about setting examples or norms of behavior that involve control over EA institutions. 

 

Like, funding is growing, and there's an incentive for would-be "CEOs" or "EDs" to basically take the "outer product" of the set of cause areas and the set of obvious EA meta institutions, pick an element from the resulting matrix, and instantiate it.

In particular, people might do this because inputs and performance are hard to observe for "CEOs", it's hard to dislodge them once they've started, and in these meta orgs, existence or demand is conflated with the EA brand (allowing failure upwards).

So, in this situation, it's already "quickdraw". 

So, let's not add the feature of having constituencies of these warlords, voting on stuff, that situation is No Bueno.

I think that strong upvoting of your own comments should be disabled. I've noticed that it's quite frequent.

See previous discussion about this.

You have put more thought into this than I have. Also, I self-strong-upvote a lot.

But your suggestion would be counterproductive in some of the scenarios I’ve implied above. The user who had been downvoted has almost 7,000 karma. Removing self-strong upvoting would weaken the “self-defence” of established users while being “mobbed”.

Republishing someone else's work in its entirety, without adding some commentary to argue it's a review, would not be legal, no.

Edit to add sources:

Blog posts are copyrighted materials https://blogging.com/copyright-dmca/

Using copyrighted materials in your podcast https://copyrightalliance.org/how-to-avoid-copyright-infringement-on-podcasts/

Someone downvoted this but afaik it's just a statement of fact. If you think it's inaccurate, please post evidence, or if you think stating this fact is harmful or unhelpful in some way, please say why!

I wasn't the one who downvoted the comment (and it appears to be at +4 karma anyway), but it might be because you were confidently asserting a claim without providing evidence. You previously made this claim six months ago, and Charles supplied some evidence to suggest you were mistaken. Given that you didn't reply the last time someone posted some contradictory evidence, but continued to make essentially the same claim, it is pretty understandable people might not appreciate your demanding evidence again.

(I don't have a strong view on who is correct on the legal question).

Thanks for the reply Larks, that gives a helpful perspective for why people might have downvoted.

The link that was previously provided was about copying message board posts to another message board, as part of a "roundup" of interesting posts, and whether an individual might be sued for that.

It's quite a different context in my view from an organisation re-publishing an entire article that someone's written, when they've expressly asked for it not to be.

I don't think I could win damages from Nonlinear, nor would I seek to, but I do think I would have a very good case for compelling them to take down my posts (which thankfully they've done voluntarily).

This was the link that was previously referenced: https://librarycopyright.net/forum/view/114

Absolutely. We use Asana and we'll just add it to our "Making a new channel" template to check and make sure that we have removed people who've opted out.

We have an automatic rule for the main channel. The problem here was that it was a one-off, static channel, so it wasn't using the same code we usually use.

I'm really sorry that that happened. I think this fix should do it.

Thanks Kat, yes just from the EA Forum

I'm glad this exists.

Feedback:

  • Why no links to the post in the episode notes? If I find a post interesting, I'm basically always going to want to be able to click through and read comments or vote on it, so I need that.
  • I think there shouldn’t be a preamble before reading the title. It kind of destroys the use case of listening to article titles and skipping to decide whether you want to hear a post: it makes that take two or three times as long as it needs to, because it forces the listener to sit through the same intro every time.
    I'd suggest just reading the title right away, mentioning the origin, then saying in a different voice, "This audio reading was produced by the Nonlinear Library." If this has to be here, it's important for it to be as short as possible, and it would probably be better at the end instead of the beginning. In the vast majority of listens it doesn't need to be there at all, given that all of your listeners will either know the origin before they start listening, or will listen to enough that they'll have learned it already.

Thanks for the suggestions!

Yeah, links in the episode notes are the most requested feature. We have them in all of our channels except the static playlists (such as the top of all time lists) and the main channel, for technical reasons. We're working on the main channel, but it might take a bit because it's surprisingly difficult. Kind of reminds me of this comic.

For the intros: at least on Pocket Casts, you can set it to skip the first X seconds, which I recommend.

Any action on this? It still seems to me like the most crucial missing feature, especially because I very often have only a moment in transit on my phone, and I want to either quickly leave a comment or skim through the comments.

I appreciate this service a lot and use it basically every day! So thank you for making it happen 💪🏽 It's awesome to see some alternative options to consume EA content than surfing through the forums and it's even more awesome when that option just requires listening!

Thank you again for creating this service.

In the narration algorithm, can a few commonly used words be trained/fixed? Like “EA” comes out as “eeeyahhh”.

Yes, basically just adjust the lexicon or use an alias, as described in the docs for the AWS backend being used.

Or do a regex replace of “EA” with a phonetic transcription SSML tag, as described here:

https://aws.amazon.com/blogs/machine-learning/customize-pronunciation-using-lexicons-in-amazon-polly/
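For instance, a minimal sketch calling Amazon Polly through boto3 with a phoneme tag (the IPA transcription here is a best guess; adjust as needed):

```python
import boto3

polly = boto3.client("polly")

# Wrap "EA" in a <phoneme> tag so it's read as two letters, not "eeeyahhh".
ssml = (
    "<speak>"
    'Welcome to the <phoneme alphabet="ipa" ph="ˌiːˈeɪ">EA</phoneme> Forum.'
    "</speak>"
)

response = polly.synthesize_speech(
    Text=ssml,
    TextType="ssml",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

with open("episode.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```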

Good work! Wondering if this could be updated to also include the All Time Top (Inflation Adjusted) posts? (reasoning)

Good idea! If we ever do another, we'll definitely use inflation-adjusted rankings.

Suggestion: Can it be set up to also read the top comments, or any comments (or entire comment threads) with more than 40 upvotes? This is particularly relevant for question threads.

I'd love to add that but unfortunately it would be really difficult technically speaking, so we probably won't make it happen.

Kind of cool to see that two of my posts made it into the top EA Forum posts playlist. A small point - the audio says for both of them that they were posted to the AI Alignment Forum which is a mistake. I don’t really care, but thought you might like to know.
