Robert_Wiblin
Now having data for most of October, knowing our release schedule, and being able to see month-by-month engagement, I'd actually forecast that 80k Podcast listening time should grow 15-20% this year (not 5%), for ~300,000 hours of consumption total.

(If you forecast that Q4 2023 will be the same as Q3 2023 then you get 11% growth, and in fact it's going to come in higher.)
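In case it's useful, here's the shape of that calculation. The quarterly figures below are placeholders I've made up for illustration; they're not our actual analytics, they just show how numbers like 11% and 15-20% fall out of the quarterly totals.

```python
# Hypothetical quarterly listening hours (made-up placeholders, not real 80k data).
q_2022 = [63_000, 65_000, 67_000, 70_000]   # 2022 Q1-Q4
q_2023 = [69_000, 72_000, 76_000]           # 2023 Q1-Q3 observed

# Conservative: assume Q4 2023 merely matches Q3 2023.
flat = (sum(q_2023) + q_2023[-1]) / sum(q_2022) - 1

# Less conservative: assume Q4 2023 comes in 20% above Q3.
up = (sum(q_2023) + q_2023[-1] * 1.2) / sum(q_2022) - 1

print(f"Q4 = Q3:     {flat:.0%} growth")  # ~11%
print(f"Q4 up on Q3: {up:.0%} growth")    # mid-to-high teens
```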

That is indeed still a significant reduction from last year when it grew ~40%.

Let me know if you'd like to discuss in more detail!

Basically we're grabbing analytics from Apple Podcasts, Spotify for Podcasters, and Google Podcasts Manager (which internally I call the 'Big 3'), and adding them up.
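(For what it's worth, the 'adding them up' step is nothing fancy. Here's a sketch of the kind of thing involved, assuming you've exported a monthly CSV from each dashboard; the file and column names are invented, and the real export formats differ across the platforms.)

```python
import csv

# Hypothetical monthly exports from the 'Big 3' dashboards. Invented file
# names; each file is assumed to have 'month' and 'hours_listened' columns.
PLATFORM_FILES = [
    "apple_podcasts.csv",
    "spotify_for_podcasters.csv",
    "google_podcasts_manager.csv",
]

def combined_monthly_hours(paths):
    """Sum listening hours per month across all platform exports."""
    totals = {}
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                month = row["month"]
                totals[month] = totals.get(month, 0.0) + float(row["hours_listened"])
    return dict(sorted(totals.items()))

print(combined_monthly_hours(PLATFORM_FILES))
```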

But Spotify and Google Podcasts only became available around/after Nov 2019. Drop me an email if you'd like to discuss! :)

"I'm really not sure what this means and surprised Rob didn't follow up on this."

Just the short time constraint. Sometimes I have to trust the audience to assess for themselves whether or not they find an answer convincing.

Ah OK, I agree it's not that consistent with GiveWell's traditional approach.

I think of high-confidence GiveWell-style giving as just one possible approach one might take in the pursuit of 'effective altruism', and it's one that I personally think is misguided for the sorts of reasons Shruti is pointing to.

High-confidence (e.g. GiveWell) and hits-based giving (e.g. Open Phil, all longtermism) are both large fractions of the EA-inspired portfolio of giving and careers.

So really I should just say that there's nothing like a consensus around whether EA implies going for high-confidence or low-confidence strategies (or something in the middle I guess).

(Incidentally from my interview with Elie I'd say GiveWell is actually now doing some hits-based giving of its own.)

Sorry, in what sense does Shruti say that EA solutions aren't effective in the case of air pollution? Do you mean that the highest 'EV' interventions are likely to be ones with high uncertainty about whether they work or not?

(I don't think of EA as being about achieving high confidence in impact; if anything I'd associate EA with high-risk, hits-based giving.)

Seems like David agrees that once you were spread across many star systems this could reduce existential risk a great deal.

The other line of argument would be that at some point AI advances will either cause extinction or a massive drop in extinction risk.

The literature on a 'singleton' is in part addressing this issue.

Because there's so much uncertainty about all this, it seems overly confident to claim that it's extremely unlikely for extinction risk to drop to near zero within the next 100 or 200 years.

Ah great, glad I got it!

I think I had always assumed that the argument for x-risk relied on the possibility that the annual risk of extinction would eventually either hit or asymptote to zero. If you think of life spreading out across the galaxy and then other galaxies, and then being separated by cosmic expansion, then that makes some sense.

To analyse it in the most simplistic way possible: if you think extinction risk has a 10% chance of permanently going to 0% if we make it through the current period, and a 90% chance of remaining very high even if we do, then extinction reduction takes a 10x hit to its cost-effectiveness from this effect. (At least that's what I had been imagining.)
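Spelled out as arithmetic, that's just:

```python
# Toy model from the paragraph above: expected value of preventing extinction
# now, if the risk afterwards only *might* fall permanently to zero.
v_safe_future = 1.0      # value of the long future if risk goes to 0 (normalised)
v_risky_future = 0.0     # approximate value if risk stays very high
p_goes_to_zero = 0.10    # 10% chance risk permanently drops to 0

ev = p_goes_to_zero * v_safe_future + (1 - p_goes_to_zero) * v_risky_future
print(ev)  # 0.1, i.e. a 10x hit relative to the case where risk surely vanishes
```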

I recall there's an Appendix to The Precipice where Ord talks about this sort of thing. At least I remember that he covers the issue that it's ambiguous whether a high or low level of risk today makes the strongest case for working to reduce extinction being cost-effective. That's because, as I think you're pointing out above, while a low risk today makes it harder to reduce the probability of extinction by a given absolute amount, it simultaneously implies we're more likely to make it through future periods if we survive this one, raising the value of survival now.
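To make that ambiguity concrete, here's a toy calculation under simplifying assumptions of my own (a constant annual risk r that persists indefinitely, and an intervention that halves this year's risk only):

```python
# Value of halving *this year's* extinction risk, measured in expected future
# years gained, when the same annual risk r persists in every later year.
def value_of_halving(r: float) -> float:
    risk_averted = r / 2    # absolute reduction in this year's extinction probability
    future_years = 1 / r    # expected future years under a persistent annual risk r
    return risk_averted * future_years

for r in [0.20, 0.05, 0.01]:
    print(f"annual risk {r:.0%}: ~{value_of_halving(r):.2f} expected years gained")

# The two effects exactly cancel in this toy model: a lower r means a smaller
# absolute reduction but a longer expected future, so the gain is 0.5 expected
# years regardless of the risk level.
```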

I'm not much at maths so I found this hard to follow.

Is the basic thrust that reducing the chance of extinction this year isn't so valuable if there remains a risk of extinction (or catastrophe) in future because in that case we'll probably just go extinct (or die young) later anyway?
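(A quick numerical illustration of why that would follow, assuming a constant annual risk r that never falls:)

```python
# If an annual extinction risk r persists forever, the chance of surviving
# another n years is (1 - r)**n, so saving this year mostly delays extinction.
for r in [0.01, 0.001]:
    for n in [100, 1_000, 10_000]:
        print(f"annual risk {r:.1%}: P(survive {n:>6} more years) = {(1 - r) ** n:.2e}")
```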

I'm sympathetic to this, but so people don't think this will be trivial, note that the 80k Podcast did produce some video episodes, plus 60 extracts that went out on YouTube, Twitter, and I think some other places. They got only a middling level of engagement, and it didn't go up much over time.

Some nearby podcasts have made video episodes and had a lot of success (e.g. The Lunar Society), while for others video doesn't seem to have become a major way people consume the content (e.g. FLI, EconTalk).

So whether this is a high priority seems to depend on whether you can succeed at the content marketing aspect.

Yes, sorry, I didn't mean to 'explain away' any large shift (if it occurred); the anti- side may just have been more persuasive here.
