Jonathan Rystrom

Comments

Great initiative! Happy to see more collaboration within Europe :))

Thanks for your kind words, Esben! If anything comes out of this post, I agree that it should be a renewed focus on better framings - though James does raise some excellent points about the cost-effectiveness of this approach :))

Thank you for your excellent points, James! Before responding to them in turn: I do agree that a significant part of the appeal of my proposal is that it makes things nicer for EAs. Whether that is worth investing in is not clear to me either - there are definitely more cost-effective ways of achieving that. Now to your points:

  • I think I define democratic legitimacy slightly differently. Rather than viewing it as putting pressure on politicians because they know that everyone cares about the long term, I see it as moving long-term policies into the Overton window, so to speak, by making them legitimate. Thus, it acts as a multiplier for EA policy work.
  • Regarding the talent pool, I think it depends on how tractable it is to "predict" the impact of a given individual. My guess is that mass appeal works better the harder it is to predict a priori the impact of a given person or group - then it becomes more of a numbers game: getting as many interested people as possible thinking about these issues. I am quite uncertain about whether this is the case, and I imagine there are many other constraints (e.g. the hiring capacity of EA orgs).
  • I fully agree that this is more of a "nice to have" than a huge value proposition. I'd never heard of the 14 words, but I do agree that the similarity is unfortunate. The slogan was also meant more as an illustration than as a fully fledged proposal - luckily, it facilitates discussions like these!

I totally agree that it serves more as an internal strategic shorthand than as part of external communication. Ideally, no one outside core EA would need to know what "low-key longtermism" even refers to.

Super interesting stuff so far! Quite a few of the worries (particularly in "Unclear definitions and constrained research thinking" and "clarity") seem to stem from AI safety currently being a pre-paradigmatic field. This might suggest that it would be particularly impactful to explore more than to exploit (though this depends on just how aggressive one's timelines are). It might also suggest that a more positive "let's try out this funky idea and see where it leads" culture would be worth pursuing (to a greater degree than it is currently). All in all, very nice to see pain points fleshed out in this way!

(Disclaimer: I work for Apart Research with Esben, so please adjust for that excitement in your own assessment :))