AABoyles

22 karma · Joined Dec 2014

Posts
1


Comments
11

I recently experienced a jarring update to my beliefs about Transformative AI. Basically, I thought we had more time (decades) than I now believe we do (years) before TAI causes an existential catastrophe. This has had an interesting effect on my sensibilities about cause prioritization. While I applaud wealthy donors directing funds to AI-related Existential Risk mitigation, I don't assign a high probability to the success of any of their funded projects. Moreover, it appears to me that there is essentially no room for additional funding at the scale a non-wealthy donor (e.g. me) can provide.

I used to value traditional public health goals quite highly (e.g. I would direct donations to AMF). However, given that most of the returns on bed net distribution lie in a future beyond my current beliefs about TAI, this now seems to me like a bad moral investment. Instead, I'm much more interested in projects which can rapidly improve hedonic well-being (i.e. cause the greatest possible welfare boost in the near term). In other words, the probability of an existential AI catastrophe has caused me to develop neartermist sympathies. I can't find much about other EAs considering this, and I have only begun thinking about it, but as a first pass, GiveDirectly appears to serve this neartermist hedonic goal somewhat more directly.

The mortality rate is the proportion of infections that *ultimately* result in death. If we had really good data (we don't), we could get a better estimate by pitting fatalities against *recoveries*. Since we aren't tracking recoveries well, if we attempt to compute mortality rates right now (as infections are increasing exponentially), we will badly underestimate the actual mortality rate.
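To make the point concrete, here is a minimal sketch with purely hypothetical numbers (not real epidemic data): dividing deaths by all confirmed cases mid-outbreak counts many infections whose outcomes are still unknown, whereas dividing by resolved cases (deaths plus recoveries) does not, so the naive figure comes out much lower.

```python
# Hypothetical mid-outbreak snapshot (illustrative numbers only, not real data).
deaths = 100
recoveries = 400
confirmed_cases = 10_000  # most of these infections have not yet resolved

# Naive case-fatality rate: deaths over all confirmed cases.
naive_cfr = deaths / confirmed_cases            # 0.01 -> 1%

# Resolved-case fatality rate: deaths over cases with a known outcome.
resolved_cfr = deaths / (deaths + recoveries)   # 0.20 -> 20%

print(f"naive CFR:    {naive_cfr:.1%}")
print(f"resolved CFR: {resolved_cfr:.1%}")
```

The gap between the two figures is exactly the underestimate described above; it shrinks only once the outbreak slows and most cases have resolved.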

Totally agree about data collection. Seems like a good candidate for an approval vote. After a five-minute search, I couldn't find a good approval-voting platform, but then I realized that basically all polls on DEAM work this way (i.e. Facebook supports this). Maybe this is something we could post in the EA Facebook group? @Peter_Hurford?

Related: What is your estimate of the field's room-for-funding for the next few years?

GiveWell's Holden Karnofsky assessed the Singularity Institute in 2012 and provided a thoughtful, extensive critique of its mission and approach, which remains tied for the top post on LessWrong. It seems the EA meta-charity evaluators are still hesitant to name AI Safety (and, more broadly, Existential Risk reduction) as a potentially effective target for donations. What are you doing to change that?

Mr. Musk has personally donated $10 million via the Future of Life Institute towards a variety of AI safety projects. Additionally, MIRI is currently engaged in its annual fundraising drive with ambitious stretch goals, which include the hiring of several (and potentially many) additional researchers.

With this in mind, is the bottleneck to progress in AI Safety research the availability of funding or of researchers? Stated differently, if a technically competent person assesses AI Safety to be the most effective cause, which approach is more effective: earning-to-give to MIRI or FLI, or becoming an AI Safety researcher?

The Boston Review held a Forum on Effective Altruism with some excellent criticism by academic, non-EAs.

Also, props for compiling it in LaTeX. The typesetting is beautiful. :)

Honestly, no. It covers the high points of the movement with excellent pacing. The essays are concise, readable, and interesting. There's no superfluous content. It's great all around.

Excellent work! I've just finished it and posted it on GoodReads.
