ClaireZabel comments on My Cause Selection: Michael Dickens - Effective Altruism Forum

Comment author: ClaireZabel 17 September 2015 09:15:06PM 6 points

These comments are copied from some of the original ones I made when reviewing Michael's post. My views are my own, not GiveWell's.

I have never seen any strong reason to believe that anything we do now will affect far future values–if the case for organizations reducing global catastrophic risks is tenuous, then the case for values spreading is no better.

I think the case for values spreading is quite a bit better. Reducing global catastrophic risks is pretty bimodal: either the catastrophe happens or it doesn't. You can sometimes try to measure the risk being reduced, but doing so isn't straightforward or obvious, and it isn't something we have much experience in.

We have lots of experience tracking value change. We can see it happen in incremental steps in the near future. You don't need special tools or access to confidential information to run a decent poll on changing values.

The strongest objection to this, I think, is that values changing in the short term won't necessarily affect the long-term trajectory of our values, or at least not in a predictable way. In contrast, preventing an x-risk in the short term at least allows for the possibility of doing stuff in the far future (and it seems plausible that GCRs might also change far-future trajectory).

Another consideration is that values may become vastly more or less mutable if we develop technology that allows for certain types of self-modification, or an AI that enforces the values programmed into it. Depending on how you expect this to happen, you might consider spreading good values before those technologies develop vastly more or less important, precisely because doing so increases the likelihood of those values affecting the far future.

I do not see reason to believe that some GCR other than AI risk is substantially more important; no GCR looks much more likely than AI risk, and right now it looks much easier to efficiently support efforts to improve AI safety than to support work on other major GCRs.

I think a lot of GCRs could be more tractable than AI risk (possibly by a large margin) if someone went through the work of identifying more opportunities to fund risk reduction for those GCRs, then made it available to small donors.

Comment author: MichaelDickens (EA Profile) 17 September 2015 09:20:54PM 4 points

I think a lot of GCRs could be more tractable than AI risk (possibly by a large margin) if someone went through the work of identifying more opportunities to fund risk reduction for those GCRs, then made it available to small donors.

This is definitely an important point. If someone did identify opportunities like this, that's one of the most likely reasons I might change where I donate. Right now it doesn't look like any GCR is substantially more important, tractable, or neglected than AI risk (biosecurity is probably a bigger risk, but not by a huge margin; geoengineering might be more tractable, but not for small donors), but this could change in the future.