
rjmk

11 karma · Joined Nov 2017

Comments (11)

Thank you for this excellent post: I began by pulling out quotes that I wanted to reflect on further, but ended up copying most of the post paragraph by paragraph.

I'm still not sure how to get the most value out of the information that has been shared here.

Three ideas:

  1. Sample some summarised papers to (a) get a better idea of what AI safety work looks like and/or (b) build a model of where I might disagree with the post's evaluation of impact

  2. Generate alternatives to the model discussed in the introduction (for example, general public outreach being positive EV), see how that changes the outcome, and then consider which models are most likely

  3. Use as a reading list to prepare for technical work in AI safety

On the thin markets problem, there has been some prior work (after some googling I found https://mason.gmu.edu/~rhanson/mktscore.pdf, but I recall reading a paper with a less scary title).

In the office case, an obvious downside to incentivising the market is that one may divert labour away from normal work, so it may still be that non-market solutions are superior.

Thanks for the response. I understand OPP doesn't control the visa process, but do you have a rough sense of how likely a successful applicant would be to get a visa after being sponsored, or is it a complete unknown?

Thanks for the work on this. It seems very valuable: I agree that they seem like an awesome idea and that an individual donor should be able to improve their impact easily with a fund. Unless, that is, issues like the ones you highlight eat all the gain.

I imagine the data wasn't available, but I thought I'd check: was there any more granular information on the funding history than just the percentage of total donations that remains unallocated? That would seem to make a big difference: the more skewed towards the recent past donations are, the less of a problem discount rates would seem to be.

Thanks Carl, this looks great. By

"just get in touch with CEA if you need a chance at a larger pot"

do you mean (a) get in touch with CEA if you need a chance at a larger pot than the current lotteries offer or (b) get in touch with CEA if you need a chance at a larger pot by entering a lottery (as there currently aren't any)?

Thanks Alex! Those sound like useful heuristics, though I'd love to see some experience reports (perhaps I ought to generate them).

I would be interested! I'll reach out via private message.

This post is excellent. I find the historical work particularly useful, both as a collation of timelines and for the conclusions you tease out of it.

Considering the high quality and usefulness of this post, it is churlish to ask for more, but I'll do so anyway.

Have you given any thought to how donors might identify funding opportunities in the AI safety space? OpenPhil have written about how they found many more giving opportunities after committing to give, but it may be difficult to shop around a more modest personal giving budget.

A fallback here could be the far future EA fund, but I would be keen to hear other ideas.

This seems like a really powerful tool to have in one's cognitive toolbox when considering allocating EA resources. I have two questions on evaluating concrete opportunities.

First, if I can state what I take to be the idea (if I have this wrong, then probably both of my questions rest on a misunderstanding): we can move resources from lower-need situations (i.e. the problem continues as default or improves) to higher-need situations (i.e. the problem gets worse) by investing in instruments that will be doing well if the problem is getting worse (which, because of efficient markets, is balanced by the expectation that they will be doing poorly if the problem is improving).
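To check my understanding, here is a minimal toy sketch in Python (the probabilities, returns and impact figures are all numbers I have made up for illustration, not anything from the post): two portfolios have the same expected dollar return, but the one whose payoff is correlated with the problem worsening yields higher expected impact, because a dollar is worth more in the high-need world.

```python
# Toy illustration of mission hedging (my own made-up numbers, not the post's).

p_worse = 0.5            # assumed probability the problem gets worse
value_per_dollar = {     # assumed marginal value of a donated dollar in each world
    "worse": 2.0,        # higher need, so each dollar does more good
    "better": 1.0,
}

# Both portfolios have the same expected dollar return (1.05)...
uncorrelated = {"worse": 1.05, "better": 1.05}
hedged = {"worse": 1.25, "better": 0.85}   # ...but this one pays off when things get worse

def expected_impact(portfolio):
    return (p_worse * portfolio["worse"] * value_per_dollar["worse"]
            + (1 - p_worse) * portfolio["better"] * value_per_dollar["better"])

print(expected_impact(uncorrelated))  # 0.5*1.05*2 + 0.5*1.05*1 = 1.575
print(expected_impact(hedged))        # 0.5*1.25*2 + 0.5*0.85*1 = 1.675
```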

You mention the possibility that for some causes, the dynamics of the cause's progression might mean hedging fails (like fast-takeoff AI). Is another possible issue that some problems might unlock more funding as they get worse? For example, dramatic results of climate change might increase funding to fight it sufficiently early. While the possibility of this happening could just be taken to undermine the seriousness of the cause ("we will sort it out when it gets bad enough"), if different worsenings unlock different amounts of funding for the same badness, the cause could still be important. So should we focus on instruments that get more valuable when the problem gets worse AND the funding doesn't get better?

My other question was on retirement saving. When pursuing earning-to-give, doesn't it make more sense just to pursue straight expected value? If you think situations in which you don't have a job will be particularly bad, you should just be hedging those situations anyway. Couldn't you just try to make the most expected money, possibly storing some for later high-value interventions that become available?

Thank you for sharing this research! I will consider it when making investment decisions.

Not falling prey to the sunk cost fallacy, I would switch to the higher impact project and start afresh.

I have often fallen prey to over-negating the sunk cost fallacy. That is, if the sunk cost fallacy is acting as if pursuing the purchased option refunds the cost you already paid, I might end up acting as if I had to pay that cost again to pursue the option.

For example, if you have already bought theatre tickets but now realise you're not much more excited about the play than the pub, you should still go to the play, because that small increase in value is now available for free!

I don't think this post is only pointing at problems of the sort above, but it's a useful thing to double-check when re-evaluating projects.

It would also be useful to build an intuition for how projects are distributed in terms of return on one's own effort. That way you can also estimate the value of information to weigh against search costs, as in the sketch below.
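As a rough illustration of what I mean, here is a toy sketch (the log-normal return distribution, the search cost, and the current-best figure are all assumptions of mine, not anything from the post): estimate the expected improvement from evaluating one more project and compare it to the cost of that extra search.

```python
# Toy value-of-information sketch (my own assumed numbers and distribution).
import random

random.seed(0)

def expected_improvement(current_best, n_samples=100_000):
    """Monte Carlo estimate of E[max(X - current_best, 0)] under an
    assumed log-normal distribution of project returns."""
    total = 0.0
    for _ in range(n_samples):
        draw = random.lognormvariate(0.0, 1.0)   # assumed return distribution
        total += max(draw - current_best, 0.0)
    return total / n_samples

search_cost = 0.3      # assumed cost (same units as returns) of vetting one more project
current_best = 2.0     # assumed return of the best project found so far

gain = expected_improvement(current_best)
print(f"expected gain from one more search: {gain:.2f}")
print("keep searching" if gain > search_cost else "stop and fund the best so far")
```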
