Comment author: rjmk 25 May 2018 06:22:45PM 0 points [-]

On the thin markets problem, there's been some prior work (some googling turned up https://mason.gmu.edu/~rhanson/mktscore.pdf, but I recall reading a paper with a less scary title).

In the office case, an obvious downside to incentivising the market is that it may divert labour away from normal work, so non-market solutions may still be superior.

Comment author: HoldenKarnofsky 26 March 2018 06:36:35PM 2 points [-]

We don't control the visa process and can't ensure that people will get sponsorship. We don't expect sponsorship requirements to be a major factor for us in deciding which applicants to move forward with.

Comment author: rjmk 02 May 2018 08:44:12PM 1 point [-]

Thanks for the response. I understand OPP doesn't control the visa process, but do you have a rough sense of how likely a successful applicant would be to get a visa after being sponsored, or is it a complete unknown?

Comment author: rjmk 04 April 2018 05:56:32PM 2 points [-]

Thanks for the work on this. It seems very valuable: I agree that funds seem to be an awesome idea, and that an individual donor should be able to improve their impact easily with one. Unless, that is, issues like the ones you highlight eat all the gain.

I imagine the data wasn't available, but I thought I'd check: was there any more granular information on the funding history than just the percentage of total donations that remains unallocated? That would seem to make a big difference: the more skewed towards the recent past the donations are, the less of a problem discount rates would seem to be.

Comment author: Carl_Shulman 01 April 2018 02:36:14AM 1 point [-]

If you find your opportunities are being constrained by small donation size, you can use donor lotteries to trade your donation for a small chance of a large budget (just get in touch with CEA if you need a chance at a larger pot). You may also be interested in a post I made on this subject.
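The expected-value arithmetic behind a donor lottery can be sketched with a toy example (all numbers below are hypothetical, not from the comment):

```python
# Illustrative sketch of why a donor lottery is expected-value neutral.
# The pot size and donation amount are made-up numbers.

donation = 1_000           # your contribution to the lottery pot
pot = 100_000              # total lottery pot
win_prob = donation / pot  # chance you choose how the whole pot is spent

# The expected amount of money you direct is unchanged by entering:
expected_directed = win_prob * pot
assert expected_directed == donation

# The benefit: with small probability you control a budget large enough
# to justify serious research into where to give, instead of always
# directing a small amount with little research.
```

The point of the sketch is that the lottery changes the distribution of outcomes, not the expectation, while concentrating the research effort on the winner.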

Comment author: rjmk 01 April 2018 04:25:58PM 0 points [-]

Thanks Carl, this looks great. By

just get in touch with CEA if you need a chance at a larger pot

do you mean (a) get in touch with CEA if you need a chance at a larger pot than the current lotteries offer or (b) get in touch with CEA if you need a chance at a larger pot by entering a lottery (as there currently aren't any)?

Comment author: alexflint 31 March 2018 06:35:22PM 1 point [-]

Thank you!

In terms of finding opportunities, I don't have a complete framework but I do have some rough heuristics: (1) look for opportunities that the large donors can't find, are too small for them to act on, or for some other reason fail to execute on (2) follow the example of angel investors in the tech community by identifying a funding thesis and then reaching out through personal networks to find people to fund at the very early stage of starting projects/organizations.

In terms of the historical work, I'm considering organizing a much deeper investigation into the history of these organizations. If you or anyone else is interested in working full time / part time on this, do let me know!

Comment author: rjmk 31 March 2018 11:21:55PM 0 points [-]

Thanks Alex! Those sound like useful heuristics, though I'd love to see some experience reports (perhaps I ought to generate them).

I would be interested! I'll reach out via private message.

Comment author: Taymon 31 March 2018 07:47:35PM 0 points [-]
Comment author: rjmk 31 March 2018 09:04:55PM 0 points [-]

That link's broken for me (404)

Comment author: rjmk 30 March 2018 12:58:58PM 1 point [-]

This post is excellent. I find the historical work particularly useful, both as a collation of timelines and for the conclusions you tease out of it.

Considering the high quality and usefulness of this post, it is churlish to ask for more, but I'll do so anyway.

Have you given any thought to how donors might identify funding opportunities in the AI safety space? OpenPhil have written about how they found many more giving opportunities after committing to give, but it may be difficult to shop around a more modest personal giving budget.

A fallback here could be the far future EA fund, but I would be keen to hear other ideas.

Comment author: rjmk 30 March 2018 12:28:31AM 0 points [-]

This seems like a really powerful tool to have in one's cognitive toolbox when considering allocating EA resources. I have two questions on evaluating concrete opportunities.

First, let me state what I take to be the idea (if I have this wrong, then both of my questions are probably based on a misunderstanding): we can move resources from lower-need situations (i.e. the problem continues as default or improves) to higher-need situations (i.e. the problem gets worse) by investing in instruments that will do well if the problem gets worse (which, because of efficient markets, is balanced by the expectation that they will do poorly if the problem improves).

You mention the possibility that for some causes, the dynamics of the cause progression might mean hedging fails (like fast takeoff AI). Is another possible issue that some problems might unlock more funding as they get worse? For example, dramatic results of climate change might increase funding to fight it sufficiently early. While the possibility of this happening could be taken to undermine the seriousness of the cause ("we will sort it out when it gets bad enough"), if different worsenings unlock different amounts of funding for the same badness, the cause could still be important. So should we focus on instruments that get more valuable when the problem gets worse AND the funding doesn't get better?
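The hedging idea in the paragraphs above can be made concrete with a toy two-state example (all numbers are hypothetical, chosen only to illustrate the mechanism):

```python
# Toy two-state sketch of mission hedging, with made-up numbers.
# The hedged instrument pays more in the state where the problem worsens,
# which is exactly where each dollar buys more impact.

states = {
    "problem_worsens":  {"prob": 0.5, "payoff": 150, "value_per_dollar": 2.0},
    "problem_improves": {"prob": 0.5, "payoff": 50,  "value_per_dollar": 1.0},
}
flat_payoff = 100  # an uncorrelated investment with the same expected payoff

hedged_value = sum(s["prob"] * s["payoff"] * s["value_per_dollar"]
                   for s in states.values())
flat_value = sum(s["prob"] * flat_payoff * s["value_per_dollar"]
                 for s in states.values())

# Expected dollars are identical (both 100), but expected impact differs,
# because the hedge lands money where its marginal value is higher.
assert hedged_value > flat_value
```

The funding question in the paragraph above would correspond to the "value_per_dollar" numbers converging as other funders step in, shrinking the gap the hedge exploits.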

My other question was on retirement saving. When pursuing earning-to-give, doesn't it make more sense just to pursue straight expected value? If you think situations in which you don't have a job will be particularly bad, you should just be hedging those situations anyway. Couldn't you just try to make the most expected money, possibly storing some for later high-value interventions that become available?

Thank you for sharing this research! I will consider it when making investment decisions.

Comment author: rjmk 28 March 2018 03:32:21PM 4 points [-]

Not falling prey to sunk cost fallacy, I would switch to the higher impact project and start afresh.

I have often fallen prey to over-negating the sunk cost fallacy. That is, if the sunk cost fallacy is acting as if you get paid costs back by pursuing the purchased option, I might end up acting as if I had to pay the cost again to pursue the option.

That is, if you already bought theatre tickets, but now realise you're not much more excited about going to the play than to the pub, you should still go to the play, because the small increase in expected value is available for free now!
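The theatre-ticket reasoning can be made concrete with hypothetical numbers:

```python
# Hypothetical numbers for the theatre-ticket example above.
ticket_cost = 30   # already spent; sunk whichever option you pick
value_play = 55    # enjoyment value of going to the play
value_pub = 50     # enjoyment value of going to the pub
pub_cost = 10      # the pub visit still costs money

# Correct comparison: ignore the sunk ticket cost entirely.
net_play = value_play           # the ticket is already paid for
net_pub = value_pub - pub_cost

# Over-negating the fallacy means acting as if the ticket must be
# bought again, re-charging a cost that is already sunk:
wrong_net_play = value_play - ticket_cost

assert net_play > net_pub        # go to the play
assert wrong_net_play < net_pub  # the mistaken framing flips the choice
```

With these numbers the correct framing and the over-negated framing recommend different choices, which is exactly the error the comment describes.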

I don't think that this post is only pointing at problems of the sort above, but it's useful to double-check when re-evaluating projects.

It would also be useful to build an intuition for what the distribution of projects looks like in terms of return on one's own effort. That way you can also estimate the value of information to weigh against search costs.

Comment author: rjmk 17 November 2017 09:54:32AM *  1 point [-]

Thanks for this post! I think it will make me more comfortable discussing EA in my extended friendship circle.

Which frames work best probably depends on who you're talking to*, but I think the two on global inequality are likely to be useful for me (and are most similar to how I currently approach it).

I particularly like how they begin with explicitly granting the virtue and importance of the more local action. Firstly, it's true, and secondly, when I've seen people change cause focus it's normally been because of arguments that go "helping this person is good, but the reasons for helping this person apply EVEN MORE here".

Remembering to explicitly say that I think the local cause is important and moral is the behaviour change I'll take away from this.

* For example, with most people I meet, I can normally take moral cosmopolitanism for granted
