Comment author: Telofy  (EA Profile) 31 December 2016 05:42:16PM 2 points [-]

Thank you for all the interesting thoughts! Though the general thesis confirmed my prior on the topic, there were many insightful nuggets in it that I need to remember.

One question though. Either I’m parsing this sentence wrong, the “in expectation” is not meant to be there, or it’s supposed to be something along the lines of “per time investment”:

In light of the availability of donor lotteries the rest of this post will be assuming that large donation sizes and time investments are accessible for small donors in expectation.

Comment author: Owen_Cotton-Barratt 31 December 2016 05:50:49PM 2 points [-]

I think "in expectation" is meant to mean that they can access a probability of having large donation size and time investment. You might say "stochastically".

Comment author: Owen_Cotton-Barratt 31 December 2016 05:18:28PM 5 points [-]

Thanks for such a thorough exploration of the advantages of scaling up, and why small donors may be able to beat larger institutions at the margin. I'd previously (prior to the other thread) thought that there was typically not that much to gain (or lose) from entering a lottery, but I'm now persuaded that it's probably a good idea for many small donors.

I still see a few reasons one might prefer not to commit to a lottery:

1) If you see significant benefit to the community of more people seriously thinking through donation decisions, you might prefer to reserve enough of your donation to be allocated personally that you will take the process seriously (even if you give something else to a donor lottery). Jacob Steinhardt discusses this in his recent donation post. I'm sympathetic to this for people who actively want to take some time to think through where to give (but I don't think that's everyone).

2) If you prefer giving now over giving later, you may wish to make commitments about future donations to help charities scale up faster. This is much harder to do with donor lotteries. If you trusted the other lottery entrants enough, you could all commit to donating to the lottery in future years, with next year's allocation of funds randomised today so that commitments about it could be made now; but that's a much higher bar of trust than the current lottery requires. Alternatively, you could borrow money to donate more (via the lottery) today. If you think there are significant advantages both to the lottery and to giving earlier, this strategy might be correct, even if borrowing to give to a particular charity is often beaten by making commitments about future donations. But if you think you're only getting a small edge from entering the lottery, that edge might be smaller than the benefit of being able to make commitments, and so not worthwhile.

3) If you think you might be in a good position to recognise small giving opportunities which are clearly above the bar for the community as a whole to fund, it could make sense for you to reserve some funds to let you fill these gaps in a low-friction manner. I think this is most likely to be the case for people who are doing direct work in high-priority areas. Taking such opportunities directly can avoid having to pull the attention of large or medium-sized funders. This is similar to the approach of delegating to another small donor, where the small donor is future-you.

Comment author: Mac- 31 December 2016 02:57:00PM 1 point [-]

I think this is a very good idea. Unfortunately, I don't really know any of you, and I don't think it's worth the time to thoroughly research your reputations and characters, so I'm not going to contribute.

However, I would be interested in a registered charitable organization whose sole purpose is to run a donation lottery annually. In fact, I would donate to the operations of such a charity if the necessary safeguards and/or reputation were in place. Seems like an easy "bolt-on" project for GiveWell, no?

If anyone else would like to see a permanent donor lottery from GiveWell, let me know how much you're willing to contribute to start it (via private message if you prefer). I'll total the amounts in a few weeks and present the result to GiveWell. Maybe it will pique their interest.

Comment author: Owen_Cotton-Barratt 31 December 2016 04:32:30PM *  0 points [-]

This seems like a reasonable concern, and longer term building good institutions for donor lotteries seems valuable.

However, I suspect there may be more overheads (and possible legal complications) associated with trying to run it as part of an existing charity. In the immediate term, I wonder if there are enough people who you do trust who might give character references which would work for this? (You implied trust in GiveWell, and I believe Paul and Carl are fairly well known to several GiveWell staff. On the other hand, you might think that the institutional reputation of GiveWell is more valuable than the individual reputations of people who work there, and so be more inclined to trust a project it backs, not because you know more about it, but because it has more at stake.)

Comment author: carneades 23 December 2016 03:45:51PM -1 points [-]

I agree, and I would go even farther: I would claim that AMF should not simply be ranked below the top spot, but that it in fact does more harm than good. I live and work in international development in West Africa. Bednet distributions consistently provide short-term benefits, which organizations diligently document, and cause long-term economic and social harm, which they conveniently ignore, because if they did not, they would be out of a job. They save lives at the expense of economic growth, freedom to choose, and community independence. Here's the full argument: Stop Giving Well.

Comment author: Owen_Cotton-Barratt 23 December 2016 07:12:19PM 2 points [-]

Is there a written version of this anywhere? I'm interested in the content of the argument, but I don't like video.

Comment author: AGB 22 December 2016 06:18:54PM 0 points [-]

I think the arguments in favor of meta are intuitive, but not easy to find. For one thing, the orgs' posts tend to be org-specific (unsurprisingly) rather than a general defense of meta work. In fact, to the best of my knowledge the best general arguments have never been made on the forum at the top level, because it's sort-of-assumed that everybody knows them. So while you're saying Peter's post is the only such post you could find, that's still more than the reverse (and with your post, it's now 2-0).

At the comment level it's easy to find plenty of examples of people making anti-meta arguments.

Comment author: Owen_Cotton-Barratt 23 December 2016 10:55:26AM 0 points [-]

I think it's not quite what you're looking for, but I wrote How valuable is movement growth?, which is an article analysing the long-term counterfactual impact of different types of short-term movement growth effects. (It doesn't properly speak to the empirical question of how short-term effort into meta work translates into short-term movement growth effects.)

Comment author: Peter_Hurford  (EA Profile) 18 December 2016 08:16:15PM 0 points [-]

Let me know if you think the model is better and I can update the post.

re (1), that is true because Guesstimate uses a Monte Carlo method with (I think) 5K samples.

re (2), I don't know how to read the sensitivity outputs well, but nothing looks weird to me. Could you explain?

Comment author: Owen_Cotton-Barratt 19 December 2016 11:03:01AM 0 points [-]

I think this has removed the pathology. There's still more variation in this number, but that comes from greater uncertainty about the amount of senior staff time needed. If the decision-relevant question under consideration is "how many of these could we do sequentially?" then it is appropriate to weight this uncertainty in this way.

Comment author: Peter_Hurford  (EA Profile) 18 December 2016 05:45:32PM 1 point [-]

you include scenarios with negative senior staff time(!)

That part is now fixed, but it doesn't look like it contributed meaningfully to the end calculation.

Comment author: Owen_Cotton-Barratt 18 December 2016 06:59:21PM *  1 point [-]

This doesn't look fixed to me (possibly I'm seeing an older cached version?). I no longer see negative numbers in the summary statistics, but you're still dividing by things involving normal distributions -- these have a small chance of being extremely small or even negative. That in turn means that the expectation of the eventual distribution is undefined.

Empirically I think this is happening, because: (i) the sampling seems unstable -- refreshing the page a few times gives me quite different answers each time; (ii) the "sensitivity" tool in Guesstimate suggests something funny is going on there (but I'm not sure exactly how this diagnostic tool works, so take with some salt).

To avoid this, I'd change all of the normal distributions that you may end up dividing by to log-normals.
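As a minimal illustration (not the actual Guesstimate model; the distributions and parameters below are placeholders), dividing by a normal distribution produces an unstable sample mean, while dividing by a log-normal does not:

```python
import numpy as np

rng = np.random.default_rng()
N = 5_000  # roughly the number of Monte Carlo samples Guesstimate uses

# Placeholder distributions, not the actual model: some measure of
# value produced, divided by senior staff time consumed.
value = rng.normal(100, 20, N)

# Normal denominator: it has probability mass near (and below) zero,
# so occasional samples explode and the sample mean is unstable --
# rerunning this gives wildly different answers, and the true
# expectation of the ratio is undefined.
staff_time_normal = rng.normal(2, 1, N)
print((value / staff_time_normal).mean())

# Log-normal denominator: strictly positive, so the ratio's sample
# mean settles down and stays stable across reruns.
staff_time_lognormal = rng.lognormal(mean=np.log(2), sigma=0.5, size=N)
print((value / staff_time_lognormal).mean())
```

Because the normal denominator puts some mass near and below zero, a handful of samples dominate the mean, which would explain why refreshing the page gives quite different answers each time.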

Comment author: Owen_Cotton-Barratt 18 December 2016 03:05:40PM 2 points [-]

Can you explain why attributing all impact to senior staff increases the width of the confidence interval (in log space)? I'd naively expect this to remove a source of uncertainty.

I had a quick look at the Guesstimate model, and what I think is going on is that you just have much wider error bars over how much senior staff time will be taken; but you include scenarios with negative senior staff time(!), which may contribute significantly to the expectation of the value-per-year figure but aren't very meaningful. Am I just confused?

Comment author: MichaelDickens  (EA Profile) 09 December 2016 04:04:09PM 4 points [-]

This is enough to make me discount its value by perhaps one-to-two orders of magnitude.

So you'd put the probability of CEV working at between 90 and 99 percent? 90% seems plausible to me, if a little high; 99% seems way too high.

But I have to give you a lot of credit for saying "the possibility of CEV discounts how valuable this is" instead of "this doesn't matter because CEV will solve it"; many people say the latter, implicitly assuming that CEV has a near-100% probability of working.

Comment author: Owen_Cotton-Barratt 09 December 2016 04:52:01PM *  2 points [-]

So you'd put the probability of CEV working at between 90 and 99 percent?

No, rather lower than that (80%?). But I think that without something CEV-ish we're more likely to attain only somewhat-flawed versions of the future. This reduces my estimate of the value of getting those worlds roughly right, relative to getting good outcomes through worlds which do achieve something like CEV. I think that, ex post, this probably provides another very large discount factor, and the significant chance that it does provides another modest ex-ante discount factor (maybe another 80%; none of my numbers here are deeply considered).
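For illustration only, here is a quick sketch of how such multiplicative discounts might combine into the "one-to-two orders of magnitude" figure from the original comment; both factors below are the rough, explicitly not-deeply-considered numbers quoted above:

```python
import math

# Illustrative arithmetic only; the comment stresses that none of
# these numbers are deeply considered. Each factor is the fraction
# of naive value retained after one discount.
p_research_needed = 1 - 0.8  # ~80% chance something CEV-ish handles it
ex_ante_flawed    = 1 - 0.8  # the "maybe another 80%" ex-ante discount

remaining = p_research_needed * ex_ante_flawed  # 0.04 of the naive value
orders_of_magnitude = math.log10(1 / remaining)
print(orders_of_magnitude)  # ~1.4, inside the quoted one-to-two range
```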

Comment author: Owen_Cotton-Barratt 09 December 2016 03:20:52PM 4 points [-]

Thanks for the write-up. I'm excited about people presenting well thought-through cases for the value of different domains.

I want to push back a bit against the claim that the problem is time-sensitive. If we needed to directly specify what we valued to a powerful AI, then it would be crucial that we had a good answer to that by the time we had such an AI. But an alternative to directly specifying what it is that we value is to specify the process for working out what to value (something in the direction of CEV). If we can do this, then we can pass the intellectual work of this research off to the hypothesised AI. And this strategy looks generally very desirable for various robustness reasons.

Putting this together, I think that there is a high probability that consciousness research is not time-critical. This is enough to make me discount its value by perhaps one-to-two orders of magnitude. However, it could remain high-value even given such a discount.

(I agree that in the long run it's important. I haven't looked into your work beyond this post, so I don't (yet) have much of a direct view of how tractable the problem is to your approach. At least I don't see problems in principle.)
