Comment author: SamDeere 16 December 2017 07:43:24AM *  3 points [-]

It's not clear to me how a donor lottery would capture all the considerations. Can you elaborate?

In this case, you haven't found an advisor whom you trust to take into account all the things you consider relevant. So, instead of relying on a third-party advisor, you do the research yourself. As research is costly for any given individual to undertake, it may not make sense for you to do this for a smaller donation, but with the larger pot, if you win, you have more incentive to undertake whatever research you feel is necessary (i.e. research that 'captures the relevant considerations').

Does this presume that (some) donors already know where they prefer to donate, rather than offsetting time spent on additional research with a larger donation pool?

It's just meant to illustrate that the amount you would be able to grant to a preferred organization is the same in expectation whether you participate in the lottery or donate directly. The lottery may then generate additional upside, potentially increasing the effectiveness of your donation if you do more research, and also giving you access to different funding opportunities (providing seed funding for an organization, donating to organizations that have a minimum donation threshold, etc.).
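A minimal numerical sketch of that expectation argument (the pot and entry sizes here are hypothetical, chosen only to make the arithmetic concrete):

```python
# Hypothetical figures: a $100,000 lottery pot and a $1,000 entry.
pot = 100_000        # total lottery pot ($)
entry = 1_000        # your contribution ($)

p_win = entry / pot  # your chance of winning the right to allocate the pot

# Expected amount you direct to your preferred organization:
ev_lottery = p_win * pot   # $1,000 in expectation
ev_direct = entry          # $1,000 with certainty

assert abs(ev_lottery - ev_direct) < 1e-9
print(f"Expected grant either way: ${ev_lottery:,.0f}")
```

Any upside therefore comes from the research and access effects described above, not from the expected size of the grant itself.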

Is there an expectation (or requirement) that the winning donor provides a write-up of their research and reasoning for their selected charity?

[updated — see more in the discussion below]

We think that it's in the spirit of the lottery that someone who does useful research that would be of interest to other donors should publish it (or give permission for CEA to publish their grant recommendation). Also, if they convince others to donate then they'll be causing additional grants to go to their preferred organization(s). We'll strongly encourage winners to do so; however, in the interests of keeping the barriers to entry low, we haven't made it a hard requirement.

Comment author: Owen_Cotton-Barratt 16 December 2017 10:28:50PM 5 points [-]

We think that it's in the spirit of the lottery that someone who does useful research that would be of interest to other donors should publish it (or give permission for CEA to publish their grant recommendation). Also, if they convince others to donate then they'll be causing additional grants to go to their preferred organization(s). We'll strongly encourage winners to do so; however, in the interests of keeping the barriers to entry low, we haven't made it a hard requirement.

Seems like even strong social pressure might be enough to be a significant barrier to entry. I feel excited about entering a donor lottery, and would feel less excited if I thought I'd feel accountable if I won (I might still enter, but it seems like a significant cost).

Would an attitude of "we think it's great if you want to share (and we could help you with communication) but there's no social obligation" capture the benefits? That's pretty close to what you were saying already, but the different tone might be helpful for some people.

Comment author: Owen_Cotton-Barratt 23 November 2017 11:15:36PM 2 points [-]

Thanks for the write-up!

I found the figures for existential-risk-reduced-per-$ with your default values a bit suspiciously high. I wonder if the reason for this is in endnote [2], where you say:

say one researcher year costs $50,000

I think this is too low as the figure to use in this calculation, perhaps by around an order of magnitude.

Firstly, that is a very cheap researcher-year even counting only direct costs: many researcher salaries are straight-up higher, and the full cost should include overheads.

A second factor is that having twice as much money doesn't come close to buying you twice as much (quality-adjusted) research. In general it is hard to simply pay money to produce more of some of these specialised forms of labour. For instance, see the recent 80k survey of EA orgs' willingness to pay to bring forward recent hires, where the average willingness to forgo donations to move a senior hire forward by three years was around $4 million.
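A rough sketch of how the cost assumption propagates (all numbers here are illustrative placeholders, not figures from the post under discussion):

```python
# Illustrative sensitivity check: existential-risk-reduced-per-$ scales
# inversely with the assumed cost of a researcher-year.
risk_reduction_per_researcher_year = 1e-6   # hypothetical model input

for cost_per_researcher_year in (50_000, 500_000):
    risk_per_dollar = risk_reduction_per_researcher_year / cost_per_researcher_year
    print(f"${cost_per_researcher_year:,}/researcher-year "
          f"-> {risk_per_dollar:.2e} risk reduced per $")

# Moving from $50k to $500k per researcher-year cuts the headline figure
# by a factor of ten, before even accounting for diminishing returns to
# spending more money on scarce specialised labour.
```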

Comment author: Elizabeth 24 June 2017 05:23:01PM 0 points [-]

I think costly signaling is the wrong phrase here. Costly signaling is about gain for the signaler. This seems better modeled as people trying to indirectly purchase the good "rich people donate lots to charity". It's similar to people who are unwilling to donate to the government (so they don't think the government is better at spending money than they are) but who advocate for higher taxes (meaning they think the government is better at spending money than other people are): they're trying to purchase the good "higher taxes for everyone".

Comment author: Owen_Cotton-Barratt 24 June 2017 07:53:31PM 1 point [-]

Seems like it's suggesting costly signalling at the level of the movement rather than of individuals. It's a stretch from normal use, but that's kind of the strength of analogies?

Comment author: jpaddison 18 June 2017 03:50:07PM 2 points [-]

Does anyone have recommendations for activities that are valuable for people considering their values? Or for people who are considering, or already taking, action?

Comment author: Owen_Cotton-Barratt 24 June 2017 02:29:39PM 0 points [-]

This is a great question and I think deserves further thought.

Helping people consider their values was one of the major goals Daniel Kokotajlo and I had in designing this flowchart. One possible activity would be to read through and/or discuss parts of that.

Comment author: Owen_Cotton-Barratt 13 June 2017 11:18:26AM 9 points [-]

I was a bit confused by some of these. Posting questions/comments here in case others have the same thoughts:

Earning-to-give buy-out

You're currently earning to give, because you think that your donations are doing more good than your direct work would. It might be that we think that it would be more valuable if you did direct work. If so we could donate a proportion of the amount that you were donating to wherever you were donating it, and you would move into work.

This made more sense to me after I realised that we should probably assume the person doesn't think CEA is a top donation target. Otherwise they would have an empirical disagreement about whether they should be doing direct work, and it's not clear how the offer helps resolve that (though it's obviously worth discussing).

Anti-Debates / Shark Tank-style career choice discussions / Research working groups

These are all things that might be good, but it's not obvious how funding would be a bottleneck. Might be worth saying something about that?

For those with a quantitative PhD, it could involve applying for the Google Brain Residency program or AI safety fellowship at ASI.

Similarly I'm confused what the funding is meant to do in these cases.

I'd be keen to see more people take ideas that we think we already know, but haven't ever been put down in writing, and write them up in a thorough and even-handed way; for example, why existential risk from anthropogenic causes is greater than the existential risk from natural causes

I think you were using this as an example of the type of work, rather than a specific request, but some readers might not know that there's a paper forthcoming on precisely this topic (if you mean something different from that paper, I'm interested to know what!).

Comment author: DavidNash 01 June 2017 08:55:49AM 9 points [-]

If there were any community that it might apply to, it would probably be effective altruists.

Comment author: Owen_Cotton-Barratt 01 June 2017 10:21:00PM 7 points [-]

Not as pithy, but just a flag that I think the question implicitly raised by Tom's comment and the answer in David's are pretty important. This is a community which is willing to update actions based on theoretical arguments about what's important. Of course I don't expect an article to totally change people's beliefs -- let alone behaviours -- but if it has a fraction of that effect I'd count it as cheap.

Comment author: MichaelPlant 01 June 2017 12:01:06AM 9 points [-]

Thanks for this. I think I strongly agree with what you've said. I've often noticed, or got the impression, that lots of EAs seem to be quite interested in pursuing their own projects and don't help each other very much. I worry this results in an altruistic tragedy of the commons: it would be better if people helped each other, but instead we each choose to do our own good in our own way, resulting in less good done overall. Now I think of it, I've probably done this myself.

The real challenge, as you noted, is the following:

Being considerate often makes others happier to interact with you. That is normally good, but in some circumstances may not be desirable. If people find you extremely helpful when they ask you about frivolous matters, they will be incentivized to keep asking you about such matters. If you would prefer them not to, you should not be quite so helpful.

This seems to be quite a common problem, at least in academia. VIPs (very important people) will often deliberately make themselves unavailable so they have time for their own projects. Presumably, this has some reciprocal costs to the VIP too: if they had helped you, you would be more inclined to help them in future.

Relatedly, suppose people accept more considerate norms and so are reluctant to bother some VIP in case it's annoying to the VIP. We can imagine this backfiring. Take an extreme scenario where considerate people don't ask VIPs (or indeed anyone else) for help. This means considerate people don't get help from the VIPs, and VIPs only get requests from inconsiderate people. Presuming these VIPs do grant some requests for help, and the requests from considerate people would have done more good, this is now a worse situation overall. Extreme considerateness, call it 'meekness', seems bad.

It strikes me that it would be important to develop some community norms for navigating this difficulty. Perhaps people asking for help should be encouraged to do so, asking once or twice and leaving the other person plenty of room to turn the request down. Perhaps recipients of requests should make a habit of replying to them, while being polite and honest about their current capacity to help.

Comment author: Owen_Cotton-Barratt 01 June 2017 09:18:22AM 4 points [-]

I think you're right that there's a failure mode of not asking people for things. I don't think that not-asking is in general the more considerate action, though -- often people would prefer to be given the opportunity to help (particularly if it feels like an opportunity rather than a demand).

I suppose the general point is: avoid the trap of overly-narrow interpretations of considerateness (just like it was good to avoid the trap of overly-narrow interpretations of consequences of actions).

Comment author: adom_hartell 25 May 2017 02:10:14AM 2 points [-]

Hey Max, thanks for linking these.

I have a question about an argument for the benefit of reserves made in the second link:

Assuming that core programmes are roughly as effective next year, additional funding mostly reduces the funding needs of the organisation next year, thereby freeing up money for those donors who would have given next year. Assuming those donors still donate that money somewhere else, then their alternate donations are likely to produce at least almost as great value as this organisations’ core programmes.

I read this as saying that the benefit of donating to Organization A this year is that it will free up money for Organization B next year. But if Organization B is almost as good (as assumed in the quoted text), then why not donate to them directly this year?

On this reading, it seems like the impact of reserves for Organization A is whatever benefit Org A draws from the other arguments you offer (potential for capacity-building, freeing up staff-time from fundraising efforts next year) minus something like a discount rate / the cost of Organization B getting resources one year later. It's not obvious to me that this will always, or usually, be positive.
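A toy version of that accounting, with made-up placeholder numbers for the terms described above (none of these values come from the linked document):

```python
# Toy accounting of the marginal value of reserves for Organization A,
# expressed as fractions of the value of a direct donation to B this year.
capacity_building_value = 0.10   # option value of Org A expanding capacity
fundraising_time_saved = 0.05    # staff time freed from fundraising next year
delay_discount = 0.10            # cost of Org B receiving the funds one year later

net_benefit = capacity_building_value + fundraising_time_saved - delay_discount
print(f"Net benefit of funding A's reserves (per $): {net_benefit:+.2f}")
# Whether this comes out positive depends entirely on the relative sizes
# of these terms, which is exactly the question raised above.
```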

Am I missing something here?

Comment author: Owen_Cotton-Barratt 25 May 2017 10:24:17AM 1 point [-]

Fair question. This argument is all conditioned on A not actually having good ways to expand capacity -- the case is that even then the funds are comparably good given to A as elsewhere. The possibility that A does in fact have useful ways to expand might make it noticeably better than the alternative, which is what (to my mind) drives the asymmetry.

Comment author: Benito 16 May 2017 09:57:21AM 0 points [-]

Well I don't understand that at all, and it seems to contradict my guess.

I thought DALYs had a more rigorous conversion than "we took our median estimate" and I thought a life was a full life, not just preventing death one time. Strike me wrong on this count.

Comment author: Owen_Cotton-Barratt 16 May 2017 10:37:59AM 2 points [-]

DALYs do use a more defensible analysis; GiveWell aren't using DALYs. This has some good and some bad aspects (related to the discussion in this post, although in this case the downside of defensibility is more that it doesn't let you incorporate considerations that aren't fully grounded).

The problem with just using DALYs is that on many views they overweight infant mortality (here's my view on some of the issues, but the position that they overweight infant mortality is far from original). With an internal agreement that they significantly overweight infant mortality, it becomes untenable to just continue using DALYs, even absent a fully rigorous alternative. Hence falling back on more ad hoc but somewhat robust methods, like asking people to consider it and using a median.

[I'm just interpreting GW decision-making from publicly available information; this might easily turn out to be a misrepresentation.]

Comment author: RyanCarey 05 May 2017 05:40:10AM *  3 points [-]

A clear problem with this model is that, AFAICT, it assumes that (i) the size of the research community working on safety when AI is developed is independent of (ii) the degree to which adding a researcher now will change the total number of researchers.

Both (i) and (ii) can vary by orders of magnitude, at least on my model, but are very correlated, because they both depend on timelines. This means I get an oddly high chance of averting existential risk. If the questions were combined into "by what fraction will the community be enlarged by adding an extra person now", then I think my chance of averting existential risk would come out much lower.

Comment author: Owen_Cotton-Barratt 08 May 2017 01:42:36PM 2 points [-]

Yes, I think this is a significant concern with this version of the model (somewhat less so with the original cruder version using something like medians, but that version also fails to pick up on legitimate effects of "what if these variables are all in the tails"). Combining the variables as you suggest is the easiest way to patch it. More complex would be to add in explicit time-dependency.
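A minimal Monte Carlo sketch of the concern and of the "combine the variables" fix (all distributions and numbers here are purely illustrative, not taken from the model being discussed):

```python
import math
import random

random.seed(0)

def log_uniform(lo: float, hi: float) -> float:
    """Draw from a log-uniform distribution on [lo, hi]."""
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

N = 100_000

# --- Independence assumption (the concern raised above) ---
# (i) the eventual size of the safety field and (ii) the extra researchers
# caused by adding one person now are drawn separately, so the tails can
# pair a tiny field with a large marginal effect.
independent = []
for _ in range(N):
    field_size = log_uniform(10, 10_000)   # researchers when AI is developed
    extra = log_uniform(1, 1_000)          # eventual extra researchers caused
    independent.append(min(extra / field_size, 1.0))

# --- Combined question ---
# Ask directly "by what fraction is the field enlarged by adding a person
# now?", which keeps the two quantities tied together via timelines.
combined = [log_uniform(1e-4, 1e-1) for _ in range(N)]

print(f"independent inputs: mean fraction ~ {sum(independent) / N:.3f}")
print(f"combined question:  mean fraction ~ {sum(combined) / N:.4f}")
```

Under the independence assumption the tails dominate the mean fraction of the field attributable to one extra researcher (and hence the implied chance of averting existential risk), pushing it far above what the directly combined question suggests.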
