Comment author: ThomasSittler 25 April 2018 12:32:04PM 2 points

From your link:

Some members leave Giving What We Can, and therefore can be assumed not to actually donate the money they pledged. Others we lose contact with, so that we don’t know whether they donate the money they pledged. The rate of people leaving has so far been 1.7% of members per year.[9]

Other people lose contact with Giving What We Can. The rate of people going silent has been 4.7% per year (we have counted people as silent if we haven’t had any contact with them for over 2 years). It seems likely that members who go silent still donate some amount, but it is likely to be less than the amount they pledged. We have assumed that this will be around one-third of their original pledge (for example, if a person pledging the standard 10% of their income has gone silent, we’ve only counted 3.33% of their pledge in this calculation).

Given these numbers, the total of those ceasing donations per year is 4.8%.[10] We’ve assumed that this percentage will remain constant over time. This means that after, say, 30 years, each member has a 23% chance of still donating, which we believe is a plausible estimate.
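The quoted figures do check out arithmetically. A quick sketch, using only the values from the quote above (the one-third-of-pledge assumption for silent members is theirs):

```python
# Sanity check of the quoted GWWC attrition figures.
leave_rate = 0.017          # members leaving outright per year
silent_rate = 0.047         # members going silent per year
silent_donation_frac = 1/3  # silent members assumed to still donate a third of their pledge

# Effective fraction of pledged donations lost per year
annual_loss = leave_rate + silent_rate * (1 - silent_donation_frac)
print(round(annual_loss, 3))  # 0.048, i.e. 4.8%

# Probability a member is still (fully) donating after 30 years
still_donating_30y = (1 - annual_loss) ** 30
print(round(still_donating_30y, 2))  # 0.23
```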

It might be useful to know what "no contact for 2 years" means exactly. Not because I'm trying to nitpick, but because the way we operationalise these metrics sometimes makes a big difference.

Comment author: Larks 29 April 2018 05:57:46PM 6 points

I think if people promise you that they'll do something, and then they don't answer when you ask if they did it, it's quite probable they did not do the thing.

Comment author: Larks 21 February 2018 02:52:20AM 12 points

Thanks for writing this, I thought it was a good article. And thanks to Greg for funding it.

My pushback would be on the cooperation and coordination point. It seems that a lot of other people, with other moral values, could make a very similar argument: that they need to promote their values now, as the stakes are very high with possible upcoming value lock-in. To people with those values, these arguments should seem roughly as important as the above argument is to you.

  • Christians could argue that, if the singularity is approaching, it is vitally important that we ensure the universe won't be filled with sinners who will go to hell.
  • Egalitarians could argue that, if the singularity is approaching, it is vitally important that we ensure the universe won't be filled with wider and wider diversities of wealth.
  • Libertarians could argue that, if the singularity is approaching, it is vitally important that we ensure the universe won't be filled with property rights violations.
  • Naturalists could argue that, if the singularity is approaching, it is vitally important that we ensure the beauty of nature won't be despoiled all over the universe.
  • Nationalists could argue that, if the singularity is approaching, it is vitally important that we ensure the universe will be filled with people who respect the flag.

But it seems that it would be very bad if everyone took this advice literally. We would all end up spending a lot of time and effort on propaganda, which would probably be great for advertising companies but not much else, as so much of it is zero sum. Even though it might make sense, by their values, for expanding-moral-circle people and pro-abortion people to have a big propaganda war over whether foetuses deserve moral consideration, it seems plausible we'd be better off if they both decided to spend the money on anti-malaria bednets.

In contrast, preventing the extinction of humanity seems to occupy a privileged position - not exactly comparable with the above agendas, though I can't quite cash out why it seems this way to me. Perhaps to devout Confucians a preoccupation with preventing extinction seems to be just another distraction from the important task of expressing filial piety – though I doubt this.

(Moral Realists, of course, could argue that the situation is not really symmetric, because promoting the true values is distinctly different from promoting any other values.)

Comment author: Larks 19 February 2018 09:47:39PM 0 points

a standard sheet of paper is about 7 square feet.

Do you mean 0.7 square feet?
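(Assuming a standard US Letter sheet of 8.5 in × 11 in, the area comes out close to that figure:)

```python
# Area of a US Letter sheet (8.5 in x 11 in) in square feet.
area_sq_in = 8.5 * 11        # 93.5 square inches
area_sq_ft = area_sq_in / 144  # 144 square inches per square foot
print(round(area_sq_ft, 2))  # 0.65
```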

Comment author: Liam_Donovan 14 January 2018 10:34:58AM 3 points

Hopefully. Since it's a zero-sum game, though, I'm not necessarily convinced that we can improve efficiency and learn from our mistakes more than other groups. In fact, I'd expect the % matched to go down next year, as the % of the matching funds directed by the EA community was far larger than the % of total annual donations made by EAs (so we're likely to revert to the mean).

Comment author: Larks 19 January 2018 03:47:55AM 0 points

I think we are much more organised than most, and hence more able to learn from our mistakes.

Comment author: Larks 04 January 2018 02:32:39AM 0 points

Not only is it not necessarily true that actual willingness to pay determines consumer preference, it is not even usually true. Differences in willingness to pay are to a significant extent and in a huge range of cases driven by differences in personal wealth rather than by differences in consumer preference. Rich people tend to holiday in exotic and sunny places at much higher rates than poor people. This is entirely a product of the fact that rich people have more money, not that poor people prefer to holiday in Blackpool. I think the same holds for the vast majority of differences in market demand across different income groups.

This is probably empirically true between income groups, but I don't think it's true between individuals, even of different income levels. Most people have zero demand for most goods, due to a combination of geographic location, lack of interest and diminishing marginal utility, and this is the main determinant of differences in demand between individuals.

For example, I have 0 demand for sandwiches right now - which is why sandwiches can be bought all over the world by people with incomes <1% of mine. This sort of case, where markets do correctly allocate sandwiches, strikes me as being the norm in markets, rather than the exception.

(I realise this does not directly contradict your point but wanted to ensure readers did not draw an unnecessarily strong conclusion from it)

Comment author: Milan_Griffes 21 December 2017 03:47:09AM * 1 point

Thanks for writing this; I found it really useful.

two of the authors (Christiano from OpenAI and Amodei from Deepmind)

I thought both Christiano and Amodei were at OpenAI? (Amodei's LinkedIn)

Small typo:

in general I am warey of giving organisations ...

"warey" should be "wary"

Comment author: Larks 21 December 2017 11:28:24PM 1 point

Thanks, made corrections.


2018 AI Safety Literature Review and Charity Comparison

  Summary: I review a significant amount of 2017 research related to AI Safety and offer some comments about where I am going to donate this year.   Contents: Introduction, The Machine Intelligence Research Institute (MIRI), The Future of Humanity Institute (FHI), Global Catastrophic Risks Institute (GCRI), The Center...
Comment author: MHarris 10 October 2017 04:10:56PM 0 points

One issue to consider is whether catastrophic risk is a sufficiently popular issue for an agency to use it to sustain itself. Independent organisations can be vulnerable to cuts. This probably varies a lot by country.

Comment author: Larks 10 October 2017 11:14:04PM * 0 points

Independent organisations can be vulnerable to cuts.

Do you know of any quantitative evidence on the subject? My impression was there is a fair bit of truth to the maxim "There's nothing as permanent as a temporary government program."

Comment author: Larks 07 October 2017 03:01:36PM 0 points

Thanks for writing this up! While it's hard to evaluate externally without seeing the eventual outcomes of the projects, and the counterfactuals of who you rejected, it seems like you did a good job!

Comment author: MichaelPlant 01 October 2017 12:50:13AM 3 points

All this was hard to follow.

Comment author: Larks 07 October 2017 02:49:31PM 1 point

EA money is money in the hands of EAs. It is argued that this is more valuable than non-EA money, because EAs are better at turning money into EAs. As such, a policy that cost $100 of non-EA money might be more expensive than one which cost $75 of EA money.
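To make the comparison concrete: with an assumed multiplier m for how much more valuable a dollar is in EA hands (the 1.5 below is purely illustrative, not from the original), the $75-of-EA-money policy is the more expensive one whenever m > 100/75:

```python
# Illustrative cost comparison in non-EA-dollar equivalents.
def effective_cost(amount, ea_multiplier=1.0):
    """Convert a dollar cost into non-EA-dollar equivalents,
    weighting EA money by an assumed multiplier."""
    return amount * ea_multiplier

m = 1.5  # assumed value multiplier for EA money (hypothetical)
print(effective_cost(100))    # non-EA policy: 100.0
print(effective_cost(75, m))  # EA policy: 112.5, the dearer option under this assumption
```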
