In response to Utopia In The Fog
Comment author: Denkenberger 31 March 2017 12:25:03PM 1 point [-]

Just a comment on growth functions: I think a common prior here is that once we switch to computer consciousnesses, progress will follow Moore's law, i.e. exponential growth with a doubling time of roughly 18 months (Ray Kurzweil argues it is actually slow exponential growth in the exponent). Hanson foresees a transition to a much shorter doubling time, around one month. Others have noted that if the computer consciousnesses are themselves making the progress, and they are speeding up with Moore's law, you actually get a hyperbolic shape which goes to infinity in finite time (around three years). Then you get to recursive self-improvement of AI, which could have a doubling time of days or weeks; I think this is roughly the Yudkowsky position (though he does recognize that progress could get harder), and I think this is the most difficult to manage. Going the other direction from the Moore's law prior: many economists expect continued exponential growth with a doubling time of decades; economic historians think the growth rate will go back to zero; the resource (or climate) doomsters expect slow negative economic growth; further down, you have faster catastrophes, which we might recover from; and finally, sudden catastrophes with no recovery. Quite the diversity of opinion: it would be an interesting project (or paper?) to try to plot this out.
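The hyperbolic case can be sketched with a toy model (my own illustration, not from any of the sources above): if the rate of progress is proportional to the computing power of the minds doing the work, then dC/dt = k·C², which blows up at t* = 1/(k·C0). Calibrating k so the initial doubling time matches Moore's law gives a finite-time singularity on the order of a few years:

```python
import math

# Toy model: progress rate proportional to the computing power of the
# minds doing the work, so dC/dt = k * C^2, diverging at t* = 1/(k*C0).
# Calibrate k so the initial relative growth rate matches an 18-month
# doubling time: (dC/dt)/C at t=0 equals k*C0 = ln(2)/doubling_time.
doubling_time_years = 1.5
C0 = 1.0
k = math.log(2) / doubling_time_years / C0

t_singularity = 1.0 / (k * C0)
print(f"Finite-time blow-up at t = {t_singularity:.2f} years")
# ~2.2 years, consistent with the "around three years" figure above.
```

The exact blow-up time shifts with the assumed exponent and starting point, but any dC/dt proportional to a power of C above 1 diverges in finite time.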

Comment author: Richard_Batty 02 March 2017 09:56:16AM 13 points [-]

This is really helpful, thanks.

Whilst I could respond in detail, instead I think it would be better to take action. I'm going to put together an 'open projects in EA' spreadsheet and publish it on the EA forum by March 25th or I owe you £100.

Comment author: Denkenberger 08 March 2017 10:53:19PM *  1 point [-]

Isn't this list of ideas in need of implementation similar?

Comment author: lukeprog 09 February 2017 10:33:08PM *  4 points [-]

I think EA may have picked the lowest-hanging fruit, but there's lots of low-ish hanging fruit left unpicked. For example: who, exactly, should be seen as the beneficiaries aka allkind aka moral patients? EAs disagree about this quite a lot, but there hasn't been that much detailed + broadly informed argument about it inside EA. (This example comes to mind because I'm currently writing a report on it for OpenPhil.)

There are also a great many areas that might be fairly promising, but which haven't been looked into in much breadth+detail yet (AFAIK). The best of these might count as low-ish hanging fruit. E.g.: is there anything to be done about authoritarianism around the world? Might certain kinds of meta-science work (e.g. COS) make future life science and social science work more robust+informative than it is now, providing highly leveraged returns to welfare?

Comment author: Denkenberger 11 February 2017 01:17:08AM 2 points [-]

There is also non-AI global catastrophic risk, like engineered pandemics, and low hanging fruit for dealing with agricultural catastrophes like nuclear winter.

Comment author: Denkenberger 10 February 2017 01:12:26PM 1 point [-]

Very interesting! It's great that you did a sensitivity analysis, though it is a little surprising the range was so small. Did you consider a scenario where you become convinced of the value of far-future computer consciousnesses, in which case the effectiveness might be ~10^40 times as much?

Comment author: tjmather 06 February 2017 12:48:05PM *  0 points [-]

Interesting, are you concerned that in a full-scale nuclear war that most places in the northern hemisphere would be unsafe due to military targets outside the cities and fallout?

What do you think about this Q&A on Quora about where it would be safest in the event of a nuclear war? Most of the suggested safe locations are in the southern hemisphere like New Zealand.

Comment author: Denkenberger 08 February 2017 12:42:53AM 2 points [-]

Most of the Quora discussion seems reasonable about the safest locations. But it is a pretty big sacrifice to change countries just because of the threat of nuclear war, so I am looking at lower-cost options. Also, being outside the target countries, even in the northern hemisphere, would generally not be too bad, because the radiation largely rains out within a few days. And even within the target countries, if you are not hit by the blast or fire, you are most likely to survive. I believe the radiation exposure would be lower than at Chernobyl, which took about one year of life off the people nearby.

Comment author: Denkenberger 06 February 2017 02:02:21AM *  1 point [-]

The International Narcotics Control Board (INCB) estimates that 92% of all morphine is consumed in America, Canada, New Zealand, Australia, and parts of western Europe, home to only 17% of the world’s population (ref ; 2014 estimates).

If we think of 100 units and 100 people, this means 92 units are spent on 17 people and 8 units are spent on 83 people, which means the unlucky countries are only using 1/56 as much per person!
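The arithmetic behind that 1/56 figure checks out:

```python
# Check the per-person ratio implied by the INCB figures above:
# 92% of the morphine goes to 17% of the people,
# 8% goes to the remaining 83%.
rich_per_person = 92 / 17   # ~5.41 units per person
poor_per_person = 8 / 83    # ~0.096 units per person
ratio = rich_per_person / poor_per_person
print(f"Ratio: {ratio:.0f}x")  # prints "Ratio: 56x"
```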

Comment author: Denkenberger 05 February 2017 10:16:50PM 2 points [-]

There is roughly a 0.02-7% chance per year of accidental full-scale nuclear war between the US and Russia: source. Since NATO says an attack on one is an attack on all, this could easily spread to the UK. One simple precaution would be for EAs to locate in the suburbs, where the risk of being hit is lower (as I have done). The economics of this appear favorable, because housing prices are typically lower in the suburbs, especially if you can commute by rail, which is low-risk and offers good potential for multitasking. I would like to formalize this into a paper, but I would need a collaborator.
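Using only the 0.02-7% annual range quoted above, and assuming (simplistically) that each year is independent, the cumulative probability of at least one such war over a few decades is far from negligible:

```python
# Cumulative probability of at least one accidental full-scale nuclear
# war over a 30-year horizon, from the 0.02%-7% per-year range above,
# assuming independent years: 1 - (1 - p)^30.
for p_annual in (0.0002, 0.07):
    cumulative = 1 - (1 - p_annual) ** 30
    print(f"annual {p_annual:.2%} -> 30-year {cumulative:.1%}")
```

Even the low end gives roughly a 0.6% lifetime-scale chance; the high end exceeds 85%, which is why even cheap precautions like suburban location can look worthwhile.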

Comment author: TruePath 12 January 2017 12:00:29PM 0 points [-]

That is good to know and I understand the motivation to keep the analysis simple.

As far as the definition goes, that is a reasonable definition of the term (our notion of catastrophe doesn't include an accumulation of many small utility losses), so it is a good criterion for classifying the charity's objective. I only meant to comment on QALYs as a means of measuring effectiveness.

WTF is with the votedown. I nicely and briefly suggested that another metric might be more compelling (though the author's point about mass appeal is a convincing rebuttal). Did the comment come off as simply bitching rather than a suggestion/observation?

Comment author: Denkenberger 17 January 2017 10:16:26PM 1 point [-]

I did not do the vote down, but I did think that calling lives saved a mostly useless metric was a little harsh. :-)

Comment author: Denkenberger 17 January 2017 01:34:08PM *  0 points [-]

Note that the proposed norm within EA of following laws, at least in the US, is very demanding; see this article. A 14th very common violation I would add is not fully reporting income to the government, such as babysitting money ("under the table" or "shadow economy" income). A 15th would be pirated software/music. Interestingly, lying is not illegal in the US, though lying under oath is. So perhaps what we mean is: be as law-abiding as would be socially acceptable to most people? And then, for areas more directly related to running organizations (not, e.g., speeding, jaywalking, or urinating outside), we should hold ourselves to a significantly higher standard than the law to preserve our reputation?

Comment author: Brian_Tomasik 02 January 2017 10:21:25AM 0 points [-]

Interesting. :) Do you have further reading on this point?

It seems that increased phytoplankton in lakes and rivers generally leads to more zooplankton. Do you think the dynamics are different in the oceans? I have a hard time believing that herbivorous fish could not only eat all the extra phytoplankton from fertilization but even some of the phytoplankton that was present pre-fertilization (which is what's necessary to reduce zooplankton populations relative to pre-fertilization levels), but I could be wrong!

Comment author: Denkenberger 03 January 2017 02:58:29AM 1 point [-]

Thanks for the information on freshwater systems. I believe the quote about saltwater systems was in this book.
