Comment author: lukeprog 09 February 2017 10:33:08PM *  4 points [-]

I think EA may have picked the lowest-hanging fruit, but there's lots of low-ish hanging fruit left unpicked. For example: who, exactly, should be seen as the beneficiaries aka allkind aka moral patients? EAs disagree about this quite a lot, but there hasn't been that much detailed + broadly informed argument about it inside EA. (This example comes to mind because I'm currently writing a report on it for OpenPhil.)

There are also a great many areas that might be fairly promising, but which haven't been looked into in much breadth+detail yet (AFAIK). The best of these might count as low-ish hanging fruit. E.g.: is there anything to be done about authoritarianism around the world? Might certain kinds of meta-science work (e.g. COS) make future life science and social science work more robust+informative than it is now, providing highly leveraged returns to welfare?

Comment author: Denkenberger 11 February 2017 01:17:08AM 2 points [-]

There is also non-AI global catastrophic risk, like engineered pandemics, and low-hanging fruit for dealing with agricultural catastrophes like nuclear winter.

Comment author: Denkenberger 10 February 2017 01:12:26PM 1 point [-]

Very interesting! That's great that you did a sensitivity analysis, though it is a little surprising that the range was so small. Did you do a scenario where you might become convinced of the value of far-future computer consciousnesses, and therefore the effectiveness might be ~10^40 times as much?

Comment author: tjmather  (EA Profile) 06 February 2017 12:48:05PM *  0 points [-]

Interesting, are you concerned that in a full-scale nuclear war most places in the northern hemisphere would be unsafe due to military targets outside the cities and fallout?

What do you think about this Q&A on Quora about where it would be safest in the event of a nuclear war? Most of the suggested safe locations are in the southern hemisphere like New Zealand.

Comment author: Denkenberger 08 February 2017 12:42:53AM 2 points [-]

Most of the Quora discussion seems reasonable for the safest locations. But it is a pretty big sacrifice to change countries just because of the threat of nuclear war, so I am looking at lower-cost options. Also, being outside the target countries, even in the northern hemisphere, would generally not be too bad because the radiation largely rains out within a few days. And even within the target countries, if you are not hit by the blast/fire, you would most likely survive. I believe the radiation exposure would be lower than at Chernobyl, which took about one year of life off the people nearby.

Comment author: Denkenberger 06 February 2017 02:02:21AM *  1 point [-]

The International Narcotics Control Board (INCB) estimates that 92% of all morphine is consumed in America, Canada, New Zealand, Australia, and parts of western Europe, which together hold only 17% of the world's population (ref; 2014 estimates).

If we think of 100 units and 100 people, this means 92 units go to 17 people and 8 units go to 83 people, so people in the unlucky countries are using only about 1/56 as much per person!
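The per-capita ratio above can be checked with a quick calculation, using the same 100-units/100-people simplification as in the comment:

```python
# Rough per-capita comparison implied by the INCB figures above.
# Assumes 100 "units" of morphine and 100 people, as in the comment.
rich_share_units = 92        # % of morphine consumed by high-income countries
rich_share_people = 17       # % of world population in those countries
poor_share_units = 100 - rich_share_units    # 8
poor_share_people = 100 - rich_share_people  # 83

per_capita_rich = rich_share_units / rich_share_people  # ~5.41 units/person
per_capita_poor = poor_share_units / poor_share_people  # ~0.096 units/person

ratio = per_capita_rich / per_capita_poor
print(round(ratio))  # -> 56
```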

Comment author: Denkenberger 05 February 2017 10:16:50PM 2 points [-]

There is roughly a 0.02-7% chance per year of accidental full-scale nuclear war between the US and Russia: source. Since NATO says an attack on one is an attack on all, this could easily spread to the UK. One simple precaution would be for EAs to locate in the suburbs, where the risk of being hit is lower (as I have done). The economics of this appear to be favorable because housing prices are typically lower in the suburbs, especially if you can commute by rail, which is low risk and offers good potential for multitasking. I would like to formalize this into a paper, but I would need a collaborator.
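To get a feel for what that annual range means over a lifetime, here is a minimal sketch of the cumulative probability, assuming the quoted per-year risk is constant and years are independent (a simplification; the true risk is surely time-varying):

```python
# Cumulative probability of at least one accidental full-scale nuclear war
# over a given horizon, from a constant annual probability.
def cumulative_risk(annual_p, years):
    return 1 - (1 - annual_p) ** years

# Low and high ends of the quoted 0.02%-7% annual range, over 30 years.
for p in (0.0002, 0.07):
    print(f"annual {p:.2%}: 30-year risk = {cumulative_risk(p, 30):.1%}")
```

Under these assumptions, the 30-year risk ranges from under 1% at the low end to well over 80% at the high end, which is why the wide annual range matters so much.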

Comment author: TruePath 12 January 2017 12:00:29PM 0 points [-]

That is good to know and I understand the motivation to keep the analysis simple.

As far as the definition goes, that is a reasonable definition of the term (our notion of catastrophe doesn't include an accumulation of many small utility losses), so it is a good criterion for classifying the charity's objective. I only meant to comment on QALYs as a means to measure effectiveness.


WTF is with the downvote. I nicely and briefly suggested that another metric might be more compelling (though the author's point about mass appeal is a convincing rebuttal). Did the comment come off as simply bitching rather than a suggestion/observation?

Comment author: Denkenberger 17 January 2017 10:16:26PM 1 point [-]

I did not do the vote down, but I did think that calling lives saved a mostly useless metric was a little harsh. :-)

Comment author: Denkenberger 17 January 2017 01:34:08PM *  0 points [-]

Note that the proposed norm within EA of following laws, at least in the US, is very demanding; see this article. A 14th very common violation I would add is not fully reporting income to the government, such as babysitting income ("under the table" or "shadow economy" work). A 15th would be pirated software/music. Interestingly, lying is not illegal in the US, though lying under oath is. So perhaps what we mean is: be as law-abiding as would be socially acceptable to most people? And then for areas that are more directly related to running organizations (not, e.g., speeding or jaywalking or urinating outside), we should hold ourselves to a significantly higher standard than the law to preserve our reputation?

Comment author: Brian_Tomasik 02 January 2017 10:21:25AM 0 points [-]

Interesting. :) Do you have further reading on this point?

It seems that increased phytoplankton in lakes and rivers generally leads to more zooplankton. Do you think the dynamics are different in the oceans? I have a hard time believing that herbivorous fish could not only eat all the extra phytoplankton from fertilization but even some of the phytoplankton that was present pre-fertilization (which is what's necessary to reduce zooplankton populations relative to pre-fertilization levels), but I could be wrong!

Comment author: Denkenberger 03 January 2017 02:58:29AM 0 points [-]

Thanks for the information on freshwater systems. I believe the quote about saltwater systems was in this book.

Comment author: John_Maxwell_IV 31 December 2016 01:02:40PM *  1 point [-]

Interesting paper! I'm intuitively skeptical, though--with 7 billion people, it just seems really hard to kill off every last person.

Where was this paper posted?

Comment author: Denkenberger 02 January 2017 08:02:43PM 0 points [-]

Sorry, I guess the review period for the paper on academia.edu expired. But you can contact Alexey (https://fromhumantogod.wordpress.com/contacts/) if you want to see the paper.

Comment author: Linch 26 December 2016 07:12:38PM 0 points [-]

I wish I had a better idea of what an impact factor of 1.242 actually cashes out to, in terms of academic influence/prestige. (Though maybe me not knowing that is a good indication that I'm not the right target audience for this post!)

Comment author: Denkenberger 30 December 2016 04:09:29PM 0 points [-]

It means that in its first five years, the average paper gets 1.242 × 5 ≈ 6 citations. The citation rate generally increases over the first decade and then falls off, so the average paper might eventually get a few dozen citations. This is decent for a peer-reviewed journal. As I say in a comment here, the number of reads is likely much greater than the number of citations.
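The back-of-envelope estimate above can be written out explicitly. Note it treats the impact factor as an average annual citation rate per paper, which is a rough simplification of how impact factors are formally defined (citations in one year to papers from the previous two years):

```python
# Back-of-envelope citation estimate from an impact factor, as in the comment.
impact_factor = 1.242
years = 5
citations_first_five_years = impact_factor * years  # 6.21
print(round(citations_first_five_years))  # -> 6
```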
