Comment author: Elizabeth 24 April 2017 01:46:10PM *  3 points [-]

FYI, the original comment was deleted, and it now looks like you're criticizing a reasonable post, at least on mobile.

Comment author: Benito 24 April 2017 06:29:55PM *  4 points [-]

Thanks! Someone posted a multiple-page (on mobile) comment explaining, amongst other things, Parfit's argument that extinction is much worse than 99% of people dying; it had basically no relevance to the OP.

If a mod wants to go ahead and clear all four comments here that'd be great.

Comment author: [deleted] 23 April 2017 06:14:18PM -1 points [-]

Great post! For existential risk, one can construe the badness of human extinction in different ways: (1) human extinction is bad because of the premature death of the current human population (7.5 billion); or (2) human extinction is bad because the happiness of 5*10^46 future possible people is lost.

Many EAs concerned about existential risk are mainly concerned with the latter kind of badness (2), rather than with the premature death of people here and now.

Indeed, the respected philosopher Derek Parfit argued that the death of 100% of the human population is very much worse than the death of 99%.

["I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

(1) Peace. (2) A nuclear war that kills 99% of the world's existing population. (3) A nuclear war that kills 100%.

(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater. ... The Earth will remain habitable for at least another billion years. Civilization began only a few thousand years ago. If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilized human history. The difference between (2) and (3) may thus be the difference between this tiny fraction and all of the rest of this history. If we compare this possible history to a day, what has occurred so far is only a fraction of a second. (Parfit 1984, pp. 453-454)."](http://www.existential-risk.org/concept.html)

Many professional philosophers have argued for and against this asymmetry in population ethics. I don't have a definitive, knockdown argument for the asymmetry, but the moral intuition of most people seems to favour it. I (and 'most people', as Parfit suggested) believe the difference between (1) peace and (2) 99%-death is much bigger than the difference between (2) 99%-death and (3) 100%-death. Indeed, I believe (3) is only 100/99 times as bad as (2). Most people might believe (3) is less than two times as bad as (2).

However, many people who are concerned about existential risk seem to think (3) is 6.7*10^36 times worse than (2) (or more, given the many possible generations in the future). On the asymmetry view, there are crucial differences between (1) the absence of pleasure of present people; (2) the absence of pleasure of future people who will exist; (3) the absence of pleasure of past people; and (4) the absence of pleasure of potential people who never become actual. (1), (2), and (3) are regrettable, while (4) is not.
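
For concreteness, here is a minimal sketch of the arithmetic behind that 6.7*10^36 figure, using only the 5*10^46 and 7.5 billion numbers above (the code framing is mine):

```python
# Rough arithmetic behind the 6.7*10^36 figure: the ratio of possible future
# lives at stake to the lives of people alive today.
future_possible_lives = 5e46   # possible future happy lives, as cited above
present_lives = 7.5e9          # current human population

print(future_possible_lives / present_lives)  # ~6.7e36
```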

The term 'future people' is also ambiguous. Although I agree we should hand on a clean planet to future generations, do we really have to 'save the lives' of future people by succeeding in bringing them into existence? (Is a failure to have children, or contraception, the moral equivalent of a failure to save a life?) The term 'future people' makes it sound as though they are certainly going to exist, while in many cases whether they exist or not depends on our actions; for that reason, they are potential people.

Also, it is uncertain whether x-risk (or even s-risk) research will ever reduce such risks. Even if it does reduce 'the risk', should we count averting x% of a risk of y disvalue as creating 0.01xy of value? On a (hard) determinist view, human extinction either will or will not happen. Although it seems odd to run 'what if' thought experiments on a deterministic view, one's actions do change whether humanity goes extinct or not. On a strictly determinist view, extinction prevention is a 100%-or-0% (all-or-nothing) business, rather than somewhere between 0% and 100%.

For example, suppose I avert a 10^-40 risk of extinction, and that if humanity goes extinct, 5*10^46 future potential happy lives will be lost. How much good have I done?

I suspect many x-risk researchers might think that I have created as much good as saving 5 million people's lives.

Rather, I would say there is a 10^-40 probability that I have saved 7.5 billion people's lives, and a 1-10^-40 chance that I have changed nothing (on a consequentialist view, I have then not contributed to the good at all). Far more likely than not, I have changed nothing, so my actual impact is zero, even though my expected impact is high.
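
To make the contrast explicit, here is a minimal sketch of both calculations, using only the numbers in this comment (the code framing is mine):

```python
# Two ways of scoring the 10^-40 example above.
p_averted = 1e-40             # probability of extinction averted
future_possible_lives = 5e46  # possible future happy lives at stake
present_lives = 7.5e9         # people alive today

# Total view: count possible future lives. Expected value ~5e6,
# i.e. roughly as good as saving 5 million lives.
print(p_averted * future_possible_lives)

# Person-affecting view: count only people alive now. Expected value ~7.5e-31
# lives, and the actual impact is almost certainly zero.
print(p_averted * present_lives)
```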

Based on thoughts previously presented in my blog posts 'The worst scenarios of singularity are about 6.7*10^36 times worse than the best scenarios of singularity are good' and 'A thought experiment on remote chances'.

In response to comment by [deleted] on Effective altruism is self-recommending
Comment author: Benito 23 April 2017 06:36:42PM 0 points [-]

This comment is confusing. The content is entirely common knowledge around here, it doesn't address any of the main claims of the post it responds to, and it's very long. Why did you post it?

Comment author: AGB 22 April 2017 12:51:34PM 14 points [-]

Things don't look good regarding how well this project has been received

I know you say that this isn't the main point you're making, but I think it's the hidden assumption behind some of your other points, and it was a surprise to read this. Will's post introducing the EA Funds is the 4th most upvoted post of all time on this forum. Most of the top-rated comments on his post, including at least one which you link to as raising concerns, say that they are positive about the idea. Kerry then presented some survey data in this post. All those measures of support are kind of fuzzy and prone to weird biases, but putting it all together I find it much more likely than not that the community as a whole is positive about the funds. An alternative and more concrete angle would be money received into the funds, which was just shy of CEA's target of $1m.

Given all that, what would 'well-received' look like in your view?

If you think the community is generally making a mistake in being supportive of the EA funds, that's fine and obviously you can/should make arguments to that effect. But if you are making the empirical claim that the community is not supportive, I want to know why you think that.

Comment author: Benito 22 April 2017 01:37:07PM *  6 points [-]

Yeah, in this community it's easy for your data to be filtered. People commonly comment with criticism, rarely with just "Yeah, this is right!", and so your experience can be filled with negative responses even when the response is largely positive.

Comment author: Kerry_Vaughan 21 April 2017 05:11:07PM 8 points [-]

But if I can't convince them to fund me for some reason and I think they're making a mistake, there are no other donors to appeal to anymore. It's all or nothing.

The upside of centralization is that it helps guard against the unilateralist's curse in funding bad projects. As the number of funders increases, it becomes increasingly easy for bad projects to find someone who will fund them.

That said, I share the concern that EA Funds will become a single point of failure for projects such that if EA Funds doesn't fund you, the project is dead. We probably want some centralization but we also want worldview diversification. I'm not yet sure how to accomplish this. We could create multiple versions of the current funds with different fund managers, but that is likely to be very confusing to most donors. I'm open to ideas on how to help with this concern.

Comment author: Benito 21 April 2017 08:45:14PM 4 points [-]

Quick (thus likely wrong) thought on solving the unilateralist's curse: put multiple people in charge of each fund, each representing a different worldview, and give everyone 3 grant vetoes each year (so they can prevent grants that are awful on their worldview). You could also give them control of a percentage of the funds in proportion to CEA's / the donors' confidence in that worldview.
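
A minimal sketch of how that structure might look (illustrative only; the names, numbers, and mechanics here are my assumptions, not a worked-out proposal):

```python
# Illustrative sketch: fund managers with worldview-weighted shares of a fund
# and a fixed yearly veto budget. All details are made up for illustration.
from dataclasses import dataclass

@dataclass
class Manager:
    worldview: str
    confidence_weight: float  # CEA's / donors' confidence in this worldview
    vetoes_left: int = 3      # grant vetoes available per year

def allocate(total_funds: float, managers: list) -> dict:
    """Each manager controls funds in proportion to confidence in their worldview."""
    total = sum(m.confidence_weight for m in managers)
    return {m.worldview: total_funds * m.confidence_weight / total for m in managers}

def grant_goes_through(objectors: list) -> bool:
    """A grant is made unless some objecting manager with a veto left spends one."""
    for m in objectors:
        if m.vetoes_left > 0:
            m.vetoes_left -= 1
            return False  # vetoed
    return True
```

On this kind of setup, worldview diversification would be handled by the weighted allocation, and the unilateralist's curse by the limited veto budget.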

Comment author: Benito 21 April 2017 09:35:49AM 2 points [-]

Thanks for the post!

Lewis Bollard gave away 180k but Nick Beckstead says he only had access to 14k. Was this due to a spike in donations to the far future cause after they made their recommendations?

Comment author: BenMillwood  (EA Profile) 25 February 2017 11:54:38AM *  0 points [-]

The name "Oxford Prioritisation Project" has an unhelpful acronym collision :)

Do you have a standard abbreviated form that avoids it? Maybe OxPri, following the website address?

edit: I've found this issue addressed in other comments, and the official answer is apparently "oxprio".

Comment author: Benito 11 March 2017 10:56:15AM *  0 points [-]

Thus the website, oxpr.io. OxPrio normally has a capital 'O' and 'P' too.

Comment author: HaydnBelfield 24 February 2017 01:27:49PM 18 points [-]

Thanks for this! It's mentioned in the post, and James and Fluttershy have made the point, but I just wanted to emphasise the benefits to others of Open Philanthropy continuing to engage in public discourse, especially as this article seems to focus mostly on the costs and benefits to Open Philanthropy itself (rather than to others) of engaging in public discourse.

The analogy of academia was used. One of the reasons academics publish is to get feedback, improve their reputation, and clarify their thinking. But another, perhaps more important, reason academics publish academic papers and popular articles is to spread knowledge.

As an organisation or individual becomes more expert and established, I agree that the benefits to itself decrease and the costs increase. But the benefit to others of its work increases. It might be argued that when one is starting out, the benefits of public discourse go mostly to oneself, and when one is established, the benefits go mostly to others.

So in Open Philanthropy’s case it seems clear that the benefits to itself (feedback, reputation, clarifying ideas) have decreased and the costs (time and risk) have increased. But the benefits to others of sharing knowledge have increased, as it has become more expert and better at communicating.

For example, speaking personally, I have found Open Philanthropy’s shallow investigations on Global Catastrophic Risks a very valuable resource in getting people up to speed – posts like Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity have also been very informative and useful. I’m sure people working on global poverty would agree.

Again, just wanted to emphasise that others get a lot of benefit from Open Philanthropy continuing to engage in public discourse (in the quantity and quality at which it does so now).

Comment author: Benito 24 February 2017 01:53:28PM 6 points [-]

Strong agreement. I'd like to add that the general reports on biorisk have also been very valuable personally, including the written-up conversations with experts.

Comment author: Peter_Hurford  (EA Profile) 14 February 2017 12:01:20AM 4 points [-]

I think others have suggested this, but have you thought about putting your 10K GBP into a donor lottery or otherwise saving up to get a larger donation? I'd like to see research address that question (e.g., is a 100K donation >10x better than a 10K donation?).

Comment author: Benito 14 February 2017 03:24:59AM *  2 points [-]

I'll note that the team should still allocate its time to answering the object-level question, so that if they win the lottery they know where they'll give the money.

Comment author: Benito 09 February 2017 10:18:33AM 1 point [-]

Brief comment: I personally use the word 'care' to imply that I prioritise something not just abstractly, but also have a gut, S1 desire to work on the problem. I expect people in my reference class here to mostly continue to use 'care' unless a better alternative is proposed.

Comment author: Richard_Batty 03 February 2017 04:17:41PM *  6 points [-]

Is there an equivalent to 'concrete problems in AI' for strategic research? If I were a researcher interested in strategy, I'd have three questions: 'What even is AI strategy research?', 'What sort of skills are relevant?', and 'What are some specific problems that I could work on?' A 'concrete problems'-like paper would help with all three.

Comment author: Benito 03 February 2017 11:08:49PM 3 points [-]

I feel like "Superintelligence", which was largely about strategy rather than maths, is the closest thing to this. While it didn't end each chapter with explicit questions for further research, it'd be my first recommendation for a strategy researcher to read to gain a sense of what work could be done.

I'd also recommend Eliezer Yudkowsky's paper Intelligence Explosion Microeconomics, which is more niche and far less widely read.
