
Announcing PriorityWiki: A Cause Prioritization Wiki

This essay was jointly written by Peter Hurford and Marcus A. Davis. Locating and keeping up to date on all of the information on every cause worthy of attention is time-consuming work, and a task we think is likely to be duplicated many times over by individuals and organizations.... Read More

Lessons for estimating cost-effectiveness (of vaccines) more effectively

This essay was jointly written by Peter Hurford and Marcus A. Davis. Summary: We investigated the cost-effectiveness of vaccine research and development to learn how cost-effectiveness estimates are made and where they might go wrong. By doing this, we became far more wary of taking these estimates literally.... Read More

How beneficial have vaccines been?

This essay was jointly written by Peter Hurford and Marcus A. Davis. Previously we estimated how expensive it is to research and develop a vaccine and also how expensive it is to roll out a vaccine. If vaccines are to be cost-effective, we need to realise significant benefits in return... Read More
Comment author: Denkenberger 17 March 2018 03:45:09PM 2 points

I'm glad to see you are investigating present-generation causes with significant uncertainty, which seem to receive less attention. Any interest in investigating preparation for agricultural catastrophes?

Comment author: Marcus_A_Davis 18 April 2018 12:12:12AM 1 point

Sorry for the extremely slow reply, but yes. That topic is on our radar.


Announcing Rethink Priorities

This essay was jointly written by Peter Hurford and Marcus A. Davis. Rethink Charity is excited to announce our new project, Rethink Priorities, which is dedicated to doing foundational research on neglected causes in a highly empirical and transparent manner. This work will begin this year, starting with a focus... Read More

Charity Science: Health - A New Direct Poverty Charity Founded on EA Principles

After 6 months of research, primarily based on GiveWell’s list of charities they wish existed, we’re happy to announce the launch of Charity Science: Health. It is a direct poverty charity that will increase vaccination rates in India using phone-based reminders. We think it has the potential... Read More
Comment author: Gram_Stone 20 February 2016 04:57:36AM 0 points

> But I think we may be disagreeing over whether "thinks AI risk is an important cause" is too close to "is broadly positive towards AI risk as a cause area." I think so. You think not?

Are there alternatives to a person like this? It doesn't seem to me like there are.

"Is broadly positive towards AI risk as a cause area" could mean "believes that there should exist effective organizations working on mitigating AI risk", or could mean "automatically gives more credence to the effectiveness of organizations that are attempting to mitigate AI risk."

It might be helpful if you elaborated more on what you mean by 'aim for neutrality'. What actions would that entail, if you did that, in the real world, yourself? What does hiring the ideal survey supervisor look like in your mind if you can't use the words "neutral" or "neutrality" or any clever rephrasings thereof?

Comment author: Marcus_A_Davis 20 February 2016 05:44:55AM 0 points

> It might be helpful if you elaborated more on what you mean by 'aim for neutrality'. What actions would that entail, if you did that, in the real world, yourself?

I meant picking someone with no stake whatsoever in the outcome. Someone who, though exposed to arguments about AI risk, has no strong opinions one way or another. In other words, someone without a strong prior on AI risk as a cause area. Naturally, we all have priors, even if they are not explicit, so I am not proposing this as a disqualifying standard, just a goal worth shooting for.

An even broader selection tool that I think is worth considering alongside this is simply "people who know about AI risk", but that's basically the same as Rob's original point of having "some association with the general rationality or AI community."


Comment author: Robert_Wiblin 20 February 2016 04:40:11AM 2 points

"Why should the person overseeing the survey think AI risk is an important cause?"

Because someone who believes it's a real risk has strong personal incentives to try to make the survey informative and to report the results correctly (i.e. they don't want to die). Someone who believes it's a dumb cause would be tempted to discredit the cause by making MIRI look bad (or at least wouldn't be as trusted by prospective MIRI donors).

Comment author: Marcus_A_Davis 20 February 2016 04:50:26AM 0 points

Such personal incentives are important but, again, I didn't advocate getting someone hostile to AI risk. I proposed aiming for someone neutral. I know no one is "truly" neutral, but you have to weigh the potential positive personal incentives of someone invested against their potential motivated thinking (or, more accurately in this case, "motivated selection").

Comment author: Gram_Stone 20 February 2016 04:25:39AM 0 points

> Why should the person overseeing the survey think AI risk is an important cause?

Because the purpose of the survey is to determine MIRI's effectiveness as a charitable organization. If one believes that there is a negligible probability that an artificial intelligence will cause the extinction of the human species within the next several centuries, then it immediately follows that MIRI is an extremely ineffective organization, as it would be designed to mitigate a risk that ostensibly does not need mitigating. The survey is moot if one believes this.

Comment author: Marcus_A_Davis 20 February 2016 04:37:50AM 0 points

I don't disagree that someone who thinks there is a "negligible probability" of AI causing extinction would be unsuited to the task. That's why I said to aim for neutrality.

But I think we may be disagreeing over whether "thinks AI risk is an important cause" is too close to "is broadly positive towards AI risk as a cause area." I think so. You think not?

Comment author: Marcus_A_Davis 20 February 2016 04:13:56AM 0 points

This survey makes sense. However, I have a few caveats:

> Think that AI risk is an important cause, but have no particular convictions about the best approach or organisation for dealing with it. They shouldn't have worked for MIRI in the past, but will presumably have some association with the general rationality or AI community.

Why should the person overseeing the survey think AI risk is an important cause? Doesn't that self-select for people who are more likely to be positive toward MIRI than whatever the baseline is for all people familiar with AI risk (and, obviously, competent to judge who to include in the survey)? The ideal person, to me, would be neutral. While finding someone who is truly neutral would likely prove impractical, selecting someone overtly positive would be a bad idea for the same reasons selecting someone overtly negative would be. The point is that the aim should be neutrality.

> They should also have a chance to comment on the survey itself before it goes out. Ideally it would be checked by someone who understands good survey design, as subtle aspects of wording can be important.

There should be a set time frame to draft a response to the survey before it goes public; a "chance" is too vague.

> It should be impressed on participants the value of being open and thoughtful in their answers for maximising the chances of solving the problem of AI risk in the long run.

Telling people to be open and thoughtful is great, but explicitly tying it to solving long-run AI risk primes them to give certain kinds of answers.
