
Charity Science: Health - A New Direct Poverty Charity Founded on EA Principles

After 6 months of research, primarily based on GiveWell’s list of charities they wish existed, we’re happy to announce the launch of Charity Science: Health. It is a direct poverty charity that will increase vaccination rates in India using phone-based reminders. We think it has the potential... Read More
Comment author: Gram_Stone 20 February 2016 04:57:36AM 0 points [-]

But I think we may be disagreeing over whether "thinks AI risk is an important cause" is too close to "is broadly positive towards AI risk as a cause area." I think so. You think not?

Are there alternatives to a person like this? It doesn't seem to me like there are.

"Is broadly positive towards AI risk as a cause area" could mean "believes that there should exist effective organizations working on mitigating AI risk", or could mean "automatically gives more credence to the effectiveness of organizations that are attempting to mitigate AI risk."

It might be helpful if you elaborated more on what you mean by 'aim for neutrality'. What actions would that entail, if you did that, in the real world, yourself? What does hiring the ideal survey supervisor look like in your mind if you can't use the words "neutral" or "neutrality" or any clever rephrasings thereof?

Comment author: Marcus_A_Davis 20 February 2016 05:44:55AM *  0 points [-]

It might be helpful if you elaborated more on what you mean by 'aim for neutrality'. What actions would that entail, if you did that, in the real world, yourself?

I meant picking someone with no stake whatsoever in the outcome. Someone who, though exposed to arguments about AI risk, has no strong opinions one way or another. In other words, someone without a strong prior on AI risk as a cause area. Naturally, we all have biases, even if they are not explicit, so I am not proposing this as a disqualifying standard, just a goal worth shooting for.

An even broader selection criterion I think worth considering alongside this is simply "people who know about AI risk", but that's basically the same as Rob's original point of "have some association with the general rationality or AI community."

Edit: Should say "Naturally, we all have priors..."

Comment author: Robert_Wiblin 20 February 2016 04:40:11AM *  1 point [-]

"Why should the person overseeing the survey think AI risk is an important cause?"

Because someone who believes it's a real risk has strong personal incentives to try to make the survey informative and report the results correctly (i.e. they don't want to die). Someone who believes it's a dumb cause would be tempted to discredit the cause by making MIRI look bad (or at least wouldn't be as trusted by prospective MIRI donors).

Comment author: Marcus_A_Davis 20 February 2016 04:50:26AM 0 points [-]

Such personal incentives are important but, again, I didn't advocate getting someone hostile to AI risk. I proposed aiming for someone neutral. I know no one is "truly" neutral, but you have to weigh the potential positive personal incentives of someone invested against the potential for motivated thinking (or, more accurately in this case, "motivated selection").

Comment author: Gram_Stone 20 February 2016 04:25:39AM 0 points [-]

Why should the person overseeing the survey think AI risk is an important cause?

Because the purpose of the survey is to determine MIRI's effectiveness as a charitable organization. If one believes that there is a negligible probability that an artificial intelligence will cause the extinction of the human species within the next several centuries, then it immediately follows that MIRI is an extremely ineffective organization, as it would be designed to mitigate a risk that ostensibly does not need mitigating. The survey is moot if one believes this.

Comment author: Marcus_A_Davis 20 February 2016 04:37:50AM 0 points [-]

I don't disagree that someone who thinks there is a "negligible probability" of AI causing extinction would be unsuited for the task. That's why I said to aim for neutrality.

But I think we may be disagreeing over whether "thinks AI risk is an important cause" is too close to "is broadly positive towards AI risk as a cause area." I think so. You think not?

Comment author: Marcus_A_Davis 20 February 2016 04:13:56AM 0 points [-]

This survey makes sense. However, I have a few caveats:

Think that AI risk is an important cause, but have no particular convictions about the best approach or organisation for dealing with it. They shouldn't have worked for MIRI in the past, but will presumably have some association with the general rationality or AI community.

Why should the person overseeing the survey think AI risk is an important cause? Doesn't that self-select for people who are more likely to be positive toward MIRI than whatever the baseline is for all people familiar with AI risk (and, obviously, competent to judge who to include in the survey)? The ideal person, to me, would be neutral. While finding someone who is truly neutral would likely prove impractical, selecting someone overtly positive would be a bad idea for the same reasons as selecting someone overtly negative. The point is that the aim should be neutrality.

They should also have a chance to comment on the survey itself before it goes out. Ideally it would be checked by someone who understands good survey design, as subtle aspects of wording can be important.

There should be a set time frame to draft a response to the survey before it goes public. A "chance" is too vague.

It should be impressed on participants the value of being open and thoughtful in their answers for maximising the chances of solving the problem of AI risk in the long run.

Telling people to be open and thoughtful is great, but explicitly tying it to solving long-run AI risk primes them to give certain kinds of answers.

Comment author: RyanCarey 12 August 2015 03:16:17PM 1 point [-]

Anyone feel free to correct me, but I believe in such a scenario spreading your prior evenly over all possible outcomes wouldn't arbitrarily just mean splitting the difference between 10^-15 and 10^-50, but spreading your belief over all positive outcomes below some reasonable barrier and (potentially) above another* (and this isn't taking into account the non-zero, even if unlikely, probability that despite caution AI research is indeed speeding up our doom).

It's complicated, but I don't think it makes sense to have a probability distribution over probability distributions, because it collapses. We should just have a probability distribution over outcomes. We choose our prior estimate for chance of success based on other cases of people attempting to make safer tech.
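To make the "collapses" point concrete, here is a minimal sketch in Python (the numbers are made up for illustration): whatever credence you spread across candidate success probabilities reduces, by the law of total probability, to a single probability of success over outcomes.

```python
# Minimal sketch with made-up numbers: second-order uncertainty over
# candidate success probabilities collapses into a single number.
# Suppose we give 50/50 credence to the success probability being
# 1e-15 or 1e-50 (hypothetical values from the discussion above).
credences = {1e-15: 0.5, 1e-50: 0.5}  # P(success | hypothesis) -> credence in hypothesis

# Law of total probability: the "distribution over distributions"
# reduces to one probability of success over outcomes.
p_success = sum(p * c for p, c in credences.items())
print(p_success)  # ~5e-16, dominated entirely by the more optimistic hypothesis
```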

Despite what appeared to him to be this large uncertainty, he seemed to encounter many people who brushed aside, or seemingly belittled, all other possible cause areas and this rubbed him the wrong way.

In fairness, for people who adhere to expected-value thinking to the fullest extent (some of whom would have turned up at the conference), arguments purely on the basis of scope of potential impact would be persuasive. But if it's even annoying folks at EA Global, then probably people ought to stop using them.

Comment author: Marcus_A_Davis 12 August 2015 11:54:21PM 1 point [-]

It's complicated, but I don't think it makes sense to have a probability distribution over probability distributions, because it collapses. We should just have a probability distribution over outcomes.

I did mean over outcomes. I was referring to this:

If we're uncertain about Matthews' propositions, we ought to place our guesses somewhere closer to 50%. To do otherwise would be to mistake our deep uncertainty for deep scepticism.

That seems mistaken to me, but it could be because I'm misinterpreting it. I was reading it as saying we should split the difference between the two probabilities of success Matthews proposed. However, I thought he was suggesting, and believe it is correct, that we shouldn't just pick the median between the two, because the smaller number was just an example. His real point was that any tiny probability of success seems equally reasonable from the vantage point of now. If true, we'd then have to split our prior evenly over that range instead of picking the median between 10^-15 and 10^-50. And given that it's very difficult to put a lower bound on the reasonable range, while a $1000 donation being a good investment depends on a specific lower bound higher than he believes can be justified with evidence, some people came across as unduly confident.
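To make that distinction concrete, here is a rough sketch with made-up bounds, comparing the arithmetic midpoint of the two quoted figures with the expected value under a prior spread evenly over the exponents in between:

```python
import numpy as np

# Rough sketch with made-up bounds: compare "splitting the difference"
# between 1e-50 and 1e-15 with spreading a prior evenly over that whole
# range of exponents (i.e. log-uniform over the probabilities).
rng = np.random.default_rng(0)
exponents = rng.uniform(-50, -15, size=1_000_000)   # even spread over the exponent
expected_p = np.mean(10.0 ** exponents)             # expected success probability

print(expected_p)            # ~1.2e-17, driven almost entirely by the upper bound
print((1e-15 + 1e-50) / 2)   # 5.0e-16, the arithmetic midpoint of the two figures
```

Either way, the answer moves by orders of magnitude depending on bounds and parameterization that are hard to justify, which is the uncertainty being pointed at here.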

But if it's even annoying folks at EA Global, then probably people ought to stop using them.

Let me be very clear: I was not annoyed by them, even if I disagree, but people definitely used this reasoning. However, as I often point out, extrapolating from me to other humans is not a good idea, even within the EA community.

Comment author: Marcus_A_Davis 11 August 2015 04:46:47PM 10 points [-]

I think you are selling Matthews short on Pascal's Mugging. I don't think his point was that you must throw up your hands because of the uncertainty, but that he believes friendly AI researchers have approximately as much evidence that AI research done today has a 10^-15 chance of securing the existence of future humanity as they have for any other infinitesimal but positive chance.

Anyone feel free to correct me, but I believe in such a scenario spreading your prior evenly over all possible outcomes wouldn't arbitrarily just mean splitting the difference between 10^-15 and 10^-50, but spreading your belief over all positive outcomes below some reasonable barrier and (potentially) above another* (and this isn't taking into account the non-zero, even if unlikely, probability that despite caution AI research is indeed speeding up our doom). What those numbers are is very difficult to tell, but if the estimation of those boundaries is off, which is not implausible given the track record of predictions about future technology, then all current donations could end up doing basically nothing. In other words, his critique is not that we must give up in the face of uncertainty, but that the justification of AI risk reduction being valuable right now depends on a number of assumptions with rather large error bars.

Despite what appeared to him to be this large uncertainty, he seemed to encounter many people who brushed aside, or seemingly belittled, all other possible cause areas, and this rubbed him the wrong way. I believe that was his point about Pascal's Mugging. And while you criticized him for not acknowledging that MIRI does not endorse Pascal's Mugging reasoning as support for AI research, he never said they did in the article. He said many people at the conference replied to him with that type of reasoning (and as a fellow attendee, I can attest to a similar experience).

*Normally, I believe, it would be all logically possible outcomes, but obviously it's unreasonable to believe a $1000 donation, which was his example, has, say, a 25% chance of success, given everything we know about how much such work costs, etc. However, where the lower bound of this estimate lies is far less clear.

Comment author: Marcus_A_Davis 14 April 2015 05:12:15PM 5 points [-]

This is super practical advice that I can definitely see myself applying in the future. The introductions on the sheets seem particularly well-suited to getting people engaged.

Also, "What is the first thing you would do if appointed dictator of the United States?" likely just entered my favorite questions to ask anyone in ice-breaker scenarios, many of which have nothing to do with EA.

Comment author: Peter_Hurford  (EA Profile) 11 April 2015 04:43:32PM 0 points [-]

I didn't change my career, but I did dramatically change my career plans, twice.

In response to comment by Peter_Hurford  (EA Profile) on April Open Thread
Comment author: Marcus_A_Davis 12 April 2015 04:34:17PM 0 points [-]

That counts. And, as I said above to Ben, I should have been broader anyway. I just think we can use more first-person narratives about earning to give to present the idea as less of an abstraction.

Of course, I could be wrong, and those who would consider earning to give at all (or would be moved to donate more by hearing such a story) would be equally swayed by a third-person analysis of why it is a good idea for some people.

Comment author: Ben_West  (EA Profile) 11 April 2015 04:01:16PM 2 points [-]

How broadly do you define "changing careers"? I, for example, switched from being a developer to founding a company for E2G reasons.

In response to comment by Ben_West  (EA Profile) on April Open Thread
Comment author: Marcus_A_Davis 12 April 2015 04:27:04PM 0 points [-]

That would count, but I should have been broader in my statement anyway. People like the "here's what I did and why I did it" narrative, and earning to give could use more of these stories in general. I think a variety of them, showing different perspectives from people in different positions and with different abilities, would be a boon.

Btw, I was quite wrong about there being no first-person accounts, as, for one, Chris Hallquist has written about this extensively.
