Comment author: Marcus_A_Davis 20 July 2018 01:46:54AM 6 points

I think the proposed karma system, particularly when combined with highly rated posts being listed higher, is quite a bad idea. In general, if you are trying to ensure the quality of posts and comments while broadening the forum's reach, there are hard tradeoffs with different strengths and weaknesses. Indeed, I might prefer some type of karma weighting system to overly strict moderation, but even then the weights proposed here don't seem justifiable.

What problem is being solved by giving high-karma users up to 16 times the voting weight that would not be solved by giving them "merely" a maximum of 2 times the possible weight? Or 4 times?

> However, we obviously don’t want this to become a tyranny of a few users. There are several users, holding very different viewpoints, who currently have high karma on the Forum, and we hope that this will help maintain a varied discussion, while still ensuring that the Forum has strong discussion standards.

While it may be true now that there are multiple high-karma users with very different viewpoints, any imbalance among competing viewpoints at the start of a weighted system could feed back on itself. That is to say, if viewpoint X has 50% of the top posters (by weight in the new system), Y has 30%, and Z has 20%, viewpoint Z could easily shrink relative to the others, because the differential voting will compound over time. A toy simulation of this dynamic follows.
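To make the compounding concrete, here is a minimal, hypothetical sketch (every number and modeling choice in it is my own assumption, not anything from the proposal): three viewpoints start with a 50/30/20 split of top-poster weight, and each round a viewpoint's karma gains grow superlinearly with its current weight share, on the assumption that extra weight also buys visibility through higher post ranking.

```python
# Hypothetical toy model of compounding vote weight. Assumptions (mine, not
# the proposal's): voters upvote only their own viewpoint, karma converts
# linearly into vote weight, and extra weight also buys visibility, so
# returns to weight are superlinear (share ** boost).

def simulate(shares, rounds=20, boost=2.0):
    """Each round, viewpoint i earns new karma in proportion to
    shares[i] ** boost, then all shares are renormalized."""
    for _ in range(rounds):
        gains = [s ** boost for s in shares]
        g_total = sum(gains)
        gains = [g / g_total for g in gains]
        total = sum(s + g for s, g in zip(shares, gains))
        shares = [(s + g) / total for s, g in zip(shares, gains)]
    return [round(s, 3) for s in shares]

# Start with the 50/30/20 split from the example above.
print(simulate([0.50, 0.30, 0.20]))
```

Under these toy assumptions, Z's share drifts steadily toward zero even though nothing about the quality of its posts has changed; the point is only the direction of the feedback loop, not its magnitude.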

37 points

Announcing PriorityWiki: A Cause Prioritization Wiki

This essay was jointly written by Peter Hurford and Marcus A. Davis. Locating and keeping up to date on all of the information on every cause worthy of attention is time-consuming work, and a task we think is likely to be duplicated many times over by individuals and organizations.... Read More
12 points

Lessons for estimating cost-effectiveness (of vaccines) more effectively

This essay was jointly written by Peter Hurford and Marcus A. Davis. Summary: We investigated the cost-effectiveness of vaccine research and development to learn how cost-effectiveness estimates are made and where they might go wrong. By doing this, we became far more wary of taking these estimates literally.... Read More
10 points

How beneficial have vaccines been?

This essay was jointly written by Peter Hurford and Marcus A. Davis. Previously we estimated how expensive it is to research and develop a vaccine and also how expensive it is to roll out a vaccine. If vaccines are to be cost-effective, we need to realise significant benefits in return... Read More
Comment author: Denkenberger 17 March 2018 03:45:09PM 2 points

I'm glad to see you are investigating present-generation causes with significant uncertainty, which seem to receive less attention. Any interest in investigating preparation for agricultural catastrophes?

Comment author: Marcus_A_Davis 18 April 2018 12:12:12AM 1 point

Sorry for the extremely slow reply, but yes. That topic is on our radar.

35 points

Announcing Rethink Priorities

This essay was jointly written by Peter Hurford and Marcus A. Davis. Rethink Charity is excited to announce our new project, Rethink Priorities, which is dedicated to doing foundational research on neglected causes in a highly empirical and transparent manner. This work will begin this year, starting with a focus... Read More
19 points

Charity Science: Health - A New Direct Poverty Charity Founded on EA Principles

After 6 months of research, primarily based on GiveWell’s list of charities they wish existed, we’re happy to announce the launch of Charity Science: Health. It is a direct poverty charity that will increase vaccination rates in India using phone-based reminders. We think it has the potential... Read More
Comment author: Gram_Stone 20 February 2016 04:57:36AM 0 points

> But I think we may be disagreeing over whether "thinks AI risk is an important cause" is too close to "is broadly positive towards AI risk as a cause area." I think so. You think not?

Are there alternatives to a person like this? It doesn't seem to me like there are.

"Is broadly positive towards AI risk as a cause area" could mean "believes that there should exist effective organizations working on mitigating AI risk", or could mean "automatically gives more credence to the effectiveness of organizations that are attempting to mitigate AI risk."

It might be helpful if you elaborated more on what you mean by 'aim for neutrality'. What actions would that entail, if you did that, in the real world, yourself? What does hiring the ideal survey supervisor look like in your mind if you can't use the words "neutral" or "neutrality" or any clever rephrasings thereof?

Comment author: Marcus_A_Davis 20 February 2016 05:44:55AM * 0 points

> It might be helpful if you elaborated more on what you mean by 'aim for neutrality'. What actions would that entail, if you did that, in the real world, yourself?

I meant picking someone with no stake whatsoever in the outcome. Someone who, though exposed to arguments about AI risk, has no strong opinions one way or another. In other words, someone without a strong prior on AI risk as a cause area. Naturally, we all have priors, even if they are not explicit, so I am not proposing this as a disqualifying standard, just a goal worth shooting for.

An even broader selection tool I think worth considering alongside this is simply "people who know about AI risk" but that's basically the same as Rob's original point of "have some association with the general rationality or AI community."


Comment author: Robert_Wiblin 20 February 2016 04:40:11AM * 2 points

"Why should the person overseeing the survey think AI risk is an important cause?"

Because someone who believes it's a real risk has strong personal incentives to try to make the survey informative and report the results correctly (i.e. they don't want to die). Someone who believes it's a dumb cause would be tempted to discredit it by making MIRI look bad (or at least wouldn't be as trusted by prospective MIRI donors).

Comment author: Marcus_A_Davis 20 February 2016 04:50:26AM 0 points

Such personal incentives are important but, again, I didn't advocate getting someone hostile to AI risk. I proposed aiming for someone neutral. I know no one is "truly" neutral, but you have to weigh the potential positive personal incentives of someone invested against the potential for motivated thinking (or, more accurately in this case, "motivated selection").

Comment author: Gram_Stone 20 February 2016 04:25:39AM 0 points

> Why should the person overseeing the survey think AI risk is an important cause?

Because the purpose of the survey is to determine MIRI's effectiveness as a charitable organization. If one believes that there is a negligible probability that an artificial intelligence will cause the extinction of the human species within the next several centuries, then it immediately follows that MIRI is an extremely ineffective organization, as it would be designed to mitigate a risk that ostensibly does not need mitigating. The survey is moot if one believes this.

Comment author: Marcus_A_Davis 20 February 2016 04:37:50AM 0 points

I don't disagree that someone who thinks there is a "negligible probability" of AI causing extinction would be unsuited to the task. That's why I said to aim for neutrality.

But I think we may be disagreeing over whether "thinks AI risk is an important cause" is too close to "is broadly positive towards AI risk as a cause area." I think so. You think not?
