Comment author: MichaelDickens  (EA Profile) 18 September 2015 03:34:28AM 0 points [-]

Can you talk more about what convinced you that they're a good giving opportunity on the margin?

I asked Tobias Pulver about this specifically. He told me about their future plans and how they'd like to use marginal funds. There are things they would have done if they'd had more money, but couldn't. I don't know whether they're okay with me discussing this publicly, but I invite Tobias or anyone else at REG to comment on it.

I know you know I think this, but I think it's better for the health of ACE if their supporters divide their money between ACE and its recommended charities, even if the evidence for its recommended charities isn't currently as strong as I'd like.

If ACE thought this was best, couldn't it direct some of the funds I donate to its top charities? (This is something I probably should have considered and investigated, although it's moot since I'm not planning on donating directly to ACE.)

Would I expect even the couple of actual PhDs MIRI hired recently to do anything really groundbreaking? They might, but I don't see why you'd think it likely.

AI safety is such a new field that I don't expect you need to be a genius to do anything groundbreaking. MIRI researchers are probably about as intelligent as most FLI grantees. But I expect them to be better at AI safety research because MIRI has been working on it for longer and has a stronger grasp of the technical challenges.

Comment author: TopherHallquist 20 September 2015 09:15:22PM 1 point [-]

AI safety is such a new field that I don't expect you need to be a genius to do anything groundbreaking.

They claim to be working on areas like game theory, decision theory, and mathematical logic, which are all well-developed fields of study. I see no reason to think those fields have lots of low-hanging fruit that would allow average researchers to make huge breakthroughs. Sure, they have a new angle on those fields, but does a new angle really overcome their lack of an impressive research track record?

But I expect them to be better at AI safety research because MIRI has been working on it for longer and has a stronger grasp of the technical challenges.

Do they have a stronger grasp of the technical challenges? They're certainly opinionated about what it will take to make AI safe, but their (public) justifications for those opinions look pretty flimsy.

Comment author: Tom_Ash  (EA Profile) 18 September 2015 02:29:28PM *  0 points [-]

In a way, it was easier to make a case for MIRI back when they did a lot of advocacy work. Now that they're billing themselves as a research institute, I think they've set a much higher bar for themselves, and when it comes to doing research (as opposed to advocacy) they've got much less of a track record to go on.

What are the best groups that are specifically doing advocacy for (against?) AI risk, or existential risks in general?

Comment author: TopherHallquist 20 September 2015 09:06:05PM 0 points [-]

If I had to guess, I would guess FLI, given their ability to at least theoretically use the money for grant-making. Though after Elon Musk's $10 million donation, this cause area seems to be short on room for more funding.

Comment author: TopherHallquist 18 September 2015 02:14:43AM *  6 points [-]

Thanks for writing this, Michael. More people should write up documents like these. I've been thinking of doing something similar, but haven't found the time yet.

I realized reading this that I haven't thought much about REG. It sounds like they do good things, but I'm a bit skeptical re: their ability to make good use of the marginal donation they get. I don't think a small budget, by itself, is strong evidence that they could make good use of more money. Can you talk more about what convinced you that they're a good giving opportunity on the margin? (I'm thinking out loud here, don't mean this paragraph to be a criticism.)

Re: ACE's recommended charities. I know you know I think this, but I think it's better for the health of ACE if their supporters divide their money between ACE and its recommended charities, even if the evidence for its recommended charities isn't currently as strong as I'd like. But I admit this is based on a fuzzy heuristic, not a knock-down argument.

Re: MIRI. Setting aside what I think of Yudkowsky, I think you may be overlooking the fact that "competence" is relative to what you're trying to accomplish. Luke Muehlhauser accomplished a lot in terms of getting MIRI to follow nonprofit best practices, and from what I've read of his writing, I expect he'll do very well in his new role as an analyst for GiveWell. But there's a huge gulf between being competent in that sense and being able to do (or supervise other people doing) groundbreaking math and CS research.

Nate Soares seems as smart as you'd expect a former Google engineer to be, but would I expect him to do anything really groundbreaking? No. Would I expect even the couple of actual PhDs MIRI hired recently to do anything really groundbreaking? They might, but I don't see why you'd think it likely.

In a way, it was easier to make a case for MIRI back when they did a lot of advocacy work. Now that they're billing themselves as a research institute, I think they've set a much higher bar for themselves, and when it comes to doing research (as opposed to advocacy) they've got much less of a track record to go on.

Comment author: tomstocker 24 August 2015 09:19:04PM 1 point [-]

Sounds roughly like the manifesto that brought about the Seattle demonstrations. A lot of work has been tried here by the NGO community, to little avail. The vested interests are large; I wouldn't underestimate the difficulty of getting the fine-tuning of the policy right, or the amount of effort it would take to get these things through and keep them there.

Comment author: TopherHallquist 25 August 2015 01:00:59AM 0 points [-]

I was 12 when those demonstrations happened, and I'm a little fuzzy on the agenda of the protesters. I'm currently finishing up Stiglitz's Globalization and Its Discontents, which, while critical of the IMF, also complains about anti-globalization activists lobbying for more protectionist measures on the part of developed countries against goods produced in developing countries. Do you have any idea if that applies to the Seattle protests?

Comment author: Ben_Todd 21 August 2015 09:29:49PM 2 points [-]

CGD aims to find these policies. Check out their list of initiatives: http://www.cgdev.org/section/initiatives

The Copenhagen Consensus, though, thinks trade reform could be very high return: http://www.copenhagenconsensus.com/post-2015-consensus/trade

Comment author: TopherHallquist 22 August 2015 01:57:44AM 0 points [-]

Question about CGD: are they optimizing for making their proposals sound boring even though in fact they ideally want huge changes from the status quo? Or do they really just think we need tweaks to the status quo?

(This is based on a very superficial glance at their site; I was already planning on trying to read more of their materials.)

9

Rich-country policy changes that could greatly benefit poor countries

In the EA movement, there's a lot of enthusiasm for significantly increasing the number of immigrants able to come to countries like the United States each year. This would help the global poor in two ways: it would directly help immigrants who come to rich countries but wouldn't have been... Read More
Comment author: Larks 10 July 2015 10:28:20PM 1 point [-]

A longer-term strategy might be to found an organization dedicated to shifting incentives for politicians in the US, UK, and France towards less bellicose rhetoric, less escalation, and more international compromise.

Or it might be to shift incentives in the US, UK, and France towards more credible deterrence and sharper red lines, to prevent a slow sleepwalk into nuclear war when the tanks cross the Vistula. Given that there are credible game-theoretic and historical arguments on both sides, it seems rather unfair to highlight only one direction as a possibility.

Comment author: TopherHallquist 11 July 2015 01:01:12AM 0 points [-]

Hmmm... let me put it this way: I suspect the right approach to dealing with the current situation in Ukraine is to back off there, while taking a hard line re: willingness to defend Baltic NATO states like Estonia. Truly sharp red lines are established by things like the NATO treaty, not [hawkish politician X] shooting his mouth off.

Comment author: Carl_Shulman 09 July 2015 11:58:06PM *  8 points [-]

I know GiveWell is aware of these articles and has looked more into nukes. Probably more conversation notes will be coming out. There is broad agreement (and good object-level evidence) that NATO-Russia nuclear risk is the highest it's been in the post-Cold War period. One reason GiveWell has cited for not putting resources into nukes (although it was perhaps a runner-up to the GCRs they have invested more in) is the existence of a large, established community already working on the problem that seemed fairly competent.

"A longer-term strategy might to found an organization dedicated to shifting incentives towards politicians in the US, UK, and France towards less bellicose rhetoric and less escalation, and more international compromise."

Why not support the existing organizations, which have people with a lifetime of experience, scholarly background, and political connections?

"a survey of experts putting the risk of nuclear war with Russia over the next 5 years at 2%"

One note for interpreting that: the experts themselves didn't give those numbers. I was talking about this with someone, and they noted that the survey didn't actually ask for probabilities (except a 50:50 option), but for verbal descriptions, which the authors converted into probabilities by assuming a certain statistical distribution in the relationship between descriptions and probabilities. The previous, 'more rigorous' study asked for answers on a 1-10 scale. Risk is definitely up a lot, but we don't have experts' explicit credences, which might be higher or lower than that.

In the EA community, see GCRI's work, e.g. this paper on "Analyzing and reducing the risks of inadvertent nuclear war between the United States and Russia." It discusses the disproportionate role of high-tension periods such as the Cuban Missile Crisis or today's fighting in Eastern Europe, covers many modelling details, and does some estimation.

Comment author: TopherHallquist 10 July 2015 02:19:35AM 3 points [-]

I know GiveWell is aware of these articles, and has looked more into nukes. Probably more conversation notes will be coming out.

This is good to know.

Why not support the existing organizations, which have people with a lifetime of experience, scholarly background, and political connections?

Do you have any specific organizations in mind? Existing anti-nuclear-weapons orgs seem focused on disarmament, which seems extremely unlikely as long as Putin (or someone like him) is in power in Russia. And existing US anti-war orgs seem tragically ineffective. But maybe that's because it's just too hard to have an effective anti-war organization in the current US political context.

Partly, I was thinking of an org focused on achievable, narrowly defined actions, one that would fight, say, a bill in Congress to provide arms to Ukraine or authorize "limited" military intervention in Eastern Europe, or that would raise a fuss when presidential candidates go a bit over the line in bellicose rhetoric (disincentivizing such rhetoric). Maybe there are already groups that do things like that; I admit I've only recently started trying to understand this area better.

8

Have we underestimated the risk of a NATO-Russia nuclear war? Can we do anything about it?

Until recently, I thought that the risk of a nuclear war in the 21st century, while not zero, was nevertheless very low and the marginal bit of effort spent reducing it further was probably not a good use of resources. But in the past two weeks, a series of articles... Read More
Comment author: MichaelDickens  (EA Profile) 03 July 2015 10:57:00PM 2 points [-]

Looks like the formatting on your link is messed up.

In response to comment by MichaelDickens  (EA Profile) on July Open Thread
Comment author: TopherHallquist 08 July 2015 03:12:45AM 0 points [-]

Crap, thanks. Forgot the forum uses Markdown rather than HTML.
