Comment author: RobBensinger 08 July 2017 01:52:08AM *  7 points [-]

FWIW, I don't think (1) or (2) plays a role in why MIRI researchers work on the research they do, and I don't think they play a role in why people at MIRI think "learning to reason from humans" isn't likely to be sufficient. The shape of the "HRAD is more promising than act-based agents" claim is more like what Paul Christiano said here:

As far as I can tell, the MIRI view is that my work is aimed at [a] problem which is not possible, not that it is aimed at a problem which is too easy. [...] One part of this is the disagreement about whether the overall approach I'm taking could possibly work, with my position being "something like 50-50" and the MIRI position being "obviously not" [...]

There is a broader disagreement about whether any "easy" approach can work, with my position being "you should try the easy approaches extensively before trying to rally the community behind a crazy hard approach" and the MIRI position apparently being something like "we have basically ruled out the easy approaches, but the argument/evidence is really complicated and subtle."

With a clarification I made in the same thread:

I think Paul's characterization is right, except I think Nate wouldn't say "we've ruled out all the prima facie easy approaches," but rather something like "part of the disagreement here is about which approaches are prima facie 'easy.'" I think his model says that the proposed alternatives to MIRI's research directions by and large look more difficult than what MIRI's trying to do, from a naive traditional CS/Econ standpoint. E.g., I expect the average game theorist would find a utility/objective/reward-centered framework much less weird than a recursive intelligence bootstrapping framework. There are then subtle arguments for why intelligence bootstrapping might turn out to be easy, which Nate and co. are skeptical of, but hashing out the full chain of reasoning for why a daring unconventional approach just might turn out to work anyway requires some complicated extra dialoguing. Part of how this is framed depends on what problem categories get the first-pass "this looks really tricky to pull off" label.

Comment author: Daniel_Dewey 08 July 2017 04:41:23AM 2 points [-]

Thanks for linking to that conversation -- I hadn't read all of the comments on that post, and I'm glad I got linked back to it.

Comment author: Tobias_Baumann 07 July 2017 02:49:05PM *  1 point [-]

Great post! I agree with your overall assessment that other approaches may be more promising than HRAD.

I'd like to add that this may (in part) depend on our outlook on which AI scenarios are likely. Conditional on MIRI's view that a hard or unexpected takeoff is likely, HRAD may be more promising (though it's still unclear). If the takeoff is soft or AI turns out to be more like the economy, then I personally think HRAD is unlikely to be the best way to shape advanced AI.

(I wrote a related piece on strategic implications of AI scenarios.)

Comment author: Daniel_Dewey 07 July 2017 06:17:17PM 2 points [-]

Thanks!

Conditional on MIRI's view that a hard or unexpected takeoff is likely, HRAD is more promising (though it's still unclear).

Do you mean more promising than other technical safety research (e.g. concrete problems, Paul's directions, MIRI's non-HRAD research)? If so, I'd be interested in hearing why you think hard / unexpected takeoff differentially favors HRAD.

Comment author: TaraMacAulay 07 July 2017 06:54:27AM 15 points [-]

I know it's outside the scope of this writeup, but I just wanted to say that I found this really helpful, and I'm looking forward to seeing an evaluation of MIRI's other research.

I'd also be really excited to see more posts about which research pathways you think are most promising in general, and how you compare work on field building, strategy and policy approaches, and technical research.

Comment author: Daniel_Dewey 07 July 2017 06:12:53PM 9 points [-]

Thanks Tara! I'd like to do more writing of this kind, and I'm thinking about how to prioritize it. It's useful to hear that you'd be excited about those topics in particular.

Comment author: Benito 07 July 2017 05:57:42AM *  7 points [-]

Agreed! It was nice to see the clear output of someone who had put a lot of time and effort into a good-faith understanding of the situation.

I was really happy with the layout of the four key factors; it will give me more clarity in further discussions.

Comment author: Daniel_Dewey 07 July 2017 06:11:54PM 6 points [-]

Thanks Kerry, Benito! Glad you found it helpful.

My current thoughts on MIRI's "highly reliable agent design" work

Interpreting this writeup: I lead the Open Philanthropy Project's work on technical AI safety research. In our MIRI grant writeup last year, we said that we had strong reservations about MIRI's research, and that we hoped to write more about MIRI's research in the future. This writeup explains my current...
In response to Open Thread #36
Comment author: DiverWard 15 March 2017 10:10:36PM 3 points [-]

I am new to EA, but it seems that a true effective altruist would not be interested in retiring. When just $1,000 can avert decades of disability-adjusted life years (years of suffering), I do not think it is fair to sit back and relax (even in your 70s) when you could still be earning to give.

In response to comment by DiverWard on Open Thread #36
Comment author: Daniel_Dewey 16 March 2017 03:44:10PM 5 points [-]

Welcome! :)

I think your argument totally makes sense, and you're obviously free to use your best judgement to figure out how to do as much good as possible. However, a couple of other considerations seem important, especially for claims about what a "true effective altruist" would do.

1) One factor in your impact is your ability to stick with your giving; this could give you a reason to adopt something less scary and demanding. By analogy, it might seem best for fitness to commit to intense workouts 5 days a week, strict diet changes, and no alcohol. In practice, though, trying to do this may result in burning out and not doing anything for your fitness, while a less demanding plan might be easier to stick with and result in better fitness over the length of your life.

Personally, the prospect of giving up retirement doesn't seem too demanding; I like working, and retirement is so far away that it's hard to take seriously. However, I'd understand if others didn't feel this way, and I wouldn't want to push them into a commitment they won't be able to keep.

2) Another factor in your impact is the other people you influence who may start giving and would not have done so without your example -- in fact, it doesn't seem implausible that this could make up the majority of your impact over your life. To the extent that giving is a really significant cost for people, it's harder to spread the idea (e.g. many more people are vegetarian than vegan [citation needed]), and asking people to give up major parts of their life story like retirement (or a wedding, or occasional luxuries, or Christmas gifts for their families, etc.) comes with real costs that could be measured in dollars (with lots of uncertainty). More broadly, the norms that we establish as a community affect the growth of the community, which directly affects total giving -- if people see us as a super-hardcore group that requires great sacrifice, I just expect less money to be given.

For these reasons, I prefer to follow and encourage norms that say something like "Hey, guess what -- you can help other people a huge amount without sacrificing anything huge! Your life can be just as you thought it would be, and also help other people a lot!" I actually expect these norms to have better consequences in terms of helping people than stricter norms (like "don't retire") do, mostly for reasons 1 and 2.

There's still a lot of discussion on these topics, and I could imagine finding out that I'm wrong -- for example, I've heard that there's evidence that more demanding religions are more successful at creating a sense of community, and are therefore more satisfying and attractive. However, my best guess is that "don't retire" is too demanding.

(I looked for an article saying something like this but better to link to, but I didn't quickly find one -- if anyone knows where one is, feel free to link!)

Comment author: Daniel_Dewey 13 March 2017 01:57:34PM 1 point [-]

Thanks for putting StrongMinds on my radar!

Comment author: Daniel_Dewey 07 March 2017 05:59:00PM 5 points [-]

Nice work, and looks like a good group of advisors!

Comment author: Daniel_Dewey 27 February 2017 05:39:29AM 7 points [-]

Re: donation: I'd personally feel best about donating to the Long-Term Future EA Fund (not yet ready, I think?) or the EA Giving Group, both managed by Nick Beckstead.

Comment author: Daniel_Dewey 13 February 2017 03:19:07PM *  5 points [-]

Thanks for recommending a concrete change in behavior here!

I also appreciate the discussion of your emotional engagement / other EAs' possible emotional engagement with cause prioritization -- my EA emotional life is complicated, I'm guessing others have a different set of feelings and struggles, and this kind of post seems like a good direction for understanding and supporting one another.

ETA: personally, when the opportunity arises, it feels right to remind myself emotionally of the gravity of the ER-triage-like decisions that humans have to make when allocating resources. I can do this by celebrating wins (e.g. donations / grants others make, actual outcomes) as well as by thinking about how far we have to go in most areas. It's slightly scary, but it makes me more confident that I'm even-handedly examining the world and its problems to the best of my abilities and making the best calls I can, and I hope it keeps my ability to switch cause areas healthy. I'd guess this works for me partially because those emotions don't interfere with my ability to be happy / productive, and I expect there are people whose feelings work differently and who shouldn't regularly dwell on that kind of thing :)
