Comment author: Daniel_Dewey 13 January 2017 03:32:29PM *  5 points [-]

This is a great point -- thanks, Jacob!

I think I tend to expect more from people when they are critical -- i.e. I'm fine with a compliment/agreement that someone spent 2 minutes on, but expect critics to "do their homework", and if a complimenter and a critic were equally underinformed/unthoughtful, I'd judge the critic more harshly. This seems bad!

One response is "poorly thought-through criticism can spread through networks; even if it's responded to in one place, people cache and repeat it other places where it's not responded to, and that's harmful." This applies equally well to poorly thought-through compliments; maybe the unchallenged-compliment problem is even worse, because I have warm feelings about this community and its people and orgs!

Proposed responses (for me, though others could adopt them if they thought they're good ideas):

  • For now, assume that all critics are in good faith. (If we have / end up with a bad-critic problem, these responses need to be revised; I'll assume for now that the asymmetry of critique is a bigger problem.)
  • When responding to critiques, thank the critic in a sincere, non-fake way, especially when I disagree with the critique (e.g. "Though I'm about to respond with how I disagree, I appreciate you taking the critic's risk to help the community. Thank you! [response to critique]")
  • Agree or disagree with critiques in a straightforward way, instead of saying e.g. "you should have thought about this harder".
  • Couch compliments the way I would couch critiques.
  • Try to notice my disagreements with compliments, and comment on them if I disagree.

Thoughts?

Comment author: RyanCarey 13 January 2017 06:25:51PM 0 points [-]

"Though I'm about to respond with how I disagree, I appreciate you taking the critic's risk to help the community. Thank you!"

Not sure how much this helps because if the criticism is thoughtful and you fail to engage with it, you're still being rude and missing an opportunity, whether or not you say some magic words.

Comment author: jsteinhardt 12 January 2017 07:19:44PM 19 points [-]

I strongly agree with the points Ben Hoffman has been making (mostly in the other threads) about the epistemic problems caused by holding criticism to a higher standard than praise. I also think that we should be fairly mindful that providing public criticism can have a high social cost to the person making the criticism, even though they are providing a public service.

There are definitely ways that Sarah could have improved her post. But that is basically always going to be true of any blog post unless one spends 20+ hours writing it.

I personally have a number of criticisms of EA (despite overall being a strong proponent of the movement) that I am fairly unlikely to share publicly, due to the following dynamic: anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile.

While I'm sympathetic to the fact that there's also a lot of low-quality / lazy criticism of EA, I don't think responses that involve setting a high bar for high-quality criticism are the right way to go.

(Note that I don't think that EA is worse than is typical in terms of accepting criticism, though I do think that there are other groups / organizations that substantially outperform EA, which provides an existence proof that one can do much better.)

Comment author: RyanCarey 12 January 2017 09:54:57PM 8 points [-]

I strongly agree with the points Ben Hoffman has been making (mostly in the other threads) about the epistemic problems caused by holding criticism to a higher standard than praise. I also think that we should be fairly mindful that providing public criticism can have a high social cost to the person making the criticism, even though they are providing a public service.

This is completely true.

I personally have a number of criticisms of EA (despite overall being a strong proponent of the movement) that I am fairly unlikely to share publicly, due to the following dynamic: anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile.

There are at least a dozen people for whom this is true.

Comment author: LaurenMcG  (EA Profile) 10 January 2017 08:36:08PM 2 points [-]

Has anyone calculated a rough estimate of the value of an undergraduate student's hour? Assume they attend a top UK university, are currently unemployed, and plan to pursue earning to give. Thanks in advance for any info or links!

Comment author: RyanCarey 11 January 2017 01:26:16AM 2 points [-]

It's not an estimate, just some relevant argumentation, but see Katja's post here. Maybe $30-$150 per hour, but it would depend on a lot of factors, and I haven't thought about it very hard.

Comment author: RyanCarey 08 January 2017 06:07:44AM *  1 point [-]

If you have greater uncertainty about the cost-effectiveness of something, there's more value in investigating it, either by doing the thing for the value of information (which makes uncertainty an argument in favor of doing it) or by researching it.
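As a rough illustration of this value-of-information point, here is a minimal toy sketch (a made-up Gaussian model with made-up numbers, nothing rigorous): the more uncertain you are about an intervention's cost-effectiveness, the more it is worth learning about it before committing, because the information changes your decision more often.

```python
# Toy illustration (assumed Gaussian model, made-up numbers): how the value of
# information about an intervention's cost-effectiveness grows with uncertainty.
import numpy as np

rng = np.random.default_rng(0)

def value_of_information(sigma, n=100_000):
    """Compare acting on the prior vs. learning the true cost-effectiveness first.

    The intervention's true value is drawn from N(0, sigma); the alternative is
    a safe option worth 0. Acting on the prior mean (0), you gain nothing in
    expectation; with perfect information you take the intervention only when
    it turns out to be positive.
    """
    true_value = rng.normal(0.0, sigma, size=n)
    ev_prior = max(true_value.mean(), 0.0)              # decide using the prior mean
    ev_informed = np.maximum(true_value, 0.0).mean()    # decide knowing the true value
    return ev_informed - ev_prior

for sigma in [0.1, 1.0, 10.0]:
    print(f"sigma={sigma:>4}: value of information ~ {value_of_information(sigma):.3f}")
# The gap grows roughly linearly with sigma: greater uncertainty about
# cost-effectiveness means more to gain from doing the thing as an experiment
# or researching it before scaling up.
```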

Comment author: capybaralet 07 January 2017 02:05:59AM 1 point [-]

Do you have any info on how reliable self-reports are with respect to counterfactuals about career changes and GWWC pledging?

I can imagine that people would not be very good at predicting that accurately.

Comment author: RyanCarey 07 January 2017 04:17:37AM 1 point [-]

One would expect some social desirability bias, which might require substantial creativity and work to measure.


Tell us how to improve the forum

  EA Forum volunteers are doing a brief survey to get a better idea about how people use the forum and what they think about it. Your response will help us decide how much effort should be invested in improving various aspects of the forum. Fill out the survey.
Comment author: Peter_Hurford  (EA Profile) 02 January 2017 07:06:24PM 1 point [-]

Thanks for the high-quality analysis!

The lack of growth of the EA Forum coupled with the large growth in EA Survey respondents confuses me. The 2015 EA Survey found 350 self-reported users of the EA Forum, but unfortunately the 2014 survey did not ask about that, so there's no longitudinal data (yet) to compare.

Comment author: RyanCarey 02 January 2017 07:17:22PM *  1 point [-]

Perhaps a combination of (i) different degrees of intensity of engagement and (ii) changing methodology.

(i) High-intensity engagement (reading and writing intellectual arguments) would be expected to grow much more slowly than the total number of people who affiliate with EA.

(ii) How did the survey methodology change between years?

Comment author: SoerenMind  (EA Profile) 31 December 2016 02:54:29PM 0 points [-]

Any details on safety work in Montreal and Oxford (other than FHI I assume)? I might use that for an application there.

Comment author: RyanCarey 31 December 2016 07:57:06PM *  1 point [-]

At Montreal, all I know is that the PhD student, David Krueger, is currently in discussions about what work could be done. At Oxford, I have in mind the work of folks at FHI like Owain Evans and Stuart Armstrong.

In response to comment by RyanCarey on Lunar Colony
Comment author: kbog  (EA Profile) 29 December 2016 03:15:08AM 0 points [-]

These criteria are generic enough that all kinds of systems can improve on them. Healthcare, for instance: build more advanced healthcare centers in more areas of the world, giving every segment of the population more redundancy and resilience in healthcare-related functions. The same goes for education: provide more educational programs, so that they are redundant and resilient to anything that happens to other programs, and offer varied methods of education. If you take an old-fashioned geopolitical view of the world, then sure, being on another planet seems to make you really robust; but if we're protecting against unknown unknowns, you can't assume that far-away-and-in-space is a more valuable direction to go, out of all the other directions available for improving resilience and redundancy.

In response to comment by kbog  (EA Profile) on Lunar Colony
Comment author: RyanCarey 29 December 2016 05:55:03AM *  0 points [-]

Making healthcare centers more advanced would prima facie reduce the resiliency of healthcare systems by making them more complex and brittle. One would have to argue for more specific changes.

You don't need to resort to a geopolitical stance to want to be on another planet. Physical separation and duplication are useful for redundancy in basically everything. Any reasonable reference class makes this look good.

For the last two layers of nested comments you have not actually addressed my arguments (as a careful look back over them will show), nor have you given any impression of engaging seriously with the issue, so this is my final comment in the thread.

Comment author: RyanCarey 29 December 2016 01:50:09AM *  8 points [-]

Well, you might be getting toward the frontiers of where published AI Safety-focused books can take you. From here, you might want to look to AI Safety agendas and specific papers, and AI textbooks.

MIRI has a couple of technical agendas, one for more foundational research and one for more machine learning-based research on AI Safety. Dario Amodei of OpenAI and some other researchers have also put out a machine learning-focused agenda. These agendas cite, and are cited by, a bunch of useful work. There's also great unpublished work by Paul Christiano on AI Control.

In order to understand and contribute to current research, you will also want to do some background reading. Jan Leike (now at DeepMind) has put out a syllabus of relevant reading materials through 80,000 Hours that includes some good suggestions. Personally, for a math student like yourself wanting to start out with theoretical computer science, George Boolos' book Computability and Logic might be useful. Learning Python and TensorFlow is also great in general.

To increase your chances of making a career of this work, you might want to look at the entry requirements for some specific grad schools. You might also want to go for internships at these groups (or at other groups that do similar work).

In academia some labs analyzing safety problems are:

  • UC Berkeley (especially Russell)
  • Cambridge (Adrian Weller)
  • ANU (Hutter)
  • Montreal Institute for Learning Algorithms
  • Oxford
  • Louisville (Yampolskiy)

In industry, DeepMind and OpenAI both have safety-focused teams.

Going to grad school or doing an internship at any of these places (notwithstanding that you won't necessarily end up on a safety-focused team) would be a sweet step toward working on AI Safety as a career.

Feel free to reach out by email at (my first name) at intelligence.org with further questions, or for more personalized suggestions. (And the same offer goes to similarly interested readers)
