RyanCarey comments on Introducing the Oxford Prioritisation Project blog - Effective Altruism Forum


Comment author: RyanCarey 11 February 2017 07:56:16PM *  5 points

Great to see this!

My 2c on what research I and others like me would find useful from groups like this:

  • Surveying empirical and planning-relevant considerations (rather than philosophical theorizing).
  • Focusing on obstacles and major events on the path to "technological maturity", i.e. risky or transformative techs.
  • Investigating specific risky and transformative techs in detail. FHI has done a little of this, but it is very neglected on the margin. Examples: scanning microscopy for neural tissue, invasive brain-computer interfaces, surveillance, brain imaging for mind-reading, CRISPR, genome synthesis, GWAS in areas of psychology, etc.
  • Helping us understand AI progress. AI Impacts has done a bit of this, but they are tiny. It would be really useful to have a solid understanding of the growth of capabilities, funding, and academic resources in a field like deep learning. How big is the current bubble compared to previous ones, et cetera.

Also, in its last year, GPP largely specialized in tech and long-run issues. This meant it did a higher density of work on prioritization questions that mattered. Prima facie, this and other reasons suggest the Oxford Prioritisation Project would also want to specialize similarly.

Lastly, you'll get more views and comments if you use a (more beautiful) Medium blog.

Happy to justify these positions further.

Good luck!

Comment author: Maxdalton 12 February 2017 06:36:03AM 1 point

Hey Ryan, I'd be particularly interested in hearing more about your reasons for your first point (about theoretical vs. empirical work).

Comment author: RyanCarey 12 February 2017 05:58:39PM *  11 points

Sure. Here are some reasons I think this:

  • Too few EAs are doing object-level work (excluding donations), and this can be helped by doing empirical research around possible actions. Note that there were not enough people interested in starting ventures for EAV, and that newcomers are often at a loss to figure out what EA does apart from philosophize. This makes it hard to attract people who are practically competent, such as businesspeople and scientists, and to overcome our philosopher-founder effect. From the standpoint of running useful projects, I think the most useful work would be business plans and research agendas, followed by empirical investigations of issues, followed by theoretical prioritization, followed by philosophical investigations. However, it seems to me that most people are working in the latter categories.

  • For EAs who are actually acting, their actions would more easily be swayed by empirical research. Although most people working on high-impact areas were brought there by theoretical reasoning, their ongoing questions are more concrete. For example, in AI, I wonder: To what extent have concerns about edge-instantiation and incorrigibility borne out in actual AI systems? To what extent has AI progress been driven by new mathematical theory, rather than empirical results? What kind of CV do you need to have to participate in governing AI? What can we learn about this from the case of nuclear governance? This would help people to prioritize much more than, for example, philosophical arguments about the different reasons for working on AI as compared to immigration.

  • Empirical research is easier to build on.

One counterargument is that perhaps these action-oriented EAs have too-short memories. Since their previous decisions relied on theory from people like Bostrom, shouldn't we expect the same of their future decisions? There are two rebuttals. First, theoretical investigations are especially dependent on the talent of their authors. I would not argue that people like Bostrom (if we know of any) should stop philosophizing about deeply theoretical issues, such as infinite ethics or decision theory. But that research must be supported by many more empirically minded investigators. Second, there is reason to expect the usefulness of theoretical investigation to decrease relative to empirical research over time, as the important insights are harvested, people start implementing plans, and plausible catastrophes get nearer.

Comment author: ThomasSittler 11 March 2017 10:51:07AM 0 points

Hey Ryan, I'm following up on the idea of using a Medium blog. Medium is beautiful, and allows commenting on a particular portion of the document, which is the main advantage of Google Docs commenting. However, you need to create an account to comment, and it seems like that will be too much trouble for most people. Also, there doesn't seem to be a simple way to embed Medium into Squarespace (https://support.squarespace.com/hc/en-us/articles/205814558-Connecting-Medium-with-Squarespace). What are your thoughts?

Comment author: RyanCarey 11 March 2017 07:41:41PM 0 points

I guess you'd get more shares and views, and hence comments, on a Medium blog, even accounting for the small inconvenience of signup. Traffic is almost all through sharing nowadays: e.g. the EA Forum gets 70% of referrals from Facebook, >80% if you include other social media, and >90% if you include other blogs.

The proposal wouldn't require embedding anything inside a Squarespace site. You can just put the blog on a subdomain with the right logos and a link back to the main page, as in the recent EA example of https://blog.ought.com/