Comment author: MichaelDello 14 March 2017 11:18:38AM 0 points [-]

Thanks for this John. I agree that even if you use some form of classical utilitarianism, the future might still plausibly be net negative in value. As far as I can tell, Bostrom and co don't consider this possibility when they argue for the value of existential risk research, which I think is a mistake. They mostly talk about the expected number of human lives in the future if we don't succumb to X-risk, assuming those lives are all (or mostly) positive.

Comment author: RyanCarey 14 March 2017 10:07:53PM *  3 points [-]

People like Bostrom have thoroughly considered how valuable the future might be. The view in existential risk reduction circles is simply that the future has positive expected value on most likely moral systems. There are a bunch of arguments for this. One can argue from improvements to welfare, decreases in war, the emergence of more egalitarian movements over time, the anticipated disappearance of scarcity and of reliance on factory farming, increasing societal wisdom over time, and dozens of other reasons. One way of thinking about this, if you are a symmetric utilitarian, is that we don't have much reason to think either pain or pleasure is more energy-efficient than the other ([Are pain and pleasure equally energy efficient]). Since a singleton would be correlated with some relevant values, it should produce much more pleasure than pain, so the future should have clearly net positive value. I think that to the extent that we can research this question, we can say fairly confidently that for usual value systems, the future has positive expectation.

The reason I think people tend to shy away from public debate on this topic, such as when arguing for the value of existential risk research, is that doing so might risk creating a false equivalence between their own position and very destructive ones, which would be very harmful.

Comment author: the_jaded_one 14 March 2017 07:23:06PM *  1 point [-]

you are effectively "bundling" a high-quality post with additional content, which grants this extra content undue attention.

A post which simply quotes a news source could be criticized as not containing anything original and therefore not worth posting. Someone has already complained that this post is superfluous since a discussion already exists on Facebook.

Actually if I had to criticize my own post I would say its weakness is that it lacks in-depth analysis and research. Unfortunately, in-depth analysis takes a lot of time...

Comment author: RyanCarey 14 March 2017 08:34:08PM 0 points [-]

Posting news together with analysis, arguments, and a few opinions is great. If you find yourself posting news and polemics together, you should think hard about whether they should instead be split.

I don't think this post is too bad.

Comment author: ThomasSittler 11 March 2017 10:51:07AM 0 points [-]

Hey Ryan, I'm following up about the idea of using a Medium blog. Medium is beautiful, and allows commenting on particular portions of the document, which is the main advantage of Google Docs commenting. However, you need to create an account to comment, and it seems like that will be too much trouble for most people. Also, it seems like there isn't a simple way to embed Medium into Squarespace. What are your thoughts?

Comment author: RyanCarey 11 March 2017 07:41:41PM 0 points [-]

I guess you'd get more shares, views, and hence comments on a Medium blog, even accounting for the small inconvenience of signup. Traffic is almost all through sharing nowadays, e.g. the EA Forum gets 70% of referrals from Facebook, >80% if you include other social media, and >90% if you include other blogs.

The proposal would not require embedding anything inside Squarespace. You can just put it on a subdomain with the right logos, linking back to the main page, as in the recent EA example of

Comment author: ThomasSittler 11 March 2017 10:40:23AM 4 points [-]

I think I've only ever seen cause-neutrality used to mean cause-impartiality.

Comment author: RyanCarey 11 March 2017 07:28:11PM *  4 points [-]

I think one aim here is to stop people from conflating other things with cause impartiality, which does seem like an unhelpful thing that people occasionally consciously or subconsciously do.

In response to EA Funds Beta Launch
Comment author: RyanCarey 28 February 2017 08:13:42PM 15 points [-]

Nice. Design, content and UX-wise, this is my favorite CEA website that I can remember!

Comment author: Daniel_Eth 14 February 2017 06:10:00AM 0 points [-]

Yeah, I agree it doesn't just apply to where to donate, but also to how to get money to donate, founding non-profits, etc. Which, taken to its logical conclusion, means maybe I should angle to run for president?

Comment author: RyanCarey 14 February 2017 07:29:17AM *  7 points [-]

Carl already explored this question too, noting in another 2012 article that it is relatively easy to go for PM of the UK.

Far more people should read Carl's old blog posts.

Comment author: RyanCarey 14 February 2017 05:33:15AM *  5 points [-]

For discussion of risk-aversion in altruism, also see Carl's Salary or startup? How do-gooders can gain more from risky careers.

Comment author: Maxdalton 12 February 2017 06:36:03AM 1 point [-]

Hey Ryan, I'd be particularly interested in hearing more about your reasons for your first point (about theoretical vs. empirical work).

Comment author: RyanCarey 12 February 2017 05:58:39PM *  11 points [-]

Sure. Here are some reasons I think this:

  • Too few EAs are doing object-level work (excluding donations), and this could be helped by doing empirical research around possible actions. One can note that there were not enough people interested in starting ventures for EAV, and that newbies are often at a loss to figure out what EA does apart from philosophize. This makes it hard to attract people who are practically competent, such as businesspeople and scientists, and to overcome our philosopher-founder effect. From the standpoint of running useful projects, I think the most useful work would be business plans and research agendas, followed by empirical investigations of issues, followed by theoretical prioritization, followed by philosophical investigations. However, it seems to me that most people are working in the latter categories.

  • For EAs who are actually acting, their actions would more easily be swayed by empirical research. Although most people working on high-impact areas were brought there by theoretical reasoning, their ongoing questions are more concrete. For example, in AI, I wonder: to what extent have concerns about edge-instantiation and incorrigibility borne out in actual AI systems? To what extent has AI progress been driven by new mathematical theory, rather than empirical results? What kind of CV do you need to participate in governing AI? What can we learn about this from the case of nuclear governance? Answers here would help people prioritize much more than, for example, philosophical arguments about the different reasons for working on AI as compared to immigration.

  • Empirical research is easier to build on.

One counterargument would be that perhaps these action-oriented EAs have too-short memories. Since their previous decisions relied on theory from people like Bostrom, shouldn't we expect the same of their future decisions? There are two rebuttals to this. One is that theoretical investigations are especially dependent on the talent of their authors. I would not argue that people like Bostrom (if we know of any) should stop philosophizing about deeply theoretical issues, such as infinite ethics or decision theory. However, that research must be supported by many more empirically minded investigators. Second, there are reasons to expect the usefulness of theoretical investigation to decrease relative to empirical research over time, as the important insights are harvested, people start implementing plans, and plausible catastrophes draw nearer.

Comment author: RyanCarey 11 February 2017 07:56:16PM *  5 points [-]

Great to see this!

My 2c on what research I and others like me would find useful from groups like this:

  • Overviewing empirical and planning-relevant considerations (rather than philosophical theorizing).
  • Focusing on obstacles and major events on the path to "technological maturity", i.e. risky or transformative techs.
  • Investigating specific risky and transformative techs in detail. FHI has done a little of this, but it is very neglected on the margin: scanning microscopy for neural tissue, invasive brain-computer interfaces, surveillance, brain imaging for mind-reading, CRISPR, genome synthesis, GWAS in areas of psychology, etc.
  • Helping us understand AI progress. AI Impacts has done a bit of this, but they are tiny. It would be really useful to have a solid understanding of the growth of capabilities, funding, and academic resources in a field like deep learning. How big is the current bubble compared to previous ones, et cetera.

Also, in its last year, GPP largely specialized in tech and long-run issues. This meant it did a higher density of work on prioritization questions that mattered. Prima facie, this and other considerations suggest the Oxford Prioritization Project should specialize similarly.

Lastly, you'll get more views and comments if you use a (more beautiful) Medium blog.

Happy to justify these positions further.

Good luck!

Comment author: Kerry_Vaughan 10 February 2017 11:55:25PM 5 points [-]

My guess is that the optimal solution has people like Nick controlling quite a bit of money since he has a strong track record and strong connections in the space. Yet, the optimal solution probably has an upper limit on how much money he controls for purposes of viewpoint diversification and to prevent power from consolidating in too few hands. I'm not sure whether we've reached the upper limit yet, but I think we will if EA Funds moves a substantial amount of money.

How can we build these incentives and selection pressures, and, on the object level, get better ideas into EA orgs? Diversifying funding would help, but mostly it seems like it would require CEA to care about this problem a lot and to put in a lot of effort.

I agree that this is worth being concerned about and I would also be interested in ways to avert this problem.

My hope is that as we diversify the selection of fund managers, EA Funds creates an intellectual marketplace of fund managers writing about why their funding strategies are best and convincing people to donate to them. Then our defense against entrenching the power of established groups (e.g. CEA) is that people can vote with their wallets if they think established groups are getting more money than makes sense.

Comment author: RyanCarey 11 February 2017 12:30:32AM *  2 points [-]

Cool. Yeah, I wouldn't want to be pigeonholed as someone concerned about concentration of power, though.

We can have powerful organizations; I just think they should be under incentives such that they will only stay big (i.e. keep good staff and ongoing funding) if they perform. Otherwise, we become a bad kind of bureaucracy.
