Comment author: Daniel_Eth 14 February 2017 06:10:00AM 0 points

Yeah, I agree it doesn't just apply to where to donate, but also how to get money to donate, founding non-profits, etc. Which, taken to its logical conclusion, means maybe I should angle to run for president?

Comment author: RyanCarey 14 February 2017 07:29:17AM *  7 points

Carl explored this question too, noting in another 2012 article that it is relatively easy to go for PM of the UK.

Far more people should read Carl's old blog posts.

Comment author: RyanCarey 14 February 2017 05:33:15AM *  5 points

For discussion of risk-aversion in altruism, also see Carl's Salary or startup? How do-gooders can gain more from risky careers.

Comment author: Maxdalton 12 February 2017 06:36:03AM 1 point

Hey Ryan, I'd be particularly interested in hearing more about your reasons for your first point (about theoretical vs. empirical work).

Comment author: RyanCarey 12 February 2017 05:58:39PM *  11 points

Sure. Here are some reasons I think this:

  • Too few EAs are doing object-level work (excluding donations), and this can be helped by doing empirical research around possible actions. One can note that there were not enough people interested in starting ventures for EAV, and that newbies are often at a loss to figure out what EA does apart from philosophize. This makes it hard to attract people who are practically competent, such as businesspeople and scientists, and to overcome our philosopher-founder effect. From the standpoint of running useful projects, I think that what would be most useful would be business plans and research agendas, followed by empirical investigations of issues, followed by theoretical prioritization, followed by philosophical investigations. However, it seems to me that most people are working in the latter categories.

  • For EAs who are actually acting, their actions would more easily be swayed by empirical research. Although most people working on high-impact areas were brought there by theoretical reasoning, their ongoing questions are more concrete. For example, in AI, I wonder: To what extent have concerns about edge-instantiation and incorrigibility been borne out in actual AI systems? To what extent has AI progress been driven by new mathematical theory, rather than empirical results? What kind of CV do you need to participate in governing AI? What can we learn about this from the case of nuclear governance? Answers to these questions would help people prioritize much more than, for example, philosophical arguments about the different reasons for working on AI as compared to immigration.

  • Empirical research is easier to build on.

One counterargument would be that perhaps these action-oriented EAs have too-short memories. Since their previous decisions relied on theory from people like Bostrom, shouldn't we expect the same of their future decisions? There are two rebuttals. First, theoretical investigations are especially dependent on the talent of their authors. I would not argue that people like Bostrom (if we know of any others) should stop philosophizing about deeply theoretical issues, such as infinite ethics or decision theory. However, that research must be supported by many more empirically minded investigators. Second, there are reasons to expect the usefulness of theoretical investigation to decrease relative to empirical research over time, as the important insights are harvested, people start implementing plans, and plausible catastrophes draw nearer.

Comment author: RyanCarey 11 February 2017 07:56:16PM *  5 points

Great to see this!

My 2c on what research I and others like me would find useful from groups like this:

  • Overviewing empirical and planning-relevant considerations (rather than philosophical theorizing).
  • Focusing on obstacles and major events on the path to "technological maturity", i.e. risky or transformative techs.
  • Investigating specific risky and transformative techs in detail. FHI has done a little of this, but it is very neglected on the margin: scanning microscopy for neural tissue, invasive brain-computer interfaces, surveillance, brain imaging for mind-reading, CRISPR, genome synthesis, GWAS studies in areas of psychology, etc.
  • Helping us understand AI progress. AI Impacts has done a bit of this, but they are tiny. It would be really useful to have a solid understanding of the growth of capabilities, funding, and academic resources in a field like deep learning: how big is the current bubble compared to previous ones, et cetera.

Also, in its last year, GPP largely specialized in tech and long-run issues. This meant it did a higher density of work on prioritization questions that mattered. Prima facie, this and other reasons would suggest that the Oxford Prioritization Project should also specialize in the same areas.

Lastly, you'll get more views and comments if you use a (more beautiful) Medium blog.

Happy to justify these positions further.

Good luck!

Comment author: Kerry_Vaughan 10 February 2017 11:55:25PM 5 points

My guess is that the optimal solution has people like Nick controlling quite a bit of money since he has a strong track record and strong connections in the space. Yet, the optimal solution probably has an upper limit on how much money he controls for purposes of viewpoint diversification and to prevent power from consolidating in too few hands. I'm not sure whether we've reached the upper limit yet, but I think we will if EA Funds moves a substantial amount of money.

How can we build these incentives and selection pressures, and, on the object level, get better ideas into EA orgs? Diversifying funding would help, but mostly it seems like it would require CEA to care about this problem a lot and to put in a lot of effort.

I agree that this is worth being concerned about and I would also be interested in ways to avert this problem.

My hope is that as we diversify the selection of fund managers, EA Funds creates an intellectual marketplace of fund managers writing about why their funding strategies are best and convincing people to donate to them. Then our defense against entrenching the power of established groups (e.g. CEA) is that people can vote with their wallets if they think established groups are getting more money than makes sense.

Comment author: RyanCarey 11 February 2017 12:30:32AM *  2 points

Cool. Yeah, I wouldn't want to be pigeonholed as someone concerned about concentration of power, though.

We can have powerful organizations; I just think they should be under incentives such that they will only stay big (i.e. keep good staff and ongoing funding) if they perform. Otherwise, we become a bad kind of bureaucracy.

Comment author: RyanCarey 10 February 2017 08:11:49PM 10 points

Seems like a great idea!

Re Nick: I trust his analysis of charities, including meta-charities, a lot. But the conflict does seem worth thinking a bit about. He is responsible for all 2-3 of the top EA-org grant-makers. From the point of view of redundancy, diverse criticism, and incentives, this is not so good.

If I were CEA, I'm not sure I would have much incentive to identify good new strategies: since a lot of my expected funding over the next decade comes from Nick, and most of the other funders are less thoughtful, he is really the only one I need to work to convince. And if I am Nick, I'm only one person, so there are limits to how much strategic thinking I can transmit, and to the degree to which I will force CEA to engage with other strategic thinkers. It's also hard to see how, if some of its projects failed, I would allow CEA to go unfunded.

How can we build these incentives and selection pressures, and, on the object level, get better ideas into EA orgs? Diversifying funding would help, but mostly it seems like it would require CEA to care about this problem a lot and to put in a lot of effort.

Comment author: Linch 09 February 2017 07:21:00AM *  2 points

I'm not sure how you're operationalizing the difference between unlikely and very unlikely, but I think we should not be able to make sizable updates from this data unless the prior is REALLY big.

(You probably already understand this, but other people might read your comment as suggesting something stronger than you actually intend, and this is a point I really wanted to clarify anyway because I expect it to be a fairly common mistake.)

Roughly: Unsurprising conclusions from experiments with low sample sizes should not change your mind significantly, regardless of what your prior beliefs are.

This is true (mostly) regardless of the size of your prior. If a null result when you have a high prior wouldn't cause a large update downwards, then a null result on something when you have a low prior shouldn't cause a large shift downwards either.

[Math with made-up numbers below]

As mentioned earlier, the probability of observing 0 pledges out of 14 under each hypothesized effect size is roughly:

  • 1% effect: 87%
  • 5% effect: 49%
  • 10% effect: 23%
  • 20% effect: 4.4%

Say your prior belief is that there's a 70% chance of talking to new people having no effect (or meaningfully close enough to zero that it doesn't matter), a 25% chance that it has a 1% effect, and a 5% chance that it has a 10% effect.

Then, by Bayes' theorem, your posterior should be:

  • 75.3% chance it has no effect
  • 23.4% chance it has a 1% effect
  • 1.24% chance it has a 10% effect

If, on the other hand, you originally believed that there's a 50% chance of it having no effect and a 50% chance of it having a 10% effect, then your posterior should be:

  • 81.3% chance it has no effect
  • 18.7% chance it has a 10% effect

Finally, if your prior is that it already has a relatively small effect, this study is far too underpowered to support basically any conclusion at all. For example, if you originally believed that there's a 70% chance of it having no effect and a 30% chance of it having a 0.1% effect, then your posterior should be:

  • 70.3% chance of no effect
  • 29.7% chance of a 0.1% effect

This is all assuming ideal conditions. Model uncertainty and uncertainty about the quality of my experiment should only decrease the size of your update, not increase it.
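A minimal Python sketch of these updates, assuming the binomial model above (each of the 14 people contacted pledges independently with probability p, so the likelihood of observing zero pledges is (1 − p)^14); the function names are just illustrative, and the exact posteriors may differ from the rounded figures above by about a tenth of a percentage point:

```python
N_CONTACTED = 14

def likelihood_of_zero_pledges(effect, n=N_CONTACTED):
    """P(0 pledges | each of n contacts pledges independently with probability `effect`)."""
    return (1 - effect) ** n

def posterior(prior):
    """Apply Bayes' theorem to a prior over effect sizes, given that 0 pledges were observed.

    `prior` maps hypothesized effect sizes (e.g. 0.01 for a 1% effect) to prior probabilities.
    """
    unnormalized = {e: p * likelihood_of_zero_pledges(e) for e, p in prior.items()}
    total = sum(unnormalized.values())
    return {e: u / total for e, u in unnormalized.items()}

# Likelihoods quoted above: roughly 87%, 49%, 23%, 4.4%
for effect in (0.01, 0.05, 0.10, 0.20):
    print(f"P(0 pledges | {effect:.0%} effect) = {likelihood_of_zero_pledges(effect):.1%}")

# Scenario 1: 70% no effect, 25% chance of a 1% effect, 5% chance of a 10% effect
print(posterior({0.0: 0.70, 0.01: 0.25, 0.10: 0.05}))  # ~75%, ~23%, ~1.2%

# Scenario 2: 50% no effect, 50% chance of a 10% effect
print(posterior({0.0: 0.50, 0.10: 0.50}))              # ~81%, ~19%

# Scenario 3: 70% no effect, 30% chance of a 0.1% effect
print(posterior({0.0: 0.70, 0.001: 0.30}))             # ~70%, ~30%
```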

Do you agree here? If so, do you think I should rephrase the original post to make this clearer?

Comment author: RyanCarey 09 February 2017 07:02:04PM 0 points

I trust that you can explain Bayes' theorem; I'm just adding that we can now be fairly confident that the intervention has less than 10% effectiveness.

Comment author: RyanCarey 09 February 2017 03:12:55AM 0 points

You should not update significantly towards “casual outreach about EA is ineffective” or “outreach has a very low probability of success”, since the study is FAR too underpowered to detect even large effects. For example, if talking about GWWC to likely candidates has a 10% chance of making them take the pledge in the next 15-20 days, and the 14 people who were contacted are exactly representative of the pool of “likely candidates”, then we have a 0.9^14 ≈ 23% chance of getting 0 pledges.
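As a quick sanity check of that figure, under the same independence assumption:

```python
# Probability of 0 pledges from 14 independent contacts, each with a 10% chance of pledging
print((1 - 0.10) ** 14)  # ≈ 0.229, i.e. roughly a 23% chance
```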

Given that it was already unlikely that being put in contact with a GWWC member would have a 10% chance of making them take the pledge, we can now call it very unlikely.

Comment author: RyanCarey 05 February 2017 06:25:36PM 7 points

Yes! Probably when we think of Importance, Neglectedness, and Tractability, we should also consider informativeness!

Comment author: RyanCarey 04 February 2017 06:37:31AM *  2 points

I think that AI safety donors, and all those who seek to spread values with the intention of influencing the values guiding a singleton or technological transformation, should probably hold investments that are positively correlated with U.S. markets.

If you want to correlate with near-term AI development, you would buy GOOG (which is ~1% DeepMind).
