Comment author: MHarris 10 October 2017 04:10:56PM 0 points [-]

One issue to consider is whether catastrophic risk is a sufficiently popular cause for an agency to sustain itself on. Independent organisations can be vulnerable to cuts. This probably varies a lot by country.

Comment author: MHarris 10 October 2017 04:05:54PM *  3 points [-]

This book is a core text on the subject; it explicitly considers when specific agencies are effective and motivated to pursue particular goals: https://www.amazon.co.uk/Bureaucracy-Government-Agencies-Basic-Classics/dp/0465007856

I'm also reminded of Nate Silver's interviews with the US hurricane forecasting agency in The Signal and the Noise.

Comment author: nexech 10 October 2017 02:49:03PM 3 points [-]

Do you know of any resources discussing the pros and cons of the introduction of new government agencies?

An idea worth discussing, regardless.

Comment author: Tuukka_Sarvi 10 October 2017 01:01:53PM 0 points [-]

Hi! Apologies for response delay.

"True, but to if I put myself in the perfect altruist company owner shoes I would really want to delegate the allocation of the my charitable giving, because I am too busy running my company to have much good information about who to donate to."

I agree that it is usually not efficient for the same person to manage and optimize both 1) (for-profit private) company operations and 2) the allocation of charitable giving. So the person doing 1) would do well to delegate 2) to someone she trusts.

In any case, I reiterate my previous point: I don't think I would support "benevolent" companies (benevolent in the sense that the company commits to donating all profits). Firstly, it would shrink the possible investor base, because only strictly altruistic investors would be interested, so such a company would likely not be able to raise as much funding as a "non-benevolent" company (altruistic investors are also interested in "non-benevolent" companies, because they can freely donate any profits they make). Secondly, there is disagreement among altruists about how best to donate; if profits are distributed to investors, each altruist can choose personally how to donate. So even altruistic investors might be hesitant to invest in the "benevolent" company I outlined here.

As far as I can tell, it's best to have a for-profit company optimizing production and maximizing profits, which are distributed to investors, some of whom may be effective altruists who in turn donate them as they see fit. Charitable givers can delegate their giving to a fund of charities; I think OpenPhil is an example of this.

Comment author: casebash 10 October 2017 06:42:05AM 11 points [-]

I'm also worried about what the impact will be if too many people stop focusing on poverty, even though I agree that existential risk is much more important.

  • Firstly, I think that successes in global poverty will help establish our credibility. Everyone cares about poverty, and if we are having successes in this area, people will respect us more, even if they are skeptical of our other projects. Respect is important because it means that more people will want to join us and that people will provide us with more nuanced criticism.
  • Secondly, some of the people who join the movement initially to have an impact on global poverty will end up being interested in other cause areas too. We don't want to lose this demographic, even if existential risk were to dominate our other priorities.

Comment author: RyanCarey 10 October 2017 02:18:07AM 6 points [-]

Hey Zack,

I agree that we lose a bunch by moving our movement's centre of gravity away from poverty and development econ. But if we make the move properly, we gain a lot from the new areas we settle in. What rigor we lose, we should be able to patch up with Bayesian rationalist thinking. What institutional capital we might lose from the World Bank / Gates, we might be able to pick up from RAND/IARPA/Google/etc., a rather more diverse yet impressive group of possible contributors. On the organizational side, yes, a lot of experience, like that of Evidence Action, will be lost, but much will also be gained, for example by working instead at technology think tanks and elsewhere.

I don't think your conclusion that people should start in the arena of poverty is very well-supported either, if you're not comparing it to other arenas that people might be able to start out in. Do you think you might be privileging the hypothesis that people should start in the management of poverty just because that's salient to you, possibly because it's the status quo?

Comment author: Denkenberger 10 October 2017 12:59:52AM 1 point [-]

Does the need for future generations to exist qualify?

Comment author: Maxdalton 09 October 2017 05:15:53PM 2 points [-]

With regard to animal welfare, we forwarded several applications that we found promising but couldn't fully assess to the Open Philanthropy Project, so we may eventually facilitate more grants in this area.

I would not have predicted such an extreme resource split going in: we received fewer high-quality, EA-aligned applications in the global development space than we expected. However, CEA is currently prioritising work on improving the long-term future, so I would have expected the EA community and long-term future categories to receive more funding than global development or animal welfare.

Comment author: KevinWatkinson  (EA Profile) 09 October 2017 11:59:59AM *  2 points [-]

A really interesting project and process. I would like to know how the group first came across NMNW.

Including animal groups would have been a particularly interesting process, in terms of non-utilitarianism and the types of organisations the group would have considered. However, that could also have been an additional process that consumed too much time (on top of the time taken to decide to incorporate speciesism). I would say that even within EAA, non-utilitarian perspectives are generally neglected, and sometimes marginalised, so negotiating that issue might have been difficult.

Overall, I think it is a good thing that utilitarians are giving more consideration to non-utilitarian perspectives, and potentially to groups working in areas that utilitarians and non-utilitarians can agree on. However, this seems to me to be largely the point of Effective Altruism. So the idea that EA is more inclined toward effective utilitarianism (particularly with EA's weighting toward utilitarianism) is quite a complicated issue, one the movement seems to struggle with, so I appreciate the effort made here with this project.

I think, for me, this issue continues to point toward the need for meta-evaluation. A commitment to reflection and evaluation ought to be a core component of EA, and yet it is fairly neglected at the foundational level. I know there are pros and cons to meta-evaluation, but I see few reasons why it couldn't largely benefit the movement and the organisations associated with it.

I hope some of the issues related to this project will be discussed at the forthcoming EA Global conference in London.

Comment author: Michael_S 08 October 2017 05:35:47PM 1 point [-]

As a quick update, I also tried something similar on the EA survey to see whether making certain EA considerations salient would impact people's donation plans. The end result was essentially no effect. Obligation, Opportunity, and emphasizing cost-benefit studies on happiness all had slightly negative treatment effects compared to the control group. The dependent variable was how much EA survey takers reported planning to donate in the future.

Comment author: RyanCarey 08 October 2017 11:19:35AM *  4 points [-]

That's the TLDR that I took away from the article too.

I agree that "disentanglement" is unclear. The skillset that I previously thought was needed for this was something like IQ + practical groundedness + general knowledge + conceptual clarity, and that feels mostly to be confirmed by the present article.

It seems plausible that “start iterating and create feedback loops” is a better alternative to the “wait until things are clearer” strategy.

I have some lingering doubts here as well. I would flesh out an objection to the 'disentanglement' focus as follows: AI strategy depends critically on governments, some academic communities, and some companies, which are complex organizations. Suppose that complex organizations are best understood by an empirical/bottom-up approach, rather than by top-down theorizing. Consider the medical establishment, which I have experience with. If I got ten smart effective altruists to generate mutually exclusive, collectively exhaustive (MECE) hypotheses about it, as the article proposes doing for AI strategy, they would, roughly speaking, hallucinate some nonsense that could be invalidated in minutes by someone with years of experience in the domain.

So if AI strategy depends in critical respects on the nature of complex institutions, then what we need for this research may be, rather than conceptual disentanglement, something more like high-level operational experience in these domains. Since it's hard to find people with that experience, we may want to spend the intervening time interacting with these institutions or working within them on less important issues. Compared to this article, this perspective would de-emphasize the importance of disentanglement, while maintaining the emphasis on entering these institutions and increasing the emphasis on interacting with and making connections within them.

Comment author: aspencer 07 October 2017 05:29:40PM 0 points [-]

Our conference does have an application process. To provide accommodation for participants, travel reimbursement, and events that allow participants to engage with speakers, we can only accept a limited number of participants. Right now, we seek to maximize the conference's impact by selecting participants who are likely to be leaders in the future. We've narrowed down a set of criteria that we believe are indicative of future leadership.

That said, I do agree with your point, and I worry about our ability to accurately predict who will become a leader. There are many possible paths to leadership, and what you do in college may not reflect well what you'll do later. Beyond that, applications are subjective, and we likely have a high false-negative rate: some people are better at writing applications, and skill in writing applications doesn't imply (I think) that the applicant will be successful in the future.

We've considered doing randomized evaluation, but haven't yet, because we don't know of a rigorous way to measure how well we did at selecting participants who will have an impact in the future. Such an impact would, by construction, have to be measured years after the conference. We'd have to track both randomized applicants and applicants we selected and have a way to measure their impact, which would be challenging if not infeasible. Right now, the best heuristic we have for identifying applicants who will be leaders in the future is to look at what they're doing now.

The act of having an application also provides self-selection among applicants; many people have communicated to us that completing the application is a barrier to entry. The application itself selects for a certain level of interest in the conference (the interest required to complete it).

We like to believe that this approach is better than random, but we do admit that it may not be.

Comment author: capybaralet 07 October 2017 04:14:19PM *  2 points [-]

Thanks for writing this. My TL;DR is:

  1. AI policy is important, but we don’t really know where to begin at the object level

  2. You can potentially do 1 of 3 things ATM: A. “disentanglement” research; B. operational support for (e.g.) FHI; C. get in position to influence policy, and wait for policy objectives to be cleared up

  3. Get in touch / Apply to FHI!

I think this is broadly correct, but have a lot of questions and quibbles.

  • I found “disentanglement” unclear. [14] gave the clearest idea of what this might look like. A simple toy example would help a lot.
  • Can you give some idea of what an operations role looks like? I find it difficult to visualize, and I think the uncertainty makes it less appealing.
  • Do you have any thoughts on why operations roles aren’t being filled?
  • One more policy that seems worth starting on: programs that build international connections between researchers, especially around policy-relevant issues of AI (i.e. ethics/safety).
  • The timelines for effective interventions in some policy areas may be short (e.g. 1-5 years), and it may not be possible to wait for disentanglement to be “finished”.
  • Is it reasonable to expect the “disentanglement bottleneck” to be cleared at all? Would disentanglement actually make policy goals clear enough? Trying to anticipate all the potential pitfalls of policies is a bit like trying to anticipate all the potential pitfalls of a particular AI design or reward specification… Fortunately, there is a bit of a disanalogy in that we are more likely to have a chance to correct mistakes with policy (although that could still be very hard/impossible). It seems plausible that “start iterating and create feedback loops” is a better alternative to the “wait until things are clearer” strategy.

Comment author: Larks 07 October 2017 03:01:36PM 0 points [-]

Thanks for writing this up! While it's hard to evaluate externally without seeing the eventual outcomes of the projects, and the counterfactuals of whom you rejected, it seems like you did a good job!

Comment author: Larks 07 October 2017 02:49:31PM 1 point [-]

EA money is money in the hands of EAs. It is argued that this is more valuable than non-EA money, because EAs are better at turning money into EAs. As such, a policy that costs $75 of EA money might be more expensive, in real terms, than one that costs $100 of non-EA money.
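
As a rough illustration (the multiplier here is purely hypothetical, not a figure from the comment above): if a dollar in EA hands does 1.5 times as much good as a dollar in non-EA hands, then

  $75 (EA) × 1.5 = $112.50 in non-EA-equivalent value, which exceeds $100 (non-EA),

so the policy funded with $75 of EA money is the costlier of the two.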

Comment author: weeatquince  (EA Profile) 07 October 2017 01:41:00PM 3 points [-]

I want to add my thanks to Ellie Karslake for organising these events, finding venues, and so on.

Comment author: John_Maxwell_IV 07 October 2017 01:49:58AM 0 points [-]

Was your "WEF" link supposed to point to something involving the World Economic Forum?

Comment author: mhpage 06 October 2017 11:46:00PM *  3 points [-]

Thanks for doing these analyses. I find them very interesting.

Two relatively minor points, which I'm making here only because they refer to something I've seen a number of times, and I worry it reflects a more-fundamental misunderstanding within the EA community:

  1. I don't think AI is a "cause area."
  2. I don't think there will be a non-AI far future.

Re the first point, people use "cause area" differently, but I don't think AI -- in its entirety -- fits any of the usages. The alignment/control problem does: it's a problem we can make progress on, like climate change or pandemic risk. But that's not all of what EAs are doing (or should be doing) with respect to AI.

This relates to the second point: I think AI will impact nearly every aspect of the long-run future. Accordingly, anyone who cares about positively impacting the long-run future should, to some extent, care about AI.

So although there are one or two distinct global risks relating to AI, my preferred framing of AI generally is as an unusually powerful and tractable lever on the shape of the long-term future. I actually think there's a LOT of low-hanging fruit (or near-surface root vegetables) involving AI and the long-term future, and I'd love to see more EAs foraging those carrots.

Comment author: kbog  (EA Profile) 06 October 2017 06:39:31PM 0 points [-]

"are you arguing that the article assumes a bug-free AI won't cause AI accidents?"

I'm not - I'm saying that when you phrase it as accidents, it creates flawed perceptions about the nature and scope of the problem. An accident sounds like a one-time event that a system causes in the course of its performance; AI risk is about systems whose performance itself is fundamentally destructive. Accidents are aberrations from normal system behavior; the core idea of AI risk is that any known specification of system behavior, when followed comprehensively by advanced AI, is not going to work.

Comment author: HaydnBelfield 06 October 2017 06:18:20PM 2 points [-]

Really glad to see you taking conflicts of interest so seriously!
