Comment author: RyanCarey 13 October 2017 04:01:48AM 2 points [-]

A major risk to the project is people holding on too strongly to their pre-Project views. Update: lower credence.

Sounds more like increased credence to me. People allowed their charity preferences to move but stuck to their pet causes and pet research areas...

Comment author: RyanCarey 11 October 2017 01:57:19AM 0 points [-]

Maybe you would start with a small part of the defense bureaucracy?

Comment author: RyanCarey 10 October 2017 02:18:07AM 5 points [-]

Hey Zack,

I agree that we lose a bunch by moving our movement's centre of gravity away from poverty and development econ. But if we make the move properly, we gain a lot from the new areas we settle in. What rigor we lose, we should be able to patch up with Bayesian rationalist thinking. What institutional capital we might lose from the World Bank / Gates, we might be able to pick up from RAND / IARPA / Google / etc., a rather more diverse yet impressive group of possible contributors. On the organizational side, yes, a lot of experience, like that of Evidence Action, will be lost, but much will also be gained, for example, by working instead at technology think tanks and elsewhere.

I also don't think your conclusion that people should start in the arena of poverty is very well supported, since you're not comparing it to other arenas that people might start out in. Do you think you might be privileging the hypothesis that people should start in the management of poverty just because it's salient to you, possibly because it's the status quo?

Comment author: capybaralet 07 October 2017 04:14:19PM *  2 points [-]

Thanks for writing this. My TL;DR is:

  1. AI policy is important, but we don’t really know where to begin at the object level

  2. You can potentially do 1 of 3 things ATM: A. "disentanglement" research; B. operational support for (e.g.) FHI; C. get in position to influence policy, and wait for policy objectives to be cleared up

  3. Get in touch / Apply to FHI!

I think this is broadly correct, but have a lot of questions and quibbles.

  • I found “disentanglement” unclear. [14] gave the clearest idea of what this might look like. A simple toy example would help a lot.
  • Can you give some idea of what an operations role looks like? I find it difficult to visualize, and I think uncertainty makes it less appealing.
  • Do you have any thoughts on why operations roles aren’t being filled?
  • One more policy area that seems worth starting on: programs that build international connections between researchers, especially around policy-relevant issues of AI (i.e. ethics/safety).
  • The timelines for effective interventions in some policy areas may be short (e.g. 1-5 years), and it may not be possible to wait for disentanglement to be “finished”.
  • Is it reasonable to expect the “disentanglement bottleneck” to be cleared at all? Would disentanglement actually make policy goals clear enough? Trying to anticipate all the potential pitfalls of policies is a bit like trying to anticipate all the potential pitfalls of a particular AI design or reward specification… fortunately, there is a bit of a disanalogy in that we are more likely to have a chance to correct mistakes with policy (although that still could be very hard/impossible). It seems plausible that “start iterating and create feedback loops” is a better alternative to the “wait until things are clearer” strategy.
Comment author: RyanCarey 08 October 2017 11:19:35AM *  4 points [-]

That's the TLDR that I took away from the article too.

I agree that "disentanglement" is unclear. The skillset I previously thought was needed for this was something like IQ + practical groundedness + general knowledge + conceptual clarity, and that feels mostly confirmed by the present article.

It seems plausible that “start iterating and create feedback loops” is a better alternative to the “wait until things are clearer” strategy.

I have some lingering doubts here as well. I would flesh out an objection to the 'disentanglement' focus as follows: AI strategy depends critically on governments, some academic communities and some companies, which are complex organizations. (Suppose that) complex organizations are best understood by an empirical/bottom-up approach, rather than by top-down theorizing. Consider the medical establishment, which I have experience with. If I got ten smart effective altruists to generate mutually exclusive, collectively exhaustive (MECE) hypotheses about it, as the article proposes doing for AI strategy, they would, roughly speaking, hallucinate some nonsense that could be invalidated in minutes by someone with years of experience in the domain. So if AI strategy depends in critical ways on the nature of complex institutions, then what we need for this research may be not so much conceptual disentanglement as something more like high-level operational experience in these domains. Since it's hard to find such people, we may want to spend the intervening time interacting with these institutions or working within them on less important issues. Compared to this article, this perspective would de-emphasize the importance of disentanglement, while maintaining the emphasis on entering these institutions, and increasing the emphasis on interacting with and making connections within them.

Comment author: RyanCarey 19 September 2017 12:09:30AM 4 points [-]

Would a transparent idea directory enable refinement of good ideas into great ones, help great ideas find a team, all the while reducing the overall burden of transaction costs associated with considering new ideas?

A transparent directory of proposals should have some effect in this direction. I've asked for a transparent directory of projects for months; it's something I'd like to see funders like EA Grants and thought-leaders like 80,000 Hours work on. However, we need to be cautious, because pure ideas are not very scarce. They may be 20% of the bottleneck, but 80% is getting talented people. So new project proposals should be presented in such a way that founders will see these ideas and notice if they are a good fit for them.

I - Ready for implementation. These are extremely well considered ideas that support EA principles and have/will contribute good evidence for effectiveness.
II - Worth refining. These are promising ideas that can be upgraded to type I with more background research, adjustments in strategy, etc.
III - Back to the drawing board. These are well intentioned but miss the mark in an important way, perhaps an over-reliance on intuition or misinformation.

I guess that (II-III) are more like forum posts and should usually be filtered out without need for formal review. I think even most proposals in category (I) are too weak to be likely to succeed. I would use a more stringent checklist, e.g. (a) funding may be available, (b) part of a founding team is available, (c) there is some traction demonstrated.

Too many ideas and not enough doers increases the likelihood that doers will settle on weak ideas... if the number of doers is saturated, they only gum up the works.

There are forces in both directions. If more high-quality ideas are shared, then doers may be less likely to settle on weak ideas.

Finally, the main goal of a transparent idea directory is to reduce the unavoidable transaction costs of new ideas.

Then the focus of such a project should not just be to archive ideas; it should be to have more ideas turned into action.

General thought: I think the quality of ideas is far more important than quantity here. I would much rather see two ultra-high-quality proposals online in a system like this than ten mid-range ones. It would be good if people could be encouraged to solicit line-by-line feedback by putting their proposals in Google Docs, and also if there were a requirement for authors to allow anonymous private feedback. Proposals that are substantially downvoted should perhaps disappear for redrafting. Perhaps team-members should be able to submit themselves as candidates for future projects, awaiting a suitably matched project, IDK. It seems like an important space!

Comment author: RyanCarey 05 September 2017 11:12:21PM *  2 points [-]

For me personally, and sticking just to originals, and not compilations, it would be:

  1. Rationality: From AI to Zombies by Eliezer Yudkowsky
  2. Practical Ethics by Peter Singer
  3. Surely You're Joking, Mr. Feynman! by Richard Feynman
  4. Unweaving the Rainbow by Richard Dawkins
  5. Elbow Room by Daniel Dennett
Comment author: RyanCarey 05 September 2017 11:14:20PM *  2 points [-]

But honorable mentions for Superintelligence, the Oxford Handbook of Science Writing, all of Dennett's other books, edge.org, thesciencenetwork.org, Oliver Sacks, lesswrong.com, HPMOR, Wolfram Mathworld, Wikipedia, ...

Comment author: RyanCarey 02 September 2017 05:09:20PM *  7 points [-]

It does look like AI and deep learning will by default push toward greater surveillance and greater power for intelligence agencies. It could supercharge passive surveillance of online activity and prediction of future crime, and could make lie detection reliable.

But here's the catch. Year on year, AI and synthetic biology become more powerful and accessible. Recall the Yudkowsky-Moore law of mad science: "Every 18 months, the minimum IQ necessary to destroy the world drops by one point." How could we possibly expect to be headed toward a stably secure civilization, given that the destructive power of technologies is increasing more quickly than we are really able to adapt our institutions and ourselves to deal with them? An obvious answer is that in a world where many can engineer a pandemic in their basement, we'll need greater online surveillance to flag when they're ordering a concerning combination of lab equipment, or to more sensitively detect homicidal motives.

On this view, the issue of ideological engineering from governments that are not acting in service of their people is one we're just going to have to deal with...

Another thought is that there will be huge effects from AI (as with the internet in general) that come from corporations rather than governments. Interacting with apps aggressively tuned for profit (e.g. a supercharged version of the vision described in the Time Well Spent video - http://www.timewellspent.io/) could - I don't know - increase the docility of the populace or have some other wild kind of effect.

Comment author: RyanCarey 03 July 2017 03:56:54AM *  12 points [-]

Thanks Julia!

I would like to add my thanks to Ali Woodman and Rebecca Raible, who did much of the moderation over the last couple of years, as well as to Dot Impact, Trike and the rest of the previous moderators. My perspective is that since I've moved toward research and CEA has grown, it no longer makes sense for me to dedicate my time to continuing to manage the forum. So I'm grateful for CEA's takeover. Of course, I'm still happy to consult if you need help understanding how the forum has run, or thinking about its strategy.

Thanks all and long live effective altruism! ;)

Ryan

Comment author: arunbharatula 24 May 2017 04:24:04AM *  0 points [-]

I try to hyperlink those parts of my writing that are evidenced by a particular source. This avoids the issue that arises in academic writing where it can be unclear which claims a citation relates to. There is a trade-off with the visual appeal of the writing, particularly since my fix for the aforementioned issue is unconventional. However, I believe the gain in precision outweighs the stylistic considerations.

Edit: In light of the downvotes and various comments on my pieces recommending I rework my contributions and suggesting they may be misleading, I am taking down my work until I can edit it. Hope this improves things. Thanks for the tips.

Comment author: RyanCarey 24 May 2017 08:04:38AM 1 point [-]

The greater ambiguity, I think, is in which part of the linked document you're citing. If you want to resolve ambiguity, then use footnotes and quote the relevant parts of the sources.
