John_Maxwell_IV comments on Personal thoughts on careers in AI policy and strategy - Effective Altruism Forum

Comment author: John_Maxwell_IV 28 September 2017 10:04:03AM 19 points

In his book Superforecasting, Tetlock distinguishes between two skills related to forecasting: generating questions, and answering them. This "disentanglement research" business sounds more like the first sort of work. Unfortunately, Tetlock's book focuses on the second skill, though I believe he does talk some about the first (e.g., by giving examples of people who are good at it).

I would imagine that for generating questions, curiosity and creativity are useful. Unfortunately, the Effective Altruism movement seems to be bad at creativity.

John Cleese gave this great talk about creativity in which he distinguishes between two mental modes, "open mode" and "closed mode". Open mode is good for generating ideas, whereas closed mode is good for accomplishing well-defined tasks. It seems to me that for a lot of different reasons, the topic of AI strategy might put a person in closed mode:

  • Ethical obligation - Effective altruism is often framed as an ethical obligation. If I recall correctly, a Facebook poll indicated that around half of the EA community sees EA as more of an obligation than an opportunity. Obligations don't typically create a feeling of playfulness.

  • Size of the problem - Paul Graham writes: "Big problems are terrifying. There's an almost physical pain in facing them." AI safety strategy is almost the biggest problem imaginable.

  • Big names - People like Nick Bostrom, Eliezer Yudkowsky, and Eric Drexler have a very high level of prestige within the EA community. (The status difference between them and your average EA is greater than what I've observed between the students & the professor in any college class I remember taking.) Eliezer in particular can get very grumpy with you if you disagree with him. I've noticed that I'm much more apt to generate ideas if I see myself as being at the top of the status hierarchy, and if there is no penalty for coming up with a "bad" idea (even a bad idea can be a good starting point). One idea for solving the EA community's creativity problem is to encourage more EAs to develop Richard Feynman-level indifference to our local status norms.

  • Urgency - As you state in this post, every second counts! Unfortunately urgency typically has the effect of triggering closed mode.

  • Difficulty - As you state in this post, many brilliant people have tried & failed. For some people, this fact is likely to create a sense of intimidation which precludes creativity.

For curiosity, one useful exercise I've found is Anna Salamon's practice of setting a 7-minute timer and trying to think of as many questions as possible within that period. The common pattern here seems to be "quantity over quality". If you're in a mental state where you feel a small amount of reinforcement for a bad idea, and a large amount of reinforcement for a good idea, don't be surprised if a torrent of ideas follows (some of which are good).

Another practice I've found useful is keeping a notebook. Harnessing "ambient thought" and recording ideas as they come to me, in the appropriate notebook page, seems to be much more efficient on a per-minute basis than dedicated brainstorming.

If I were attacking this problem, my overall strategic approach would differ a little from what you describe here.

I would place less emphasis on intellectual centralization and more emphasis on encouraging people to develop idiosyncratic perspectives/form their own ontologies. Rationale: if many separately developed idiosyncratic perspectives all predict that a particular action X is desirable, that is good evidence that we should do X. There's an analogy to stock trading here. (Relatedly, the finance/venture capital industry might be the segment of society with the most domain expertise related to predicting the future, modulo the principal-agent problems that come with investing other people's money. Please let me know if you can think of other candidates... perhaps the intelligence community?)

Discipline could be useful for reading books & passing classes which expand one's library of concepts, but once you get to the original reasoning part, discipline gets less useful. Centralization could be useful for making sure that the space of ideas relevant to AI strategy gets thoroughly covered through our collective study, and for helping people find intellectual collaborators. But I would go for beers, whiteboards, and wikis with long lists of crowdsourced pros and cons, structured to maximize the probability that usefully related ideas will at one point or another be co-located in someone's working memory, before any kind of standard curriculum. I suspect it's better to see AI strategy as a fundamentally interdisciplinary endeavor. (It might be useful to look at successful interdisciplinary research groups such as the Santa Fe Institute for ideas.) And forget all that astronomical waste nonsense for a moment. We are in a simulation. We score 1 point if we get a positive singularity, 0 points otherwise. Where is the loophole in the game's rules that the designers didn't plan for?

[Disclaimer: I haven't made a serious effort to survey the literature or systematically understand the recommendations of experts on either creativity or curiosity, and everything in this comment is just made up of bits and pieces I picked up here and there. If you agree with my hunch that creativity/curiosity are a core part of the problem, it might be worth doing a serious lit review/systematically reading authors who write about this stuff such as Thomas Kuhn, plus reading innovators in various fields who have written about their creative process.]

Comment author: John_Maxwell_IV 28 September 2017 06:52:47PM 5 points

Another thought: Given the nature of this problem, I wonder why the focus is on enabling EAs to discover AI strategy vs trying to gather ideas from experts who are outside the community. Most college professors have office hours you can go to and ask questions. Existing experts aren't suffering from any of the issues that might put EAs in closed mode, and they already have the deep expertise it would take years for us to accumulate. I could imagine an event like the Asilomar AI conference, but for AI safety strategy, where you invite leading experts in every field that seems relevant, do the beer and whiteboards thing, and see what people come up with. (A gathering size much smaller than the Asilomar conference might be optimal for idea generation. I think it'd be interesting to sponsor independent teams where each team consists of one deep learning expert, one AI venture capitalist, one game theory person, one policy person, one historian, one EA/rationalist, etc. and then see if the teams end up agreeing about anything.)

Are there any best practices for getting academics interested in problems?

Comment author: WillPearson 28 September 2017 11:03:12AM 2 points

I agree that creativity is key.

I would point out that you may need discipline to do experiments based on your creative thoughts (if the information you need is not already available). If you can't check your original reasoning against the world, you are adrift in a sea of possibilities.

Comment author: John_Maxwell_IV 28 September 2017 06:41:10PM 0 points

Yeah, that sounds about right. Research and idea generation are synergistic processes. I'm not completely sure what the best way to balance them is.

Comment author: capybaralet 16 October 2017 03:00:35AM 1 point

I strongly agree that independent thinking seems undervalued (in general and in EA/LW). There is also an analogy with ensembling in machine learning (https://en.wikipedia.org/wiki/Ensemble_learning).

By "independent" I mean "thinking about something without considering others' thoughts on it" or something to that effect... it seems easy for people's thoughts to converge too much if they aren't allowed to develop in isolation.

Thinking about it now, though, I wonder if there isn't some even better middle ground; in my experience, group brainstorming can be much more productive than independent thought as I've described it.

There is a very high-level analogy with evolution: I imagine sexual reproduction might create more diversity in a population than horizontal gene transfer, since in the latter case an idea (= gene) which seems good could rapidly become universal, and thus "local optima" might be more of a problem for the population. (I have no idea if that's actually how this works biologically... in fact, it seems like it might not be, since at least some viruses/bacteria seem to do a great job of rapidly mutating to become resistant to defences/treatments.)
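The ensembling analogy can be made concrete with a toy simulation (all numbers here are made up for illustration, not drawn from any real forecasting data): forecasters who are each right only 65% of the time, aggregated by majority vote, are collectively right far more often, provided their errors are independent. That independence requirement is exactly why letting perspectives develop in isolation matters.

```python
import random

random.seed(0)

def noisy_judge(truth, accuracy=0.65):
    """A single forecaster who is right with probability `accuracy`."""
    return truth if random.random() < accuracy else 1 - truth

def majority_vote(truth, n_judges, accuracy=0.65):
    """Aggregate n forecasters with *independent* errors by majority vote."""
    votes = sum(noisy_judge(truth, accuracy) for _ in range(n_judges))
    return 1 if votes > n_judges / 2 else 0

def empirical_accuracy(n_judges, trials=10_000):
    """Estimate how often the (ensemble of) forecaster(s) is correct."""
    correct = 0
    for _ in range(trials):
        truth = random.randint(0, 1)
        correct += majority_vote(truth, n_judges) == truth
    return correct / trials

print(empirical_accuracy(1))   # a lone forecaster: near the base accuracy
print(empirical_accuracy(25))  # the ensemble: substantially higher
```

If the judges' errors were correlated (everyone deferring to the same high-status thinker), the ensemble would gain little over a single judge, which is the failure mode the comment above is pointing at.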

Comment author: Kathy_Forth 14 October 2017 10:57:37PM 1 point

I run a group for creatives on Facebook called Altruistic Ideas. In it, I have worked to foster a creative culture. I've also written about the differences between the EA and rationality cultures vs. the culture creatives need. If this might be useful for anyone's EA goals, please feel free to message me.