Comment author: John_Maxwell_IV 30 November 2017 07:28:36AM 0 points [-]

The fragmentation that I perceive in the space may be more by design than is apparent to me. In particular, the US defense department has a strategy for North Korea that may be aligned with only some nonprofits' goals, and it has ways of encouraging the activism it wants to see and discouraging the activism it's not interested in. I already mentioned that nonprofits have refused funding from US government sources so as not to make themselves dependent on a funder that may not be value-aligned with them and does not make exit grants.

Can you talk more about why the DoD might not be value aligned? Perhaps the DoD wants to minimize the probability of a nuclear war, whereas humanitarian activists want to alleviate present suffering, and in some cases these goals trade off?

Comment author: Milan_Griffes 29 November 2017 04:33:30AM 3 points [-]

The study of North Korea may produce insight into how dystopian societal attractor points can be averted or what preventive measures (beyond what is present in today’s North Korea) might help people on the inside destabilize them.

This is a great point.

Comment author: John_Maxwell_IV 30 November 2017 06:06:08AM *  1 point [-]

[Highly speculative]

Maybe there's an unpopularity/coercion downward spiral: the more unpopular a leader becomes, the more the leader needs to rely on coercion in order to stay in power, causing further unpopularity, etc.

Having a source of legitimacy, even if it's completely arbitrary (the "divine right of kings"), helps forestall the spiral, because the leader doesn't need coercion to stay in power during periods of unpopularity.
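
(A toy simulation of this hypothesized spiral, with all parameters invented purely for illustration: at each step, coercion fills whatever gap popularity and legitimacy leave, and coercion in turn erodes popularity. With enough legitimacy, no coercion is needed and the spiral never starts.)

```python
# Toy model of the hypothesized unpopularity/coercion spiral (all numbers invented).
# Each step: the leader applies enough coercion to cover the gap between the control
# they need and their current popularity plus legitimacy; coercion then costs popularity.

def simulate(popularity, legitimacy, steps=10, required_control=1.0, backlash=0.5):
    history = []
    for _ in range(steps):
        # Coercion makes up whatever support popularity and legitimacy don't supply.
        coercion = max(0.0, required_control - popularity - legitimacy)
        # Coercion feeds back into unpopularity.
        popularity = max(0.0, popularity - backlash * coercion)
        history.append((round(popularity, 2), round(coercion, 2)))
    return history

print("no legitimacy: ", simulate(popularity=0.6, legitimacy=0.0))  # spirals to pure coercion
print("some legitimacy:", simulate(popularity=0.6, legitimacy=0.4))  # stays stable, no coercion
```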

According to this story, the reason communism doesn't end well is because it's an ultra-egalitarian ideology that holds status differences to be illegitimate and revolution to be virtuous. So the only rulers able to stay in power do so through coercion. (See: "dominance" vs "prestige" in social science.)

A surprising implication of this view: the existence of democratically ruled countries makes authoritarian countries less nice to live in. Because democracies make autocrats look less legitimate, autocrats need to rely more on coercion to maintain power. This argument also works in reverse: if Putin makes US democracy look less legitimate, anti-Putin coalitions in Russia have a harder time gaining steam, so Putin doesn't need to crack down as hard on them.

Chinese leaders want a diplomatic solution to the crisis because deposing Kim would make them look less legitimate.

People instinctively want to take a hard line on Kim, but a soft line is an interesting thought experiment. Suppose the US offered Kim $100M to step down. Kim won't take it, because he knows the US can imprison him as soon as he takes his finger off the nuclear button. And there's no way the US can credibly precommit not to do this. Well, actually, I can think of a way to get the same effect: let Kim keep Barron Trump and Sasha Obama as hostages. Not politically viable, and it creates bad incentives for other autocrats, but fun to think about.
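
(A minimal game-theoretic sketch of that commitment problem, with entirely made-up payoffs: by backward induction, Kim refuses the deal unless something, such as a hostage, makes reneging costly for the US.)

```python
# Toy backward-induction sketch of the commitment problem (all payoffs invented).
# Kim moves first: keep the nukes or step down. If he steps down, the US then
# chooses to honor the deal or imprison him.

def outcome(us_payoff_for_imprisoning):
    kim_keep = 5                     # Kim's payoff for keeping the nukes
    kim_honored, us_honored = 8, 8   # payoffs if Kim steps down and the US honors the deal
    kim_imprisoned = -10             # Kim's payoff if the US reneges

    # US's best response once Kim has already stepped down.
    us_choice = "honor" if us_honored >= us_payoff_for_imprisoning else "imprison"
    # Kim anticipates that response and compares it to keeping the nukes.
    kim_if_step_down = kim_honored if us_choice == "honor" else kim_imprisoned
    return ("step down", us_choice) if kim_if_step_down > kim_keep else ("keep nukes",)

print(outcome(us_payoff_for_imprisoning=10))   # no commitment device -> ('keep nukes',)
print(outcome(us_payoff_for_imprisoning=-5))   # reneging made costly -> ('step down', 'honor')
```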

In response to What consequences?
Comment author: Denkenberger 25 November 2017 09:21:42PM *  1 point [-]

I'd be curious how much you think previous attempts at calculating multiple impacts address cluelessness, such as the Causal Networks Model, the work on saving lives in the present generation while reducing X-risk for AI and alternate foods, and cause area comparison.

Comment author: John_Maxwell_IV 26 November 2017 08:49:35AM 1 point [-]
Comment author: DavidMoss 13 October 2017 01:17:08AM 6 points [-]

Instead, participants strongly preferred to continue researching the area they already knew and cared most about, even as other participants were doing the same thing with a different area.

This is one of the things I fear is most likely to fundamentally undermine EA in the long term: people prefer to discuss and associate with people who share their assumptions, concrete concerns, and detailed cause-specific knowledge; EA functionally splits into 3+ movement areas that never speak with each other and don't understand each other's arguments; and cause neutrality essentially stops being a thing. Notably, I think this has already happened to a significant extent.

Comment author: John_Maxwell_IV 17 October 2017 07:23:22AM 1 point [-]

Could public debates be helpful for this?

Comment author: John_Maxwell_IV 13 October 2017 01:19:27AM *  2 points [-]

Regarding “direct benefits for people who would learn from reading” our research: this is very difficult to evaluate, but our tentative feeling was that this was lower than we expected. We received less direct engagement with our research on the EA forum than we expected, and we believe few people read our models. Indirectly, the models were referenced in some newsletters (for example MIRI’s). However, since our writings will remain online, there may be a small but long-lasting trickle of benefits into the future, from people coming across our models.

Facetious interpretation: "Effective Altruism found to have a weak culture of intellectual discourse. Conclusion: Deprioritize intellectual discourse."

Comment author: John_Maxwell_IV 07 October 2017 01:49:58AM 0 points [-]

Was your "WEF" link supposed to point to something involving the World Economic Forum?

Comment author: John_Maxwell_IV 28 September 2017 09:00:01PM 0 points [-]

It sounds like you are doing great work. Congratulations on everything you've accomplished.

You've probably thought about this already, but have you thought much about leveraging the popularity of your large events to increase attendance at your small events? (E.g. encouraging attendees to the large events to sign up for a mailing list where the small events are advertised.)

An idea for seeding new Envision chapters is to get people who are starting new chapters to precommit to weekly Skype calls so you can keep them inspired and figure out where they're failing. Or more broadly, do some case studies of campus student organizations that have successfully seeded chapters in lots of different schools, and try to figure out what they're doing right.

For what it's worth, I think I would relax the conditions for the case where your endowment gets donated away. I can easily imagine you guys missing just 1 or 2 of the criteria you specified and still being an effective organization. Of course you'll have to talk to your funders about that.

Comment author: John_Maxwell_IV 28 September 2017 10:04:03AM *  19 points [-]

In his book Superforecasting, Tetlock distinguishes between two skills related to forecasting: generating questions and answering them. This "disentanglement research" business sounds more like the first sort of work. Unfortunately, the book focuses on the second skill, though I believe he talks some about the first (e.g., giving examples of people who are good at it).

I would imagine that for generating questions, curiosity and creativity are useful. Unfortunately, the Effective Altruism movement seems to be bad at creativity.

John Cleese gave this great talk about creativity in which he distinguishes between two mental modes, "open mode" and "closed mode". Open mode is good for generating ideas, whereas closed mode is good for accomplishing well-defined tasks. It seems to me that for a lot of different reasons, the topic of AI strategy might put a person in closed mode:

  • Ethical obligation - Effective altruism is often framed as an ethical obligation. If I recall correctly, a Facebook poll indicated that around half of the EA community sees EA as more of an obligation than an opportunity. Obligations don't typically create a feeling of playfulness.

  • Size of the problem - Paul Graham writes: "Big problems are terrifying. There's an almost physical pain in facing them." AI safety strategy is almost the biggest problem imaginable.

  • Big names - People like Nick Bostrom, Eliezer Yudkowsky, and Eric Drexler have a very high level of prestige within the EA community. (The status difference between them and your average EA is greater than what I've observed between the students & the professor in any college class I remember taking.) Eliezer in particular can get very grumpy with you if you disagree with him. I've noticed that I'm much more apt to generate ideas if I see myself as being at the top of the status hierarchy, and if there is no penalty for coming up with a "bad" idea (even a bad idea can be a good starting point). One idea for solving the EA community's creativity problem is to encourage more EAs to develop Richard Feynman-level indifference to our local status norms.

  • Urgency - As you state in this post, every second counts! Unfortunately urgency typically has the effect of triggering closed mode.

  • Difficulty - As you state in this post, many brilliant people have tried & failed. For some people, this fact is likely to create a sense of intimidation which precludes creativity.

For curiosity, one useful exercise I've found is Anna Salamon's practice of setting a 7-minute timer and trying to think of as many questions as possible within that period. The common pattern here seems to be "quantity over quality". If you're in a mental state where you feel a small amount of reinforcement for a bad idea, and a large amount of reinforcement for a good idea, don't be surprised if a torrent of ideas follows (some of which are good).

Another practice I've found useful is keeping a notebook. Harnessing "ambient thought" and recording ideas as they come to me, in the appropriate notebook page, seems to be much more efficient on a per-minute basis than dedicated brainstorming.

If I were attacking this problem, my overall strategic approach would differ a little from what you are describing here.

I would place less emphasis on intellectual centralization and more emphasis on encouraging people to develop idiosyncratic perspectives/form their own ontologies. Rationale: if many separately developed idiosyncratic perspectives all predict that a particular action X is desirable, that is good evidence that we should do X. There's an analogy to stock trading here. (Relatedly, the finance/venture capital industry might be the segment of society that has the most domain expertise related to predicting the future, modulo principal-agent problems that come with investing other people's money. Please let me know if you can think of other candidates... perhaps the intelligence community?)
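
(A rough sketch of why independent agreement is strong evidence, assuming, purely for illustration, a 50/50 prior and that each independently developed perspective correctly judges X 70% of the time; both numbers are made up.)

```python
# Toy Bayesian sketch (all numbers invented): how much should agreement among
# independently developed perspectives raise our confidence that action X is good?
# Assumptions: prior P(X is good) = 0.5; each perspective independently endorses a
# good action with probability 0.7 and a bad one with probability 0.3.

def posterior_good(k, n, prior=0.5, accuracy=0.7):
    """P(X is good | k of n independent perspectives endorse X)."""
    like_good = accuracy**k * (1 - accuracy)**(n - k)   # P(votes | X is good)
    like_bad = (1 - accuracy)**k * accuracy**(n - k)    # P(votes | X is bad)
    return prior * like_good / (prior * like_good + (1 - prior) * like_bad)

for n in (1, 3, 5, 7):
    print(f"{n} of {n} perspectives agree -> P(good) = {posterior_good(n, n):.3f}")
# 1 of 1 -> 0.700, 3 of 3 -> 0.927, 5 of 5 -> 0.986, 7 of 7 -> 0.997
```

Under these assumptions, unanimity among five or more genuinely independent perspectives pushes the posterior above 98%; if the perspectives share assumptions, the votes aren't independent and the effect is much weaker, which is the argument for idiosyncrasy.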

Discipline could be useful for reading books & passing classes which expand one's library of concepts, but once you get to the original reasoning part, discipline gets less useful. Centralization could be useful for making sure that the space of ideas relevant to AI strategy gets thoroughly covered through our collective study, and for helping people find intellectual collaborators. But I would go for beers, whiteboards, and wikis with long lists of crowdsourced pros and cons, structured to maximize the probability that usefully related ideas will at one point or another be co-located in someone's working memory, before any kind of standard curriculum. I suspect it's better to see AI strategy as a fundamentally interdisciplinary endeavor. (It might be useful to look at successful interdisciplinary research groups such as the Santa Fe Institute for ideas.) And forget all that astronomical waste nonsense for a moment. We are in a simulation. We score 1 point if we get a positive singularity, 0 points otherwise. Where is the loophole in the game's rules that the designers didn't plan for?

[Disclaimer: I haven't made a serious effort to survey the literature or systematically understand the recommendations of experts on either creativity or curiosity, and everything in this comment is just made up of bits and pieces I picked up here and there. If you agree with my hunch that creativity/curiosity are a core part of the problem, it might be worth doing a serious lit review/systematically reading authors who write about this stuff such as Thomas Kuhn, plus reading innovators in various fields who have written about their creative process.]

Comment author: John_Maxwell_IV 28 September 2017 06:52:47PM *  5 points [-]

Another thought: Given the nature of this problem, I wonder why the focus is on enabling EAs to discover AI strategy vs trying to gather ideas from experts who are outside the community. Most college professors have office hours you can go to and ask questions. Existing experts aren't suffering from any of the issues that might put EAs in closed mode, and they already have the deep expertise it would take years for us to accumulate. I could imagine an event like the Asilomar AI conference, but for AI safety strategy, where you invite leading experts in every field that seems relevant, do the beer and whiteboards thing, and see what people come up with. (A gathering size much smaller than the Asilomar conference might be optimal for idea generation. I think it'd be interesting to sponsor independent teams where each team consists of one deep learning expert, one AI venture capitalist, one game theory person, one policy person, one historian, one EA/rationalist, etc. and then see if the teams end up agreeing about anything.)

Are there any best practices for getting academics interested in problems?

Comment author: WillPearson 28 September 2017 11:03:12AM 2 points [-]

I agree that creativity is key.

I would point out that you may need discipline to do experiments based on your creative thoughts (if the information you need is not available). If you can't check your original reasoning against the world, you are adrift in a sea of possibilities.

Comment author: John_Maxwell_IV 28 September 2017 06:41:10PM 0 points [-]

Yeah, that sounds about right. Research and idea generation are synergistic processes. I'm not completely sure what the best way to balance them is.
