Comment author: DavidMoss 13 October 2017 01:17:08AM 5 points [-]

Instead, participants strongly preferred to continue researching the area they already knew and cared most about, even as other participants were doing the same thing with a different area.

This is one of the things I fear is most likely to fundamentally undermine EA in the long term: people prefer to discuss and associate with people who share their assumptions, concrete concerns, and detailed cause-specific knowledge, so EA functionally splits into 3+ movement areas whose members never speak with each other and don't understand each other's arguments, and cause neutrality essentially stops being a thing. Notably, I think this has already happened to a significant extent.

Comment author: John_Maxwell_IV 17 October 2017 07:23:22AM 1 point [-]

Could public debates be helpful for this?

Comment author: John_Maxwell_IV 13 October 2017 01:19:27AM *  2 points [-]

Regarding “direct benefits for people who would learn from reading” our research: this is very difficult to evaluate, but our tentative feeling was that this was lower than we expected. We received less direct engagement with our research on the EA forum than we expected, and we believe few people read our models. Indirectly, the models were referenced in some newsletters (for example MIRI’s). However, since our writings will remain online, there may be a small but long-lasting trickle of benefits into the future, from people coming across our models.

Facetious interpretation: "Effective Altruism found to have a weak culture of intellectual discourse. Conclusion: Deprioritize intellectual discourse."

Comment author: John_Maxwell_IV 07 October 2017 01:49:58AM 0 points [-]

Was your "WEF" link supposed to point to something involving the World Economic Forum?

Comment author: John_Maxwell_IV 28 September 2017 09:00:01PM 0 points [-]

It sounds like you are doing great work. Congratulations on everything you've accomplished.

You've probably thought about this already, but have you thought much about leveraging the popularity of your large events to increase attendance at your small events? (E.g. encouraging attendees to the large events to sign up for a mailing list where the small events are advertised.)

An idea for seeding new Envision chapters is to get people who are starting new chapters to precommit to weekly Skype calls so you can keep them inspired and figure out where they're failing. Or more broadly, do some case studies of campus student organizations that have successfully seeded chapters in lots of different schools, and try to figure out what they're doing right.

For what it's worth, I think I would relax the conditions for the case where your endowment gets donated away. I can easily imagine you guys missing just 1 or 2 of the criteria you specified and still being an effective organization. Of course you'll have to talk to your funders about that.

Comment author: John_Maxwell_IV 28 September 2017 10:04:03AM *  18 points [-]

In Tetlock's book Superforecasting, he distinguishes between two skills related to forecasting: generating questions, and answering them. This "disentanglement research" business sounds more like the first sort of work. Unfortunately, Tetlock's book focuses on the second skill, but I do believe he talks some about the first skill (e.g. giving examples of people who are good at it).

I would imagine that for generating questions, curiosity and creativity are useful. Unfortunately, the Effective Altruism movement seems to be bad at creativity.

John Cleese gave this great talk about creativity in which he distinguishes between two mental modes, "open mode" and "closed mode". Open mode is good for generating ideas, whereas closed mode is good for accomplishing well-defined tasks. It seems to me that for a lot of different reasons, the topic of AI strategy might put a person in closed mode:

  • Ethical obligation - Effective altruism is often framed as an ethical obligation. If I recall correctly, surveys indicate that around half of the EA community sees EA as more of an obligation than an opportunity. Obligations don't typically create a feeling of playfulness.

  • Size of the problem - Paul Graham writes: "Big problems are terrifying. There's an almost physical pain in facing them." AI safety strategy is almost the biggest problem imaginable.

  • Big names - People like Nick Bostrom, Eliezer Yudkowsky, and Eric Drexler have a very high level of prestige within the EA community. (The status difference between them and your average EA is greater than what I've observed between the students & the professor in any college class I remember taking.) Eliezer in particular can get very grumpy with you if you disagree with him. I've noticed that I'm much more apt to generate ideas if I see myself as being at the top of the status hierarchy, and if there is no penalty for coming up with a "bad" idea (even a bad idea can be a good starting point). One idea for solving the EA community's creativity problem is to encourage more EAs to develop Richard Feynman-level indifference to our local status norms.

  • Urgency - As you state in this post, every second counts! Unfortunately urgency typically has the effect of triggering closed mode.

  • Difficulty - As you state in this post, many brilliant people have tried & failed. For some people, this fact is likely to create a sense of intimidation which precludes creativity.

For curiosity, one useful exercise I've found is Anna Salamon's practice of setting a 7-minute timer and trying to think of as many questions as possible within that period. The common pattern here seems to be "quantity over quality". If you're in a mental state where you feel a small amount of reinforcement for a bad idea, and a large amount of reinforcement for a good idea, don't be surprised if a torrent of ideas follows (some of which are good).
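
If it helps, here is a minimal sketch of that exercise as a script; the 7-minute default and the output file name are just placeholders I made up, not part of Anna's practice:

```python
# Minimal sketch of a timed question-generation sprint: type questions for
# N minutes, quantity over quality, and save everything for later sorting.
# Note: input() blocks, so the deadline is only checked between entries.
import time

def question_sprint(minutes=7, outfile="questions.txt"):
    deadline = time.time() + minutes * 60
    questions = []
    print(f"Generate as many questions as you can in {minutes} minutes. Go.")
    while time.time() < deadline:
        entry = input("? ").strip()
        if entry:
            questions.append(entry)
    with open(outfile, "a", encoding="utf-8") as f:
        f.write("\n".join(questions) + "\n")
    print(f"Captured {len(questions)} questions.")

if __name__ == "__main__":
    question_sprint()
```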

Another practice I've found useful is keeping a notebook. Harnessing "ambient thought" and recording ideas as they come to me, in the appropriate notebook page, seems to be much more efficient on a per-minute basis than dedicated brainstorming.

If I were attacking this problem, my overall strategic approach would differ a little from what you are describing here.

I would place less emphasis on intellectual centralization and more emphasis on encouraging people to develop idiosyncratic perspectives/form their own ontologies. Rationale: if many separately developed idiosyncratic perspectives all predict that a particular action X is desirable, that is good evidence that we should do X. There's an analogy to stock trading here. (Relatedly, the finance/venture capital industry might be the segment of society that has the most domain expertise related to predicting the future, modulo principal-agent problems that come with investing other people's money. Please let me know if you can think of other candidates... perhaps the intelligence community?)
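
To put a rough number on that aggregation intuition, here's a minimal sketch; the prior and per-perspective accuracy are made-up assumptions, and the independence of the perspectives is doing most of the work:

```python
# If several independently developed perspectives each endorse action X, and
# each perspective is only modestly reliable on its own, their agreement is
# itself strong evidence for X (hypothetical numbers).

def posterior_x_is_good(n_endorsements, prior=0.5, accuracy=0.7):
    """Posterior P(X is good) after n independent perspectives all endorse X.

    Assumes each perspective endorses a good action with probability `accuracy`
    and a bad action with probability 1 - accuracy, independently of the others.
    """
    like_good = accuracy ** n_endorsements       # P(all endorse | X is good)
    like_bad = (1 - accuracy) ** n_endorsements  # P(all endorse | X is bad)
    return prior * like_good / (prior * like_good + (1 - prior) * like_bad)

for n in (1, 3, 5):
    print(n, round(posterior_x_is_good(n), 3))
# 1 perspective -> 0.7, 3 perspectives -> ~0.927, 5 perspectives -> ~0.986
```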

Discipline could be useful for reading books & passing classes which expand one's library of concepts, but once you get to the original reasoning part, discipline gets less useful. Centralization could be useful for making sure that the space of ideas relevant to AI strategy gets thoroughly covered through our collective study, and for helping people find intellectual collaborators. But I would go for beers, whiteboards, and wikis with long lists of crowdsourced pros and cons, structured to maximize the probability that usefully related ideas will at one point or another be co-located in someone's working memory, before any kind of standard curriculum. I suspect it's better to see AI strategy as a fundamentally interdisciplinary endeavor. (It might be useful to look at successful interdisciplinary research groups such as the Santa Fe Institute for ideas.) And forget all that astronomical waste nonsense for a moment. We are in a simulation. We score 1 point if we get a positive singularity, 0 points otherwise. Where is the loophole in the game's rules that the designers didn't plan for?

[Disclaimer: I haven't made a serious effort to survey the literature or systematically understand the recommendations of experts on either creativity or curiosity, and everything in this comment is just made up of bits and pieces I picked up here and there. If you agree with my hunch that creativity/curiosity are a core part of the problem, it might be worth doing a serious lit review/systematically reading authors who write about this stuff such as Thomas Kuhn, plus reading innovators in various fields who have written about their creative process.]

Comment author: John_Maxwell_IV 28 September 2017 06:52:47PM *  5 points [-]

Another thought: Given the nature of this problem, I wonder why the focus is on enabling EAs to discover AI strategy vs trying to gather ideas from experts who are outside the community. Most college professors have office hours you can go to and ask questions. Existing experts aren't suffering from any of the issues that might put EAs in closed mode, and they already have the deep expertise it would take years for us to accumulate. I could imagine an event like the Asilomar AI conference, but for AI safety strategy, where you invite leading experts in every field that seems relevant, do the beer and whiteboards thing, and see what people come up with. (A gathering size much smaller than the Asilomar conference might be optimal for idea generation. I think it'd be interesting to sponsor independent teams where each team consists of one deep learning expert, one AI venture capitalist, one game theory person, one policy person, one historian, one EA/rationalist, etc. and then see if the teams end up agreeing about anything.)

Are there any best practices for getting academics interested in problems?

Comment author: WillPearson 28 September 2017 11:03:12AM 2 points [-]

I agree that creativity is key.

I would point out that you may need discipline to do experiments based on your creative thoughts (if the information you need is not available). If you can't check your original reasoning against the world, you are adrift in a sea of possibilities.

Comment author: John_Maxwell_IV 28 September 2017 06:41:10PM 0 points [-]

Yeah, that sounds about right. Research and idea generation are synergistic processes. I'm not completely sure what the best way to balance them is.

Comment author: John_Maxwell_IV 21 September 2017 06:59:40PM *  2 points [-]

The proposed website would need to be fairly sophisticated to handle the multiple inputs, and would likely need nearly constant updating.

It's possible that you could cobble something together using Google Forms + Google Sheets/Fieldbook/Airtable/wiki software/etc. until you really understood the needs of your use case.
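
As a rough illustration of that stopgap (not a recommendation of any particular tool), here's a sketch that reads a CSV export of the form responses and shortlists proposals; the column names (votes, title, status) are placeholders for whatever fields the form would actually collect:

```python
# Rough sketch: Google Forms responses land in a spreadsheet, which can be
# exported as CSV; a short script then does the filtering and sorting.
import csv

def load_proposals(csv_path):
    """Read one row per submitted proposal from an exported CSV."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def shortlist(proposals, min_votes=3):
    """Return proposals with enough community votes, highest-voted first."""
    voted = [p for p in proposals if int(p.get("votes") or 0) >= min_votes]
    return sorted(voted, key=lambda p: int(p["votes"]), reverse=True)

if __name__ == "__main__":
    for p in shortlist(load_proposals("idea_directory_export.csv")):
        print(p["votes"], p["title"], "-", p["status"])
```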

Comment author: RyanCarey 19 September 2017 12:09:30AM 4 points [-]

Would a transparent idea directory enable refinement of good ideas into great ones, help great ideas find a team, all the while reducing the overall burden of transaction costs associated with considering new ideas?

A transparent directory of proposals should have some effect in this direction. I've asked for a transparent directory of projects for months; it's something I'd like to see funders like EA Grants and thought-leaders like 80,000 Hours work on. However, we need to be cautious, because pure ideas are not very scarce. They may be 20% of the bottleneck, but the other 80% is getting talented people. So new project proposals should be presented in such a way that founders will see these ideas and notice if they are a good fit for them.

I- Ready for implementation. These are extremely well considered ideas that support EA principles and have/will contribute good evidence for effectiveness.
II- Worth refining. These are promising ideas that can be upgraded to type I with more background research, adjustments in strategy, etc.
III- Back to the drawing board. These are well intentioned but miss the mark in an important way, perhaps an over-reliance on intuition or misinformation.

I guess that (II-III) are more like forum posts and should usually be filtered out without the need for formal review. I think even most proposals in category (I) are too weak to be likely to succeed. I would use a more stringent checklist, e.g. (a) funding may be available, (b) part of a founding team is available, and (c) there is some traction demonstrated.

Too many ideas and not enough doers increases the likelihood that doers will settle on weak ideas... if the number of doers is saturated, they only gum up the works.

There are forces in both directions. If more high-quality ideas are shared, then doers may be less likely to settle on weak ideas.

Finally, the main goal of a transparent idea directory is to reduce the unavoidable transaction costs of new ideas.

Then the focus of such a project should not just be to archive ideas; it should be to get more ideas turned into action.

General thought: I think the quality of ideas is far more important than quantity here. I would much rather see two ultra-high-quality proposals online in a system like this than ten mid-range quality ones. It would be good if people could be encouraged to solicit line-by-line feedback by putting their proposals in Google Docs, and also if there were a requirement for authors to allow anonymous private feedback. Proposals that are substantially downvoted should perhaps disappear for redrafting. Perhaps team members should be able to submit themselves as candidates for future projects, awaiting a suitably matched project, IDK. It seems like an important space!

Comment author: John_Maxwell_IV 21 September 2017 06:42:52AM 0 points [-]

If we've got more ideas than talented people, it seems like being able to prioritize ideas well is very important. Having people work on the best 10% of ideas is much better than having them work on a randomly chosen 10%. I think a transparent idea directory could be very valuable for prioritization, because prioritizing well requires brainstorming as many pros and cons for any given idea as possible.

To put it another way, getting more people involved in prioritization means we can take advantage of a broader array of perspectives. See GiveWell on cluster thinking. I think having people brainstorm ways in which a proposal might end up being unexpectedly harmful could be especially valuable.
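
Here's a toy simulation of the "best 10% vs. random 10%" claim; the lognormal distribution is just a stand-in assumption for "idea value is heavy-tailed", not data:

```python
# If idea value is heavy-tailed, working on the best decile of ideas beats
# working on a random decile by a wide margin (assumed distribution, not data).
import random

random.seed(0)
ideas = [random.lognormvariate(0, 2) for _ in range(1000)]  # heavy-tailed values
k = len(ideas) // 10

top_decile = sorted(ideas, reverse=True)[:k]
random_decile = random.sample(ideas, k)

print("mean value, best 10%:  ", sum(top_decile) / k)
print("mean value, random 10%:", sum(random_decile) / k)
```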

General thought: I think the quality of ideas is far more important than quantity here.

My impression is that idea production is like pottery in the sense that the best way to get quality is to aim for quantity.
