Comment author: HoldenKarnofsky 01 March 2017 06:32:45PM 5 points [-]

Thanks for the thoughts!

I'm not sure I fully understand what you're advocating. You talk about "only selectively engag[ing] with criticism" but I'm not sure whether you are in favor of it or against it. FWIW, this post is largely meant to help understand why I only selectively engage with criticism.

I agree that "we should be skeptical of our stories about why we do things, even after we try to correct for this." I'm not sure that the reasons I've given are the true ones, but they are my best guess. I note that the reasons I give here aren't necessarily very different from the reasons others making similar transitions would give privately.

I also agree that there is a significant risk that my views will calcify. I worry about this a fair amount, and I am interested in potential solutions, but at this point I believe that public discourse is not promising as a potential solution, for reasons outlined above. I think there is a bit of a false dichotomy between "engage in public discourse" and "let one's views calcify"; unfortunately I think the former does little to prevent the latter.

I don't understand the claim that "The principles section is an outline of a potential future straightjacket." Which of the principles in that section do you have in mind?

Comment author: RomeoStevens 28 March 2017 07:22:54PM *  0 points [-]

Whoops, I somehow didn't see this until now. Scattered EA discourse, shrug.

I am in support of only engaging selectively.

> I also agree that there is a significant risk that my views will calcify. I worry about this a fair amount, and I am interested in potential solutions,

Great!

> I think there is a bit of a false dichotomy between "engage in public discourse" and "let one's views calcify"; unfortunately I think the former does little to prevent the latter.

Agreed.

> I don't understand the claim that "The principles section is an outline of a potential future straightjacket." Which of the principles in that section do you have in mind?

The whole thing. Principles are better as descriptions than as prescriptions :)

WRT preventing views from calcifying, I think it is very, very important to actively cultivate something similar to this:

"But we ran those conversations with the explicit rule that one could talk nonsensically and vaguely, but without criticism unless you intended to talk accurately and sensibly. We could try out ideas that were half-baked or quarter-baked or not baked at all, and just talk and listen and try them again." -Herbert Simon, Nobel Laureate, founding father of the AI field

I've been researching top and breakout performance, and this sort of thing keeps coming up again and again. Fortunately, creative reasoning is not magic. It has been studied and has some parameters that can be intentionally cultivated.

This talk gives a brief overview: https://vimeo.com/89936101

And I recommend skimming one of Edward de Bono's books, such as Six Thinking Hats. He outlined much of the sort of reasoning found in Zero to One, The Lean Startup, and others way back in the early nineties. It may be that OpenPhil is already having such conversations internally. In which case, great! That would make me much more bullish on the idea that OpenPhil has a chance at outsized impact. My main proxy metric is an Umeshism: if you never output any batshit crazy ideas, your process is way too conservative.

Comment author: Zeke_Sherman 28 March 2017 05:26:15PM *  0 points [-]

> Optimizing for a narrower set of criteria allows more optimization power to be put behind each member of the set. I think it is plausible that those who wish to do the most good should put their optimization power behind a single criterion, as that gives it some chance to actually succeed.

Only if you assume that there are high thresholds for achievements.

> The best candidate afaik is right to exit, as it eliminates the largest possible number of failure modes in the minimum-complexity memetic payload.

I do not understand what you are saying.

Edit: Do you mean the option to get rid of technological developments and start from scratch? I don't think there's any likelihood of that; it runs directly counter to all the pressures described in my post.

Comment author: RomeoStevens 28 March 2017 07:07:18PM 1 point [-]

Right to exit means the right to suicide, the right to exit geographically, the right not to participate in a political process, etc.

In response to Utopia In The Fog
Comment author: RomeoStevens 28 March 2017 08:31:18AM 1 point [-]

Optimizing for a narrower set of criteria allows more optimization power to be put behind each member of the set. I think it is plausible that those who wish to do the most good should put their optimization power behind a single criterion, as that gives it some chance to actually succeed. The best candidate afaik is right to exit, as it eliminates the largest possible number of failure modes in the minimum-complexity memetic payload. Interested in arguments for why this might be wrong.

Comment author: RomeoStevens 20 March 2017 07:53:10PM *  2 points [-]

When I zoom out on what sort of thing is happening when an agent engages in deliberative ladders, it seems like they are struggling to deal with a multiplicative search space as an agent optimized for additive search spaces. Expanding on this: when I look at human cognition (the structure and limitations of working and associative memory, our innate risk tolerance and hyperbolic discounting, our pursuit of comparative advantage), as well as reasonable conjectures about the payoff distribution in the ancestral environment, I see an additive search space. That is to say, given a bunch of slot machines each with a different payout, we find the best one (or, more accurately, the minimal set that will satisfy our needs along the various dimensions of payout) and keep pulling. In contrast, we now find ourselves in a potentially multiplicative search space; i.e., the payout of any given slot machine can (via sign considerations) potentially affect the payout of all others.

This drastically changes the calculus of the exploration-exploitation tradeoff. We're not even sure the problem is tractable, because we don't know the size of the search space, but one thing it definitely prescribes is dramatically more investment in exploration relative to exploitation. The number of new crucial considerations discovered from such efforts might give us some data, in the sense that if your trajectory asymptotes, you have gained some knowledge about the search space, whereas if your trajectory remains spiky, with large course corrections, you should suspect there is still a lot of value in further exploration.
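To make the slot-machine intuition concrete, here is a minimal Python sketch. All of the numbers, the crash probability, and the round-robin explore-then-exploit policy are hypothetical choices of mine for illustration, not anything from the comment; it just contrasts how much damage one crash-prone arm does when payouts merely add up versus when they compound multiplicatively, and how that changes the value of a larger exploration budget.

```python
import random

# Purely illustrative numbers; nothing here comes from the original comment.
random.seed(1)

ARMS, PULLS = 8, 400

# Most arms pay a little over 1 per pull. One hidden arm looks best on average
# but occasionally returns a catastrophic payout (a stand-in for a sign flip).
base = [random.uniform(1.01, 1.05) for _ in range(ARMS)]
bad_arm = random.randrange(ARMS)
base[bad_arm] = 1.06  # tempting in expectation

def payout(arm):
    if arm == bad_arm and random.random() < 0.1:
        return 0.5  # rare crash
    return random.gauss(base[arm], 0.01)

def run(explore_steps):
    """Round-robin exploration, then exploit the arm with the best observed mean."""
    seen = {a: [] for a in range(ARMS)}
    additive, multiplicative = 0.0, 1.0
    for t in range(PULLS):
        if t < explore_steps:
            arm = t % ARMS
        else:
            arm = max(seen, key=lambda a: sum(seen[a]) / len(seen[a]))
        x = payout(arm)
        seen[arm].append(x)
        additive += x        # additive world: a crash only costs that one pull
        multiplicative *= x  # multiplicative world: a crash scales down everything else
    return additive, multiplicative

for steps in (ARMS, PULLS // 2):  # minimal exploration vs. heavy exploration
    add, mult = run(steps)
    print(f"explore={steps:3d}  additive={add:8.1f}  multiplicative={mult:12.4g}")
```

Under the additive total, the two exploration budgets tend to land close together; under the multiplicative total, under-exploring and locking onto the tempting-but-crash-prone arm tends to wreck the compounded outcome, which is the sense in which a multiplicative search space prescribes much more exploration.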

What is the outside view of crucial consideration discovery? What sort of activities are people engaged in when they discover new candidate crucial considerations?

Another lens for looking at this is to say that qualitative model updates (where new distinctions are made) are drastically more important than quantitative model updates (where you change the weight or value of some existing distinction within the model). This implies that if you find yourself investing in quantitative model disputes, there is more value elsewhere.

I believe it is possible to push on this by encouraging model diffing, as in the recent thread on Alice and Bob's discussion, but with an added focus on comparing the distinctions the two people are making rather than the values/weights they assign to those distinctions. Eventually, gathering together a more explicit picture of all the distinctions different people are making could become an input into structured analytic techniques useful for finding holes, such as taxonomization. Harvesting distinctions from the existing AI literature is one potentially useful input to this.

I am looking for people who have had similar thoughts to discuss this with, as well as further discussion of search strategies.

Comment author: Ruairi 10 March 2017 11:02:15AM *  8 points [-]

Great post!

I think that suffering-focused altruists should not try to increase existential risks: it would be extremely uncooperative, it would forgo the possibility of preventing large amounts of suffering in the future, and there are also reasons of moral uncertainty.

If you’re interested in reducing as much suffering as possible, you might like to get in touch with us at the Foundational Research Institute. Our mission is to reduce risks of astronomical suffering, or "s-risks."

Comment author: RomeoStevens 13 March 2017 11:32:06PM 2 points [-]

Also, the chances of actually sterilizing the biosphere are extremely tiny, which means you're simply ensuring wild animal suffering for at least the rest of Earth's viability. Other planets are additionally potentially major terms in the suffering equation.

Comment author: vipulnaik 25 February 2017 05:38:56AM *  6 points [-]

One point to add: the frustratingly vague posts tend to get FEWER comments than the specific, concrete posts.

From my list, the posts I identified as clearly vague:

http://www.openphilanthropy.org/blog/radical-empathy got 1 comment (a question that hasn't been answered)

http://www.openphilanthropy.org/blog/worldview-diversification got 1 comment (a single sentence praising the post)

http://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing got 6 comments

http://blog.givewell.org/2016/12/22/front-loading-personal-giving-year/ got 8 comments

In contrast, the posts I identified as sufficiently specific (even though they tended toward the fairly technical side):

http://blog.givewell.org/2016/12/06/why-i-mostly-believe-in-worms/ got 17 comments

http://blog.givewell.org/2017/01/04/how-thin-the-reed-generalizing-from-worms-at-work/ got 14 comments

http://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms got 27 comments

http://blog.givewell.org/2016/12/12/amf-population-ethics/ got 7 comments

If engagement is any indication, then people really thirst for specific, concrete content. But that's not necessarily in contradiction with Holden's point, since his goal isn't to generate engagement. In fact, comment engagement can even be viewed negatively in his framework, because it means more effort is needed to respond to and keep up with comments.

Comment author: RomeoStevens 25 February 2017 08:15:29PM *  6 points [-]

I'm thinking about what to call this phenomenon, because it seems like an important aspect of discourse: making no claims, only distinctions, which generates no arguments. This gave a distinct flavor to Superintelligence, I think intentionally, to create a framework within which to have a dialogue absent the usual contentious claims. That was good for that particular use case, but I think that, deployed indiscriminately, it leads to a kind of big-tent approach inimical to real progress.

I think it is potentially the right thing for OpenPhil to be doing right now, since they are first trying to figure out how the world actually is with pilot grants, research methodology testing, etc. It's good not to let it infect your epistemology permanently, though. Suggested counterforce: an internal, non-public betting market.

Comment author: Robert_Wiblin 23 February 2017 11:16:43PM *  9 points [-]

I don't find it implausible on its face that publishing a lot of internal thinking for public consumption and feedback is a poor use of time. Here are some reasons:

  1. By the time you know enough to write really useful things, your opportunity cost is high (more and better grants, coaching staff internally, etc).
  2. Thoughtful and informative content tends to get very little traffic anyway because it doesn't generate controversy. Most traffic will go to your most dubious work, thereby wasting your time and other people's time, and spreading misinformation. I've benefitted greatly from GiveWell/OpenPhil investing in public communication (including this blog post, for example), but I think I'm in a small minority that arguably shouldn't be their main focus given the amount of money they have available for granting. If there are a few relevant decision-makers who would benefit from a piece of information, you can just quickly email it to them and they'll understand it without you having to explain things in great detail.
  3. The people with expertise who provide the most useful feedback will email you or meet you eventually anyway - and often end up being hired. I'd say 80% of the usefulness of feedback/learning I've received has come from 5% of providers, who can be identified as the most informed critics pretty quickly.
  4. 'Transparency' and 'engaging with negative public feedback' are applause lights in egalitarian species and societies, like 'public parks', 'community' and 'families'. No one wants to argue against these things, so people who aren't in senior positions remain unaware of their legitimate downsides. And many people enjoy tearing down those they believe to be powerful and successful for the sake of enforced egalitarianism, rather than positive outcomes per se.
  5. The personal desire for attention, and to be adulated as smart and insightful, already pushes people towards public engagement even when it's an inferior use of time.

This isn't to say overall people share too much of the industry expertise they have - there are plenty of forces in the opposite direction - but I don't come with a strong presupposition that they share far too little either.

Comment author: RomeoStevens 24 February 2017 02:27:03AM *  6 points [-]
  1. Sharing more things of dubious usefulness is what I advocate.
  2. I am not advocating transparency as their main focus. I am advocating skepticism towards things that the outside view says everyone in your reference class (foundations) does, specifically because I think that if your methods are highly correlated with others', you can't expect to outperform them by much.
  3. I think it is easy to underestimate the effect of the long tail. See Chalmers' comment on the value of the LW and EA communities in his recent AMA.
  4. I also don't care about optimizing for this, and I recognize that if you ask people to be more public, they will optimize for this because humans. Thinking more about this seems valuable. I think of it as a significant bottleneck.
  5. Disagree. Closed is the default for any dimension that relates to actual decision criteria. People push their public discourse into dimensions that don't affect decision criteria because [Insert Robin Hanson analysis here].

I'm not advocating a sea change in policy, but an increase in skepticism at the margin.

Comment author: RomeoStevens 23 February 2017 10:25:12PM *  10 points [-]

I'm skeptical. The trajectory you describe is common among a broad class of people as they age, grow in optimization power, and consider sharp course corrections less. They report a variety of stories about why this is so, so I'm skeptical of any particular story being causal.

To be clear, I also recognize the high cost of public discourse. But some of those costs are not necessary, and are borne only because EAs are pathologically scrupulous. As a result, letting people shit-talk various things without response causes more worry than is warranted. Naysayers are an unavoidable part of becoming a large optimization process.

There was a thread on Marginal Revolution many years ago about why more economists don't blog, given that it seems to have resulted in outsized influence for GMU. Cowen said his impression was that many economists tried, quickly 'made fools of themselves' in some minor way, and stopped. Being wrong publicly is very, very difficult, and increasingly difficult the more Ra energy one has acquired.

So, three claims.

  • The outside view says we should be skeptical of our stories about why we do things, even after we try to correct for this.
  • An inability to engage with criticism only selectively will lead to other problems/coping strategies that might be harmful.
  • Carefully shepherding the optimization power one has already acquired is a recipe for slow calcification along hard-to-detect dimensions. The principles section is an outline of a potential future straightjacket.

Comment author: RobBensinger 09 February 2017 04:56:35AM 4 points [-]

I think this would be too open to abuse; see the concerns I raised in the OP.

An example of a variant on this idea that might work is to take 100 established+trusted community members, give them all access to the same forum account, and forbid sharing that account with any additional people.

Comment author: RomeoStevens 22 February 2017 03:15:04AM 0 points [-]

What about an anonymous forum that was both private and had a strict policy of no object-level names, personal or organizational, so that ideas could be discussed more freely?

Obviously there'd be a grey area around alluding to object-level people and organizations, but I think we can simply elect a king who is reasonable and agree not to squabble about the chosen line.

Comment author: DonyChristie 21 February 2017 11:38:57PM 0 points [-]

I have been observing the same thing. What could we do to spark new ideas? Perhaps a recurring thread dedicated to it on this forum or Facebook, or perhaps a new Facebook group? A Giving Game for unexplored topics? How can we encourage creativity?

Comment author: RomeoStevens 22 February 2017 03:04:15AM 1 point [-]

Creativity is a learnable skill, and it can also be encouraged through conversational/group-activity norms: http://malcolmocean.com/2016/05/honing-mode-vs-jamming-mode/ https://vimeo.com/89936101
