John_Maxwell_IV comments on Alice and Bob on big-picture worldviews (Oxford Prioritisation Project) - Effective Altruism Forum

Comment author: John_Maxwell_IV 19 March 2017 07:17:10PM, 5 points

[Reinforcing Alice for giving more attention to this consideration despite the fact that it's unpleasant for her]

Maybe something like spreading cooperative agents, which is helpful whether things go well or badly.

[speculative]

What is meant by "cooperative agents"? Personally, I suspect "cooperativeness" is best split into multiple dimensions, analogous to lawful/chaotic and good/evil in a roleplaying game. My sense is that

  • humanity is made up of competing groups

  • bigger groups tend to be more powerful

  • groups get big because they are made up of humans who are capable of large-scale cooperation (in the "lawful" sense, not the "good" sense)

There's probably some effect where humans capable of large-scale cooperation also tend to be more benevolent. But you still see lots of historical examples of empires (big human groups) treating small human groups very badly. (My understanding is that small human groups treat each other badly as well, but we hear about it less because such small-scale conflicts are less interesting and don't fit as neatly into grand historical narratives.)

If by "spreading cooperative agents" you mean "spreading lawfulness", I'm not immediately seeing how that's helpful. My prior is that the group made up of lawful people is already going to be the one that wins, since lawfulness enables large-scale cooperation and thus power. Perhaps spreading lawfulness could make conflicts more asymmetrical, by pitting a large group of lawful individuals against a small group of less lawful ones. In an asymmetrical conflict, the powerful group has the luxury of subduing the much less powerful group in a way that's relatively benevolent, whereas a symmetrical conflict is more likely to be a highly destructive fight to the death. Powerful groups also have stronger deterrence capabilities, which disincentivizes conflict in the first place. So this could be an argument for spreading lawfulness.

Spreading lawfulness within the EA movement seems like a really good thing to me. More lawfulness will allow us to cooperate at a larger scale and be a more influential group. Unfortunately, utilitarian thinking tends to have a strong "chaotic good" flavor, and utilitarian thought experiments often pit our harm-minimization instincts against deontological rules that underpin large-scale cooperation. This is part of why I spent a lot of time arguing in this thread and elsewhere that EA should have a stronger central governance mechanism.

BTW, a lot of this thinking came out of these discussions with Brian Tomasik.