Rhys (also from Roote) here. Agree with Brendon that there isn't too much literature evaluating the "efficacy of various governance models". Some links you may want to look into, Holden:

(This is less about academic research and more about IRL experiments.)

  • Lots of governance experiments are happening with DAOs in crypto. See Vitalik's back and forth here: https://twitter.com/VitalikButerin/status/1442039126606311427
  • Or my response here; I find it helpful to visualize these systems: https://twitter.com/RhysLindmark/status/1446276859109335040 and https://www.rhyslindmark.com/popper-criterion-for-politics/. Those pieces draw on lots of political economy books, like The Dictator's Handbook: https://www.goodreads.com/en/book/show/11612989
  • More crypto stuff: https://gnosisguild.mirror.xyz/OuhG5s2X5uSVBx1EK4tKPhnUc91Wh9YM0fwSnC8UNcg. These are interchangeable "Modules" that DAOs can use, like DeGov. https://otherinter.net/research/ is doing research on DAO governance as well.
  • On the non-crypto side, Rob Reich has great thoughts on this. I found this convo between him and Stuart Russell on legitimacy and AI governance helpful (starting around 49:30).
  • Worth differentiating how much groups disagree on what should be (goals) vs. what is (current state). https://twitter.com/RhysLindmark/status/1294107741246517248 
  • This feels close to the work Ian David Moss et al. are doing here: https://forum.effectivealtruism.org/tag/effective-institutions-project
  • Many of the governance issues take the form of one of Meadows's "system traps": https://bytepawn.com/systems-thinking.html#:~:text=Thinking%20in%20Systems%2C%20written%20by,furnace%20to%20a%20social%20system
  • In the spirit of your final experimental point: Long term, I do think a lot of this will just be understood (and computationally modeled) as social groups (bounded by a Markov blanket) abiding by the Free Energy Principle / Active Inference with Bayesian generative models, co-evolving into evolutionarily stable strategies. But we're not there yet! 🙂 (See the toy sketch right after this list.)
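To gesture at what "computationally modeled" might mean here, below is a minimal sketch of the variational free energy calculation at the heart of active inference, for a single toy agent with a discrete generative model. Everything in it (the `prior` and `likelihood` numbers, the two-state setup) is an illustrative assumption of mine, not drawn from any real social-systems model:

```python
import numpy as np

# Toy generative model for one agent: 2 hidden states, 2 observations.
# (All numbers are made up for illustration.)
prior = np.array([0.7, 0.3])        # p(s): prior belief over hidden states
likelihood = np.array([
    [0.9, 0.2],                     # p(o=0 | s)
    [0.1, 0.8],                     # p(o=1 | s)
])

def free_energy(q, obs):
    """Variational free energy F = E_q[log q(s) - log p(o, s)].

    Minimizing F over the approximate posterior q(s) pulls q toward
    the true posterior p(s | o) -- the core move in active inference.
    """
    joint = likelihood[obs] * prior  # unnormalized p(o, s) for this obs
    return float(np.sum(q * (np.log(q) - np.log(joint))))

# The agent observes o = 0 and scores a few candidate beliefs q(s);
# F drops as q(s) approaches the true posterior (~0.91 here).
for q0 in (0.5, 0.8, 0.95):
    q = np.array([q0, 1.0 - q0])
    print(f"q(s=0) = {q0:.2f}  ->  F = {free_energy(q, obs=0):.3f}")
```

This is obviously a single-agent toy; the "social groups bounded by Markov blankets" version would couple many such models together, and nothing like that exists off the shelf yet.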

Beyond social choice theory, I'm not sure there's a better field for what you're looking for. Maybe Political Economy, Public Choice Theory, or Game Theory? ¯\_(ツ)_/¯

Anywho, good luck and excited to see what you unearth!

Dig it! Juan Benet from Protocol Labs and Matt Goldenberg are also working on this. Ping 'em! 

Link to an ongoing Twitter discussion with Rob Wiblin, Vitalik Buterin, etc. here: https://twitter.com/glenweyl/status/1163522777644748801

I like this style of thinking. A couple of quick notes:

1. Various U.S. presidential candidates have proposals for "democracy dollars", which are similar to philanthropy vouchers, but scoped to political giving. AFAICT, they have a different macro goal as well: to decentralize campaign financing. See https://www.yang2020.com/policies/democracydollars/ and https://www.vox.com/policy-and-politics/2019/5/4/18526808/kirsten-gillibrand-democracy-dollars-2020-campaign-finance-reform

2. I agree that non-politics can be systemic. See this post that expands on your idea of "what if everyone tithed 10%?" https://forum.effectivealtruism.org/posts/N4KSLXgr6J7Z9mByG/an-argument-to-prioritize-tithing-to-catalyze-a-paradigm

3. It would be interesting to see philanthropic vouchers tested in the EA community. Kind of like a reverse EA Funds/donor lottery, where an EA donor gives lots of EAs vouchers (money) and then those EAs donate them.

Woof! Thanks for noting this, Stefan! As you say, cause neutrality is used in the exact opposite way (to denote that we select causes based on impartial estimates of impact, not that we are neutral about where another person gives their money/time). I've edited my post slightly to reflect this. Thanks!

Boom, thanks! Dig the pushback here. I generally agree with Scott Alexander's comment at the bottom: "I don't think ethical offsetting is antithetical to EA. I think it's orthogonal to EA."

(Though I also believe there are some "macro systemic" reasons for believing that offsetting is a crucial piece to moving more folks to an EA-based non-accumulation mindset. More detailed explanation of this later!)

Awesome resource, thanks for the link! (Also, I had never heard of Pigouvian taxes before—thanks!)

Given your list, I'd group the "categories" of externalities into:

  • Environment (driving, emitting carbon, agriculture, municipal waste)
  • Public health (driving, obesity, alcohol, smoking, antibiotic use, gun ownership)
  • Financial (debt)

And, if I understand it correctly, it's tough for me to offset some of these. This is because:

  • Luckily, I just happen to not do many of them (e.g. driving, obesity, alcohol, smoking, debt).
  • But even if I did, it's not clear to me how to offset them. That is, given your research in this area, could you help me answer this question: if I (or people in the developed world generally) were to offset the externalities of our actions, what should we offset? The first clear answer is paying to offset our carbon emissions. What would be "#2", and how would we "pay" to offset it? (e.g. if I were obese, who would I pay to offset that?)

Thanks!

Perfect, thanks! I agree with most of your points (I'm just restating them here for my own understanding and for others):

  • Uncertainty is hard (long time scales, humans are adaptable, and risks are systemically interdependent, so we get zero- or double-counting)
  • Probabilities come with incentives attached (e.g. Stern's discounting incentive)
  • Probabilities get simplified (a 0-10% range can turn into a flat 5%, 0%, or 10%)

I'll ping you as I get closer to an editable draft of my book, so we can ensure I'm painting an appropriate picture. Thanks again!

Hey Simon! Thanks for writing up this paper. The final third is exactly what I was looking for!

Could you give us a bit more texture on why you think it's "best not to put this kind of number on risks"?
