This is meant to be a rough response to the attitude that systemic change is too difficult or intractable, as well as a response-by-demonstration to the claim that EAs don't think about systemic change. Note: by systemic change I'm referring to many possible changes in the fundamental structure of economic, political and international systems, not necessarily to what lots of people naively assume to be the one true proper method of systemic change™.
EAs seem to have congregated at the extremes of direct, robust aid (poverty, veg ads) and massive technological risks and transformations (x-risk, abolitionism), without many people in the middle. This is curious and cries out for an explanation. There are a few people working in policy spaces to improve how governments deal with the aforementioned issues, but none of that really counts as middle ground or systemic change in my opinion (and many people outside EA would agree); it's just applied activism and politics. Systemic change means improving human society's ability to solve many problems and to be more ethical in a general, long-term sense. Some things that would count as systemic change include changing the way our political systems operate, altering the structure of the international order, and removing the influence of capital on society.
Since so few EAs have seriously approached systemic change, there are likely more underdeveloped ideas in this intervention space than in others, which suggests it might be a better cause area than we would naively expect. Also, if we are uncertain about cause areas, then systemic change makes sense as a way to attack a variety of problems (though if you think that just a few particular causes are by far the most important, then spending your time on systemic change seems inefficient). I think systemic change makes the most sense if you expect new important issues to arise in the future. These considerations suggest that the value of systemic change covaries with the value of movement building.
I want to sketch a better picture of what systemic change should 'look like.' I can give several desiderata for a systemic change effort:
- It should be great in expectation. In other words, looking at the potential and likely results of activism should reveal large improvements for the future of sentient life.
- It should be robust. It should not rely upon any one political ideology, any one empirical expectation, or any one framework of morality or decision theory. Given the opacity of systemic change and the fact that we will probably never get much reliable feedback about the outcomes of our actions, we should demand high prior confidence.
- It should be scalable. At least, it should be the sort of thing where a tiny fraction of the EA community - maybe fewer than 10 people - could accomplish something non-negligible OR have a small probability of achieving something significant. Otherwise we will probably be wasting our efforts for the time being.
- It should be ideologically safe, depending on how widespread and public we want the campaign to be. Ideally, it should be ideologically positive, getting new people on board with EA in general. But if we are perceived as having views which are repugnant or offensive, then we may lose influence with many people. This is a real possibility: people have sneered at EA because of the attention given to existential risk, and people have rejected it because of its refusal to broadly condemn capitalism. You cannot please everyone, but we should think about these factors, especially when we want the movement to have institutional clout with current elites. It sucks, but it's the intelligent road to take.
Before we get into particular intervention ideas, the first questions we should answer are: how good of a cause area do we think systemic change might be, a priori? And are the above desiderata suitable?
Now I can think of several examples of systemic change which would fit some or all of the above criteria.
- Evidence/science/impact-based governance: Generally instilling a culture of more rational decision-making in government would improve its ability to implement a wide range of programs effectively, as would changes in the structure of political processes designed to take better guidelines into account. There are more specific proposals that can be investigated in this area, such as futarchy.
- World government: For very basic game-theoretic reasons, more credible power in the hands of international organizations and the U.N. in particular could go a long way towards solving global coordination problems (like existential risks) and reducing war. It would set a precedent in political relations that might continue indefinitely. Removing the veto from the U.N. security council is a possible step in this direction. On the other hand, shoring up the E.U. could be critical to preventing a reverse trend in the coming decades. I think this cause is potentially the best, depending on how well it can be meshed with the very defensible realist understanding of international relations.
- Public ownership of the means of production: Placing more decisions about production in democratic hands could go a long way towards reducing poverty and international conflict, according to various theorists. However, these claims are contentious and divisive. It is possible that implementing this change would reduce existential risks as well, by alleviating coordination problems and rampant consumption drives.
Now the questions to be answered are: how good might the above causes be, and what other types of systemic change should we investigate?
I agree that systemic change should be given more thought in EA, but there's a very specific problem that I think we need to tackle before we can do this seriously: a lot of the tools and mindsets in EA are inadequate for dealing with systemic change.
To explain what I mean, I want to quickly make reference to a chart that Caroline Fiennes uses in her book. Essentially, you can think of work on social issues as a sort of 'pyramid'. At the top of the pyramid you have very direct work (deworming, bed nets, cash transfers, etc.). This work is comparatively certain to work, and you can fairly easily attribute changes in outcomes to these programs. However, the returns are small - you only help those you directly work with. As you go down the pyramid, you start to consider programs that focus on communities... then those that focus on changing larger policy and practice... then changing attitudes and norms (or some types of systemic change)... and eventually you get to things like existential risks. As you go down the pyramid, you get greater returns to scale (you can impact a lot more people), but it becomes a lot more uncertain that you will have an impact, and it also becomes very hard to attribute change in any outcome to a program.
My worry is that the tools the EA movement relies on were created with the top of the pyramid in mind - the main forms of causal research, cost-effectiveness analysis, and so on were not built for the bottom or even the middle of the pyramid. Yes, members of EA have gotten very good at trying to apply these tools to the bottom and middle, but it can get a bit screwy very quickly (as someone with an econ background, I shudder whenever someone uses econ tools to try to forecast the cost-effectiveness of x-risk reduction activities - it's like trying to peel a potato while blindfolded using a pencil: it's not what the pencil was made for, and even though it is technically possible, I'll be damned if the blindfolded person has any clue whether it's working or not).
We should definitely keep our commitment to these tools, but if we want to be rigorous about exploring systemic change, we should probably start by figuring out how to expand our toolbox to address these issues as rigorously as possible (and, importantly, by figuring out when exactly our current tools are insufficient! We already have such criteria for a lot of our tools - basically, assumptions that, when broken, break the tool - but I haven't seen people rigorously consulting them!). I'm sure that a lot of us have in mind some very clear ideas of how we can and should rigorously prioritize and evaluate various systemic change efforts - but I'm pretty sure we have as many opinions as we have people. We need to get on the same page first, which is why I'd suggest that we work on figuring out some basic standards and tools for moving forward, then go from there. Expanding our toolkit is key, though - perhaps someone should look into other disciplines that could help out? I'd do it, but I'm lazy and tired and probably would make a hash of it anyway.
"I'd do it, but I'm lazy and tired and probably would make a hash of it anyway." - you seem rather knowledgeable, so I doubt that. I've heard it said that the perfect is the enemy of the good and a top level approach that was maybe twice the size of the above comment and which just provided an extremely basic overview would be a great place to start and would encourage further investigation by other people.