One of the main goals of Effective Altruism is to persuade people to think more deeply about how to prioritise causes. This naturally leads us to ask, "What is meant by cause prioritisation?" and "Which aspect of cause prioritisation is most important?".
(Epistemic status: Speculative, Rough framework, see In Praise of Fake Frameworks)
I'd suggest that we can divide cause prioritisation into three main levels. This won't be particularly neat as we could debate exact categorisations, but it'll suffice for my purposes:
- High-level causes: Global poverty, domestic poverty, animal suffering, existential risk, scientific research
- Specific causes: Cancer, malaria, factory farming
- Interventions: Distributing bednets, Developing clean meat, Banning autonomous weapons
An intervention is a solution to a given problem. For example, distributing bednets is a solution to the problem of Malaria. People often don't have attachments to particular interventions, but, even if they do, they are often willing to consider that another intervention might work better.
Specific causes are the level that most charities operate on. For example, the Cancer Council researches cancer and the Against Malaria Foundation seeks to treat malaria. Many altruistic people have a strong emotional attachment to one or more specific causes.
High-level causes are broad categorisations. Trying to prioritise at this level often requires philosophy (e.g. do we have any special duties to our fellow compatriots, or is this irrelevant?). Philosophy is simply not something humans are particularly skilled at discussing. Almost everyone who is altruistic has an attachment at this level, and it is often very hard to persuade people to seriously reconsider their views.
I suspect that attempting to persuade people to prioritise causes within a higher level can often be a mistake if they don't already accept that you should prioritise within the lower cause level. Emotions play a very strong role in the beliefs that people adopt, and we need to think carefully about how to navigate them. Indeed, discussing prioritisation at too high a level risks [inoculating](https://www.lesserwrong.com/posts/aYX6s8SYuTNaM2jh3/idea-inoculation-inferential-distance) them against lower levels that they would have accepted if presented first. Further, once we've persuaded someone on a lower level, we've established a foothold that can later be used to persuade them further. We've reduced the inferential distance: instead of having to persuade them both that we should prioritise charitable donations and that we should apply this principle rather radically, we only have to convince them of the latter. And I suspect that this will be much easier.
If we want to grow the effective altruism movement, we will have to become skilled in persuasion. A large part of this is understanding how people think so that we can avoid triggering emotions that would interfere with their reasoning. I hope that my suggestion of focusing on the lower levels first will help with this.
I think this framing is a good one, but I don't immediately agree with the conclusion you draw about which level to prioritize.
Firstly, consider the benefits we expect from a change in someone's view at each level. Do most people stand to improve their impact more by choosing the best implementation within their cause area, or by switching to an average implementation in a more pressing cause area? I don't think this is obvious, but I lean towards the latter.
Higher levels are more generalizable: cross-implementation comparisons are only relevant to people within that cause, whereas cross-cause comparisons are relevant to everyone who shares approximately the same values, so focusing on lower levels limits the size of the audience that can benefit from what you have to say.
Low-level comparisons tend to require domain-specific expertise, which we won't be able to have across a wide range of domains.
I also think there's just a much greater deficit of high-quality discussion of the higher levels. They're virtually unexamined by most people. Speaking personally, my introduction to EA was approximately that I knew I was confused about the medium-level question, so I was directly looking for answers to that: I'm not sure a good discussion of the low-level question would have captured me as effectively.