One of the main goals of Effective Altruism is to persuade people to think more deeply about how to prioritise causes. This naturally leads us to ask, "What is meant by cause prioritisation?" and "Which aspect of cause prioritisation is most important?".

(Epistemic status: speculative, rough framework; see In Praise of Fake Frameworks)

I'd suggest that we can divide cause prioritisation into three main levels. This won't be particularly neat as we could debate exact categorisations, but it'll suffice for my purposes:

  • High-level causes: Global poverty, domestic poverty, animal suffering, existential risk, scientific research
  • Specific causes: Cancer, malaria, factory farming
  • Interventions: Distributing bednets, developing clean meat, banning autonomous weapons

An intervention is a solution to a given problem. For example, distributing bednets is a solution to the problem of malaria. People often don't have attachments to particular interventions, and even when they do, they are usually willing to consider that another intervention might work better.

Specific causes are the level at which most charities operate. For example, the Cancer Council researches cancer and the Against Malaria Foundation works to prevent malaria. Many altruistic people have a strong emotional attachment to one or more specific causes.

High-level causes are broad categorisations. Prioritising at this level often requires philosophy, e.g. do we have special duties to our compatriots, or is that irrelevant? Philosophy is simply not something humans are particularly skilled at discussing. Almost everyone who is altruistic has an attachment at this level, and it is often very hard to persuade people to seriously reconsider their views.

I suspect that attempting to persuade people to prioritise at a higher level is often a mistake if they don't already accept that they should prioritise at the level below it. Emotions play a very strong role in the beliefs that people adopt, and we need to think carefully about how to navigate them. Indeed, discussing prioritisation at too high a level risks [inoculating](https://www.lesserwrong.com/posts/aYX6s8SYuTNaM2jh3/idea-inoculation-inferential-distance) them against lower levels that they would otherwise have accepted. Further, once we've persuaded someone at a lower level, we've established a foothold that can later be used to persuade them further. We've reduced the inferential distance: instead of having to persuade them both that we should prioritise charitable donations and that we should apply this principle rather radically, we only have to convince them of the latter. And I suspect that this will be much easier.

If we want to grow the effective altruism movement, we will have to become skilled at persuasion. A large part of this is understanding how people think, so that we can avoid triggering emotions that would interfere with their reasoning. I hope that my suggestion of focusing on the lower levels first will help with this.

Comments

Another way to frame it is in terms of Marr's three levels of analysis: the computational (what are we even trying to do?), the algorithmic (what algorithms or heuristics should we run, given what we want to accomplish?), and the implementational (what, concretely, should our next actions be to implement those algorithms in reality?). Cleanly separating which step you are working on prevents confusion.

I think this framing is a good one, but I don't immediately agree with the conclusion you make about which level to prioritize.

Firstly, consider the benefits we expect from a change in someone's view at each level. Do most people stand to improve their impact more by choosing the best implementation within their cause area, or by switching to an average implementation in a more pressing cause area? I don't think the answer is obvious, but I lean towards the latter.

Higher levels are more generalizable: cross-implementation comparisons are only relevant to people within that cause, whereas cross-cause comparisons are relevant to everyone who shares approximately the same values, so focusing on lower levels limits the size of the audience that can benefit from what you have to say.

Low-level comparisons tend to require domain-specific expertise, which we won't be able to have across a wide range of domains.

I also think there's just a much greater deficit of high-quality discussion of the higher levels. They're virtually unexamined by most people. Speaking personally, my introduction to EA was approximately that I knew I was confused about the medium-level question, so I was directly looking for answers to that: I'm not sure a good discussion of the low-level question would have captured me as effectively.

Thank you. I commonly try to say something at a "high-level" (such as the difference between relative and absolute/extreme poverty). Now, instead, I will mention something about distributing mosquito bed nets, steel roofs in Kenya (GiveDirectly) or developing clean meat. I anticipate some questions on that last one :)

I want to add something: it has probably been discussed before, but it occurs to me that when thinking about prioritisation in general, it's almost always better to think at the lowest level possible. That's because impact per dollar can only be evaluated for specific interventions, and because causes that don't appear particularly cost-effective at first can hide particular interventions that are. Those interventions could in principle be even more cost-effective than interventions in causes that do appear cost-effective overall. I think high-level cause prioritisation is mostly good for gaining a first, superficial understanding of how promising a particular class of altruistic interventions is.

I disagree. If we are fairly certain that the average intervention in Cause X is 10 times more effective than the average intervention in Cause Y (for comparison, 80,000 Hours currently believes that AI safety work is 1,000 times as effective as global health), it seems like we should strongly prioritize Cause X. Even if some interventions in Cause Y are more effective than the average intervention in Cause X, finding them is probably as costly as finding the most effective interventions in Cause X (unless there is a specific reason why evaluating cost-effectiveness in Cause X is especially costly, or the distributions of intervention effectiveness are radically different between the two causes). Depending on how much we can improve our current comparative estimates of cause effectiveness, the potential impact of doing so could be quite high, since it essentially multiplies the effects of our lower-level prioritization. Therefore, high- to medium-level prioritization, combined with low-level prioritization restricted to the best causes, seems like the way to go. On the other hand, it seems at least plausible that we cannot significantly improve our high-level prioritization at the moment and should therefore focus on the lower levels within the most effective causes.
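As a toy sketch of why the cause-level gap can dominate within-cause selection (the numbers below are made up for illustration, not actual cost-effectiveness estimates; the 3x within-cause spread is an assumption):

```python
# Toy illustration with hypothetical numbers, not real estimates.
# Assume the average intervention in Cause X is 10x as effective as the average
# in Cause Y, and the best interventions in a cause are roughly 3x its average.
avg_y = 1.0            # impact per dollar in Cause Y (arbitrary units)
avg_x = 10.0 * avg_y   # impact per dollar in Cause X
best_y = 3.0 * avg_y   # an unusually good Cause Y intervention
best_x = 3.0 * avg_x   # an unusually good Cause X intervention

print(best_y < avg_x)   # True: a merely average Cause X intervention beats the best of Cause Y
print(best_x / best_y)  # 10.0: the cause-level gap carries through to the best interventions
```

Under these assumptions, getting the cause-level comparison right multiplies the value of any subsequent intervention-level search.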

Yes, maybe I exaggerated by saying "almost always", or at least I was too vague. If you don't have any specific interventions in mind to evaluate, then a good way to go is to do superficial high-level analyses first and then proceed to lower-level ones. Sometimes the reverse can happen, though, when a particularly promising intervention is found without first investigating its cause area.

[This comment is no longer endorsed by its author]