When trying to improve the world, we can either pursue direct interventions, such as helping beings in need and doing activism on their behalf, or pursue research on how we can best improve the world, as well as on what improving the world even means in the first place.

Of course, the distinction between direct work and research is not a sharp one. We can, after all, learn a lot about the “how” question by pursuing direct interventions, testing out what works and what does not. Conversely, research publications can effectively function as activism, and may thereby help bring about certain outcomes quite directly, even when such publications do not deliberately try to do either.

But despite these complications, we can still meaningfully distinguish between more and less research-oriented efforts to improve the world. My aim here is to defend more research-oriented efforts, and to highlight certain factors that may lead us to underinvest in research and reflection. (Note that I use the term “research” broadly here, covering not only original research but also efforts to learn about existing research.)

Comments

Thanks for the post! Very interesting and clear. 

I found the following section most thought-provoking:

There is one question that I consider particularly neglected among aspiring altruists — as though it occupies a uniquely impenetrable blindspot. I am tempted to call it “The Big Neglected Question”.

The question, in short, is whether anything can ethically outweigh or compensate for extreme suffering. Our answer to this question has profound implications for our priorities. And yet astonishingly few people seem to seriously ponder it, even among dedicated altruists. In my view, reflecting on this question is among the first, most critical steps in any systematic endeavor to improve the world. (I suspect that a key reason this question tends to be shunned is that it seems too dark, and because people may intuitively feel that it fundamentally questions all positive and meaning-giving aspects of life — although it arguably does not, as even a negative answer to the question above is compatible with personal fulfillment and positive roles and lives.)

As you point out, even within EA there seems to be relatively little research (and perhaps personal introspection?) into basic ethical questions, considering how drastically our priorities can change depending on the answers to such questions.

Personally, I notice many internal conflicts when reflecting on my values. Two that are particularly salient for me are (1) a sense that my ethical intuitions are arbitrary, with a resulting mistrust of my reasoning and a pull toward moral nihilism, and (2) an at-times overwhelming feeling of strong demandingness, especially when considering high-stakes questions. [I think I'm mostly doing okay dealing with these, but it does mean that it's harder for me to think about basic moral questions than about more direct prioritization or object-level research.]

[Also, you might prefer to edit the post to use the built-in "linkpost" feature, and remove the "[linkpost]" from the title and the link from the body of the post. If I'm not mistaken, there are some SEO advantages to that, and I think it makes it look nicer :)]
