I have this idea which I haven't fully fleshed out yet, but I'm looking to get some feedback. To keep things simple, I'll embody the idea in a single, hypothetical Effective Altruist called Alex, and make simplifying assumptions such as ignoring inflation. I also use 'lives saved' as a proxy for 'good done'; although this is grossly oversimplified, it doesn't affect the argument.
Alex is earning to give, and estimates that they will be able to give $1 million over their lifetime. They have thought a lot about existential risk, and agree that reducing existential risk would be a good thing, and also that the problem is at least partially tractable. Alex also accepts ideas like the notion that future lives are equally valuable to lives today. However, Alex is somewhat risk averse.
After careful modelling, Alex estimates that they could save a life for $4,000, and thus could save 250 lives ($1 million ÷ $4,000) over their own lifetime. Alex also thinks that their $1 million might slightly reduce the risk of some catastrophic event, though it probably won't. In expected-value terms, they estimate that donating to an X-risk organisation is about ten times as good as donating to a poverty charity ('saving' 2,500 lives on average).
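Alex's back-of-the-envelope comparison can be sketched as follows (the poverty figures are from the post; the probability and stakes behind the X-risk bet are my own illustrative assumptions, chosen only so that the expected value matches the post's "2,500 lives on average"):

```python
# Alex's lifetime donations and the post's cost-effectiveness figure.
donations = 1_000_000        # total given over a lifetime ($)
cost_per_life = 4_000        # estimated cost to save one life ($)

lives_poverty = donations / cost_per_life   # near-certain impact
print(lives_poverty)         # 250.0

# Hypothetical X-risk bet: a tiny chance of making a decisive difference.
# These two numbers are illustrative assumptions, not from the post.
p_make_difference = 1e-6                  # chance the donation is decisive
lives_if_decisive = 2_500_000_000         # lives at stake in a catastrophe

expected_lives_xrisk = p_make_difference * lives_if_decisive
print(expected_lives_xrisk)  # 2500.0 — ten times the poverty estimate
```

The point of the toy numbers is that the same expected value can hide wildly different distributions: 250 lives saved almost surely, versus almost surely zero with a minuscule chance of an enormous payoff.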
However, all things considered, Alex still decides to donate to the poverty organisation, because they are risk averse, and the chances of them making a difference by donating to the X-risk organisation are very low indeed.
This seems to embody the attitude of many EAs I know. However, the question I'd like to pose is: is this selfish?
It seems like some kind of moral narcissism to say that one would prefer to increase the chance that one's personal actions make a difference, at the expense of expected overall wellbeing. If a world where everyone gave to X-risk meant a meaningful reduction in the probability of a catastrophe, shouldn't we all be working towards that instead of trying to maximise the chances that our personal dollars make a difference?
As I said, I'm still thinking this through, and don't mean to imply that anyone donating to a poverty charity instead of an X-risk organisation is selfish. I'm very keen on criticism and feedback here.
Things that would imply I'm wrong include: existential risk reduction not being tractable or not being good, some argument for risk aversion that I'm overlooking, an argument for discounting future lives, or an ethical view that doesn't assume hardline classical hedonistic utilitarianism (or anything else I've overlooked).
For what it's worth, my donations to date have been overwhelmingly to poverty charities, so to date at least, I am Alex.
I just want to push back against your statement that "economists believe that risk aversion is irrational". In development economics in particular, risk aversion is often seen as a perfectly rational approach to life, especially in cases where the risk is irreversible.
To explain this, I just want to quickly point out that, from an economic standpoint, there's no correct formal way of measuring risk aversion over utils. Utility is an ordinal, not cardinal, measure. Risk aversion is something that is applied to real measures, like crop yields, in order to better estimate people's revealed preferences - in essence, risk aversion is a way of taking utility into account when measuring non-utility values.
So, to put this in context, let's say you are a subsistence farmer with an expected yield of X from growing sorghum or a tuber, and you know that you'll always get roughly a yield of X (since sorghum and many tubers are crazily resilient). Now someone offers you an 'improved maize' growth package that will get you an expected yield of 2X, but with a 10% chance that your crops fail completely. A rational person at the poverty line should always choose the sorghum/tuber. This is because that 10% chance of a failed crop is much, much worse than expected yield alone reveals - you could starve, have to sell productive assets, etc. Risk aversion is a way of formalizing the thought process behind this perfectly rational decision. If we could measure expected utility in a cardinal way, we would just do that, and get the correct answer without using risk aversion - but because we can't measure it cardinally, we have to use risk aversion to account for things like this.
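The farmer's choice can be made concrete with a toy expected-utility calculation (the utility function and threshold here are my own illustrative assumptions, not from the comment): once falling below subsistence is modelled as catastrophic, the safe crop beats the higher expected yield.

```python
import math

# Toy model (illustrative numbers): dropping below a subsistence
# threshold is catastrophic (starvation, selling productive assets),
# so utility is concave above the threshold and plunges below it.
SUBSISTENCE = 0.5          # minimum yield needed to get by
STARVATION_UTILITY = -100.0

def utility(yield_units):
    if yield_units < SUBSISTENCE:
        return STARVATION_UTILITY
    return math.log(yield_units)

X = 1.0  # sorghum/tuber yield, effectively guaranteed

eu_safe = utility(X)                      # log(1) = 0.0

# 'Improved maize': expected yield 2X, but a 10% chance of total failure.
p_fail = 0.10
success_yield = 2 * X / (1 - p_fail)      # keeps E[yield] at 2X
eu_maize = (1 - p_fail) * utility(success_yield) + p_fail * utility(0.0)

print(eu_maize)             # ≈ -9.28, far worse than the safe crop's 0.0
print(eu_safe > eu_maize)   # True: the safe crop wins in expected utility
```

Despite the maize doubling the expected *yield*, its expected *utility* is dominated by the small probability of ruin, which is exactly the revealed preference that risk-aversion parameters are meant to capture.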
As a last fun point, risk aversion can also be used to formalize the idea of diminishing marginal utility without using cardinal utility functions, which is one of the many ways that we're able to 'prove' that diminishing marginal utility exists, even if we can't measure it directly.
I agree that dmu (diminishing marginal utility) over crop yields is perfectly rational. I mean a slightly different thing: risk aversion over utilities, which is why people fail the Allais paradox. Rational choice theory is dominated by expected utility theory (exceptions: Buchak, McClennen), which suggests risk aversion over utilities is irrational. Risk aversion over utilities seems pertinent here because most moral views don't have dmu over people's lives.
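For readers unfamiliar with it, the Allais paradox uses two lottery pairs (this is the standard textbook version, with payoffs in $ millions). Under expected utility theory, preferring A over B forces preferring C over D for *any* utility function, yet many people choose A and D - a sketch:

```python
import math

def expected_utility(lottery, U):
    """Lottery = list of (probability, payoff) pairs summing to prob 1."""
    return sum(p * U(x) for p, x in lottery)

# The classic Allais lotteries (payoffs in $ millions).
A = [(1.00, 1)]                          # $1M for certain
B = [(0.10, 5), (0.89, 1), (0.01, 0)]    # mostly $1M, small shot at $5M
C = [(0.11, 1), (0.89, 0)]
D = [(0.10, 5), (0.90, 0)]

# EU(A) - EU(B) and EU(C) - EU(D) both reduce to the same expression,
# 0.11*U(1) - 0.10*U(5) - 0.01*U(0), so the preferences must agree.
U = lambda x: math.sqrt(x)    # any sample concave utility (illustrative)

prefers_A = expected_utility(A, U) > expected_utility(B, U)
prefers_C = expected_utility(C, U) > expected_utility(D, U)
print(prefers_A == prefers_C)   # True for any U you plug in
```

Choosing A (certainty over the gamble) while also choosing D is therefore inconsistent with maximising any expected utility, which is why expected utility theorists classify this pattern of risk aversion over utilities as irrational.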