I often see media coverage of effective altruism that says "effective altruists want to maximise the number of QALYs in the world" (e.g. the London Review of Books).
This is wrong. QALYs only measure health, and health is not all that matters. Most effective altruists care about increasing the number of "WALYs", or well-being-adjusted life years, where health is just one component of well-being.
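(For readers who haven't met the metric: a QALY weights each year of life by a health-related quality score between 0 and 1, where 1 is full health and 0 is death. As a rough sketch of the standard formula - the weights below are illustrative, not official ones:

$$\text{QALYs} = \sum_i t_i \cdot q_i$$

where $t_i$ is the time spent in health state $i$ and $q_i$ is that state's quality weight. So 10 years in full health yields 10 QALYs, while 10 years in a state weighted 0.7 yields 7. Crucially, $q_i$ reflects health alone - nothing about happiness, relationships, or meaning - which is exactly the problem.)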
(Some effective altruists also care about goods besides welfare, such as the environment and justice. Depending on your view of population ethics, you might also distinguish between achieving WALYs by improving lives vs. adding to the number of people alive).
This misconception is damaging, since it makes it look like we have a laughably narrow view of what's good. Even a hardcore hedonistic utilitarian would at least care about *happiness* rather than just health, and very few people are hedonistic utilitarians.
Why do effective altruists get misinterpreted as caring only about QALYs?
1) Sometimes community members actually say "we want to maximise the number of QALYs". I think we should stop doing this. Instead, say something more like "we want to maximise the number of people who have good lives", or just "maximise the good we do"; then, if someone asks what "good" means, you can say it's people having happy or flourishing lives.
2) Sometimes when people ask us "how do you measure 'good'?" we talk about QALYs as an example. This is what happens in Will's book. I think this is a reasonable move - the idea of QALYs is really important to introduce to new people - but it can create the impression that QALYs are all you care about. The QALY bit will be the most memorable part, and people won't remember your disclaimers. This means that if you're explaining QALYs, you need to put a lot of emphasis on the fact that health isn't all that matters. You can do this by instead leading with: "you can measure your impact by choosing good proxies within the cause you're working in - e.g. in health there are QALYs, in education you can look at improvements in test scores and income, in economic empowerment programmes you can look at income change, and so on. Use the best proxies you have available." Alternatively, you can introduce the idea of QALYs but then point out that what we ultimately care about is welfare; it's just that health is currently the cause where quantification is easiest.
Edit: I don't propose publicly promoting and using the term "WALYs" - just bear in mind "WALYs not QALYs" to help you remember. (My suggestions about what to say publicly are just above in 1 and 2).
I'm pleased and surprised to hear you say this. I'd thought that the default EA metric was QALYs: I've had more conversations than I can count with EAs in which I've said that QALYs are probably not the best metric overall, or even for health itself. A couple of points.
There's no common-sense understanding of the term 'well-being' - unlike 'health' - and there are three different and incompatible philosophical accounts of it (hedonism, desire-satisfaction, objective list). Saying "we're in favour of well-being" can be equally mysterious.
I think there's something a bit weird about saying "this is what we actually mean, but don't tell anyone in case they think we're weird". I worry it's getting a bit, well, Scientological, if you're at the stage where you have various versions of the truth: one for the public, another for those in the know.
My suggestion is to talk about "HALYs" - happiness-adjusted life years. Not only is this what we actually care about (if we're utilitarians, though I presume any plausible view values happiness, all other things being equal), but happiness is also much more intuitive than well-being, and it sounds less silly.
In defence of WALYs, and in reply to your specific points:
I don't share your intuition here. Well-being is what we're talking about when we say "I'm not sure he's doing so well at the moment", or when we say "I want to help people as much as possible". It's a general term for how well someone is doing, overall. It's an advantage, in my eyes, that it's not committed to any specific account of well-being, for any such account might have its drawbacks.
I worry that, in adopting HALYs, EA would tie its aims to a narrow view of what human well-being consists in.