On August 8th, Robert Wiblin, owner of probably the most intellectually stimulating Facebook wall of all time, asked: “What past social movement is effective altruism most similar to?” This is a good question, and there were some interesting answers. In the end, the most-liked answer (well, actually the second, after Kony 2012) was ‘evidence-based medicine’. I think effective altruism used to have a lot of similarities to evidence-based medicine but is increasingly moving in the opposite direction.
What is it that makes them similar? Obviously a focus on evidence. “Effective altruism is a philosophy and social movement that applies evidence and reason to determine the most effective ways to improve the world.” (Wikipedia)
The trouble is, evidence and reason aren’t the same thing.
Reason, in effective altruism, often seems to be equated with maximising expected utility. It is characterised by organisations like the Future of Humanity Institute and often ends up prioritising things for which we have almost no evidence, like protection against deathbots.
Evidence is very different. It's about ambiguity aversion, not maximising expected utility. It is much more modest in its aims and is characterised by organisations like GiveWell, prioritising charities like AMF or SCI, for which we have a decent idea of the magnitude of their effects.
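The contrast between the two decision rules can be made concrete. Below is a toy sketch with entirely hypothetical numbers: an expected-utility maximiser ranks a speculative long-shot above a well-evidenced intervention, while an ambiguity-averse rule (here, a simple maximin that distrusts the probability estimates) ranks them the other way.

```python
# Toy sketch (hypothetical numbers): expected-utility maximisation vs an
# ambiguity-averse (maximin) decision rule over two interventions.

scenarios = [0.5, 0.3, 0.2]  # our probability guesses for three scenarios

# "Speculative" option: huge payoff in one unlikely scenario, nothing otherwise.
speculative = [0.0, 0.0, 1000.0]
# "Evidence-backed" option: modest, similar payoff in every scenario.
evidence_backed = [10.0, 12.0, 9.0]

def expected_utility(payoffs, probs):
    """Probability-weighted average payoff."""
    return sum(p * u for p, u in zip(probs, payoffs))

def maximin(payoffs):
    """Ambiguity-averse rule: judge an option by its worst case,
    ignoring the (possibly unreliable) probability estimates."""
    return min(payoffs)

print(round(expected_utility(speculative, scenarios), 1))      # 200.0
print(round(expected_utility(evidence_backed, scenarios), 1))  # 10.4
print(maximin(speculative))      # 0.0 -- maximin rejects the long shot
print(maximin(evidence_backed))  # 9.0 -- and prefers the modest sure thing
```

The point of the sketch is only that the two rules can disagree even on the same inputs; which rule is right is exactly the dispute between the camps.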
I place myself in the evidence camp. One of the strengths of evidence-based medicine, in my view, is that it recognises the limits of our rationality. It recognises that, actually, we are VERY bad at working out how to maximise expected utility through abstract reasoning, so we should test things empirically to find out what works. It also allows consensus-building by decreasing uncertainty.
I’m not saying there isn’t room for both. There should definitely be people in the world who think about existential risk and there should definitely be people in the world providing evidence on the effectiveness of charitable interventions. I’m just not sure they should be the same people.
There are also similarities between the two camps. They’re both motivated by altruism and they’re both explicitly consequentialist, or at least act like it. The trouble is, they also both claim to be doing the most good and so in a way they disagree. Maybe I shouldn’t be worried about this. After all, healthy debate within social movements is a good thing. On the other hand, the two camps often seem to have such fundamentally different approaches to the question of how to do the most good that it is difficult to know if they can be reconciled.
In any case, I think it can only be a good thing that this difference is explicitly recognised.
Bayesian stats is not the panacea of logic it is often held out to be; I say this as someone who practices statistics for the purpose of social betterment (see e.g. https://projects.propublica.org/surgeons/ for an example of what I get up to).
First, my experience is that quantification is really, really hard. Here are a few reasons why.
I have seen few discussions within EA of the logistics of data collection in developing countries, which is a HUGE problem. For example, how do you get people to talk to you? How do you know if they're telling you the truth? These folks have often talked to wave after wave of well-meaning foreigners over their lives and would rather ignore or lie to you and your careful survey. The people I know who actually collect data in the field have all sorts of nasty things to say about the realities of working in fluid environments.
Even worse: for a great many outcomes there just ISN'T a way to get good indicator data. Consider the problem of attribution of outcomes to interventions. We can't even reliably solve the problem of attributing a purchase to an ad in the digital advertising industry, where all actions are online and therefore recorded somewhere. How then do we solve attribution at the social intervention level? The answers revolve around things like theories of change and qualitative indicators, neither of which the EA community takes seriously. But often this is the ONLY type of evidence we can get.
Second, Bayesian stats is built entirely on a single equation that follows from the axioms of probability. All of this updating, learning, rationality stuff is an interpretation we put on top of it. Andrew Gelman and Cosma Shalizi give the clearest exposition of this, in "Philosophy and the Practice of Bayesian Statistics":
"A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science."
Bayesianism is not rationality. It's a particular mathematical model of rationality. I like to analogize it to propositional logic: it captures some important features of successful thinking, but it's clearly far short of the whole story.
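To be clear about what the "single equation" is: Bayes' theorem, P(H|D) = P(D|H)·P(H) / P(D). Here is a minimal sketch, with made-up numbers for a charity-evaluation flavour, showing that the mechanical update is trivial; everything contested lives in where the prior and likelihoods come from.

```python
# Minimal sketch of a Bayesian update: P(H|D) = P(D|H) * P(H) / P(D).
# All the numbers below are hypothetical.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after one piece of evidence."""
    numerator = likelihood_if_true * prior
    # P(D) by total probability over "hypothesis true" and "hypothesis false".
    evidence = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / evidence

# Prior belief the intervention works: 30%. A positive trial result is
# 80% likely if it works, 20% likely if it doesn't.
posterior = bayes_update(prior=0.3, likelihood_if_true=0.8, likelihood_if_false=0.2)
print(round(posterior, 3))  # 0.632
```

The arithmetic is uncontroversial; the interpretive load (reading the prior as belief and the update as learning) is exactly what Gelman and Shalizi argue falls outside the mathematics.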
We need much more sophisticated frameworks for analytical thinking. My favorite general-purpose approach, which applies to mixed quantitative/qualitative evidence, was developed at the CIA with cognitive biases explicitly in mind:
https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/art11.html
But of course this isn't rationality either. It's never been codified completely, and probably cannot be.
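To give a flavour of that style of structured technique (the linked monograph, Heuer's Psychology of Intelligence Analysis, covers his Analysis of Competing Hypotheses): rather than asking which hypothesis the evidence confirms, you tabulate how much evidence is inconsistent with each hypothesis and discard the worst offenders. A caricature in a few lines, with hypothetical data:

```python
# Toy sketch (hypothetical data) of a scoring matrix in the spirit of Heuer's
# Analysis of Competing Hypotheses: count evidence INCONSISTENT with each
# hypothesis, rather than evidence that confirms a favourite.

# Rows: hypotheses; columns: four pieces of evidence.
# "C" = consistent, "I" = inconsistent, "N" = neutral / not diagnostic.
matrix = {
    "H1: programme caused the improvement": ["C", "C", "N", "N"],
    "H2: improvement was a secular trend":  ["C", "I", "C", "C"],
    "H3: measurement artefact":             ["I", "I", "N", "C"],
}

def inconsistency_score(row):
    # Heuer's heuristic: inconsistent evidence matters most, because a single
    # solid inconsistency can sink a hypothesis while consistency proves little.
    return row.count("I")

ranked = sorted(matrix, key=lambda h: inconsistency_score(matrix[h]))
for hypothesis in ranked:
    print(inconsistency_score(matrix[hypothesis]), hypothesis)
```

The real method adds diagnosticity weighting and sensitivity checks; the point here is only that it handles qualitative evidence that never fits into a likelihood function.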