RobBensinger comments on Why I left EA - Effective Altruism Forum




Comment author: RobBensinger, 20 February 2017 06:14:32PM, 7 points

I really like this response -- thanks, Eric. I'd say the way I think about maximizing expected value is that it's the natural thing you'll end up doing if you're trying to produce a particular outcome, especially a large-scale one that doesn't hinge much on your own mental state and local environment.

Thinking in 'maximizing-ish ways' can be useful at times in lots of contexts, but it's especially likely to be helpful (or necessary) when you're trying to move the world's state in a big way; not so much when you're trying to raise a family or follow the rules of etiquette, and possibly even less so when the goal you're pursuing is something like 'have fun and unwind this afternoon watching a movie'. There my mindset is a much more dominant consideration than it is in large-scale moral dilemmas, so the costs of thinking like a maximizer are likelier to matter.

In real life, I'm not a perfect altruist or a perfect egoist; I have a mix of hundreds of different goals like the ones above. But without being a strictly maximizing agent in all walks of life, I can still recognize that (all else being equal) I'd rather spend $1000 to protect two people from violence (or malaria, or what-have-you) than spend $1000 to protect just one person from violence. And without knowing the right way to reason about weird, extreme Pascalian situations, I can still recognize that I'd rather spend $1000 to protect those two people than spend $1000 to protect three people with 50% probability (and protect no one the other 50% of the time).
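The arithmetic behind these two comparisons can be made explicit. A minimal sketch (the function name is mine, not from the comment):

```python
# Hypothetical illustration of the expected-value comparisons above.
def expected_people_protected(outcomes):
    """Expected number of people protected, given (probability, count) pairs."""
    return sum(p * n for p, n in outcomes)

certain_two = expected_people_protected([(1.0, 2)])            # 2.0
certain_one = expected_people_protected([(1.0, 1)])            # 1.0
risky_three = expected_people_protected([(0.5, 3), (0.5, 0)])  # 1.5

# Both stated preferences track the higher expected value:
assert certain_two > certain_one
assert certain_two > risky_three
```

Note that the second preference doesn't even require unusual risk-neutrality: three people at 50% is an expected 1.5 people, which still loses to a certain two.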

Acting on preferences like those will mean that I exhibit the outward behaviors of an EV maximizer in how I choose between charitable opportunities, even if I'm not an EV maximizer in other parts of my life. (Much like I'll act like a well-functioning calculator when I'm achieving the goal of getting a high score on a math quiz, even though I don't act calculator-like when I pursue other goals.)

Comment author: RobBensinger, 20 February 2017 06:30:50PM, 4 points

For more background on what I mean by 'any policy of caring a lot about strangers will tend to recommend behavior reminiscent of expected value maximization, the more so the more steadfast and strong the caring is', see e.g. 'Coherent decisions imply a utility function' and The "Intuitions" Behind "Utilitarianism":

When you’ve read enough heuristics and biases research, and enough coherence and uniqueness proofs for Bayesian probabilities and expected utility, and you’ve seen the “Dutch book” and “money pump” effects that penalize trying to handle uncertain outcomes any other way, then you don’t see the preference reversals in the Allais Paradox as revealing some incredibly deep moral truth about the intrinsic value of certainty. It just goes to show that the brain doesn’t goddamn multiply.
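[Editor's sketch of the Allais preference reversal mentioned above, using the standard textbook gambles (payoffs in $ millions) — the specific numbers are my assumption, not from the quoted essay, and I compare raw expected dollar value for simplicity, setting aside nonlinear utility:]

```python
# The standard Allais gambles, stated as (probability, payoff) pairs.
def ev(lottery):
    """Expected value of a lottery of (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

gamble_1a = [(1.00, 1)]                        # $1M for certain
gamble_1b = [(0.89, 1), (0.10, 5), (0.01, 0)]  # small chance of nothing
gamble_2a = [(0.11, 1), (0.89, 0)]
gamble_2b = [(0.10, 5), (0.90, 0)]

# 1B beats 1A (EV 1.39 vs 1.00) and 2B beats 2A (EV 0.50 vs 0.11),
# yet the common preference pattern is 1A over 1B and 2B over 2A --
# the "certainty" intuition reverses the first comparison.
assert ev(gamble_1b) > ev(gamble_1a)
assert ev(gamble_2b) > ev(gamble_2a)
```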

The primitive, perceptual intuitions that make a choice “feel good” don’t handle probabilistic pathways through time very skillfully, especially when the probabilities have been expressed symbolically rather than experienced as a frequency. So you reflect, devise more trustworthy logics, and think it through in words.

When you see people insisting that no amount of money whatsoever is worth a single human life, and then driving an extra mile to save $10; or when you see people insisting that no amount of money is worth a decrement of health, and then choosing the cheapest health insurance available; then you don’t think that their protestations reveal some deep truth about incommensurable utilities.

Part of it, clearly, is that primitive intuitions don’t successfully diminish the emotional impact of symbols standing for small quantities—anything you talk about seems like “an amount worth considering.”

And part of it has to do with preferring unconditional social rules to conditional social rules. Conditional rules seem weaker, seem more subject to manipulation. If there’s any loophole that lets the government legally commit torture, then the government will drive a truck through that loophole.

So it seems like there should be an unconditional social injunction against preferring money to life, and no “but” following it. Not even “but a thousand dollars isn’t worth a 0.0000000001% probability of saving a life.” Though the latter choice, of course, is revealed every time we sneeze without calling a doctor.
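[Editor's note: the trade-off revealed by the sneeze example is easy to quantify. Using the figures in the quoted sentence — $1000 and a 0.0000000001% probability:]

```python
# Assumed numbers from the sentence above: refusing to trade $1,000
# against a 0.0000000001% chance of saving a life.
cost = 1_000.0
p_save = 0.0000000001 / 100  # convert the percentage to a probability: 1e-12

# Paying $1,000 only beats keeping the money if a life is valued at
# cost / p_save or more -- on the order of $10**15, a quadrillion dollars.
implied_value_of_life = cost / p_save
assert implied_value_of_life > 1e14
```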

The rhetoric of sacredness gets bonus points for seeming to express an unlimited commitment, an unconditional refusal that signals trustworthiness and refusal to compromise. So you conclude that moral rhetoric espouses qualitative distinctions, because espousing a quantitative tradeoff would sound like you were plotting to defect.

On such occasions, people vigorously want to throw quantities out the window, and they get upset if you try to bring quantities back in, because quantities sound like conditions that would weaken the rule.

But you don’t conclude that there are actually two tiers of utility with lexical ordering. You don’t conclude that there is actually an infinitely sharp moral gradient, some atom that moves a Planck distance (in our continuous physical universe) and sends a utility from zero to infinity. You don’t conclude that utilities must be expressed using hyper-real numbers. Because the lower tier would simply vanish in any equation. It would never be worth the tiniest effort to recalculate for it. All decisions would be determined by the upper tier, and all thought spent thinking about the upper tier only, if the upper tier genuinely had lexical priority.

As Peter Norvig once pointed out, if Asimov’s robots had strict priority for the First Law of Robotics (“A robot shall not harm a human being, nor through inaction allow a human being to come to harm”) then no robot’s behavior would ever show any sign of the other two Laws; there would always be some tiny First Law factor that would be sufficient to determine the decision.
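[Norvig's point can be sketched directly: score each option as a tuple of tier values and compare lexicographically. The option names and numbers below are hypothetical, chosen only to show that the lower tiers vanish from the decision:]

```python
# Hypothetical sketch of choice under strict lexical priority: options are
# scored as (first_law, second_law, third_law) and compared lexicographically,
# so the First Law score is always consulted first.
options = {
    "action_a": (0.9000001, 0.1, 0.9),  # microscopically better on tier 1
    "action_b": (0.9000000, 0.9, 0.9),  # far better on the lower tiers
}

# Python tuple comparison is lexicographic, so max() implements the rule.
best = max(options, key=options.get)

# Tier 1 alone decides: action_b's large tier-2 advantage is never weighed.
assert best == "action_a"
```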

Whatever value is worth thinking about at all must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off. When you reveal a value, you reveal a utility.

I don’t say that morality should always be simple. I’ve already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up. I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination. And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize—that the valuation of this one event is more complex than I know.

But that’s for one event. When it comes to multiplying by quantities and probabilities, complication is to be avoided—at least if you care more about the destination than the journey. When you’ve reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as “Shut up and multiply.” Where music is concerned, I care about the journey. When lives are at stake, I shut up and multiply.

It is more important that lives be saved, than that we conform to any particular ritual in saving them. And the optimal path to that destination is governed by laws that are simple, because they are math. And that’s why I’m a utilitarian—at least when I am doing something that is overwhelmingly more important than my own feelings about it—which is most of the time, because there are not many utilitarians, and many things left undone.

... Also, just to be clear -- since this seems to be a weirdly common misconception -- acting like an expected value maximizer is totally different from utilitarianism. EV maximization shows up wherever you care consistently and strongly enough about your actions' consequences; utilitarianism is specifically the idea that the thing people should (act as though they) care about is how good things are for everyone, impartially.

But often people argue against the consequentialism aspect of utilitarianism and the consequent willingness to quantitatively compare different goods, rather than arguing against the altruism aspect or the egalitarianism; hence the two ideas get blurred together a bit in the above, even though you can certainly maximize expected utility for conceptions of "utility" that are partial to your own interests, your friends', etc.