Comment author: RobBensinger 17 March 2017 10:43:57PM *  3 points [-]

I think wild animal suffering isn't a long-term issue except in scenarios where we go extinct for non-AGI-related reasons. The three likeliest scenarios are:

  1. Humans leverage AGI-related technologies in a way that promotes human welfare as well as (non-human) animal welfare.

  2. Humans leverage AGI-related technologies in a way that promotes human welfare and is effectively indifferent to animal welfare.

  3. Humans accidentally use AGI-related technologies in a way that is indifferent to human and animal welfare.

In all three scenarios, the decision-makers are likely to have "ambitious" goals that favor seizing more and more resources. In scenario 2, efficient resource use almost certainly implies that biological human bodies and brains get switched out for computing hardware running humans, and that wild animals are replaced with more computing hardware, energy/cooling infrastructure, etc. Even if biological humans who need food stick around for some reason, it's unlikely that the optimal way to efficiently grow food in the long run will be "grow entire animals, wasting lots of energy on processes that don't directly increase the quantity or quality of the food transmitted to humans".

In scenario 1, wild animals might be euthanized, or uploaded to a substrate where they can live whatever number of high-quality lives seems best. This is by far the best scenario, especially for people who think (actual or potential) non-human animals might have at least some experiences that are of positive value, or at least some positive preferences that are worth fulfilling. I would consider this extremely likely if non-human animals are moral patients at all, though scenario 1 is also strongly preferable if we're uncertain about this question and want to hedge our bets.

Scenario 3 has the same impact on wild animals as scenario 1, and for analogous reasons: resource limitations make it costly to keep wild animals around. 3 is much worse than 1 because human welfare matters so much; even if the average present-day human life turned out to be net-negative, this would be a contingent fact that could be addressed by improving global welfare.

I consider scenario 2 much less likely than scenarios 1 and 3; my point in highlighting it is to note that scenario 2 is similarly good for the purpose of preventing wild animal suffering. I also consider scenario 2 vastly more likely than "sadistic" scenarios where some agent is exerting deliberate effort to produce more suffering in the world, for non-instrumental reasons.

Comment author: ThomasSittler 11 March 2017 10:40:23AM 4 points [-]

I think I've only ever seen cause-neutrality used to mean cause-impartiality.

Comment author: RobBensinger 15 March 2017 01:18:31AM 2 points [-]

The discussion of CFAR's pivot to focusing on existential risk seemed to use "cause-neutral" to mean something like "cause-general".

Confusingly, the way "cause-neutral" was used there directly contradicts its use here: there, it meant avoiding cause-impartially favoring a specific cause based on its apparent expected value, in favor of a cause-partial commitment to pet causes like rationality and EA capacity-building. (Admittedly, at the organizational level it often makes sense to codify some "pet causes" even if in principle the individuals in that organization are trying to maximize global welfare impartially.)

Comment author: RobBensinger 25 February 2017 11:15:11PM 4 points [-]

One of the spokes of the Leverhulme Centre for the Future of Intelligence is at Imperial College London, headed by Murray Shanahan.

Comment author: RobBensinger 27 February 2017 08:02:25PM 3 points [-]
Comment author: Raemon 25 February 2017 10:37:20PM 1 point [-]

Thanks, fixed.

Actually, is anyone other than DeepMind in London? (The section where I brought this up was on volunteering, which I assume is less relevant for DeepMind than for FHI.)

Comment author: RobBensinger 25 February 2017 11:15:11PM 4 points [-]

One of the spokes of the Leverhulme Centre for the Future of Intelligence is at Imperial College London, headed by Murray Shanahan.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: klloyd 20 February 2017 10:47:17PM 0 points [-]

You're probably correct; reading up on it, I realise I didn't understand it as well as I thought I did, but I still have a few questions. If one is a particularist and an anti-realist, how do those judgements have any force that could properly be called moral? As for moral uncertainty, I meant that if one ascribes some non-zero probability to there being genuine moral demands on one, it would seem one still has reason to follow them. If you're right, then nothing you do matters, so you've lost nothing; if you're wrong, you have done something good. So moral uncertainty seems to give one reasons to act in a certain way, because some probability of doing good has some motivating power, even if not as much as certainly doing good. I think I was mixed up about non-cognitivism, but some people seem to be called both non-cognitivists and realists? For example, David Hume, whom I've heard called a non-cognitivist and a consequentialist, and Simon Blackburn, who is called a quasi-realist despite being a non-cognitivist. Is either of them properly called a realist?

In response to comment by klloyd on Why I left EA
Comment author: RobBensinger 20 February 2017 10:58:07PM *  2 points [-]

I think the intuition that moral judgments need to have "force" or make "demands" is a bit of a muddle, and some good readings for getting un-muddled here are:

  1. Peter Hurford's "A Meta-Ethics FAQ"
  2. Eliezer Yudkowsky's Mere Goodness
  3. Philippa Foot's "Morality as a System of Hypothetical Imperatives"
  4. Peter Singer's "The Triviality of the Debate Over 'Is-Ought' and the Definition of 'Moral'"

Kyle might have some better suggestions for readings here.

In response to comment by EricHerboso  (EA Profile) on Why I left EA
Comment author: RobBensinger 20 February 2017 06:14:32PM *  7 points [-]

I really like this response -- thanks, Eric. I'd say the way I think about maximizing expected value is that it's the natural thing you'll end up doing if you're trying to produce a particular outcome, especially a large-scale one that doesn't hinge much on your own mental state and local environment.

Thinking in 'maximizing-ish ways' can be useful at times in lots of contexts, but it's especially likely to be helpful (or necessary) when you're trying to move the world's state in a big way; not so much when you're trying to raise a family or follow the rules of etiquette, and possibly even less so when the goal you're pursuing is something like 'have fun and unwind this afternoon watching a movie'. There my mindset is a much more dominant consideration than it is in large-scale moral dilemmas, so the costs of thinking like a maximizer are likelier to matter.

In real life, I'm not a perfect altruist or a perfect egoist; I have a mix of hundreds of different goals like the ones above. But without being a strictly maximizing agent in all walks of life, I can still recognize that (all else being equal) I'd rather spend $1000 to protect two people from suffering from violence (or malaria, or what-have-you) than spend $1000 to protect just one person from violence. And without knowing the right way to reason with weird extreme Pascalian situations, I can still recognize that I'd rather spend $1000 to protect those two people, than spend $1000 to protect three people with 50% probability (and protect no one the other 50% of the time).
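To make that last comparison explicit, here's a minimal sketch of the arithmetic (a toy calculation using only the numbers in the example above):

```python
# Toy expected-value comparison for the two $1000 options above.
# Option A: protect two people with certainty.
# Option B: protect three people with 50% probability, and no one otherwise.

def expected_protected(outcomes):
    """outcomes: list of (probability, people_protected) pairs."""
    return sum(p * n for p, n in outcomes)

option_a = [(1.0, 2)]
option_b = [(0.5, 3), (0.5, 0)]

print(expected_protected(option_a))  # 2.0
print(expected_protected(option_b))  # 1.5 -- the sure-thing option wins on expected value
```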

Acting on preferences like those will mean that I exhibit the outward behaviors of an EV maximizer in how I choose between charitable opportunities, even if I'm not an EV maximizer in other parts of my life. (Much like I'll act like a well-functioning calculator when I'm achieving the goal of getting a high score on a math quiz, even though I don't act calculator-like when I pursue other goals.)

In response to comment by RobBensinger on Why I left EA
Comment author: RobBensinger 20 February 2017 06:30:50PM *  4 points [-]

For more background on what I mean by 'any policy of caring a lot about strangers will tend to recommend behavior reminiscent of expected value maximization, the more so the more steadfast and strong the caring is', see e.g. 'Coherent decisions imply a utility function' and The "Intuitions" Behind "Utilitarianism":

When you’ve read enough heuristics and biases research, and enough coherence and uniqueness proofs for Bayesian probabilities and expected utility, and you’ve seen the “Dutch book” and “money pump” effects that penalize trying to handle uncertain outcomes any other way, then you don’t see the preference reversals in the Allais Paradox as revealing some incredibly deep moral truth about the intrinsic value of certainty. It just goes to show that the brain doesn’t goddamn multiply.

The primitive, perceptual intuitions that make a choice “feel good” don’t handle probabilistic pathways through time very skillfully, especially when the probabilities have been expressed symbolically rather than experienced as a frequency. So you reflect, devise more trustworthy logics, and think it through in words.

When you see people insisting that no amount of money whatsoever is worth a single human life, and then driving an extra mile to save $10; or when you see people insisting that no amount of money is worth a decrement of health, and then choosing the cheapest health insurance available; then you don’t think that their protestations reveal some deep truth about incommensurable utilities.

Part of it, clearly, is that primitive intuitions don’t successfully diminish the emotional impact of symbols standing for small quantities—anything you talk about seems like “an amount worth considering.”

And part of it has to do with preferring unconditional social rules to conditional social rules. Conditional rules seem weaker, seem more subject to manipulation. If there’s any loophole that lets the government legally commit torture, then the government will drive a truck through that loophole.

So it seems like there should be an unconditional social injunction against preferring money to life, and no “but” following it. Not even “but a thousand dollars isn’t worth a 0.0000000001% probability of saving a life.” Though the latter choice, of course, is revealed every time we sneeze without calling a doctor.

The rhetoric of sacredness gets bonus points for seeming to express an unlimited commitment, an unconditional refusal that signals trustworthiness and refusal to compromise. So you conclude that moral rhetoric espouses qualitative distinctions, because espousing a quantitative tradeoff would sound like you were plotting to defect.

On such occasions, people vigorously want to throw quantities out the window, and they get upset if you try to bring quantities back in, because quantities sound like conditions that would weaken the rule.

But you don’t conclude that there are actually two tiers of utility with lexical ordering. You don’t conclude that there is actually an infinitely sharp moral gradient, some atom that moves a Planck distance (in our continuous physical universe) and sends a utility from zero to infinity. You don’t conclude that utilities must be expressed using hyper-real numbers. Because the lower tier would simply vanish in any equation. It would never be worth the tiniest effort to recalculate for it. All decisions would be determined by the upper tier, and all thought spent thinking about the upper tier only, if the upper tier genuinely had lexical priority.

As Peter Norvig once pointed out, if Asimov’s robots had strict priority for the First Law of Robotics (“A robot shall not harm a human being, nor through inaction allow a human being to come to harm”) then no robot’s behavior would ever show any sign of the other two Laws; there would always be some tiny First Law factor that would be sufficient to determine the decision.

Whatever value is worth thinking about at all must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off. When you reveal a value, you reveal a utility.

I don’t say that morality should always be simple. I’ve already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up. I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination. And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize—that the valuation of this one event is more complex than I know.

But that’s for one event. When it comes to multiplying by quantities and probabilities, complication is to be avoided—at least if you care more about the destination than the journey. When you’ve reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as “Shut up and multiply.” Where music is concerned, I care about the journey. When lives are at stake, I shut up and multiply.

It is more important that lives be saved, than that we conform to any particular ritual in saving them. And the optimal path to that destination is governed by laws that are simple, because they are math. And that’s why I’m a utilitarian—at least when I am doing something that is overwhelmingly more important than my own feelings about it—which is most of the time, because there are not many utilitarians, and many things left undone.
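To make the lexical-ordering point in the quoted passage concrete, here is a minimal sketch (a toy model with arbitrary payoffs, not something from the essay): if one value has strict priority over another, the lower-priority value can only ever break exact ties, so in practice it never determines a decision.

```python
# Toy model of lexically ordered values. Each option is scored as a
# (first_law, everything_else) pair; Python compares tuples
# lexicographically, so the second component only matters when the
# first components are exactly equal.

options = {
    "disobey an order, zero First Law cost": (0.0, 0.0),
    "obey the order, vanishingly small First Law cost": (-1e-12, 1_000_000.0),
}

best = max(options, key=lambda name: options[name])
print(best)  # "disobey an order, zero First Law cost" -- the enormous
             # second-tier payoff never gets a chance to matter
```

Any nonzero difference in the top tier, however tiny, settles the choice; that is the sense in which the lower tier "would simply vanish in any equation".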

... Also, just to be clear -- since this seems to be a weirdly common misconception -- acting like an expected value maximizer is totally different from utilitarianism. EV maximizing is a thing wherever you consistently care enough about your actions' consequences; utilitarianism is specifically the idea that the thing people should (act as though they) care about is how good things are for everyone, impartially.

But often people argue against the consequentialism aspect of utilitarianism and the consequent willingness to quantitatively compare different goods, rather than arguing against the altruism aspect or the egalitarianism; hence the two ideas get blurred together a bit in the above, even though you can certainly maximize expected utility for conceptions of "utility" that are partial to your own interests, your friends', etc.

In response to Why I left EA
Comment author: EricHerboso  (EA Profile) 20 February 2017 06:49:10AM 22 points [-]

Thank you, Lila, for your openness in explaining your reasons for leaving EA. It's good to hear legitimate reasons why someone might leave the community, and it's certainly better than the outsider anti-EA arguments that too often misrepresent EA. I hope that other insiders who leave the movement will also be kind enough to share their reasoning, as you have here.

While I recognize that Lila does not want to participate in a debate, I nevertheless would like to contribute an alternate perspective for the benefit of other readers.

Like Lila, I am a moral anti-realist. Yet while she has left the movement largely for this reason, I still identify strongly with the EA movement.

This is because I do not feel, as Lila does, that utilitarianism is required to prop up so many of EA's ideas. For example, non-consequentialist moral realists can still use expected value to try to maximize the good they do, without thinking that the maximization itself is the ultimate source of that good. Presumably, if you think lying is bad, then refraining from lying twice may be better than refraining from lying just once.

I agree with Lila that many EAs are too glib about treating deaths from violence as no worse than deaths from non-violent causes. But to the extent that this is true, we can just weight these deaths differently. For example, Lila rightly points out that "violence causes psychological trauma and other harms, which must be accounted for in a utilitarian framework". EAs should definitely take these extra considerations about violence into account.

But the main difference between myself and Lila here is that when she sees EAs not taking things like this into consideration, she takes that as an argument against EA; against utilitarianism; against expected value. Whereas I take it as an improper expected value estimate that doesn't take into account all of the facts. For me, this is not an argument against EA, nor even an argument against expected value -- it's an argument for why we need to be careful about taking into account as many considerations as possible when constructing expected value estimates.

As a moral anti-realist, I have to figure out how to act not by discovering rules of morality, but by deciding on what should be valued. If I wanted, I suppose I could just choose to go with whatever felt intuitively correct, but evolution is messy, and I trust a system of logic and consistency more than any intuitions that evolution has forced upon me. While I still use my intuitions because they make me feel good, when my intuitions clash with expected value estimates, I feel much more comfortable going with the EV estimates. I do not agree with everything individual EAs say, but I largely agree with the basic ideas behind EA arguments.

There are all sorts of moral anti-realists. Almost by definition, it's difficult to predict what any given moral anti-realist would value. I endorse moral anti-realism, and I just want to emphasize that EAs can become moral anti-realist without leaving the EA movement.

In response to comment by EricHerboso  (EA Profile) on Why I left EA
Comment author: RobBensinger 20 February 2017 06:14:32PM *  7 points [-]

I really like this response -- thanks, Eric. I'd say the way I think about maximizing expected value is that it's the natural thing you'll end up doing if you're trying to produce a particular outcome, especially a large-scale one that doesn't hinge much on your own mental state and local environment.

Thinking in 'maximizing-ish ways' can be useful at times in lots of contexts, but it's especially likely to be helpful (or necessary) when you're trying to move the world's state in a big way; not so much when you're trying to raise a family or follow the rules of etiquette, and possibly even less so when the goal you're pursuing is something like 'have fun and unwind this afternoon watching a movie'. There my mindset is a much more dominant consideration than it is in large-scale moral dilemmas, so the costs of thinking like a maximizer are likelier to matter.

In real life, I'm not a perfect altruist or a perfect egoist; I have a mix of hundreds of different goals like the ones above. But without being a strictly maximizing agent in all walks of life, I can still recognize that (all else being equal) I'd rather spend $1000 to protect two people from suffering from violence (or malaria, or what-have-you) than spend $1000 to protect just one person from violence. And without knowing the right way to reason with weird extreme Pascalian situations, I can still recognize that I'd rather spend $1000 to protect those two people, than spend $1000 to protect three people with 50% probability (and protect no one the other 50% of the time).

Acting on preferences like those will mean that I exhibit the outward behaviors of an EV maximizer in how I choose between charitable opportunities, even if I'm not an EV maximizer in other parts of my life. (Much like I'll act like a well-functioning calculator when I'm achieving the goal of getting a high score on a math quiz, even though I don't act calculator-like when I pursue other goals.)

In response to Why I left EA
Comment author: casebash 20 February 2017 12:59:55AM -1 points [-]

If morality isn't real, then perhaps we should just care about ourselves.

But suppose we do decide to care about other people's interests - maybe not completely, but at least to some degree. To the extent that we decide to devote resources to helping other people, it makes sense to do so to the maximal extent possible, and this is what utilitarianism does.

In response to comment by casebash on Why I left EA
Comment author: RobBensinger 20 February 2017 05:45:27PM 2 points [-]

If morality isn't real, then perhaps we should just care about ourselves.

Lila's argument that "morality isn't real" also carries over to "self-interest isn't real". Or, to be more specific, her argument against being systematic and maximizing EV in moral dilemmas also applies to prudential dilemmas, aesthetic dilemmas, etc.

That said, I agree with you that it's more important to maximize when you're dealing with others' welfare. See e.g. One Life Against the World:

For some people, the notion that saving the world is significantly better than saving one human life will be obvious, like saying that six billion dollars is worth more than one dollar, or that six cubic kilometers of gold weighs more than one cubic meter of gold. (And never mind the expected value of posterity.)

Why might it not be obvious? Well, suppose there's a qualitative duty to save what lives you can - then someone who saves the world, and someone who saves one human life, are just fulfilling the same duty. Or suppose that we follow the Greek conception of personal virtue, rather than consequentialism; someone who saves the world is virtuous, but not six billion times as virtuous as someone who saves one human life. Or perhaps the value of one human life is already too great to comprehend - so that the passing grief we experience at funerals is an infinitesimal underestimate of what is lost - and thus passing to the entire world changes little.

I agree that one human life is of unimaginably high value. I also hold that two human lives are twice as unimaginably valuable. Or to put it another way: Whoever saves one life, if it is as if they had saved the whole world; whoever saves ten lives, it is as if they had saved ten worlds. Whoever actually saves the whole world - not to be confused with pretend rhetorical saving the world - it is as if they had saved an intergalactic civilization.

Two deaf children are sleeping on the railroad tracks, the train speeding down; you see this, but you are too far away to save the children. I'm nearby, within reach, so I leap forward and drag one child off the railroad tracks - and then stop, calmly sipping a Diet Pepsi as the train bears down on the second child. "Quick!" you scream to me. "Do something!" But (I call back) I already saved one child from the train tracks, and thus I am "unimaginably" far ahead on points. Whether I save the second child, or not, I will still be credited with an "unimaginably" good deed. Thus, I have no further motive to act. Doesn't sound right, does it?

Why should it be any different if a philanthropist spends $10 million on curing a rare but spectacularly fatal disease which afflicts only a hundred people planetwide, when the same money has an equal probability of producing a cure for a less spectacular disease that kills 10% of 100,000 people? I don't think it is different. When human lives are at stake, we have a duty to maximize, not satisfice; and this duty has the same strength as the original duty to save lives.
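For concreteness, the arithmetic in that last comparison comes out the same way whatever the chance of research success is (a minimal sketch; the success probability below is an arbitrary illustrative value, not a figure from the quoted essay):

```python
# Expected lives saved by the two $10M research bets in the quoted passage,
# assuming the same probability p of producing a cure in either case.
p = 0.2  # arbitrary illustrative value; the ratio is the same for any p > 0

rare_disease = p * 100               # the rare disease afflicts ~100 people planetwide
common_disease = p * 0.10 * 100_000  # the other disease kills 10% of 100,000 people

print(rare_disease, common_disease)  # 20.0 2000.0 -- a 100x difference, independent of p
```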

Comment author: RobBensinger 14 February 2017 04:40:53PM *  4 points [-]

Thanks for summarizing this, Ben!

First, the adversarial framing here seems unnecessary. If the other player hasn’t started defecting in the iterated prisoner’s dilemma, why start?

I might be getting this wrong, but my understanding is that a bunch of donors immediately started 'defecting' (= pulling out of funding the kinds of work GV is excited about) once they learned of GV's excitement for GW/OPP causes, on the assumption that GV would at some future point adopt a general policy of (unconditionally?) 'cooperating' (= fully funding everything to the extent it cares about those things).

I think GW/GV/OPP arrived at their decision in an environment where they saw a non-trivial number of donors preemptively 'defecting' either based on a misunderstanding of whether GW/GV/OPP was already 'cooperating' (= they didn't realize that GW/GV/OPP was funding less than the full amount it wanted funded), or based on the assumption that GW/GV/OPP was intending to do so later (and perhaps could even be induced to do so if others withdrew their funding). If my understanding of this is right, then it both made the cooperative equilibrium seem less likely, and made it seem extra important for GW/GV/OPP to very loudly and clearly communicate their non-CooperateBot policy lest the misapprehension spread even further.

I think the difficulty of actually communicating en masse with smaller GW donors, much less having a real back-and-forth negotiation with them, played a very large role in GW/GV/OPP's decisions here, including their decision to choose an 'obviously arbitrary' split number like 50% rather than something more subtle.
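One way to see the coordination problem being described is with a toy funding model (a simplification with arbitrary dollar amounts, not GW/GV/OPP's actual analysis): if the large funder commits to filling whatever gap remains, each small-donor dollar just displaces a large-funder dollar, so small donors have no counterfactual impact and drop out; a cap like the 50% split restores that impact.

```python
# Toy model of the funding dynamic sketched above. A charity has `room`
# dollars of room for more funding; small donors only give dollars that
# are counterfactual (i.e., not simply displaced by the large funder).

def funding_totals(room, small_donor_pool, large_funder_policy):
    if large_funder_policy == "fill_the_gap":
        # The large funder tops up whatever is left, so small-donor dollars
        # just reduce the large funder's bill; small donors stay out.
        small = 0
        large = room - small
    elif large_funder_policy == "cap_at_50_percent":
        # The large funder covers at most half, so the remainder is only
        # filled if small donors step in.
        large = room // 2
        small = min(small_donor_pool, room - large)
    return small, large

room, pool = 10_000_000, 6_000_000
for policy in ("fill_the_gap", "cap_at_50_percent"):
    small, large = funding_totals(room, pool, policy)
    print(policy, small, large)
# fill_the_gap:      small donors give 0, the large funder pays all 10M
# cap_at_50_percent: small donors give 5M, the large funder pays 5M
```

In the first case the charity still gets funded, but only because the large funder's dollars are tied up covering what small donors would otherwise have given -- which is exactly the outcome the split policy is meant to avoid.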

It also assumes that people are taking the cost-per-life-saved numbers at face value, and if so, then GiveWell already thinks they’ve been misled.

I'm not sure I understand this point. Is this saying that if people are already misled to some extent, or in some respect, then it doesn't matter in what related ways one's actions might confuse them?

(Disclaimer: I work for MIRI, which has received an Open Phil grant. As usual, the above is me speaking on my own behalf, not on MIRI's.)

Comment author: RobBensinger 07 February 2017 11:00:48PM 3 points [-]

Anonymous #38:

Anonymous #33's comment makes me angry. I am trying to build a tribe that I can live in while we work on the future; please stop trying to kick people in the face for being normal whenever they get near us.

Comment author: RobBensinger 11 February 2017 03:04:03PM 2 points [-]

There are versions of this I endorse, and versions I don't endorse. Anon #38 seems to be interpreting #33 as saying 'let's be less tolerant of normal people/behaviors', but my initial interpretation of #33 was that they were saying 'let's be more tolerant of weird people/behaviors'.
