
Rhyss

138 karma · Joined Aug 2015

Comments (12)

You might enjoy "On the Survival of Humanity" (2017) by Johann Frick. Frick makes the same point there that you quote Torres as making—that total utilitarians care about the total number and quality of experiences but are indifferent to whether these experiences are simultaneous or extended across time. Torres has favorably cited Frick elsewhere, so I wouldn't be surprised if they were inspired by this article. You can download it here: https://oar.princeton.edu/bitstream/88435/pr1rn3068s/1/OnTheSurvivalOfHumanity.pdf

I don't know if StrongMinds explicitly has a goal of reducing suicides, or what its predicted effect on suicide risk might be, but searching for "suicide" on the StrongMinds site (https://strongminds.org/?s=suicide) brings up a lot of results. Whether or not suicide prevention is part of their mission, treating depression would plausibly reduce the risk of suicide for some people. If so, some of the value of StrongMinds might come from the extension of lives. This would mean the value of StrongMinds could vary depending on which view of the harm of death we take.

I'm currently taking a class with Jeff McMahan in which he discusses prenatal injury, and I'm pretty sure he would agree with how you put it here, Richard. This doesn't affect your point, but he now likes to discuss a complication to this: what he calls "the divergent lives problem." The idea is that an early injury can lead to a very different life path, and that once you're far enough down this path—and have the particular interests that you do, and the particular individuals in your life who are important to you—Jeff thinks it can be irrational to regret the injury. So, if someone's being injured as a fetus leads them to later meet the particular life partner they love and to have the particular children they have, and if their life is good, Jeff thinks they probably shouldn't regret the injury—even if avoiding the injury would have led to their having a life with more wellbeing. That's because avoiding the injury would have led to a life with different particular people and interests, ones they don't in fact care about from their standpoint now. However, Jeff does add that if an early injury makes the later life not worth living, or maybe even barely worth living, then the future person who developed from the injured fetus does have reason to regret that injury. He would say that children of mothers who took Thalidomide have reason to regret that.

The article seems to contradict itself in the end. At the beginning of the article, I thought you were saying you're not an EA because you're not a utilitarian (because utilitarianism is poison), and that to be an EA is just to be a utilitarian in some form—and that even if EAs are utilitarians in a very diluted form, the philosophy they are diluting is still a poison, no matter how diluted, and so is unacceptable. So, I was expecting you to offer some alternative framework or way of thinking to build an altruistic movement on, like moral particularism or contractualism or something, but the solution I read you as giving is for EAs to keep doing exactly what you start off saying they shouldn't do: to base their movement on a diluted form of utilitarianism.

I hope this isn't uncharitable, but this is how your core argument actually comes across to me: “I’m not an EA because EAs ground their movement on a diluted form of utilitarianism, which is poisonous no matter how much you dilute it. What do I suggest instead? EAs should keep doing exactly what they’re doing—diluting utilitarianism until it’s no longer poisonous!” (Maybe my biggest objection to your article is that you missed the perfect opportunity for a homeopathy analogy here!) This seems contradictory to me, which makes it hard to figure out what to take from your article.  

To highlight what confused me, I'll quote what I saw as the most contradictory-seeming passages. In the first four quotes here, you seem to say that diluting utilitarianism is futile because dilution can't remove the poison:

“the origins of the effective altruist movement in utilitarianism means that as definitions get more specific it becomes clear that within lurks a poison, and the choice of all effective altruists is either to dilute that poison, and therefore dilute their philosophy, or swallow the poison whole.

“This poison, which originates directly from utilitarianism (which then trickles down to effective altruism), is not a quirk, or a bug, but rather a feature of utilitarian philosophy, and can be found in even the smallest drop. And why I am not an effective altruist is that to deal with it one must dilute or swallow, swallow or dilute, always and forever.” … 

“But the larger problem for the effective altruist movement remains that diluting something doesn’t actually make it not poison. It’s still poison! That’s why you had to dilute it.” 

“I’m of the radical opinion that the poison means something is wrong to begin with.”

It seems like you’re saying we should neither drink the poison of pure utilitarianism nor try to fool ourselves into thinking it’s okay to drink it because we have diluted it. Yet here, at the end of the article, you sound like a Dr Bronner’s bottle commanding “dilute dilute dilute!”: 

“So here’s my official specific and constructive suggestion for the effective altruism movement: … keep diluting the poison. Dilute it all the way down, until everyone is basically just drinking water. You’re already on the right path. …

“I really do think that by continuing down the path of dilution, even by accelerating it, the movement will do a lot of practical good over the next couple decades as it draws in more and more people who find its moral principles easier and easier to swallow…

“What I’m saying is that, in terms of flavor, a little utilitarianism goes a long ways. And my suggestion is that effective altruists should dilute, dilute, dilute—dilute until everyone everywhere can drink.”

If “everyone everywhere” is drinking diluted utilitarianism at the end, doesn’t that include you? Are you saying you’re not an EA because EAs haven’t diluted utilitarianism quite enough yet, but eventually you think they’ll get there? Doesn’t this contradict what you say earlier about poison being poison no matter how diluted? You seem to begin the essay in fundamental opposition to EA, and you conclude by explicitly endorsing what you claim is the EA status quo: “Basically, just keep doing the cool shit you’ve been doing”!

I assume there are ways to rephrase the last section of your article so you’re not coming across as contradicting yourself, but as it’s written now, your advice to EA seems identical to your critique of EA.


What made you shy away from suggesting EA shift from utilitarianism to a different form of consequentialism or to a different moral (or even non-moral) framework entirely? It can't be that you think all the utilitarian EAs would have ignored you if you did that, because you say in the article that you know you’ll be ignored and lose this contest because your critique of EA is too fundamental and crushing for EAs to be able to face. Do you think diluted utilitarianism is the best basis for an altruistic movement? It certainly doesn't seem so in the beginning, but that's all I am able to get out of the ending. 

This post leaves some dots unconnected. 

Are you suggesting that people pretend to have beliefs they don't have in order to have a good career and also shift the Republican party from the inside? 

Are you suggesting that anyone can be a Republican as long as they have a couple of beliefs or values that are not totally at odds with those of the Republican party — even if the majority of their beliefs and values are far more aligned with another party? 

Or by telling people to join the Republican party, are you suggesting they actively change some of their beliefs or stances in order to fit in, but then focus on shaping the party to be aligned with EA values that it is currently kind of neutral about?

It doesn't seem you're saying the first thing, because you don't say anything about hiding one's true beliefs, and you have the example of the openly left-wing acquaintance who got a job at a conservative NGO. 

If you're saying the second thing, I think this is more difficult than you're imagining. I don't mean emotionally difficult because of cold uggies. I mean strategically or practically difficult, because participation in political parties is generally constrained by preexisting beliefs. If you are going to join a party and work up a career ladder in that party, you can't do this without interacting with other people in that party. And those people are going to want to talk to you about your political beliefs. If they find out your political beliefs are mostly or totally unaligned with the Republican party, but you have these other interests (like AI safety) that are for now maybe party-neutral — and that really you're joining the party because it is more desperate for young people and/or because you want to steer the party away from its current direction — you're going to have trouble being taken seriously as a Republican, and you could be treated as a hostile invader. That could make it hard to achieve your goals in joining. The example of your acquaintance suggests this may not be impossible, but you haven't said what she has done within the NGO. Is she changing the NGO to better fit her values, or does she now have to ignore her own values to keep the job? Did the NGO happen to focus on the one area of the Republican party's agenda that she already agreed with?


You appeal to the notion of replaceability to defend joining the Republican party. If your values are aligned with the Democrats, and you become a Democrat and try to get jobs within the Democratic party, then you've taken a spot from someone who would have behaved similarly to you. But if your values and beliefs are aligned with the Democrats and you join the Republican party and get Republican jobs, you've displaced an actual Republican who would have had worse values and done worse things in the job, and by doing this, you can more drastically change the values of the party than you could change the values of a party you already agree with.

This is interesting, but I doubt replaceability works in this case. First, it assumes parties and the jobs within them are zero-sum. This seems wrong. Parties and the number of jobs within them can grow; there is no inevitably limited set of Republican spots, and there can be more of them if more people join the party. So if your values are unaligned with Republicans, and you join the party to block an actual Republican from getting a job and influencing the party, it may turn out that you've blocked no one from anything and have only grown a party that you think is largely a force for bad.

Second, this isn't like earning to give by getting a finance job and donating lots of money under the assumption that the next person who would have gotten that job would not have donated. You don't have to hold, or pretend to hold, a certain set of values to work in finance (though some values would make the job more emotionally difficult than others, and some would make it easier to get along with co-workers). The main thing you have to do is be good at the job. If you donate most of the income from your highly paid job, the people you work with might find it weird, but they probably won't treat you as a hostile invader. In contrast, you do need to have certain beliefs or values to be accepted as a Republican.

So, replaceability doesn't really seem to apply to joining ideological organizations. It doesn't make sense to join an ideological organization whose values you reject just because joining one you actually agree with seems redundant. Again: the jobs aren't zero-sum, and you will not be easily accepted by ideological organizations you disagree with.

Maybe you're thinking that if young people who don't like the Republican party join it nevertheless, their values could drift and become more Republican over time, and so they will eventually fit in while hopefully maintaining their concern for AI safety or whatever EA interests they started with. This avoids the hostile invader problem but not the problem of growing a party they were initially at odds with.

You come across as sympathetic to the Republican party. This makes me think you might be telling people to do the third thing: actively change their beliefs to be more Republican, maybe by hanging around Republicans and letting value drift take over, but still trying to hold on to some core EA ideas that have not been politicized yet. Perhaps you even think the value drift itself would be good.

I think this approach would make the most sense to someone who is on the fence between different political ideologies and maybe leans slightly toward the Democrats but doesn't think Republicans are horrible. Maybe a lot of libertarians would qualify, and it would make sense to tell all this to them. But you've claimed that any young EA who isn't already on a career track incompatible with the Republican party should join the Republican party. This is unrealistic. Most people who dislike the Republican party are not going to want to risk the harm that a future version of themselves could do if they start agreeing with Republicans on a lot of things and help to grow the party. This is not because of cold uggies, but because they might worry that taking your advice could lead them to make the world worse.

The original argument you're reacting to is flawed, and the flaw carries over into your counterargument. To make both arguments clearer, we need to know the significance of an embryo's being human, why this matters to utilitarians, and what sort of utilitarians you mean. Does an embryo's being human mean it has the same moral status as an adult human? Does it mean it has the same interest in continued living that an adult human has? Does it mean it is harmed by death, and if so, does this harm leave it worse off than if it had never been conceived at all?

And what type of utilitarians do you have in mind? Total hedonic utilitarians presumably wouldn't be that worried about miscarriages for the sake of the embryos themselves even if the embryos were human, had the same moral status as adult humans, and had an interest in continued existence. That's because embryos are usually relatively replaceable and their deaths are usually less traumatic for others than the deaths of those who are already born.

As for the harm of death: total utilitarians care about the intrinsic value of outcomes, and most philosophers of death don't think of death as an intrinsic harm. This means that for a total utilitarian, if someone pops into existence, has a good fleeting moment of existence, and then dies painlessly without anything else being affected, the only thing that counts is the good moment. We don't count the death as something bad, or as worse than the being never existing in the first place. It would be better if the being lived longer and had more good experiences, but it's not worth preventing an existence just to prevent a death.
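
To put that in a formula (this formalization is mine, not anything from your post): for a total hedonic utilitarian, the value of an outcome $O$ is just the sum of the wellbeing of the moments it contains,

$$V(O) = \sum_i w_i,$$

with no separate negative term for a death. An earlier death makes an outcome worse only by removing positive $w_i$ terms from the sum; it never pushes the total below the zero baseline of never existing.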

Total hedonic utilitarians care about saving lives because that seems like an effective way to increase good. Stopping miscarriages does not seem like an effective way to do that.

But maybe you have other sorts of utilitarians in mind. Some utilitarians with certain person-affecting views, or who think of death as an intrinsic harm, might be more worried about this. But even then, the moral status of embryos, and whether embryos have an interest in continued existence, would matter to them. I would expect them to think an interest in continued existence would require at least sentience, or a greater conscious awareness than we expect embryos to have.

Answer by Rhyss · Nov 24, 2020

Balliol tends to have a lot of philosophy graduate students, and Wadham is considered one of the most left-wing colleges. Looking at the list of current Oxford philosophy graduate students, I noticed there are a lot at St Anne's right now as well. But this can change from year to year, and being a philosophy student obviously doesn't make someone an EA. I would be surprised if any college reliably had a higher number of EAs.

AlasdairGives' suggestion to consider funding options makes sense, though you should also keep in mind that the wealthiest colleges get the most applications, so if you apply to St John's, there's more of a risk they won't pick you, and then there's more randomness in the college you end up at. 

I had a similar question myself. It seems like believing in a "long reflection" period requires denying that there will be a human-aligned AGI. My understanding would have been that once a human-aligned AGI is developed, there would not be much need for human reflection—and whatever human reflection did take place could be accelerated through interactions with the superintelligence, and would therefore not be "long." I would have thought, then, that most of the reflection on our values would need to have been completed before the creation of an AGI. From what I've read of The Precipice, there is no explanation for how a long reflection is compatible with the creation of a human-aligned AGI.

A lot of good ideas here!

I'm interested in how Demeny voting is expected to work psychologically. I would expect just about everyone who is given a second vote (which they are told to submit on behalf of future generations) to use that second vote as a second vote for whatever their first vote was for. I imagine they would either think their first vote was for the best policy/person, in which case they could convince themselves that's best for future generations too, or they would realize their first vote is only good for the short term, but they would double vote for short-termism anyway because that's what matters to them. Either way, I wouldn't expect people to vote in some way they saw as directly good for themselves and then cast a second, contradictory vote that they saw as possibly neutral or bad for themselves but good for future people.

The system could be set up so they cannot vote for the same thing twice, but then I would expect most people not to use their second vote, unless there were at least two options they happened to like. (To prevent this, it could be set up so that everyone voting was required to vote twice, for two different options, but then people might be more likely to just stay home when there aren't two options they like. Maybe there could be fines for not voting, but fining people for declining to vote for a candidate they don't like might lead to resentment.) Still, requiring people to vote for more than one option could be interesting as a version of approval voting in which everyone has two totally separate votes for different candidates.
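
Purely to make these variants concrete, here is a minimal sketch in Python of the three ballot rules being compared. Everything here (the option names, the `validate_ballot` helper, the rule labels) is hypothetical illustration, not anything from your post:

```python
# Hypothetical sketch of Demeny-style double voting under three possible ballot rules.

OPTIONS = {"A", "B", "C"}  # hypothetical candidates

def validate_ballot(first_vote, second_vote, rule):
    """Return True if a (first_vote, second_vote) ballot is valid under the given rule."""
    if first_vote not in OPTIONS:
        return False
    if rule == "free":
        # The second (future-generations) vote is unconstrained: double voting is allowed.
        return second_vote is None or second_vote in OPTIONS
    if rule == "distinct":
        # The second vote is optional, but if cast it must differ from the first.
        return second_vote is None or (second_vote in OPTIONS and second_vote != first_vote)
    if rule == "mandatory":
        # Both votes are required, for two different options (the approval-voting-like variant).
        return second_vote in OPTIONS and second_vote != first_vote
    raise ValueError(f"unknown rule: {rule}")

print(validate_ballot("A", "A", "free"))       # True: duplicating your first vote is allowed
print(validate_ballot("A", "A", "distinct"))   # False: the second vote must differ
print(validate_ballot("A", None, "mandatory")) # False: abstaining on the second vote is disallowed
```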

If double voting were allowed, I would expect most people to do exactly that. But framing the second vote as being cast on behalf of future generations could at least get people thinking more about the long term, which might change which candidate some people double vote for. Is this sort of indirect effect what you'd be going for with this?

I haven't read the Srinivasan, Gray, and Nussbaum critiques. However, I did read the Krishna critique, and that one uses another rhetorical technique (aside from the sneering dismissal McMahan mentions) to watch out for in critiques of effective altruism. The technique is for the critic of EA to write in as beautiful, literary, and nuanced a way as possible, in part to subtly frame the critic as a much more fully developed, artistic, and mature human than the (implied) shallow utilitarian robots who devote their lives to doing a lot of good.

Effective altruism can then be rejected, not on the basis of logic or anything like that (in fact, caring too much about that kind of logic would be evidence of your lack of humanity), but on the grounds that rejecting EA goes along with being nuanced, sophisticated, socially wise, and truly human.
