Comment author: Andaro 08 July 2018 10:36:24PM 0 points [-]

I would beware the political backlash and retaliation costs from #2. What you are classifying as "ethical flaws" is actually a matter of agenda.

In a representative democracy, government spending is supposed to be allocated according to the best interests of taxpayers, voters, and citizens. Of course, those are human beings living in the present, with citizenship in the respective country. Trying to game the system so that it starts allocating those resources differently is not fixing an ethical flaw; it's a shift in agenda that does not match the principle of representation.

You may not care about that, but you should care about the political and social backlash EA will deservedly get if it undermines our best interests as voters, taxpayers and citizens of the countries you are trying to co-opt.

Comment author: Andaro 03 July 2018 01:42:50AM 1 point [-]

Read free stories online. The biggest cost is the effort to find the best 10% among all existing free stories. But those are very much worth reading and you can spend countless hours quite entertained, effectively free of cost.

Comment author: Andaro 11 May 2018 10:09:17AM *  -6 points [-]

Value drift is not necessarily a bad thing.

If it gets people away from cultish movements with morally questionable ideologies, value drift is a good thing.

If you're a college kid who drinks the Kool-Aid and then outgrows it over time, all the more power to that future self.

Grounding your spending in your own wellbeing has high information value; the purchasing power allocated to your own preferences gives tangible feedback inside your own brain - you know what brings you utility and what doesn't, which purchases you like and which ones you dislike.

Compare this with giving money to strangers who merely promise to make the world a better place based on lots of highly questionable empirical assumptions and even more questionable moral axioms. Surely you can see the difference in information value.

Frankly, I am shocked that there are people who give 50% of their income away to Effective Altruism; the social dynamics and moral uncertainties surrounding the Effective Altruism movement don't even remotely justify such a speculative investment.

Comment author: Gregory_Lewis 25 November 2017 09:18:46AM *  5 points [-]

You go badly wrong in concatenating implausible beliefs into a generalized misanthropic conclusion (i.e. the future will suck, people working on x-risk rationalise this away and just want status, etc.)

1) Wildly implausible and ill-motivated axiological trade-off ratios

You suggest making the future vastly bigger may be no great thing even if the ratio of happiness to sadness is actually very high, as the sadness dominates. Yet it is antinatalists and negative utilitarians who are the outliers in how they trade off pleasure against pain.

FRI offers a '1 week of torture versus 40 years of happiness' trade-off for an individual to motivate the 'care much more about suffering' idea (about 1:2000 by time length). I'd take this, and I guess my indifference point is somewhere between months and years (~~1:100-1:10). Claims like "wouldn't even undergo a minute of torture" (so ~~1:10^8 if you get 40 years afterwards) look wild (rough arithmetic below the list):

  • Expressed preferences are otherwise. Most say they're glad to be alive, that their lives are worth living, etc.
  • Virtually everyone's implied preferences are otherwise. I'd be happy to stand in the rain for a few minutes for a concert, suffer a pinprick to have sex with someone I love, and so on.
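
(A rough back-of-the-envelope check of these time-length ratios - my own arithmetic, not part of the original argument:

  1 week : 40 years ≈ 1 : (40 × 365.25 / 7) ≈ 1 : 2,100
  1 minute : 40 years ≈ 1 : (40 × 365.25 × 24 × 60) ≈ 1 : 2.1 × 10^7

So the "about 1:2000" figure checks out, while "~~1:10^8" for a single minute reads as a loose order-of-magnitude gesture; the literal time ratio is closer to 1:2 × 10^7.)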

In essence, we take ourselves to have direct access to the goodness of happiness and the badness of suffering, and so we trade these off at not-huge ratios. A personal example: one of the (happily, many) joyful experiences of my life was playing games in a swimming pool at a summer camp. Yet I had a very severe muscle cramp (the worst of my life) during the frolicking. The joyful experience (which lasted a few hours) greatly outweighs the minute or so of excruciating pain from the cramp.

I don't propose 'bad muscle cramp' even approaches the depths of suffering humans have experienced - so maybe there's some threshold between pinpricks and 'true' torture where the trade-off ratio should become vast. Others have suffered the torture which you think (effectively) no amount of happiness can outweigh. Michelle Knight was abducted at the age of 21 and beaten, raped, starved, and subjected to many horrendous things besides, for eleven years. I quote from her memoir:

I want to bless other people as much [sic] I've been blessed. Whenever I say that, some people seem surprised I see my life as a blessing after all the terrible things I went through. But the blessing is that I made it out alive. I'm still here. Still breathing every day. And I'm able to do something for other people. There is no better blessing than that.

I take it she thinks the happiness has outweighed the suffering in her life, and suspect she would say her life has been on balance good even if she died tomorrow. This roughly implies a trade-off of 1:3. Her view is generally shared by survivors of horrendous evils: the other two women in the Cleveland kidnapping say similar things (ditto other survivors of torture). I hope that, like me, you have much worse access to the depths of how bad suffering can be than these people do. Yet they agree with me, not you.

One could offer debunking defeaters for this. Yet the offers tend to be pretty weak ("Because of Buddhist monks and meditation, really all that is good is the tranquil lack of experience" - nah, meditation is great, but I would still want the pool parties too; "Maybe the 'pleasure' you get is just avoiding the (negative) craving" - nah, I often enjoy stuff I didn't crave beforehand). Insofar as they're more plausible (e.g. maybe evolution would make us desire to maintain a net-bad life), they're also reversible: as Shulman notes, it's much worse for our fitness to get killed than it is good for our fitness to have sex, and so we're biased into thinking the suffering can go lower than happiness can go higher.

The challenge is this:

  • Ultra-high trade-offs between bad experiences like torture and happy bits of life are a (marked) minority position across the general population. Epistemic modesty implies deference.
  • When one looks at putative expert classes (e.g. philosophers, 'elite common sense', the 'EA cognoscenti') this fraction does not dramatically increase.
  • Indeed, for some expert classes the update perhaps should be that common sense leans too negative: my impression is that being tortured for 11 years would make my life of (expectedly) around 80 years not worth living, but people who have been tortured for 11 years say otherwise; my impression is that life with locked-in syndrome is hellish and better off not lived, yet those with locked-in syndrome generally report good quality of life.
  • The undercutting defeater that would license treating antinatalists (or whoever) as the true expert class is nowhere to be found. Especially as one could throw in debunking explanations against them too: depression seems to predispose one to negative-leaning views, and a cardinal feature of depression is anhedonia - so maybe folks with high trade-off ratios just aren't able to appreciate the magnitude of a happy experience in a typical person.

2) Most life isn't wrongful, and is expectedly worth the risk

Despite the above, it would overreach to say that everyone has a life worth celebrating no matter what happens to them. Although most quadriplegics report a life worth living, some on reflection opt for euthanasia.

Yet preventing such cases should not be lexically prior to any other consideration: we should be willing to gamble utopia against extinction even at a 1/TREE(9) chance of a single terrible life. Similar to the above, I (and basically everyone else) take our futures to be worth living on selfish grounds, even though it must be conceded there's some finite chance of our lives becoming truly horrendous.

Given that most people seem to have lives worth living (as they tell us), the chances that a typical person who is born will have a life worth living are very good indeed. If I had a guardian angel advocating solely for my welfare, they should choose for me to exist, even if they only had vague reference-class steers (e.g. "He'll be born into a middle-classish life in the UK; he'll be born to someone, somewhere, in 1989; etc.")

Statistical outliers say life, even in the historically propitious circumstances of the affluent west, is not good for them. Their guardian angels shouldn't actualize them. Yet uncertainty over this, given the low base rates of this being the case, doesn't give them a right of veto over the innumerable multitudes who could rejoice in an actual future. Some technologically mature Eschaton grants (among other things) assurance that we only bring into existence beings who would want to exist.

3) Things are getting better, and the future should be good

Humanity's quantitative track record is obviously upward (e.g. life expectancy, child mortality, disease rates, DALY rates, etc.).

Qualitatively, it looks like things are getting better too. Whatever reprehensible things Trump has said about torture would look anodyne from the perspective of the 16th century, when it was routine to torture criminals, dissidents, etc. Quantitatively, one's risk of ending up a victim of torture has surely fallen over the millennia (consider the astonishingly high rates of murder in pre-technological human groups - one suspects non-lethal harms were also much more prevalent). We also no longer take burning cats alive as wholesome fun.

There remain moral catastrophes in the periphery of our moral vision (wild animal suffering), and I would be unsurprised if the future reveals more we've overlooked. Not going extinct grants us more time to make amends, and to capture all the goods we could glean from the cosmic endowment whilst avoiding terrible scenarios. Limiting x-risk, in essence, is a convergent instrumental goal for mature moral action in the universe.

4) Universal overconfidence

I am chary to claim knowledge of what the universe should morally best be optimised for (you could do with similar circumspection: there have been ~10^11 childbirths in human history - do you really think your account makes it plausible that not one was motivated by altruism?) Yet this knowledge is unnecessary - one can pass this challenge on to descendants much better situated than us to figure it out.

What is required is reason to think the option value of a vast future is worth preserving. It seems so: if it turns out that the only thing that makes things good is happiness, we can tile the universe in computronium and simulate ecstasy (which should give a ratio of pleasure to pain over the universe's history not '10% higher' but more like 10^10:1, even with extreme trade-off ratios). If there are other items on an objective list (or just uncertainty about what to value), one can divvy up the cosmic endowment accordingly. If our descendants realise you were right all along, they can turn the whole thing off - or, perhaps better, use the cosmic endowment as barter in acausal trade with other universes to reduce the suffering in those. Even some naïve sci-fi scenario of humans like us jumping on spaceships and jetting around the cosmos looks good to me.

Cosmic hellscapes are also possible - but their probability falls in step with our moral development. The 'don't care about x-risk' view requires both that humans would fashion some cosmic hellscape, and that they couldn't fix it later (I'd take an existence lottery with 10^18 torture tickets and 10^35 wonderful-life tickets - my life seems pretty great despite a greater-than-1-in-100-quadrillion chance of torture). Sufficient confidence in both of these to make x-risk not a big deal looks gravely misplaced.

Comment author: Andaro 28 November 2017 12:00:25PM *  -1 points [-]

Yet preventing such cases should not be lexically prior to any other consideration: we should be willing to gamble utopia against extinction even at a 1/TREE(9) chance of a single terrible life.

I disagree; it is lexically, deontologically more important not to create an innocent rape or nc torture victim than to cause any amount of happiness or utopian gain for others. Also, the number is absurd: terrible lives in the millions are a statistical inevitability even just on Earth within each generation. Just look at the attempted suicide rates.

Statistical outliers say life, even in the historically propitious circumstances of the affluent west, is not good for them. Their guardian angels shouldn't actualize them. Yet uncertainty over this, given the low base rates of this being the case, doesn't give them a right of veto over the innumerable multitudes who could rejoice in an actual future.

I disagree; the right not to be tortured or raped without one's consent is lexically more morally important than the interest of others to rejoice in a good future. Rape doesn't become moral even if enough spectators enjoy the rape video; nc torture doesn't become moral even if enough others rejoice in the knowledge of the torture. Victimizing nc innocents in this way is not morally redeemable by the creation of utopias populated by lucky others. There is no knowledge that our descendants could discover that would change this.

I often read rape and torture scenes in fiction - you could also watch Game of Thrones for the same effect - and while I enjoy the reading, I am often horrified by the thought that equivalents are real. If you want a good example, read this. (content warning: rape and torture, obviously). Now, I love these stories as much as the next guy, but they also make me reflect: if I could choose to create a universe where this happens once and intergalactic utopias filled with happy life also exist, or a universe that is empty, I would choose the universe that is empty. And I think it's utterly morally absurd to choose otherwise. It's churched-up evil.

Of course, you don't have to look for fiction, just remember that actual nc child torture is still legal in the US, the UK, and France, among other countries. Or read the piece about North Korea on this forum. Humanity has no redeeming qualities that could morally justify the physical reality of these systems. It never will.

Similar to the above, I (and basically everyone else) take our futures to be worth living on selfish grounds

I don't. Plus, for those who see it your way, it's consensual (though not necessarily rational). Those who disagree are of course victimized by the anti-suicide religionists and their anti-choice laws. It's not like people have an actual right to exit from this shitshow.

Humanity's quantitative track record is obviously upward (e.g. life expectancy, child mortality, disease rates, DALY rates, etc.).

This can turn around as per-capita incomes fall, which inevitably happens in a Malthusian scenario. And Malthusian scenarios are not low-probability outliers, but expected with high (mainstream) probability, because any fast-reproduction technology without global centralized suppression predicts a near-inevitable Malthusian outcome (any fast-reproduction tech, not just ems).

Moral progress is not a robust law of nature; it could be contingent on other factors that can turn around, or it could simply be a random walk with reversion to the mean to be expected, combined with distortions of perception (any generation will consider its values superior to those of prior generations and therefore see moral progress, no matter what directions the values actually took or why).

If it turns out that the only thing that makes things good is happiness, we can tile the universe in computronium and simulate ecstasy (which should give a ratio of pleasure to pain over the universe's history not '10% higher' but more like 10^10:1, even with extreme trade-off ratios).

Several problems here. (1) The numbers are absurdly overoptimistic: you assume lots of hedonium with near-zero torture. Hedonium doesn't carry its own economic weight, and the future will likely be dominated by Malthusian replicators who are optimized not for ecstasy but for competitive success in replication.

(2) You assume our descendants will be rational moral beings who implement our idealized moral values (far mode), when in reality they will almost certainly be constrained by intense competitive pressures and implement selfish incentives (near mode); they would be just as likely to use victimization as a means to an end as current people are to eat factory-farmed meat. Indeed, value drift makes it even more likely that they won't share our already-meager humane values, e.g. their altered psychology may have had empathy and justice instincts optimized out completely.

Maybe what's really going on here is you're making a bid for status by accusing others of being status seeking

Hahahaha. I'm at -12 karma because I wrote what I think instead of what people here want to hear. And I knew well in advance that this would happen. If I wanted status, I'd join a group in person and give lip service to the community dogma. Probably the Catholic Church; then I could sing hallelujah all day long and scoff at those filthy atheists while covertly grooming young girls for sexual use. And you know what, I'd probably be happier that way. Problem is, I'm not a good enough liar, and I despise gullible people far too much to play the pretend game.

Comment author: Andaro 27 November 2017 06:02:57PM -4 points [-]

It is great to see that some people in EA recognize that the greater evils of this world are disproportionately worth damaging. North Korea is one of the greatest evils on Earth, and any political force that will increase the probability of an eliminatory war with North Korea is morally worthy of considerable support.

"Effective Altruism" is continuously overestimating the value of unconditional niceness and underestimating the altruistic value of systematically damaging victimizers.

Comment author: BenMillwood  (EA Profile) 26 November 2017 09:46:31AM 2 points [-]

Can't help but feel this thoughtful and comprehensive critique of negative utilitarianism is wasted on being buried deep in the comments of a basically unrelated post :)

Promote to its own article?

Comment author: Andaro 27 November 2017 05:48:24PM -4 points [-]

critique of negative utilitarianism

Except I never argued for Negative Utilitarianism. Misrepresenting the arguments I made as such is a complete strawman.

For example, I don't believe there's a moral reason to prevent people who want pain, and consent to it, from having pain.

Neither do I believe that there's a moral reason to prevent suffering for the guilty who have forced it on nonconsenting innocents. You, for example, have actively worked to cause it for a very large number of innocent nc victims, and therefore I do not believe there is a moral reason to prevent your suffering or victimization, even if it is nc.

It appears I was downvoted to -10 karma by people who didn't even read my posts.

Comment author: kbog  (EA Profile) 20 November 2017 07:04:07AM *  2 points [-]

You can't make a thread saying sexual violence is bad because of suicide, and then not allow people to discuss the consent principle as it pertains to suicide.

If you use "lives saved" numbers that imply involuntary survival is good, then you will get commenters pointing out that this violates the consent principle.

Well that is just a terrible argument, because no one's consent is being violated when we prevent their lives from being bad enough that they want to commit suicide.

and that x-risk reduction efforts imply actively causing a future that contains astronomical amounts of additional rape.

That's not really new. Having more population implies having more of... everything.

This is both true and relevant, even if it goes against the usual euphemistic framing and may therefore sound counterintuitive to you

Look dude, if you want to go around saying "we should let the planet go extinct so that wildlife doesn't endure the tragedy of existence" then the onus of justifying things that sound counterintuitive on their face is on you.

Comment author: Andaro 24 November 2017 03:48:03PM -3 points [-]

Nice exercise in goalpost-moving, kbog.

Look dude

Errrr, no.

Comment author: Gregory_Lewis 18 November 2017 05:15:40PM 4 points [-]

Why not? It's not like I'm heroically walking into Omelas forevermore. It's one minute. As acts of self-sacrifice go, it's trivial: to take one of many examples, I understand childbirth can be very painful, and it generally lasts longer than a minute.

I also don't see where you're going with the consent thing. If I'm offered the trade-off, I take it; if you add a rider like "you'll forget this conversation ever happened, but I'll randomly swoop in and torture you at some moment or another," I still take it.

Comment author: Andaro 24 November 2017 03:35:41PM -2 points [-]

(This is a long comment. Only the first four paragraphs are in direct response to you. The rest is still true and relevant, but more general. I don't expect a response.)

Childbirth is not an act of self-sacrifice. It never was. There was not even one altruistic childbirth in all of history. It was either involuntary for the female (vast majority) or self-serving (females wanting to have children, to bind a male in commitment, or to get on the good side of the guy who can and will literally burn you alive forever).

I'm not saying there is never any heroism if the hero can harvest the status and material advantages from it. But when the act can be discreetly omitted and there's no such external reward, motivation in practice does look slim indeed.

Even if you're a statistical outlier, consider the possibility that you'd be saving a large ethical negative, which would be a tragic mistake rather than a good thing.

If you personally would be willing to pre-commit, that's at least some form of consent. In contrast, the actual victimization in the future is largely going to be forced on nonconsenting victims. There's a moral difference. It's hard to come up with something even in principle that could justify that.

Not to mention humanity's quantitative track record is utterly horrible. Some improvements have been made, but it's still completely irredeemable overall. Politics is a disgusting, vile shitshow, with top leaders like the POTUS openly glorifying torture-blackmail.

Seriously, I have never seen an x-risk reducer paint a realistic vision of the future, outline its positives without handwaving, stay honest and within the realm of probable outcomes, so that a sane person could look at it and say, "Okay, that really is worth torturing quintillions of nc victims in the worst ways possible."

If they can be bothered to address it at all, you'll find mostly handwaving, e.g. Derek Parfit in his last publication dismissing the concern with one sentence about how "our successors would be able to prevent most human suffering". It's the closest they've got to an actual defense. Ignoring, of course, that torture is inflicted on purpose, and technology just makes it more effective. Ignoring also that even if suffering becomes relatively rarer, it will still happen frequently, and space colonization implies a mind-boggling increase in the total.

Ignoring also the more fundamental question of why even one innocent nc victim should be tortured for the sake of... what, exactly? Pleasure? Human biomass? Monuments? They never really say. It's not like these people are actually rooting for some specific positive thing that they're willing to put their names on, and then actually optimizing for that thing.

If Peter Singer came out and said he wants x-risk reduced because he expects 10% more pleasure than pain from it and he'll bite all the utilitarian bullets to get there, advocating spreading optimized pleasure-minds rather than humans as much as possible and preventing as much pain as possible by any means necessary, I would understand. I would disagree, but it would be an actual, consistent goal.

But in practice, this usually doesn't happen. X-risk reducers use strategic vagueness instead. The reason is rather simple: "Yay us" yields social status points in the tribe, and humanity is the current default tribe for most intellectuals of the internet era. So x-risk reduction advocacy is really just intellectualized "yay us". As long as it is not required, bullets will not be bitten and no specific goals will be given. The true optimization function, of course, is the advocate's own social status.

Comment author: kbog  (EA Profile) 17 November 2017 01:53:50PM 4 points [-]

You're being downvoted because you're using a thread about sexual violence as a platform for pushing your POV on an entirely different subject.

Comment author: Andaro 18 November 2017 01:00:20PM *  0 points [-]

That's incorrect.

You can't make a thread saying sexual violence is bad because of suicide, and then not allow people to discuss the consent principle as it pertains to suicide.

If you use "lives saved" numbers that imply involuntary survival is good, then you will get commenters pointing out that this violates the consent principle. You are not immune to criticism.

Don't want to discuss suicide? Then don't bring it up.

The other points crossed some inferential distance, but were both relevant and correct. It really is true that most rape currently happens among nonhuman animals, and that x-risk reduction efforts imply actively causing a future that contains astronomical amounts of additional rape. This is both true and relevant, even if it goes against the usual euphemistic framing and may therefore sound counterintuitive to you.

Comment author: Gregory_Lewis 17 November 2017 08:04:59PM 10 points [-]

Do you think any of the x-risk reduction advocates would voluntarily go through even one minute of personal torture if it were necessary to prevent civilization from collapsing by 2100?

I would do so gladly.

Comment author: Andaro 18 November 2017 12:50:16PM 0 points [-]

This may sound rude, but I don't believe you.

Of course, if you consented, it would be consensual. The actual torture will be nonconsensual.
