Comment author: Halstead 02 February 2018 06:37:52PM *  2 points [-]

Genetics might be a constraint on ultra-fragility. If all of the most practically important traits are highly heritable, then one wouldn't expect the contingency of conception to produce as much variation in outcomes as it would in a state of affairs where the contingency of conception had a very large effect on individual traits. While it is true that which individual is born is a highly contingent matter, the traits of the individual produced might not be. If my parents had an argument on the blessed night of my conception but overcame their disagreement the next day, then there would be some reason to think that a one-day-younger version of me would be writing this comment.

Chaos also doesn't seem inimical to attempts to rationally steer the future. Notwithstanding the fact that the climate system is chaotic, pumping lots of greenhouse gases into it looks like a bad idea in expectation.

Comment author: Gregory_Lewis 02 February 2018 06:48:12PM 2 points [-]

That seems surprising to me, given the natural model for the counterpart in the case you describe would be a sibling, and observed behaviour between sibs is pretty divergent. I grant your counterfactual sibling would be more likely than a random member of the population to be writing something similar to the parent comment, but the absolute likelihood remains very low.

Traits like intelligence and personality are also only moderately heritable, so they would still look pretty variable between counterparts. Not least, there's about a 0.5 chance your counterpart would be the opposite sex to you.

I agree that even if history is chaotic in some respects, it is not chaotic with respect to everything, and there can be forcing interventions (one can grab a double pendulum, etc.), yet the effects of less overwhelming interventions may be pretty hard to fathom in the chaotic case ('It's too early to say whether the French Revolution was good or bad', etc.).

How fragile was history?

Elsewhere (and better): 1, 2. If one could go back in time and make a small difference in the past, would one expect it to effect dramatic changes to the future? Questions like these are fertile soil for fiction writers (generally writing under speculative or alternative history) but... Read More
Comment author: Jan_Kulveit 26 December 2017 11:27:43PM *  3 points [-]

Obviously the toy model is wrong as a description of reality: it's one end of the possible spectrum, where you have complete randomness. On the other end you have another toy model: results in a field neatly ordered by cognitive difficulty, with the best person at any given time picking all the available fruit. My actual claims are, roughly:

  • reality is somewhere in between

  • it is field-dependent

  • even in fields more toward the random end, there actually would be differences like different speeds of travel among prospectors

It is quite unclear to me where on this scale the relevant fields are.

I believe your conclusion, that the power-law distribution is all due to the properties of people's cognitive processes, and not to the randomness of the field, is not supported by the scientometric data for many research fields.

Thanks for a good preemptive answer :) Yes, if you are good enough at identifying the "golden" cognitive processes. While it is clear you would be better than random chance, it is very unclear to me how good you would be. *

I think it's worth digging into an example in detail: if you look at early Einstein, you actually see someone with unusually developed geometric thinking and the very lucky heuristic of interpreting what the equations say as the actual reality. Famously, the special relativity transformations were first written down by Poincare. "All" that needed to be done was to take them seriously. General relativity is a different story, but at that point Einstein was already famous and possibly one of the few brave enough to attack the problem.

Continuing with the same example, I am extremely doubtful that Einstein, before he became famous, would have been picked by a selection process similar to what CEA or 80,000 Hours will probably be running. 2nd-grade patent clerk? Unimpressive. Well connected? No. Unusual geometric imagination? I'm not aware of any LessWrong sequence which would lead to picking this out as that important :) Lucky heuristic? Pure gold, in hindsight.

(*) In the end you can take this as an optimization problem, depending on how good your superior-cognitive-process selection ability is. Let's have a practical example: you have 1000 applicants. If your selection ability is great enough, you should take 20 for individual support. But maybe it's just good, and then you may get better expected utility if you are able to reach 100 potentially great people in workshops. Maybe you are much better than chance, but not really good... then maybe you should create an online course taking in 400 participants.
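
A minimal sketch of this trade-off (purely illustrative: the base rate of "potentially great" applicants, the capacities, the per-person boosts, and the accuracy values are all assumed numbers, not anything CEA or 80,000 Hours has published):

```python
# Purely illustrative sketch: expected value of different outreach formats as a
# function of how well you can identify "potentially great" people.
# All numbers below (base rate, capacities, per-person boosts) are assumptions.

def expected_hits(selection_accuracy, capacity, applicants=1000, base_rate=0.02):
    """Expected number of genuinely promising people among those selected.

    selection_accuracy interpolates between picking at random (0.0, so you hit
    promising people at the base rate) and picking perfectly (1.0).
    """
    hit_rate = base_rate + selection_accuracy * (1 - base_rate)
    n_promising = applicants * base_rate
    return min(capacity * hit_rate, n_promising)

# format name: (capacity, assumed per-person boost)
formats = {
    "individual support": (20, 10.0),
    "workshops": (100, 3.0),
    "online course": (400, 1.0),
}

for accuracy in (0.0, 0.15, 0.6):  # roughly: chance, "good", "great" selection
    values = {name: expected_hits(accuracy, cap) * boost
              for name, (cap, boost) in formats.items()}
    best = max(values, key=values.get)
    print(f"selection accuracy {accuracy}: best format is {best}")
```

With these made-up numbers, the best format shifts from the online course, to workshops, to individual support as selection ability improves, which is the shape of the trade-off described above.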

Comment author: Gregory_Lewis 27 December 2017 02:27:20AM 4 points [-]

I share your caution about the difficulty of 'picking high-impact people well'. Besides the risk of over-fitting on anecdata we happen to latch on to, the past may simply prove underpowered for forward prediction: I'm not sure any system could reliably 'pick up' Einstein or Ramanujan, and I wonder how much 'thinking tools' etc. are just epiphenomena of IQ.

That said, fairly boring metrics are fairly predictive. People who do exceptionally well at school tend to do well at university, those who excel at university have a better chance of exceptional professional success, and so on and so forth. SPARC (a program aimed at extraordinarily mathematically able youth) seems a neat example. I accept none of these supply an easy model for 'talent scouting' intra-EA, but they suggest one can do much better than chance.

Optimal selectivity also depends on the size of the boost you give to people, even if they are imperfectly selected. It's plausible this relationship could be convex over the 'one-to-one mentoring to webpage' range, and so you might have to gamble on something intensive even while expecting to fail to identify most or nearly all of the potentially great people.

(Aside: although it is tricky to put human ability on a cardinal scale, the normal-distribution properties of things like working memory suggest cognitive ability (however cashed out) isn't power-law distributed. One explanation of how normally distributed ability could nonetheless drive power-law distributions of output in some fields would be a Matthew effect: being marginally better than competing scientists lets one take the majority of the great new discoveries. This may suggest that more neglected areas, or those where the crucial consideration is whether/when something is discovered rather than who discovers it (compare a malaria vaccine to an AGI), are those where the premium on really exceptional talent is smaller.)
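
A toy simulation of that Matthew-effect story (every parameter here is invented purely to show the mechanism; nothing is fitted to real scientometric data):

```python
# Toy illustration (all parameters invented): normally distributed ability plus
# winner-takes-most competition for discoveries can produce a heavy-tailed
# distribution of credited output.
import random

random.seed(0)
n_scientists = 1000
ability = [random.gauss(0, 1) for _ in range(n_scientists)]   # normal, not power-law
credit = [0.0] * n_scientists

for _ in range(10_000):                                   # discoveries up for grabs
    contenders = random.sample(range(n_scientists), 20)   # who happens to be racing
    # the marginally best contender (plus a little luck) takes all the credit
    winner = max(contenders, key=lambda i: ability[i] + random.gauss(0, 0.5))
    credit[winner] += 1

credit.sort(reverse=True)
top_share = sum(credit[:n_scientists // 100]) / sum(credit)
print(f"Top 1% of scientists end up with {top_share:.0%} of the credit")
```

Even though the underlying ability distribution is normal, most of the credit ends up concentrated in a small fraction of the simulated scientists.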

Comment author: RyanCarey 19 December 2017 08:14:04PM *  9 points [-]

That is an excellent update. The strategic directions broadly make sense to me for all of the teams, and I, like many people, am really happy with the ways CEA has improved over the last year.

One item of feedback on the post: the description of mistakes is a bit long, boring, and over-the-top. Many of these things are not actually very important issues.

One suggestion re the EA Forum revamp: the lesserwrong.com site is looking pretty great these days. My main gripes --- things like the font being slightly small for my preferences --- could be easily fixed with some restyling. Some of their features, like including sequences of archived material, could also be ideal for the EA Forum use case. IDK whether the codebase is good, but recall that the EA Forum was originally created by restyling LessWrong 1, so the notion of stealing that code comes from a healthy tradition! Also, this last part is probably a bit too crazy (and too much work), but one can imagine a case where you post content (and accept comments) from both sites at once.

That aside, it's really appreciated that you guys have taken the forum over this year. And in general, it's great to see all of this progress, so here's to 2018!

Comment author: Gregory_Lewis 20 December 2017 01:03:22AM 3 points [-]

I agree both with Ryan's overall evaluation (this is excellent) and that the 'mistakes' section, although laudable in intent, errs slightly too far in the 'self-flagellatory' direction. Some of the mistakes listed either seem appropriate decisions (e.g. "We prioritized X over Y, so we didn't do as much Y as we'd like"), or are the result of reasonable decisions or calculations ex ante which didn't work out.

I think the main value of publicly recording mistakes is to allow others to learn from them or (if egregious) be the context for a public mea culpa. The line between, "We made our best guess, it turned out wrong, but we're confident we made the right call ex ante" and "Actually, on reflection, we should have acted differently given what we knew at the time" is blurry, as not all decisions can (or should) be taken with laborious care.

Perhaps crudely categorising mistakes into 'major' and 'minor' given their magnitude, how plausibly they could have been averted, etc., and then putting the former in updates like these and linking the latter in an appendix, might be a good way forward.

Comment author: Peter_Hurford  (EA Profile) 18 December 2017 10:01:34PM 3 points [-]

My excuses in order of importance:

1.) While I do think AI as a cause area could be plausibly better than global poverty or animal welfare, I don't think it's so plausibly better that the expected value given my uncertainty dwarfs my current recommendations.

2a.) I think I'm basically okay with the streetlight effect. I think there's a lot of benefit in donating now to support groups that might not be able to expand at all without my donation, which is what the criteria I outlined here accomplish. Given the entire EA community collaborating as a whole, I think there's less need for me to focus tons of time on making sure my donations are as cost-effective as possible, and more just a need to clear a bar of being "better than average". I think my recommendations here accomplish that.

2b.) Insofar as my reasoning in (2a) is some "streetlight effect" bias, I think you could accuse nearly anyone of this, since very few have thoroughly explored every cause area and no one could fully rule out being wrong about a cause area.

3.) There is still more I could donate later. This money is being saved mainly as a hedge to large financial uncertainty in my immediate future, but could also be used as savings to donate later when I learn more.

Comment author: Gregory_Lewis 19 December 2017 12:38:18AM 2 points [-]

[Note: I work on existential risk reduction]

Although I laud posts like the OP, I'm not sure I understand this approach to uncertainty.

I think a lot turns on what you mean by the AI cause area being "plausibly better" than global poverty or animal welfare on EV. The Gretchenfrage seems to be this conditional forecast: "If I spent (let's say) 6 months looking at the AI cause area, would I expect to identify better uses of marginal funding in this cause area than those I find in animal welfare and global poverty?"

If the answer is "plausibly so, but probably not" (either due to a lower 'prima facie' central estimate, or after pricing in regression to the mean etc.), then I understand the work uncertainty is doing here (modulo the usual points about VoI): one can't carefully look at everything, and one has to make some judgments on what cause areas look most promising to investigate on current margins.

Yet if the answer is "Probably, yes", then offering these recommendations simpliciter (i.e. "EA should fully fund this") seems premature to me. The evaluation is valuable, but should be presented with caveats like, "Conditional on thinking global poverty is the best cause area, fund X; conditional on thinking animal welfare is the best cause area, fund Y (but, FWIW, I believe AI is the best cause area, though I don't know what to fund within it)." It would also lean against making one's own donations to X, Y, etc., rather than spending time thinking about it or following the recommendations of someone one trusts to make good picks in the AI cause area.

Comment author: [deleted] 02 December 2017 06:53:29PM *  -2 points [-]

I left most EA Facebook groups and concluded that EA will be an ineffective movement as a whole, because I found basically NONE of the above being done in your organization. Ever. "Being intellectually fair can help people to resolve disagreements, so we have norms against overconfidence and fallacious reasoning." No, you have a norm of extreme overconfidence and fallacious reasoning, in the form of DEMANDS for "arguments by authority", which were the consistent response I encountered. More than half a dozen EA people "explained" to me that they would pay no attention to my claims or work until I went back to university and got a PhD, and others who had only a Bachelors in computer programming wanted to "review" my work in unrelated areas before even accepting an unpaid article for their blog from me. Others responded as if EA was some sort of popularity contest and not an effort to help others altruistically.

As constituted, EA is a practice in the wildly overblown egos of privileged young white males (mostly) who will accomplish very very little. The norm is that they believe they know literally everything and have no interest in hearing ideas that are new to them, at all.

I joined because I have knowledge to share. The "moderators" of the FB group consistently felt my knowledge was of no value and refused to permit my posts to be seen. I have shared my knowledge at three international academic conferences, but it was not deemed worthy of a single FB post on EA. The message was abundantly clear, EA does not want new ideas or knowledge, does not want to see any of their current ideas and assumptions questioned at all. My advice to anyone who wants to "Share knowledge. If you know a lot about an area, help others to learn by writing up what you’ve found" is to find a group where people might have even a slight interest, your efforts to do so at "Effective" altruism will be entirely ineffective.

It is a damned shame, the concept of EA is a good one.

Comment author: Gregory_Lewis 02 December 2017 10:27:14PM *  2 points [-]

For the benefit of readers: The individual who wrote this is almost certainly Carmi Turchick, an (his words) "autodidact independent scholar". He reports he presented works relating to his blog at the Symposium on the Psychology of War and the Association for Politics and the Life Sciences, and presented a poster at the Human Evolution and Behaviour Society.

I take this academic record to be pretty modest for someone who claims to have a novel understanding of how to 'solve war', so it doesn't seem unreasonable for people to screen out claims like this on this heuristic; nor does it imply they take themselves to know literally everything or have no interest in new ideas. Just that the likelihood of good new ideas arising from this reference class is too low for it to be worth indulging them with scarce attention.

Of course, such a screening heuristic means one won't see diamonds in the rough. I can reassure others this is unlikely to be the case here. For my sins I had a look at the Altruism and War work. It is very long, not very well written, and falls into the standard autodidact's trap of taking as startlingly original insights that have already been made elsewhere - in this case, the idea that 'maybe intra-group altruism can drive inter-group conflict' was first ventured by Darwin in the Origin of Species, and there has been considerable research since, usually under the heading of 'parochial altruism'.

When I made these suggestions to Turchick (alongside a recommendation he would be better served trying to work in academia) he offered in reply a vituperative parting shot suggesting I was demonstrably incompetent in the subject of my PhD, that I failed to review his second paper because I plan to steal ideas from it for my own academic career, that I'm an 'egotistical little punk running my mouth', and so on and so forth ad nauseam.

I hope the wider EA movement does not mourn the loss of his contributions too heavily, and I beg forgiveness to whatever extent my interaction with him provoked this state of affairs - which I, of course, gravely and bitterly lament. I hope others take some solace that, as Achilles was spurred by guilt over his role in the death of his friend Patroclus to redouble his efforts against the Trojans, so I redouble my meagre, egotistical, punk-like efforts to in some small part compensate for what Turchick would have provided. I also take further solace that Turchick is not wholly lost to us, and that the shrewd and penetrating criticism he offers may provide some glimmer of hope for our movement to avoid his prognostications, although I fear they are Cassandra-esque in their accuracy.

[I am a moderator for the EA FB group, but moderation decisions regarding any of Turchick's posts were 'before my time'.]

Comment author: BenMillwood  (EA Profile) 26 November 2017 09:46:31AM 2 points [-]

Can't help but feel this thoughtful and comprehensive critique of negative utilitarianism is wasted on being buried deep in the comments of a basically unrelated post :)

Promote to its own article?

Comment author: Gregory_Lewis 27 November 2017 11:26:09AM 2 points [-]

Eh, I think a lot of this requires the context of previous replies, and I'm hesitant to signal boost a reply addressed to a not-that-great proponent of the view being critiqued. I might try and transfigure this into a more standalone piece when time permits, but probably not soon.

Comment author: Gregory_Lewis 25 November 2017 09:25:55AM *  1 point [-]

I also can't help but note that accusations about status are generally a double-edged sword. Maybe what's really going on here is that you're making a bid for status by accusing others of being status-seeking, thus pronouncing judgement and diagnosis on their petty motives, and implying that you (of course!) are above such things.

Grandiosely overconfident and really edgy stuff like "There was not even one altruistic childbirth in all of history" also seems more apt for getting iconoclastic status than 'strategic vagueness' (i.e. not being sure in the face of moral and empirical uncertainty).

Comment author: Andaro 24 November 2017 03:35:41PM -2 points [-]

(This is a long comment. Only the first four paragraphs are in direct response to you. The rest is still true and relevant, but more general. I don't expect a response.)

Childbirth is not an act of self-sacrifice. It never was. There was not even one altruistic childbirth in all of history. It was either involuntary for the female (vast majority) or self-serving (females wanting to have children, to bind a male in commitment, or to get on the good side of the guy who can and will literally burn you alive forever).

I'm not saying there is never any heroism if the hero can harvest the status and material advantages from it. But if they can discreetly omit it and there's no such external reward, motivation in practice does look slim indeed.

Even if you're a statistical outlier, consider the possibility that you'd be saving a large ethical negative, which is a tragic mistake rather than a good thing.

If you personally would be willing to pre-commit, that's at least some form of consent. In contrast, the actual victimization in the future is largely going to be forced on nonconsenting victims. There's a moral difference. It's hard to come up with something even in principle that could justify that.

Not to mention humanity's quantitative track record is utterly horrible. Some improvements have been made, but it's still completely irredeemable overall. Politics is a disgusting, vile shitshow, with top leaders like the POTUS openly glorifying torture-blackmail.

Seriously, I have never seen an x-risk reducer paint a realistic vision of the future, outline its positives without handwaving, stay honest and within the realm of probable outcomes, so that a sane person could look at it and say, "Okay, that really is worth torturing quintillions of nc victims in the worst ways possible."

If they can be bothered to address it at all, you'll find mostly handwaving, e.g. Derek Parfit in his last publication dismissing the concern with one sentence about how "our successors would be able to prevent most human suffering". It's the closest they've got to an actual defense. Ignoring, of course, that torture is on purpose and technology just makes that more effective. Ignoring also that even if suffering becomes relatively rarer, it will still happen frequently, and space colonization implies a mind-boggling increase in the total.

Ignoring also the more fundamental question why even one innocent nc victim should be tortured for the sake of... what, exactly? Pleasure? Human biomass? Monuments? They never really say. It's not like these people are actually rooting for some specific positive thing that they're willing to put their names on, and then actually optimize that thing.

If Peter Singer came out and said he wants x-risk reduced because he expects 10% more pleasure than pain from it and he'll bite all the utilitarian bullets to get there, advocating to spread optimized pleasure minds rather than humans as much as possible and prevent as much pain as possible by any means necessary, I would understand. I would disagree, but it would be an actual, consistent goal.

But in practice, this usually doesn't happen. X-risk reducers use strategic vagueness instead. The reason for that is rather simple: "Yay us" yields social status points in the tribe, and humanity is the current default tribe for most intellectuals of the internet era. So x-risk reduction advocacy is really just intellectualized "yay us" in the internet era. As long as it is not required, bullets will not be bitten and no specific goals will be given. The true optimization function of course is the advocate's own social status.

Comment author: Gregory_Lewis 25 November 2017 09:18:46AM *  5 points [-]

You go badly wrong in concatenating a series of implausible beliefs into a generalized misanthropic conclusion (i.e. the future will suck, people working on x-risk rationalise this away and just want status, etc.).

1) Wildly implausible and ill-motivated axiological trade-off ratios

You suggest making the future vastly bigger may be no great thing even if the ratio of happiness to sadness is actually very high, as the sadness dominates. Yet it is antinatalists/negative utilitarians who are the outliers in how they trade off pleasure versus pain.

FRI offers a '1 week of torture versus 40 years of happiness' trade-off for an individual to motivate the 'care much more about suffering' idea (about 1:2000 by time length). I'd take this, and I guess my indifference point is somewhere between months and years (~1:100-1:10). Claims like "wouldn't even undergo a minute of torture" (so ~1:10^8 if you get 40 years afterwards) look wild:

  • Expressed preferences are otherwise. Most say they're glad to be alive, that their lives are worth living, etc.
  • Virtually everyone's implied preferences are otherwise. I'd be happy to stand in the rain for a few minutes for a back concert, suffer a pinprick to have sex with someone I love, and so on.

In essence, we take ourselves to have direct access to the goodness of happiness and the badness of suffering, and so we trade these off at not-huge ratios. A personal example: one of the (happily, many) joyful experiences of my life was playing games in a swimming pool at a summer camp. Yet I had a very severe muscle cramp (the worst of my life) during the frolicking. The joyful experience (which lasted a few hours) greatly outweighs the minute or so of excruciating pain from the cramp.

I don't propose 'bad muscle cramp' even approaches the depths of suffering humans have experienced - so maybe there's some threshold between pinpricks and 'true' torture where the trade-off ratio should become vast. Others have suffered the torture which you think (effectively) no amount of happiness can outweigh. Michelle Knight was abducted at the age of 21 and beaten, raped, starved, and many horrendous things besides, for eleven years. I quote from her memoir:

I want to bless other people as much [sic] I've been blessed. Whenever I say that, some people seem surprised I see my life as a blessing after all the terrible things I went through. But the blessing is that I made it out alive. I'm still here. Still breathing every day. And I'm able to do something for other people. There is no better blessing than that.

I take it she thinks the happiness has outweighed the suffering in her life, and I suspect she would say her life has been on balance good even if she died tomorrow. This roughly implies a trade-off of 1:3. Her view is generally shared by survivors of horrendous evils: the other two women in the Cleveland kidnapping say similar things (ditto other survivors of torture). I hope that, like me, you have much worse access than these people to the depths of how bad suffering can be. Yet they agree with me, not you.

One could offer debunking defeaters for this. Yet the offers tend to be pretty weak ("Because of Buddhist monks and meditation, really all that is good is the tranquil lack of experience" - nah, meditation is great, but I would still want the pool parties too; "Maybe the 'pleasure' you get is just avoiding the (negative) craving" - nah, I often enjoy stuff I didn't crave beforehand). Insofar as they're more plausible (e.g. maybe evolution would make us desire to maintain a net-bad life), they're also reversible: as Shulman notes, it's much worse for our fitness to get killed than it is good for our fitness to have sex, and so we're biased into thinking the suffering can go lower than happiness can go higher.

The challenge is this:

  • Ultra-high trade-offs between bad experiences like torture and happy bits of life are a (marked) minority position across the general population. Epistemic modesty implies deference.
  • When one looks at putative expert classes (e.g. philosophers, 'elite common sense', the 'EA cognoscenti') this fraction does not dramatically increase.
  • Indeed, for some expert classes the update should perhaps be that common sense leans too negative: my impression is that being tortured for 11 years would make my life of (expectedly) around 80 years not worth living, but people who have been tortured for 11 years say otherwise; my impression is that life with locked-in syndrome is hellish and better not lived, yet those with locked-in syndrome generally report good quality of life.
  • An undercutting defeater that would transform this, so that antinatalists (or whoever) really are the expert class, cannot be found. Especially as one could throw in debunking explanations against them too: depression seems to predispose one to negative-leaning views, and a cardinal feature of depression is anhedonia - so maybe folks with high trade-off ratios just aren't able to appreciate the magnitude of a happy experience for a typical person.

2) Most life isn't wrongful, and is worth the risk in expectation

Despite the above, it would overreach to say that everyone has a life worth celebrating no matter what happens to them. Although most quadriplegics report a life worth living, some on reflection opt for euthanasia.

Yet preventing such cases should not be lexically prior to any other consideration: we should be willing to gamble utopia against extinction even at a 1/TREE(9) chance of a single terrible life. As above, I (and basically everyone else) take our futures to be worth living for on selfish grounds, even though it must be conceded there's some finite chance of our lives becoming truly horrendous.

Given that most people seem to have lives worth living (as they tell us), the chance that a typical person who is born will have a life worth living looks very good indeed. If I had a guardian angel advocating solely for my welfare, they should choose for me to exist, even if they only had vague reference-class steers (e.g. "He'll be born into a middle-classish life in the UK; he'll be born to someone, somewhere, in 1989; etc.").

Statistical outliers say life, even in the historically propitious circumstances of the affluent west, is not good for them. Their guardian angels shouldn't actualize them. Yet uncertainty over this, given the low base rates of it being the case, doesn't give them a right of veto over the innumerable multitudes who could rejoice in an actual future. Some technologically mature Eschaton grants (among other things) assurance that we only bring into existence beings who would want to exist.

3) Things are getting better, and the future should be good

Humanity's quantitative track record is obviously upward (e.g. life expectancy, child mortality, disease rates, DALY rates, etc.).

Qualitatively, it looks like things are getting better too. Whatever reprehensible things Trump has said about torture would look anodyne from the perspective of the 16th century, when it was routine to torture criminals, dissidents, etc. Quantitatively, one's risk of ending up a victim of torture has surely fallen over the millennia (consider the astonishingly high rates of murder in pre-technological human groups - one suspects non-death harms were also much more prevalent). We also no longer take burning cats alive to be wholesome fun.

There remain moral catastrophes in the periphery of our moral vision (wild animal suffering), and I would be unsurprised if the future saw more we've overlooked. Not going extinct grants us more time to make amends, and to capture all the goods we could glean from the cosmic endowment whilst avoiding terrible scenarios. Limiting x-risk, in essence, is a convergent instrumental goal for mature moral action in the universe.

4) Universal overconfidence

I am chary of claiming knowledge of what the universe should morally best be optimised for (you could do with similar circumspection: there have been ~10^11 childbirths in human history; do you really think your account makes it plausible that not one was motivated by altruism?). Yet this knowledge is unnecessary - one can pass this challenge on to descendants much better situated than us to figure it out.

What is required is reason to think the option value of a vast future is worth preserving. It seems so: if it turns out that the only thing that makes things good is happiness, we can tile the universe in computronium and simulate ecstasy (which should give a ratio of pleasure to pain over the universe's history not '10% higher', but more like 10^10:1, even with extreme trade-off ratios). If there are other items on an objective list (or just uncertainty about what to value), one can divvy up the cosmic endowment accordingly. If our descendants realise you were right all along, they can turn the whole thing off - or, perhaps better, use the cosmic endowment as barter in acausal trade with other universes to reduce the suffering in those. Even some naïve sci-fi scenario of humans like us jumping on spaceships and jetting around the cosmos looks good to me.

Cosmic hellscapes are also possible - but their probability falls in step with our moral development. The 'don't care about x-risk' view requires both that humans would fashion some cosmic hellscape, and that they couldn't fix it later (I'd take an existence lottery with 10^18 torture tickets and 10^35 wonderful-life tickets - my life seems pretty great despite a greater than 1-in-100-quadrillion chance of torture). Sufficient confidence in both of these to make x-risk not a big deal looks gravely misplaced.

Comment author: Gregory_Lewis 09 November 2017 01:22:42AM *  8 points [-]

I am wiser, albeit poorer: the bet resolved in Carl's favour. I will edit this comment with the donation destination he selects, with further lamentations from me in due course.

Comment author: Gregory_Lewis 22 November 2017 08:10:02PM 6 points [-]

Carl has gotten back to me with where he would like to donate his gains, ill-gotten through picking on epistemic inferiors - akin to crocodiles in the Serengeti river picking off particularly frail or inept wildebeest on their crossing. The $1000 will go to MIRI.

With cognitive function mildly superior to the median geriatric wildebeest, I can take some solace that these circumstances imply this sum is better donated by him than I, and that MIRI is doing better on a crucial problem for the far future than I had supposed.
