Comment author: MichaelStJules 05 April 2018 03:34:00AM *  1 point [-]

allocate funds to the top charities in their cause area, and donate to those charities on a regular basis until the fund manager comes along and updates the allocation

Because of discount rates, wouldn't it then be better to make all of the disbursements for the period between updates right after the update, instead of dragging them out?

Or, since people will continue donating between disbursements, disburse as funding becomes available, but save a chunk (everything received after a certain date) for the next update, since it will be better allocated then.
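
A rough numerical sketch of the discounting point, in Python (my own illustration, not from the comment): the 5% per-period discount rate and the 12-period update cycle are made-up numbers, but any positive discount rate gives the same qualitative answer.

    discount_rate = 0.05          # per-period discount rate (assumed)
    periods_between_updates = 12  # length of an update cycle (assumed)
    total_funds = 1.0

    # Option 1: disburse everything immediately after the allocation update.
    pv_immediate = total_funds

    # Option 2: spread the same funds evenly over the periods until the next update.
    per_period = total_funds / periods_between_updates
    pv_spread = sum(per_period / (1 + discount_rate) ** t
                    for t in range(periods_between_updates))

    print(pv_immediate)  # 1.0
    print(pv_spread)     # about 0.78, so spreading loses value under discounting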

Comment author: MichaelPlant 20 February 2018 09:41:35AM 1 point [-]

Unsure why this was downvoted. I assume it's because many EAs think X-risk is a better bet than aging research. That would be a reason to disagree with a comment, but not to downvote, which is snarky. I upvoted for balance.

Comment author: MichaelStJules 26 February 2018 02:39:17AM 0 points [-]

I'm not sure I'd put it only on X-risk people. My understanding is that disease burden and DALYs are calculated against a reference life expectancy: the highest life expectancy of any country by gender, previously that of Japanese women (now South Korean women?), somewhere between 80 and 90 years. This means that deaths after this reference life expectancy simply don't count towards disease burden at all. I'd hypothesize that this, and some of the downvotes, reflect what I suspect is a common intuition (perhaps not common in EA; I don't know): that everyone ought to have an overall good life with a decent lifespan, i.e. a "fair innings".

This "fair innings" might be part of why EAs are generally more concerned with global health and poverty than anti-aging. Maybe the stronger evidence for specific poverty/health interventions explains this better, though.

Mostly guesses on my part, of course.

Comment author: DonyChristie 10 January 2018 12:18:59PM 6 points [-]

Effective altruism has had three main direct broad causes (global poverty, animal rights, and far future) for quite some time.

The whole concept of EA having specific, recognizable, compartmentalized cause areas and charities associated with it is bankrupt and should be zapped, because it invites stagnation: founder effects entrench further every time a newcomer joins and devotes mindshare to signalling ritual adherence to the narrative of different finite tribal Houses to join, build alliances between, or cannibalize. This crowds out new classes of intervention and eclipses the prerogative to optimize everything as a whole, without all these distinctions. "Oh, I'm an (animal, poverty, AI) person! X-risk aversion!"

"Effective altruism" in itself should be a scaleable cause-neutral methodology de-identified from its extensional recommendations. It should stop reinforcing these arbitrary divisions as though they were somehow sancrosanct. The task is harder when people and organizations ostensibly about advancing that methodology settle into the same buildings and object-level positions, or when charity evaluators do not even strive for cause-neutrality in their consumer offerings. Not saying those can't be net-goods, but the effects on homogenization, centralization, and bias all restrict the purview of Effective Altruism.

I have often heard people worry that it’s too hard for a new cause to be accepted by the effective altruism movement.

Everyone here knows there are new causes and wants to accept them, but they don't know that everyone knows there are new causes, and so on: a common-knowledge problem. They're waiting for chosen ones to update the leaderboard.

If the tribally-approved list were opened it would quickly spiral out of working memory bounds. This is a difficult problem to work with but not impossible. Let's make the list and put it somewhere prominent for salient access.

Anyway, here is an experimental Facebook group explicitly for initial cause proposal and analysis. Join if you're interested in doing these!

Comment author: MichaelStJules 11 January 2018 05:57:23PM 1 point [-]

One thing that's very useful about having separate cause areas is that it helps people decide what to study and research in depth, e.g. what to get a PhD in. This probably doesn't need to be illustrated, but I'll do it anyway:

Consider two fields of study, A and B, such that A has only one promising intervention and B has two, and all three interventions are roughly equal in expectation (or on whatever other measures are important to you). It would then be better to study B, because if one of its two interventions doesn't pan out, you can more easily switch to the other; with A, you might have to move on to a new field entirely. Studying B actually has higher expected value than studying A, despite all three interventions being equal in expectation.
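
A toy calculation of this option-value point, in Python. The numbers are hypothetical: each intervention independently pans out with probability 0.5 and is worth 1 unit if it does, and switching fields is assumed to be too costly to model.

    p = 0.5  # probability a given intervention pans out (hypothetical)
    v = 1.0  # value of working on an intervention that panned out (arbitrary units)

    # Field A: one promising intervention; you get value only if it pans out.
    ev_a = p * v

    # Field B: two promising interventions; you get value if at least one pans out.
    ev_b = (1 - (1 - p) ** 2) * v

    print(ev_a)  # 0.5
    print(ev_b)  # 0.75 - higher, even though each intervention is equal in expectation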

Comment author: MichaelStJules 22 November 2017 01:39:50AM 1 point [-]

Relevant recent systematic review:

Quality Assessment of Economic Evaluations of Suicide and Self-Harm Interventions: A Systematic Review.

http://psycnet.apa.org/record/2017-41357-001

PDF: http://psycnet.apa.org/fulltext/2017-41357-001.pdf

Comment author: Carl_Shulman 28 July 2017 05:52:08PM *  18 points [-]

Thinking from the perspective of a beneficiary, I would rather get $100 than remove a 1/10,000,000 risk of death. That level of risk is in line with traveling a few kilometers by walking, and a small fraction of the risk associated with a day skiing: see the Wikipedia entry on micromorts. We all make such tradeoffs every day, taking on small risks of large harm for high probabilities of smaller benefits that have better expected value.

So behind the veil of ignorance, for a fixed population size, the 'altruistic repugnant conclusion' is actually just what beneficiaries would want for themselves. 'Repugnance' would involve the donor prioritizing their scope-insensitive response over the interests of the beneficiaries.

An article by Barbara Fried makes a very strong case against this sort of anti-aggregationism based on the ubiquity of such tradeoffs.

Comment author: MichaelStJules 03 August 2017 04:40:00AM *  0 points [-]

Thinking from the perspective of a beneficiary, I would rather get $100 than remove a 1/10,000,000 risk of death.

Would you also volunteer to be killed so that 10,000,000 people just like you could have $100 that they could only spend to counterfactually benefit themselves?

I think the probability here matters beyond just its effect on the expected utility, contrary, of course, to EU maximization. I'd take $100 at the cost of an additional 1/10,000,000 risk of eternal torture (or any outcome that is finitely but arbitrarily bad). On the other hand, consider the following 5 worlds:

A. Status quo with 10,000,000 people with finite lives and utilities. This world has finite utility.

B. 9,999,999 people get an extra $100 compared to world A, and the other person is tortured for eternity. This world definitely has a total utility of negative infinity.

C. The 10,000,000 people each decide to take $100 for an independent 1/10,000,000 risk of eternal torture. This world, with probability ~ 1 - 1/e ~ 0.63 (i.e. "probably"), has a total utility of negative infinity.

D. The 10,000,000 people together decide to take $100 each for a single 1/10,000,000 risk that they are all tortured for eternity (i.e. either none of them are tortured, or all of them are tortured together). This world, with probability 9,999,999/10,000,000, has finite utility.

E. Only one out of the 10,000,000 people decides to take $100 for a 1/10,000,000 risk of eternal torture. This world, with probability 9,999,999/10,000,000, has finite utility.

I would say D >> E > A >>>> C >> B, despite the fact that in expected total utility, A >>>> B=C=D=E. If I were convinced this world will be reproduced infinitely many times (or e.g. 10,000,000 times) independently, I'd choose A, consistently with expected utility.
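
The probabilities behind worlds C, D, and E can be checked in a few lines of Python (my check, not part of the original comment):

    n = 10_000_000
    risk = 1 / n

    # World C: each of the n people independently takes a 1/n risk of eternal torture.
    p_any_tortured_c = 1 - (1 - risk) ** n
    print(p_any_tortured_c)  # about 0.632, i.e. roughly 1 - 1/e

    # Worlds D and E: a single 1/n risk, shared by everyone (D) or taken by one person (E).
    p_any_tortured_d_or_e = risk
    print(p_any_tortured_d_or_e)  # 1e-07, so finite utility with probability 9,999,999/10,000,000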

So, when I take $100 for a 1/10,000,000 risk of death, it's not because I'm maximizing expected utility; it's because I don't care about any 1/10,000,000 risk. I'm only going to live once, so I'd have to take that trade (or similar such trades) hundreds of times for it to even start to matter to me. However, I also (probably) wouldn't commit to taking this trade a million times (or a single equivalent trade, with $100,000,000 for a ~0.1 probability of eternal torture; you can adjust the cash for diminishing marginal returns). Similarly, if hundreds of people took the trade (with independent risk), I'd start to be worried, and I'd (probably) want to prevent a million people from doing it.
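
The "~0.1 probability" figure for taking the trade a million times also checks out (again my arithmetic, using the comment's numbers):

    n_trades = 1_000_000
    risk_per_trade = 1 / 10_000_000

    p_bad_outcome = 1 - (1 - risk_per_trade) ** n_trades
    total_cash = 100 * n_trades

    print(p_bad_outcome)  # about 0.095, roughly the 0.1 quoted above
    print(total_cash)     # 100,000,000 dollars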

Comment author: Robert_Wiblin 02 July 2017 10:24:22PM 0 points [-]

Imagine a universe that lasts forever, with zero uncertainty, constant equally good opportunities to turn wealth into utility, and constant high investment returns (say, 20% per time period).

In this scenario you could (mathematically) save your wealth for an infinite number of periods and then donate it, generating infinite utility.

It sounds paradoxical, but infinities generally are, and the paradox only exists if you think there's a sufficient chance, relative to the interest rate, that the next period will exist and have opportunities to turn wealth into utility - that is, if you 'expect' an infinitely long-lasting universe.

A less counterintuitive approach with the same result would be to save everything at that 20% return and also donate some amount that's less than 20% of the principal each period. This way the principal continues to grow each period, while each period you give away some amount between 0% and 20% of it (exclusive) and generate a finite amount of utility. After an infinite number of time periods you have accumulated an infinite principal and also generated infinite utility - just as high an expected value as the 'save it all for an infinite number of time periods and then donate it' approach suggested above!
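
A minimal numerical sketch of this strategy, in Python. The 20% return is from the comment; the 10% donation rate, the constant 1 util per dollar, and the 100-period horizon are my own assumptions.

    return_rate = 0.20    # investment return per period (from the comment)
    donation_rate = 0.10  # fraction of principal donated each period (assumed; must be < 20%)

    principal = 1.0
    total_utility = 0.0

    for period in range(100):
        donation = donation_rate * principal                  # give away 10% of this period's principal
        principal = principal * (1 + return_rate) - donation  # earn 20%, then pay out the donation
        total_utility += donation                             # assume a constant 1 util per dollar

    print(principal)      # roughly 1.1 ** 100, about 1.4e4, and growing without bound
    print(total_utility)  # also unbounded as the number of periods increases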

Infinities are weird. :)

Comment author: MichaelStJules 03 July 2017 09:21:04PM 0 points [-]

In this scenario you could (mathematically) save your wealth for an infinite number of periods and then donate it, generating infinite utility.

How is there anything (i.e. "and then") after an infinite number of periods (taking altogether an infinite amount of time)? Are you introducing hyperreals or nonstandard analysis? Are you claiming this is just a possibility (given our ignorance about the nature of time), or a fact, conditional on the universe lasting forever?

I think it's extremely unlikely that time works this way, but if you're an EU maximizer and assign some positive probability to this possibility, then, sure, you can get an infinite return in EU. Most likely you'll get nothing. It's a lot like Pascal's wager.

Comment author: MichaelStJules 27 June 2017 03:56:29AM 0 points [-]

In recent discussion Patrick Kaczmarek informs me I'm absolutely mistaken to think it can be a problem with decision theory, and helpfully suggested the issue might be the bridging principle between one's axiology and one's decision theory.

The problem seems essentially the same as Parfit's Hitchhiker: you must pre-commit to win, but you know that when the time comes to pay/spend, you'll want to change your mind.

Comment author: TruePath 26 June 2017 07:33:29AM 4 points [-]

I simply don't believe that anyone is really (when it comes down to it) a presentist or a necessitist.

I don't think anyone is willing to actually endorse making choices which eliminate the headache of an existing person at the cost of bringing an infant into the world who will be tortured extensively for all time (but no one currently existing will see it and be made sad).

More generally, these views have more basic problems than anything considered here. Consider, for instance, the problem of personal identity. For either presentism or necessitism to be true, there has to be a PRINCIPLED fact of the matter about when I become a new person if you slowly modify my brain structure until it matches that of some other possible (but not currently actual) person. The right answer to these Theseus's-ship-style worries is to shrug and say there isn't any fact of the matter, but the presentist can't take that line, because for them there are huge moral implications to where we draw the line.

Moreover, both of these views face serious puzzles about when an individual exists. Is it when they actually generate qualia (if not, you risk saying that the fact that they will exist in the future means they already exist now)? How do we even know when that happens?

Comment author: MichaelStJules 27 June 2017 03:40:15AM 0 points [-]

We can make necessitarianism asymmetric: only people who will necessarily exist OR would have negative utility (or less than the average/median utility, etc.) count.

Some prioritarian views, which also introduce some kind of asymmetry between good and bad, might also work.