Comment author: Gregory_Lewis 02 February 2018 06:48:12PM 2 points [-]

That seems surprising to me, given that the natural model for the counterpart in the case you describe would be a sibling, and observed behaviour between siblings is pretty divergent. I grant your counterfactual sibling would be more likely than a random member of the population to be writing something similar to the parent comment, but the absolute likelihood remains very low.

The heritabilities of things like intelligence and personality traits are also only fairly intermediate, so your counterpart could look pretty different on those too. Not least, there's about a 0.5 chance your counterpart would be the opposite sex to you.

I agree that even if history is chaotic in some respects, it is not chaotic in everything, and there can be forcing interventions (one can grab a double pendulum, etc.), yet less overwhelming interventions may be pretty hard to fathom in the chaotic case ("it's too early to say whether the French Revolution was good or bad", etc.).

Comment author: BenMillwood  (EA Profile) 09 February 2018 05:48:26PM *  0 points [-]

Not that it's obviously terribly important to the historical chaos discussion, but I think siblings aren't a great natural model. Siblings differ by at least (usually more than) nine months, which you can imagine affecting them biologically, via the physiology of the mother during pregnancy, or via the medical / material conditions of their early life. They also differ in social context -- after all, one of them has one more older sibling, while the other has one more younger one. Two agents interacting may exaggerate their differences over time, or perhaps they sequentially fill particular roles in the eyes of the parents, which leads to differences in treatment. So I think there are lots of sources of sibling difference that aren't present in hypothetical genetic reshuffles.

(That said, the coinflip on sex seems pretty compelling.)

Comment author: RomeoStevens 29 January 2018 08:59:50PM *  7 points [-]

Another framing of that solution: EA needs a full-time counselor who works with EAs gratis. I expect that paying the salary of such a person would be +ROI.

Comment author: BenMillwood  (EA Profile) 09 February 2018 04:40:45PM 5 points [-]

I would be interested in funding this.

Comment author: BenMillwood  (EA Profile) 16 December 2017 04:38:03PM 3 points [-]

For the benefit of future readers: Giving Tuesday happened, and the matching funds were exhausted within about 90 seconds. Of ~$370k in total donations, we matched ~$46k, or about 13%, which was lower than hoped. William wrote up a lessons-learned document as a Google doc.

Comment author: BenMillwood  (EA Profile) 25 November 2017 08:07:49AM 0 points [-]

I'm going to write a relatively long comment making a relatively narrow objection to your post. Sorry about that, but I think it's a particularly illustrative point to make. I disagree with these two points against the neglectedness framing in particular:

  1. that it could divide by zero, and this is a serious problem
  2. that it splits a fraction into unnecessarily conditional parts (the "dragons in Westeros" problem).

Firstly, in response to (1): this is a legitimate illustration that the framework only applies where it applies, but in practice it doesn't seem to be an obstacle. Specifically, the framing works well when your proposed addition is small relative to the existing resources, and that seems to be true of most people in most situations. I'll come back to this later.

More importantly, I feel like (2) misses the point of what the framework was developed for. The goal is to get a better handle on what kinds of things to look for when evaluating causes. So the fact that the fraction simplifies to "good done per additional resource" is sort of trivial – that's the goal, the metric we're trying to optimize. It's hard to measure that directly, so the value added by the framework is the claim that certain conditionalizations of the metric (if that's the right word) yield questions that are easier to answer, and answers that are easier to compare.

That is, we write it as "scale times neglectedness times solvability" because we find empirically that those individual factors of the metric tend to be more predictable, comparable and measurable than the metric as a whole. The applicability of the framework is absolutely contingent on what we discover, in practice, to be the important considerations when we try to evaluate a cause from scratch.
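To make the factoring explicit, here is the telescoping product as I understand the standard 80,000 Hours-style decomposition (the unit choices are illustrative, not wording from the original post):

```latex
% Telescoping product: the intermediate quantities cancel,
% leaving "good done per extra person or dollar".
\underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{scale}}
\times
\underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{solvability}}
\times
\underbrace{\frac{\text{\% increase in resources}}{\text{extra person or dollar}}}_{\text{neglectedness}}
= \frac{\text{good done}}{\text{extra person or dollar}}
```

Each factor on the left is easier to estimate on its own than the right-hand side is directly, which is the whole value of the conditionalization.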

So there's no fundamental reason why neglectedness, particularly as measured in the form of a ratio of percentage change per unit of resource, needs to be a part of your analysis. It just turns out that you can often find, for example, two different health interventions that are otherwise very comparable in how much good they do, but with very different abilities to absorb extra resources, and that difference drives a big difference in their attractiveness as causes to work on.

If you ever did want to evaluate a cause where the existing resources were zero, you could just as easily swap the problematic cancelling denominator/numerator pair for another one, saying the same thing in absolute instead of relative terms, and the rest of the model would more or less stand up. Whether that should be done in general, for evaluating other causes as well, is a judgement call about how these numbers vary in practice and which situations are most easily compared and contrasted.
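To spell out that swap with a minimal sketch (R is my notation for the existing resources, not the post's): the relative form of the neglectedness factor is what divides by zero, and replacing the cancelling pair with an absolute version removes it:

```latex
% Relative form: each extra unit of resource is a 1/R fractional increase,
% which is undefined at R = 0.
\frac{\text{\% increase in resources}}{\text{extra resource}} = \frac{1}{R}

% Absolute form: measure solvability per extra resource directly,
% and the 1/R factor (and the division by zero) disappears.
\frac{\text{good done}}{\text{\% of problem solved}}
\times
\frac{\text{\% of problem solved}}{\text{extra resource}}
= \frac{\text{good done}}{\text{extra resource}}
```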

Comment author: Ben_Todd 06 November 2017 10:35:08PM 0 points [-]

Even if people pick interventions at random, the more people who enter a cause, the more of the best interventions will already have been taken (by chance), so you still get diminishing returns even when people aren't strategically selecting.

Comment author: BenMillwood  (EA Profile) 25 November 2017 07:37:50AM *  0 points [-]

To clarify, this only applies if everyone else is picking interventions at random, but you're still managing to pick the best remaining one (or at least better than chance).

It also seems to me like it applies across causes as well as within causes.
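A toy Monte Carlo of the exchange above (entirely my construction, with an arbitrary exponential value distribution): earlier entrants pick interventions uniformly at random, and we compare your expected pick depending on whether you take the best remaining option or also pick at random.

```python
import random

def avg_value_of_next_pick(n_prior_entrants, n_interventions=1000,
                           you_pick_best=True, trials=2000):
    """Average value of your pick after n_prior_entrants have chosen at random.

    Intervention values are i.i.d. exponential draws (an arbitrary
    illustrative choice).
    """
    total = 0.0
    for _ in range(trials):
        values = [random.expovariate(1.0) for _ in range(n_interventions)]
        random.shuffle(values)                 # prior entrants choose at random,
        remaining = values[n_prior_entrants:]  # i.e. a random subset is removed
        total += max(remaining) if you_pick_best else random.choice(remaining)
    return total / trials

for n in (0, 500, 900, 990):
    print(f"{n:4d} prior entrants: "
          f"best remaining ~ {avg_value_of_next_pick(n):.2f}, "
          f"random pick ~ {avg_value_of_next_pick(n, you_pick_best=False):.2f}")
```

The "best remaining" figure falls as entrants accumulate, while the "random pick" figure stays flat at the population mean: diminishing returns show up only if you're picking better than chance, which is exactly the clarification above.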

Comment author: Arepo 06 November 2017 07:36:07PM *  1 point [-]

To be clear, I do think neglectedness will roughly track the value of entering a field, ceteris literally being paribus.

On reflection I don't think I even believe this. The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on, in which case more people working on a field would indicate that it was more worth working on.

Comment author: BenMillwood  (EA Profile) 25 November 2017 07:31:36AM *  0 points [-]

The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on

This was an insightful comment for me, and the argument does seem correct at first glance. I guess the reason I'd still disagree is that I observe people thinking about within-cause choices very differently from how they think about across-cause choices, so they're more rational in one context than in the other. A key part of effective altruism's value, it seems to me, is the recognition of this discrepancy and the argument that it should be eliminated.

in which case more people working on a field would indicate that it was more worth working on.

I think if you really believe people are rational in the way described, more people working on a field doesn't necessarily give you a clue as to whether more people should be working on it or not, because you expect the number of people working on it to roughly track the number of people who ought to work on it -- you think the people who are not working on it are also rational, so there must be circumstances under which that's correct, too.

Comment author: BenMillwood  (EA Profile) 18 November 2017 09:20:23AM 0 points [-]

Is "Part 3. Specific lessons on running a large local community" still on the way?

Comment author: BenMillwood  (EA Profile) 23 July 2017 04:02:19PM 0 points [-]

In "For-profit investing typically does not have massive negative returns, but non-profit investing can", I understand this to only be true in the sense that for-profit investing is only concerned with financial returns, whereas non-profit investing is concerned with returns of all kinds.

For-profit investing can still have negative externalities, of course, it's just that the shareholders aren't really obliged to care about them :)

Comment author: Robert_Wiblin 05 July 2017 12:28:36AM 0 points [-]

I'm almost certain time doesn't work this way in our universe! But for the paradox to exist we have to imagine a universe where an infinite amount of time really can pass. I'm not an expert in these expected-value paradoxes for different kinds of infinity - it might be worth asking Amanda Askell, who is.

Either way, the mixed strategy of saving and donating some gives us a way out.

Comment author: BenMillwood  (EA Profile) 23 July 2017 03:43:15PM 0 points [-]

It's worth pointing out that if time just advances forever, so that your current time is just "T seconds after the starting point", then it is simultaneously true that:

  • time is infinite
  • every instant has a finite past (and an infinite future)

The second point in particular means that even though time is infinite, you still can't wait an infinite amount of time and then do something. I think that's what MichaelStJules was getting at.

Your mixed strategy has its own paradox, though – suppose you decide that one strategy is better than another if it "eventually" does more total good, that is, if there's a point in time after which its "total amount of good done so far" exceeds the other strategy's for the rest of eternity. You have to do something like this because it doesn't usually make sense to ask which strategy achieved the most good "after infinite time": infinite time never elapses.

Anyway, suppose you have that metric of "eventual winner". Then your strategy can always be improved by reducing the fraction you donate, because the exponential growth of the investment will eventually outpace the linear reduction in donations. But as soon as you reduce the fraction to zero, you no longer get any gains at all. So you have the odd situation where no fraction is optimal – for any strategy, there is always a better one.
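A minimal continuous-time sketch of that claim (my formalization, not anything from the original exchange): assume wealth grows at a constant rate r and you donate at a continuous rate f·W(t), with 0 < f < r.

```latex
% Wealth under growth rate r and continuous donation fraction f:
W(t) = W_0 \, e^{(r - f) t}

% Cumulative good donated by time t:
D_f(t) = \int_0^t f \, W(s) \, ds
       = \frac{f \, W_0}{r - f} \left( e^{(r - f) t} - 1 \right)
```

For any 0 < f2 < f1 < r, D_{f2}(t) grows with the strictly larger exponent r - f2, so it eventually overtakes D_{f1}(t) and wins the "eventual winner" comparison; yet D_0(t) = 0 identically. Every positive fraction is eventually beaten by a smaller one, and zero is worst of all.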

In a context of infinite possible outcomes and infinite possible choice pathways, this actually isn't that surprising. You might as well be surprised that there's no largest number. And perhaps that applies just as well to the original philanthropist's paradox – if you permit yourself an infinite time horizon to invest over, it's just not surprising that there's no optimal moment to "cash in".

As soon as you start actually encoding your belief that the time horizon is in fact not infinite, I'm willing to bet you start getting some concrete moments to pay your fund out, and some reasonable justifications for why those moments were better than any other. To the extent that the conclusion "you should wait until near the end of civilization to donate" is still counterintuitive, I claim it's just because of our (correct) intuition that investing is not always better than donating right now, even in the long run. That's the argument that Ben Todd and Sanjay made.

Comment author: BenMillwood  (EA Profile) 25 February 2017 01:46:43PM *  0 points [-]

One thing that makes me think object-level risk[1] is important in for-profit investing, but less central in charitable work, is that I'm more confident that for-profit risk is priced correctly, or at least not way out of line with what it should be. It seems more plausible to me that there are low-risk, high-return charitable opportunities, because people are generally worse at identifying and saturating those opportunities. (Although per GiveWell's post on broad market efficiency, I now believe this effect is much less striking than I first guessed.)

[1] I'm not sure this is a correct application of "object-level", but I mean the actual risk of a given investment succeeding or failing, rather than the "meta" risk that we'll fail to analyse its value correctly. I'm not super confident the distinction is meaningful.
