In response to Open Thread #36
Comment author: Linch 25 April 2017 04:04:22AM *  0 points [-]

Do you live in the South Bay (south of San Francisco)?

Did you recently move here and want to be plugged in to what EAs around here are doing and thinking? Did you recently learn about effective altruism and want to know what the heck it's about? Well, join South Bay Effective Altruism's first fully newbie-friendly meetup!

We'll discuss cause prioritization, what causes areas YOU are interested in, and how we can help each other do the most good!

The actual meetup will be this Friday at 7pm, but you can also comment here or message me at email[dot]Linch[at]gmail[dot]com to be in the loop for future events.

Comment author: Daniel_Eth 30 March 2017 06:16:38PM 5 points [-]

Regarding “But hold on: you think X, so your view entails Y and that’s ridiculous! You can’t possibly think that.”

I agree that being haughty is typically bad. But the argument "X implies Y, and you claim to believe X. Do you also accept the natural conclusion, Y?" is a legitimate argument to make when Y is ridiculous. At that point, the other person can either accept the implication, change their mind on X, or argue that X does not imply Y. It seems like what you mainly have a problem with is the tone, though. Is that correct?

Comment author: Linch 31 March 2017 02:04:37AM 1 point [-]

I've noticed this before, and I think it's a flawed truth-seeking device on a technical level.

Basically, I'm really leery of reductio ad absurdum arguments involving statements that are inherently probabilistic, especially when it comes to ethics.

A straightforward reductio ad absurdum goes:

1. Say we believe P.
2. P implies Q.
3. Q is clearly wrong.
4. Therefore, not P.

However, in philosophical ethics it's more like:

1. Say we believe P.
2. A seems reasonable.
3. B seems reasonable.
4. C seems kind of reasonable.
5. D seems almost reasonable if you squint a little; at least it's more reasonable than P.
6. E has a >50% chance of being right.
7. P and A and B and C and D and E together imply Q.
8. Q is an absurd/unintuitive conclusion.
9. Therefore, not P.

The issue here is that most of the heavy lifting is done by the conjunction of uncertain premises, and by conflating >50% probabilities with absolute truths.
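To make the conjunction point concrete, here is a minimal sketch with made-up numbers: even if every premise individually clears 50%, their conjunction can easily be far below it, so "Q is absurd" puts little pressure on P alone.

```python
# Illustrative only: the individual premise probabilities below are
# invented, not taken from any actual argument.
premise_probs = [0.9, 0.85, 0.8, 0.7, 0.6, 0.55]

# Probability that ALL premises hold (assuming independence).
p_all = 1.0
for p in premise_probs:
    p_all *= p

# Every premise is >50% likely, yet the conjunction is ~14%.
print(f"P(all premises hold) = {p_all:.3f}")
```

So rejecting the conjunction's conclusion mostly tells you that at least one link is weak, not which one.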

In response to comment by Linch on Open Thread #36
Comment author: ZachWeems 15 March 2017 02:27:59PM 6 points [-]

| It just seems rather implausible, to me, that retirement money is anywhere close to being a cost-effective intervention, relative to other likely EA options.

I don't think that "Give 70-year-old Zach a passive income stream" is an effective cause area. It is a selfish maneuver. But the majority of EAs seem to draw some sort of boundary where they only feel obligated to donate up to a certain point (whether that is due to partially selfish "utility functions" or a calculated move to prevent burnout). I've considered choosing some arbitrary method of dividing income between short-term expenses, retirement, and donations, but I am searching for a method that someone considers non-arbitrary, because I might feel better about it.

In response to comment by ZachWeems on Open Thread #36
Comment author: Linch 16 March 2017 05:56:38AM *  5 points [-]

Apologies, rereading it again, I think my first comment was rude. :/

I do a lot of selfish and suboptimal things as well, and it would be inefficient/stressful if each of us had to defend every deviation from universal impartiality in every conversation.

I think on the strategic level, some "arbitrariness" is fine, and perhaps even better than mostly illusory non-arbitrariness. We're all human, and I'm not certain it's even possible to cleanly delineate how much you value satisfying different urges for a meaningful and productive life.

On the tactical level, I think general advice on frugality, increasing your income, and maximizing investment returns is applicable. Off the top of my head, I can't think of any advice specific to the retirement-vs.-EA-charity dichotomy. (Maybe other commenters can think of useful resources?)

(Well, one thing that you might already be aware of is that retirement funds and charity donations are two categories that are often tax-exempt, at least in the US. Also, many companies "match" your investment into retirement accounts up to a certain percentage, and some match your donations. Optimizing either of those categories can probably save you (tens of) thousands of dollars a year.)

Sorry I can't be more helpful!

In response to Open Thread #36
Comment author: Linch 15 March 2017 04:33:49AM *  4 points [-]

My personal opinion is that individuals should save enough to mitigate emergencies, job transitions, etc., but no more.

It just seems rather implausible, to me, that retirement money is anywhere close to being a cost-effective intervention, relative to other likely EA options.

Comment author: marjorie 10 March 2017 03:43:39PM 0 points [-]


Hello to you all,i want to use this time to thank Dr. Sambo for what he has done for me last week here ,my names are marjorie mc cardle from Australia, I never believed in Love Spells or Magics until I met this special spell caster when i contact this man called Execute some business..He is really powerful..My Husband divorce me with no reason for almost 5 years and i tried all i could to have her back cos i really love him so much but all my effort did not work out.. we met at our early age at the college and we both have feelings for each other and we got married happily for 5 years with no kid and he woke up one morning and he told me hes going on a divorce..i thought it was a joke and when he came back from work he tender to me a divorce letter and he packed all his loads from my house..i ran mad and i tried all i could to have him back but all did not work out..i was lonely for almost 5 years So when i told the spell caster what happened he said he will help me and he asked for her full name and his picture..i gave him that..At first i was skeptical but i gave it a try cos have tried so many spell casters and there is no solutions when he finished with the readings,he got back to me that hes with a woman and that woman is the reason why he left me The spell caster said he will help me with a spell that will surely bring him back.but i never believe all this he told me i will see a positive result within 24 hours of the day..24hours later,he called me himself and came to me apologizing and he told me he will come back to me..I cant believe this,it was like a dream cos i never believe this will work out after trying many spell casters and there is no solution..The spell caster is so powerful and after that he helped me with a pregnancy spell and i got pregnant a month later and find a better job..we are now happy been together again and with lovely kid..This spell caster has really changed my life and i will forever thankful to him..he has helped many friends 
too with similar problem too and they are happy and thankful to him..This man is indeed the most powerful spell caster have ever experienced in life..Am Posting this to the Forum in case there is anyone who has similar problem and still looking for a way can reach him CONTACT THIS GREAT AND POWERFUL SPELL CASTER CALLED DR SAMBO… HIS EMAIL ADDRESS IS CONTACT HIM NOW AND BE FAST ABOUT IT SO HE CAN ALSO ATTEND TO YOU BECAUSE THE EARLIER YOU CONTACT HIM NOW THE BETTER FOR YOU TO GET QUICK SOLUTION TO ALL YOUR PROBLEMS, visit his website at Phone number:+2348039456308.

Comment author: Linch 11 March 2017 09:52:58AM 0 points [-]

The quality of this intervention has already been discussed elsewhere on this forum:

Comment author: Peter_Hurford 02 March 2017 11:11:54PM 3 points [-]

March 2 Update: We have a volunteer who is taking on this project. As a result, Joey and I broke down the project more to the following questions:

1.) What were the top twenty foreign aid foundations (including government agencies) from 1975 to 2000 in terms of total grant dollars given to foreign aid (e.g., DFID, USAID, Gates/GAVI)? Scoring them relative to each other, how would you score them on a 1-5 scale with 5 being most accurately described as "hits based" and 1 being most accurately described as "proven evidence-backed"? (Also, is this a useful dichotomy?) Please try to provide justification for rankings.

2a.) Looking back at the list of top twenty orgs by size, pick the top five orgs by size that are more "hits based" and the top five orgs by size that are more "evidence-backed".

2b.) From each of these orgs, look at their top 10 grants by grant size. Of these, pick two grants that are likely to be the highest impact and two grants that are likely to be of average impact (relative to the ten grants from that org). You can look at their website, wiki page, and stated granting strategies to get a sense of this. (There will be 40 grants considered total.) Briefly describe the outcomes of the grant and the grant size. Present these grants shuffled and as blinded as possible (no org name) to Joey and me so that we can independently rank them without knowing whether they came from hits-based orgs or not.

2c.) Using your own research, as best as possible, try to quantify the impact of these grants.

2d.) Combining our judgments, come to an overall assessment as best as possible as to the relative success of "hits-based" and "evidence-based" orgs.

We also have a bonus question that is much lower priority but might be of potential interest down the road:

3.) Can VC firms be described as pursuing a "hits based" strategy? How much due diligence do they put into their investments before making them? How does this due diligence compare to OpenPhil? Is there anything from learning about VC strategy we can use to inform EA strategy?


Joey and I separately estimated how long it would take to do (1) + (2). We then averaged our estimates together and then multiplied by 1.5 to adjust for the planning fallacy. We came up with a total of 70 hours. Since this is more than we originally thought, we decided to up our pay from $1500 to $2000.

Comment author: Linch 03 March 2017 07:04:38AM 0 points [-]

Congratulations! This is very exciting and I'm looking forward to hearing about future updates.

Comment author: AGB 14 February 2017 08:28:10PM 3 points [-]

For a third perspective, I think most EAs who donate to AMF do so neither because of an EV calculation they've done themselves, nor because of risk aversion, but rather because they've largely-or-entirely outsourced their donation decision to Givewell. Givewell has also written about this in some depth, back in 2011 and probably more recently as well.

Key quote:

"This view of ours illustrates why – while we seek to ground our recommendations in relevant facts, calculations and quantifications to the extent possible – every recommendation we make incorporates many different forms of evidence and involves a strong dose of intuition. And we generally prefer to give where we have strong evidence that donations can do a lot of good rather than where we have weak evidence that donations can do far more good – a preference that I believe is inconsistent with the approach of giving based on explicit expected-value formulas (at least those that (a) have significant room for error (b) do not incorporate Bayesian adjustments, which are very rare in these analyses and very difficult to do both formally and reasonably)."

Comment author: Linch 17 February 2017 10:53:27AM 1 point [-]

An added reason to not take expected value estimates literally (which applies to some/many casual donors, but probably not to AGB or GiveWell) is if you believe that you are not capable of making reasonable expected value estimates under high uncertainty yourself, and you're leery of long causal chains because you've developed a defense mechanism against your values being Eulered or Dutch-Booked.

Apologies for the weird terminology, see: and:

Comment author: Linch 15 February 2017 12:39:21PM *  3 points [-]

| The GiveWell Top Charities are part of the Open Philanthropy Project’s optimal philanthropic portfolio, when only direct impact is considered. There’s not enough money to cover the whole thing. These are highly unlikely to both be true. Global poverty cannot plausibly be an unfillable money pit at GiveWell’s current cost-per-life-saved numbers. At least one of these three things must be true:
|
| 1. GiveWell’s cost per life saved numbers are wrong and should be changed.
| 2. The top charities’ interventions will reach substantially diminishing returns long before they’ve managed to massively scale up.
| 3. A few billion dollars can totally wipe out major categories of disease in the developing world.

I don't think I understand the trilemma you presented here.

As a sanity check, under-5 mortality is about 6 million deaths per year worldwide. Assuming more than 2/3 of those are preventable (which I think is a reasonable assumption if you compare with developed-world numbers on under-5 mortality), that's 4 million+ preventable deaths (and corresponding suffering) per year. At $10,000 to prevent a death, even a few months of spending at that scale is more money than Open Phil has. At $3,500 to prevent a death, a single year still is.

We would expect the numbers to be much larger still if we're prioritizing not just deaths, but also prevention of suffering.
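The back-of-the-envelope arithmetic above can be sketched as follows (the mortality and cost-per-death figures are the comment's assumptions, not sourced data):

```python
# Rough sanity check using the comment's assumed figures.
under5_deaths = 6_000_000            # assumed annual under-5 deaths worldwide
preventable = under5_deaths * 2 / 3  # assumed preventable fraction -> 4M+/year

for cost_per_death in (10_000, 3_500):
    annual_cost = preventable * cost_per_death
    print(f"${cost_per_death:,}/death -> ${annual_cost / 1e9:.0f}B per year")
```

Even at the cheaper $3,500 figure, the annual total ($14B) dwarfs any single funder's yearly budget, which is the point of the trilemma response.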

Comment author: RyanCarey 09 February 2017 07:02:04PM 0 points [-]

I trust that you can explain Bayes theorem, I'm just adding that we now can be fairly confident that the intervention has less than 10% effectiveness.

Comment author: Linch 10 February 2017 12:08:02AM 0 points [-]

Yeah that makes sense!

Comment author: RyanCarey 09 February 2017 03:12:55AM 0 points [-]

You should not update significantly towards “casual outreach about EA is ineffective”, or “outreach has a very low probability of success” since the study is FAR too underpowered to detect even large effects. For example, if talking about GWWC to likely candidates has a 10% chance of making them take the pledge in the next 15-20 days, and the 14 people who were contacted are exactly representative of the pool of “likely candidates”, then we have a .9^14=23% chance of getting 0 pledges.

Given that it was already unlikely that being put in contact with a GWWC member would have a 10% chance of making them take the pledge, we can now call it very unlikely.

Comment author: Linch 09 February 2017 07:21:00AM *  2 points [-]

I'm not sure how you're operationalizing the difference between unlikely and very unlikely, but I think we should not be able to make sizable updates from this data unless the prior is REALLY big.

(You probably already understand this, but other people might read your comment as suggesting something more strongly than you're actually referring to, and this is a point that I really wanted to clarify anyway because I expect it to be a fairly common mistake)

Roughly: Unsurprising conclusions from experiments with low sample sizes should not change your mind significantly, regardless of what your prior beliefs are.

This is true (mostly) regardless of the size of your prior. If a null result wouldn't cause a large update downwards when you have a high prior, then a null result shouldn't cause a large shift downwards when you have a low prior either.

[Math with made-up numbers below]

As mentioned earlier:

If your hypothesis is 10%: 23% probability the experiment yields 0 pledges.

If your hypothesis is 1%: 87%.

If your hypothesis is 5%: 49%.

If your hypothesis is 20%: 4.4%.
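These likelihoods all come from the same one-line calculation: the probability of zero pledges from 14 independent contacts at a given per-contact pledge rate. A minimal sketch (the rates are the hypothetical effect sizes discussed above):

```python
# P(0 pledges out of 14 contacts) for several hypothesized
# per-contact pledge rates, matching the figures quoted above.
n_contacts = 14

for rate in (0.10, 0.01, 0.05, 0.20):
    p_zero = (1 - rate) ** n_contacts
    print(f"rate {rate:.0%}: P(0 pledges) = {p_zero:.1%}")
```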

Say your prior belief is that there's a 70% chance of talking to new people having no effect (or meaningfully close enough to zero that it doesn't matter), a 25% chance that it has a 1% effect, and a 5% chance that it has a 10% effect.

Then by Bayes' Theorem, your posterior probability should be:

75.3% chance it has no effect

23.4% chance it has a 1% effect

1.24% chance it has a 10% effect.

If, on the other hand, you originally believed that there's a 50% chance of it have no effect, and a 50% chance of it having a 10% effect, then your posterior should be:

81.3% chance it has no effect

18.7% chance it has a 10% effect.

Finally, if your prior is that it already has a relatively small effect, this study is far too underpowered to basically make any conclusions at all. For example, if you originally believed that there's a 70% chance of it having no effect, and a 30% chance of it having a .1% effect, then your posterior should be:

70.3% chance of no effect

29.7% chance of a .1% effect.
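The three scenarios above all use the same Bayes update on "zero pledges in 14 contacts"; a minimal sketch, using the made-up priors from the comment:

```python
def posterior(priors, effects, n=14):
    """Bayes update on observing zero successes in n independent trials."""
    likelihoods = [(1 - e) ** n for e in effects]
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Scenario 1: 70% no effect, 25% a 1% effect, 5% a 10% effect.
print(posterior([0.70, 0.25, 0.05], [0.0, 0.01, 0.10]))  # ~ [0.753, 0.234, 0.012]

# Scenario 2: 50% no effect, 50% a 10% effect.
print(posterior([0.50, 0.50], [0.0, 0.10]))              # ~ [0.814, 0.186]

# Scenario 3: 70% no effect, 30% a 0.1% effect -- barely moves.
print(posterior([0.70, 0.30], [0.0, 0.001]))             # ~ [0.703, 0.297]
```

Note how scenario 3 barely shifts at all: when the hypothesized effect is tiny, 14 trials carry almost no evidence either way.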

This is all assuming ideal conditions. Model uncertainty and uncertainty about the quality of my experiment should only decrease the size of your update, not increase it.

Do you agree here? If so, do you think I should rephrase the original post to make this clearer?
