Comment author: MichaelPlant 24 April 2018 06:33:40PM 3 points

Ah, that's great. Thanks very much for that. I think "dating a non-EA" is a particularly dangerous (negative-impact?) phenomenon we should probably be talking about more. I also know someone, A, whose non-EA-inclined partner, B, was really unhappy that A wasn't aiming to get a high-paying professional job, and it really wrenched A away from focusing on trying to do the most useful stuff. Part of the problem was that B's family wanted B to be dating a high earner.

Comment author: saulius 01 May 2018 12:09:14AM 4 points

Males having a “dating EAs only” rule is also dangerous (for the health of the community) when 70% of the community identifies as male and only 26% as female: it'd promote unhealthy competition. What is more, in many cities the communities are not that big, which would make the choice very limited for many people. Especially since we should probably avoid flirting with newcomers, because that might scare them away.

Maybe the partner doesn't have to be an EA to prevent value drift; maybe the important thing is that the partner is supportive of EA-style sacrifices. I'll put this as a requirement in my online dating profiles. I think that people who are altruistic (but not necessarily EAs) are especially likely to be supportive.

Comment author: Halstead 24 April 2018 07:35:08PM 2 points

haha yeah that was my take. I think the best norm to propagate is "go out with whoever makes you happy".

Comment author: saulius 30 April 2018 11:24:54PM 5 points

I think that there should be no norm here; we should simply consider the fact that dating a non-EA may cause value drift before making decisions. Being altruistic sometimes means sacrificing some of your happiness. If having less money, less time and no children can be amongst the possible sacrifices, I see no reason why limiting the set of possible romantic partners could not be one of them as well. People are diverse. Maybe someone would rather donate less money but abstain from dating non-EAs, or even abstain from dating at all. One good piece of writing related to the subject is http://briantomasik.com/personal-thoughts-on-romance/

In response to Open Thread #39
Comment author: musicant 01 April 2018 01:03:53PM 0 points

I'm new to the idea of EA, and interested in pursuing it further. I'm finding that I have a hard time buying the argument of using expected value to make a decision, when I don't have a large number of decisions I get to make. Expected value works because it "averages out" over the long term. I'm especially concerned about the recommendations by GiveWell on deworming. I am not a rich person, and I want to be able to make an impact if I can. Nothing in life is a certainty. That said, putting a large portion of the $ that I can contribute to something that has a high probability of not succeeding means that there is a high probability that my contribution will make no difference whatsoever. If I put money into a large number of causes, each of which has that profile, and if they are independent of each other, I accept that expected value kicks in. But in the case where I only get to make a contribution to a very limited number of causes, I don't see that expected value is the right philosophy. Can anyone help clarify?

In response to comment by musicant on Open Thread #39
Comment author: saulius 23 April 2018 02:14:05PM 0 points

Hi. Firstly, I want to say that many people within the movement differ in how risk-averse they are and make decisions with that taken into account. For example, this flowchart on deciding which cause to work on, http://globalprioritiesproject.org/2015/09/flowhart/, has the question “would you rather do something that has a 1% chance of saving 1,000 lives than save one life for sure?” I know some smart people who would answer that question with a “no” and behave accordingly.

However, many EAs think that the 1% chance option is better, and many EAs spend their entire life's effort on causes like AI safety even though that work will almost surely have no impact: for them it's not that important to make sure they have at least some impact; the small possibility of having a huge impact is just as motivating. I do share your feeling that having at least some impact is better, but personally I try to somewhat ignore it as a bias when making important decisions. To me, in some abstract sense, a 1% chance of saving 1,000 lives is better than a 100% chance of saving 1 life. In the same way that helping 10 people in Africa is, in some abstract sense, better than helping 1 person who lives in my country. And in the same way that helping 10 people who will live in a million years is better than helping 1 person who is living now. Part of my brain disagrees, but I choose to call that a bias rather than a part of my moral compass. Which IMO is a totally subjective choice.
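
To spell out the arithmetic behind that preference (this is just the expected-value comparison, not the whole argument):

$$\mathbb{E}[\text{lives saved}] = 0.01 \times 1000 = 10 \qquad \text{vs.} \qquad 1.0 \times 1 = 1.$$

The risky option is ten times better in expectation; the disagreement is only over whether expectation is the right thing to maximise.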

Even if all of us were risk-averse, it might still make sense for all of us to cooperate and put money into different risky causes, because then there's a high probability that all of us combined will have a big positive impact. Instead of making sure that you yourself make a significant difference, you could think about how the EA community as a whole (or humanity as a whole) could make a big positive difference. EA already supports many charities, and maybe the risky charity that you personally donate to won't have an impact, but if many people support different risky charities as you will, all of us combined will have a bigger impact with high probability.
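
Here is a minimal sketch of that pooling argument. The function and its numbers are purely illustrative (each donor backs an independent cause with a 1% chance of saving 1,000 lives):

```python
import random

def pooled_outcomes(n_donors=100, p_success=0.01,
                    lives_if_success=1000, n_trials=10_000):
    """Simulate many donors, each backing an independent long-shot cause."""
    totals = []
    for _ in range(n_trials):
        # Count how many of the n_donors' causes succeeded in this world.
        successes = sum(1 for _ in range(n_donors)
                        if random.random() < p_success)
        totals.append(successes * lives_if_success)
    mean = sum(totals) / n_trials
    p_any = sum(1 for t in totals if t > 0) / n_trials
    return mean, p_any

mean, p_any = pooled_outcomes()
print(f"expected lives saved in total: {mean:.0f}")   # ~1000
print(f"P(the community saves anyone): {p_any:.2f}")  # ~0.63
```

With one donor, the chance of making any difference at all is 1%; with 100 independent donors it is already 1 − 0.99^100 ≈ 63%, and with 500 it exceeds 99%, even though each individual donation remains a long shot.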

All that said, some EAs split their donations: some money goes to causes that make sure they have at least some impact, and some to risky causes with high expected value. More on this in https://www.lesswrong.com/posts/3p3CYauiX8oLjmwRF/purchase-fuzzies-and-utilons-separately

Comment author: saulius 12 April 2018 07:22:39PM 0 points

Interesting article. I see some practical issues though.

Finding symmetrical trade partners would be very hard. If Allison has a degree from Oxford and Bettina one from a community college, the trade would not be fair.

"A more easily implementable solution is to search for a donor willing to offset a cause area switch, i.e. make a donation to the cause area the talent will be leaving."

Would such a donation be made monthly, or would it be a one-time donation when the person makes the switch? If it's monthly, what happens when the donor changes her mind or no longer has the funds? The person who made the switch is left in an awkward career situation. If it's a one-time donation, what motivates the person who switched to stay in her job?

Maybe for compensation Allison could ask MIRI to pay her a salary AND donate some money to THL every month. Or she could simply ask MIRI to pay her more and then donate the money herself. From MIRI's perspective that's probably similar to hiring a non-EA, but it is the best way I see to avoid coordination problems.

Comment author: Dunja 06 April 2018 11:44:53AM 2 points

Thanks for this: great info and presentation, and a very well planned event! That said, I'm in general rather skeptical of the impact such events have on anything but the fun of the participants :) I don't have any empirical data to back this claim (so I might as well be completely wrong), but I have the impression that while such events help like-minded people get to know each other, they don't do much in terms of actual, long-term impact on the goals of EA. And here is why: those who're enthusiastic about EA and/or willing to contribute in a certain way will do so anyway. For them, online information or a single talk may even be enough. And the other way around: those who aren't much into it will rarely become so via such an event.

I am aware that this may be quite an unpopular view, but I think it would be great to have some empirical evidence to show if it's really wrong.

My guess is that events organized for effective knowledge-building in the given domain (including concrete skills required for very concrete tasks in the given community, some of which were a part of your event) would be the ones that make more of a difference. Say, an EA community realizes it lacks the knowledge needed to gather empirical data, or to spread its ideas and attract new members. In that case, one could invite experts on these issues to provide concrete, intensive crash courses, equipping the given community so that it can afterwards put these skills into action. This means a hard-working event, without many extra entertainment activities, but with a high knowledge gain. I think networking and getting to know others is nice, but not as essential as the know-how and the willingness to apply it (which may then spontaneously result in a well-networked community).

(Edit: I once again checked the primary goal of your event and indeed, if you want to provide a space for people to get to know one another, this kind of retreat certainly makes a lot of sense. So maybe my worries were misplaced given this goal, since I rather had in mind the goal of expanding the EA community and attracting new members).

Comment author: saulius 07 April 2018 02:58:01PM 5 points

I partially agree with you but I'll focus on what I disagree with :)

“those who're enthusiastic about EA and/or willing to contribute in a certain way will do so anyway. For them online information, or a single talk may even be enough.”

Personally, hanging out with EAs makes me A LOT more enthusiastic about EA, and I work on my EA projects much more as a result. I basically forget about EA when I'm away from the community for long periods of time. I might be an outlier here, but I'm sure the same is true for others to a lesser degree. And it's these kinds of events that not only energise me but also help me find EA friends with whom I can hang out, co-work or even live. Which, by the way, makes such events more valuable when they are for people from one city.

Also, I know from first-hand experience that online information is not enough for cause prioritisation, making career decisions or deciding where to donate. I read a lot, but when I started going to EA meetups, some gaps in my knowledge and flaws in my thinking were soon exposed. Discussions hit diminishing returns after a while though.

But maybe both goals can be achieved with simple socials at a lesser cost.

Comment author: Jacy_Reese 22 February 2018 02:53:57PM 4 points

I personally don't think WAS is all that similar to the most plausible far-future dystopias, so I've been prioritizing it less, even over just the past couple of years. I don't expect far-future dystopias to involve as much naturogenic (nature-caused) suffering, though of course it's possible (e.g. if humans create large numbers of sentient beings in a simulation but then let the simulation run on its own for a while, the simulation could come to be viewed as naturogenic-ish, and those attitudes could become more relevant).

I think if one wants something very neglected, digital sentience advocacy is basically across-the-board better than WAS advocacy.

That being said, I'm highly uncertain here and these reasons aren't overwhelming (e.g. WAS advocacy pushes on more than just the "care about naturogenic suffering" lever), so I think WAS advocacy is still, in Gregory's words, an important part of the 'far future portfolio.' And often one can work on it while working on other things, e.g. I think Animal Charity Evaluators' WAS content (e.g. [guest blog post by Oscar Horta](https://animalcharityevaluators.org/blog/why-the-situation-of-animals-in-the-wild-should-concern-us/)) has helped them be more well-rounded as an organization, and didn't directly trade off with their farmed animal content.

Comment author: saulius 27 February 2018 01:01:56AM 1 point

But humanity/AI is likely to expand to other planets. Won't those planets need complex ecosystems, which could involve a lot of suffering? Or do you think it will all be done with some fancy tech that'll be too different from today's wildlife to be relevant? It's true that those ecosystems would (mostly?) be non-naturogenic, but I'm not that sure that people would care about them; it'd still be animals/diseases/hunger/etc. hurting animals. Maybe it'd be easier to engineer an ecosystem without predation and diseases, but that is a non-trivial assumption, and suffering could then arise in other ways.

Also, some humans want to spread life to other planets for its own sake, and relatively few people need to want that in order to cause a lot of suffering, if no one works on preventing it.

This could be less relevant if you think that most of the expected value comes from simulations that won't involve ecosystems.

Comment author: Bernadette_Young 15 May 2017 09:03:15PM 7 points

The mention of the specific errors found in DCP2 estimates of deworming efficacy seems to be functioning here as guilt by association. I can't see any reason they should be extrapolated to all the other calculations in different chapters of a >1000-page document. The figure from DCP2 for trachoma treatment directly references the primary source, so it's highly unlikely to be vulnerable to any spreadsheet errors.

The table Toby cites and you reference here (Table 50.1 from DCP2) says "trichiasis surgery". This means surgical treatment for a late stage of trachoma. Trichiasis is not synonymous with trachoma, but a late and severe complication of trachoma infection, by which stage eyelashes are causing corneal friction. It doesn't 'sometimes' lead to blindness, though that is true of trachoma infections when the whole spectrum is considered. Trichiasis frequently causes corneal damage leading to visual impairment and blindness. You are right to point out that not every person with trichiasis will develop blindness, and a "Number Needed to Treat" is needed to correct the estimate from $20 per case of blindness prevented. However we don't have good epidemiological data to say whether that number is 1, 2, 10 or more. Looking at the literature it's likely to be closer to 2 than 10. The uncertainty factor encoded in Peter Singer's use of $100 per person would allow for a number needed to treat of 5.
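
To spell out the arithmetic of that correction (the surgery cost is from the sources above; the NNT itself is the uncertain input):

$$\text{cost per case of blindness prevented} = \text{cost per surgery} \times \text{NNT},$$

so at roughly \$20 per surgery, an NNT of 2 implies about \$40 per case prevented, an NNT of 5 implies \$100, and an NNT of 10 implies \$200.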

In this case the term "cure" is appropriate, as trichiasis is the condition being treated by surgery. At one point Toby's essay talks about curing blindness as well as curing trachoma. Strictly speaking, trichiasis surgery is tertiary prevention (treatment of a condition which has already caused damage, in order to prevent further damage), but the error is not so egregious as to elicit the scorn of the hypothetical doctor you quote below. (Source: I am a medical doctor specialising in infectious diseases; I think the WHO fact sheet you link to is overly simplifying matters when it states "blindness caused by trachoma is irreversible".)

[Edited to add DOI: I'm married to Toby Ord]

Comment author: saulius 15 May 2017 11:58:21PM 4 points

Thank you very much for writing this. Ironically, I did not do enough fact-checking before making public claims. Now I am not even sure I was right to say that everyone should frequently check facts in this manner, because it takes a lot of time and it's easy to make mistakes, especially when it's not our field of expertise.

Trichiasis surgery then does seem to be absurdly effective in preventing blindness and pain. I am puzzled why GiveWell hasn't looked into it more. Well, they explain it here: the same uncertainty about the "Number Needed to Treat".

I want to ask, if you don't mind:

  • When the literature says that the surgery costs ~$20-60 or $7.14, is that for both eyes?
  • Do you think it's fair to say that it costs, say, $100 to prevent a case of trachoma-induced blindness? Or is there too much uncertainty to use such a number when introducing EA?
Comment author: Julia_Wise 15 May 2017 02:31:36PM 3 points

Thanks for researching and writing this up! We've been discussing the topic a lot at CEA/Giving What We Can over the last few days. I think this points to the importance of flagging publication dates (as GiveWell does, indicating that the research on a certain page was current as of a given date but isn't necessarily accurate anymore). Fact-checking, updating, or simply flagging information as older and possibly inaccurate was already on our to-do list for materials on the Giving What We Can site, which go back as much as 10 years and sometimes no longer represent our best understanding. I now think it needs to be higher priority than I did.

For individuals rather than organizations, I'm unsure about the best way to handle things like this, which will surely come up again. If someone publishes a paper or blog post, how often are they obliged to update it with corrected figures? I'm thinking of a popular post which used PSI's figure of around $800 to save a child's life. In 2010 when it was written that seemed like a reasonable estimate, but it doesn't now. Is the author responsible for updating the figure everywhere the post was published and re-published? (That's a strong disincentive for ever writing anything that includes a cost-effectiveness estimate, since they're always changing.) Does everyone who quoted it or referred to it need to go back each year and include a new estimate? My guess is it's good practice, particularly when we notice people creating new material that cites old figures, to give them a friendly note with a link to newer sources, with the understanding that this stuff is genuinely confusing and hard to stay on top of.

Comment author: saulius 15 May 2017 03:15:16PM 0 points

It's obviously impossible to make everyone update figures all the time. If there is an old publication date, everyone probably understands that the content could be outdated; I just think that the date should always be featured prominently. E.g. on this page it could be better. I think that flagging pages the way GiveWell does is a great idea. But featured pages that have no date should probably be checked or updated quite often. I mean pages like "top charities", "what we can achieve" and "myths about aid" in GWWC's case.

Comment author: PeterSinger 13 May 2017 11:47:33PM 8 points

These are good points, and I'm suitably chastened for not being sufficiently thorough in checking Toby Ord's claims.
I'm pleased to see that GiveWell is again investigating treating blindness: http://blog.givewell.org/2017/05/11/update-on-our-views-on-cataract-surgery/. In this very recent post, they say: "We believe there is evidence that cataract surgeries substantially improve vision. Very roughly, we estimate that the cost-effectiveness of cataract surgery is ~$1,000 per severe visual impairment reversed.[1]"
The footnote reads: "This estimate is on the higher end of the range we calculated, because it assumes additional costs due to demand generation activities, or identifying patients who would not otherwise have known about surgery. We use this figure because we expect that GiveWell is more likely to recommend an organization that can demonstrate, through its demand generation activities, that it is causing additional surgeries to happen. The $1,000 figure also reflects our sense that cost-effectiveness in general tends to worsen (become more expensive) as we spend more time building our model of any intervention. Finally, it is a round figure that communicates our uncertainty about this estimate overall."

But it's reasonable to say that until they complete this investigation, which will be years rather than months, it may be better to avoid using the example of preventing or curing blindness. So the options seem to be either not using the example of blindness at all, or using this rough figure of $1,000, with suitable disclaimers. It still leads to 40 cases of severe visual impairment reversed vs. 1 case of providing a blind person with a guide dog.

Comment author: saulius 14 May 2017 10:57:35AM 3 points

agree :)

Comment author: PeterSinger 12 May 2017 11:23:20PM 16 points

Regrettably, I misspoke in my TED talk when I referred to "curing" blindness from trachoma. I should have said "preventing." (I used to talk about curing blindness by performing cataract surgery, and that may be the cause of the slip.) But there is a source for the figure I cited, and it is not GiveWell. I give the details in "The Most Good You Can Do", in an endnote on p. 194, but to save you all looking it up, here it is:

"I owe this comparison to Toby Ord, “The moral imperative towards cost-effectiveness,” http://www.givingwhatwecan.org/sites/givingwhatwecan.org/files/attachments/moral_imperative.pdf. Ord suggests a figure of $20 for preventing blindness; I have been more conservative. Ord explains his estimate of the cost of providing a guide dog as follows: “Guide Dogs of America estimate $19,000 for the training of the dog. When the cost of training the recipient to use the dog is included, the cost doubles to $38,000. Other guide dog providers give similar estimates, for example Seeing Eye estimates a total of $50,000 per person/dog partnership, while Guiding Eyes for the Blind estimates a total of $40,000.” His figure for the cost of preventing blindness by treating trachoma comes from Joseph Cook et al., “Loss of vision and hearing,” in Dean Jamison et al., eds., Disease Control Priorities in Developing Countries, 2d ed. (Oxford: Oxford University Press, 2006), 954. The figure Cook et al. give is $7.14 per surgery, with a 77 percent cure rate. I thank Brian Doolan of the Fred Hollows Foundation for discussion of his organization’s claim that it can restore sight for $25. GiveWell suggests a figure of $100 for surgeries that prevent one to thirty years of blindness and another one to thirty years of low vision but cautions that the sources of these figures are not clear enough to justify a high level of confidence."

Now, maybe there is some more recent research casting doubt on this figure, but note that the numbers I use allow that the figure may be $100 (typically, when I speak on this, I give a range, saying that for the cost of training one guide dog, we may be able to prevent somewhere between 400 and 1,600 cases of blindness). Probably it isn't necessary even to do that. The point would be just as strong if it were 400, or even 40.
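
For the record, that range follows directly from the endnote's figures (taking the guide dog at roughly \$40,000, and prevention at between \$25 and \$100 per case):

$$\frac{\$40{,}000}{\$100} = 400 \qquad\text{and}\qquad \frac{\$40{,}000}{\$25} = 1600.$$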

Comment author: saulius 13 May 2017 02:01:41PM 6 points

EDIT: this comment contains some mistakes

To begin with, I want to say that my goal is not to put blame on anyone but to change how we speak and act in the future.

"His figure for the cost of preventing blindness by treating trachoma comes from Joseph Cook et al., “Loss of vision and hearing,” in Dean Jamison et al., eds., Disease Control Priorities in Developing Countries, 2d ed. (Oxford: Oxford University Press, 2006), 954. The figure Cook et al. give is $7.14 per surgery, with a 77 percent cure rate."

I am looking at this table from the cited source (Loss of Vision and Hearing, DCP2). It's a 77% cure rate for trachoma, which sometimes develops into blindness, not a 77% cure rate for blindness. At least that's how I interpret it; I can't be sure, because the cited source of the figure in the DCP2's table doesn't even mention trachoma! From what I've read, recurrences sometimes happen, so a 77% cure rate for trachoma is much, much more plausible. I'm afraid Toby Ord made the mistake of equating curing trachoma with preventing blindness.

What is more, Toby Ord used the same DCP2 report that GiveWell used, and GiveWell found major errors in it. To sum up very briefly:

"Eventually, we were able to obtain the spreadsheet that was used to generate the $3.41/DALY estimate. That spreadsheet contains five separate errors that, when corrected, shift the estimated cost effectiveness of deworming from $3.41 to $326.43. [...] The estimates on deworming are the only DCP2 figures we've gotten enough information on to examine in-depth."

Regarding the Fred Hollows Foundation, please see GiveWell's page about them and this blog post. In my eyes, these discredit the organization's claim that it restores sight for $25.

In conclusion, without further research we have no basis for the claim that trachoma surgeries can prevent 400, or even 40, cases of blindness for $40,000. We simply don't know. I wish we did; I want to help those people in the video.

I think one thing that is happening is that we are too eager to believe any figures we find if they support an opinion we already hold. That severely worsens the already existing problem of the optimizer's curse.

I also want to add that preventing 400 cases of blindness for $40,000 (i.e. one case for $100) sounds much more effective to me than GiveWell's top charities. GiveWell seems to agree; see these quotes from this page:

"Based on very rough guesses at major inputs, we estimate that cataract programs may cost $112-$1,250 per severe visual impairment reversed [...] Based on prior experience with cost-effectiveness analyses, we expect our estimate of cost per severe visual impairment reversed to increase with further evaluation. [...] Our rough estimate of the cost-effectiveness of cataract surgery suggests that it may be competitive with our priority programs; however, we retain a high degree of uncertainty."

We tell the trachoma example and then advertise GiveWell, even though GiveWell's top and standout charities are not even related to blindness, and no one in EA ever talks about blindness. So people probably assume that GiveWell's recommended charities are much more effective than a surgery that cures blindness for $100, but they are not.

Because GiveWell’s estimates for cataract surgeries are based on guesses, I think we shouldn’t use those figures in introductory EA talks as well. We can tell the disclaimers but the person who hears the example might skip them when retelling the thought experiment (out of desire to sound more convincing). And then the same will happen.
