Comment author: Austen_Forrester 06 June 2017 11:08:45PM 1 point [-]

Blind people are not a discriminated-against group, at least not in the first world. The extreme poor, on the other hand, often face severe discrimination -- they are mistreated and have their rights violated by those with power, especially if they are Indians of low caste.

Comparative intervention effectiveness is a pillar of EA, distinct from personal sacrifice, so they are not interchangeable. I reject that there is some sort of prejudice in choosing to help one group over another, whether the groups are defined by physical condition, location, etc. One always has to choose. No one can help every group. Taking the example of preventing blindness vs assisting the blind, the former is clearly the wildly superior intervention for blindness, so it is absurd to call it prejudiced against the blind.

Comment author: Telofy  (EA Profile) 08 June 2017 11:14:49AM 0 points [-]

Thanks! In response to which point is that? I think points 5 and 6 should answer your objection, but tell me if they don’t. Truth is not at issue here (if we ignore the parenthetical at the very end, which isn’t meant to be part of my argument). I’d even say that Peter Singer deals in concepts of unusual importance and predictive power. But I think it’s important to make sure that we’re not being misunderstood in dangerous ways by valuable potential allies.

In response to comment by Telofy  (EA Profile) on Red teaming GiveWell
Comment author: Peter_Hurford  (EA Profile) 30 May 2017 04:29:06PM 2 points [-]

That’s probably the comparison Arunbharatula is drawing where international development doesn’t look as strong as some x-risk or values spreading interventions.

I think that's a generous read of Bharatula's writing, especially since the Gates Foundation also spends the majority of their money on things that speed up positive developments that would likely happen anyway.

Regardless, it's an important steelman. Maybe it would be valuable to focus on funding information about what works so that governments know what to roll out when the time comes? Or find more ways to help governments listen to cost-effectiveness and other evidence? Or just fund MIRI? I find this somewhat persuasive, but we would need to actually build the case.

Comment author: Telofy  (EA Profile) 04 June 2017 09:31:12PM 1 point [-]

True, the Gates Foundation is a good example of someone speeding up things that would’ve happened eventually with high probability.

I’ll ask around FRI whether there’ve been any papers comparing these scenarios. We’ve long been discussing them, but I can’t quite point to the one paper that summarizes it all. This is also something I would love to have cowritten by a historian. The only historian EA I know is not doing any history unfortunately.

In response to Red teaming GiveWell
Comment author: Peter_Hurford  (EA Profile) 30 May 2017 02:06:55AM 1 point [-]

Someone once asked why Bill Gates doesn’t close the funding gap for GiveWell-recommended charities. The most demonstrably cost-effective interventions are going to be funded by the international community anyway, even if we do nothing.

This is likely true, but even if the international community takes up the most cost-effective interventions, there are still demonstrably cost-effective interventions that it doesn't fund. For example, AMF is very cost-effective and still has a funding gap.

Comment author: Telofy  (EA Profile) 30 May 2017 04:11:32PM 1 point [-]

I’d still say that the original point stands. Developed countries had no problem taking care of their own malaria problems. They also still had a few more chemicals available then that don’t work anymore today, but if we still had malaria in Germany today, the government would surely find a way to eliminate it. The trajectory of developing countries indicates that many will cease to be developing countries within half a century or so, and then the same logic will apply to them.

Cost-effectiveness in this cause area means getting the money in early to speed up positive developments that would likely happen anyway, just a few decades later. That saves a few decades of suffering, which is valuable, but it probably doesn’t compare to trajectory-changing interventions. Rather than increase the speed of a development that is already going in the right direction, trajectory-changing interventions can potentially affect very long periods of the future.

That’s probably the comparison Arunbharatula is drawing where international development doesn’t look as strong as some x-risk or values spreading interventions.

In response to Red teaming GiveWell
Comment author: Telofy  (EA Profile) 28 May 2017 10:51:44AM 2 points [-]

Quick question: The paper on the PAHO-HANLON approach doesn't include the term "equity." What does it mean in this context, and where did you find an explanation that might be helpful for me as well? Positioning is very relevant to something I'm writing at the moment, and so might be this equity thing.

I love having so many directions of improvement summarized in one post. In some cases my critiques would go in the opposite direction: not too little evidence for or against something but too great risk-aversion, perhaps as a concession to donors who are biased by considerations from the area of private investments, which don't carry over to commons.

Sorry, very compressed because I'm typing on my phone.

In response to comment by Telofy  (EA Profile) on Red teaming GiveWell
Comment author: Telofy  (EA Profile) 29 May 2017 11:41:28AM *  1 point [-]

The best explanation I’ve found so far (pages 133–135).

Equity is meant in the justice sense, not in the sense of shares. It’s something that a lot of people care about inherently, and it has strong effects relevant at least to non-negative utilitarians as well, as for example in the GiveDirectly case mentioned above. (This consideration has also often cropped up in the context of village-level vs. household-level randomization in RCTs in my experience.) But I think there are more considerations of this sort that should also be included once we refine the model (and make it less simple), such as the creation or destruction of option value and value of information. (Option in the sense of choice rather than stock option.) Moral Foundations Theory may provide some inspiration for further considerations.

Comment author: PeterSinger 12 May 2017 11:31:18PM 6 points [-]

I don't understand the objection about it being "ableist" to say funding should go towards preventing people becoming blind rather than training guide dogs.

If "ableism" is really supposed to be like racism or sexism, then we should not regard it as better to be able to see than to have the disability of not being able to see. But if people who cannot see are no worse off than people who can see, why should we even provide guide dogs for them? On the other hand, if -- more sensibly -- disability activists think that people who are unable to see are at a disadvantage and need our help, wouldn't they agree that it is better to prevent many people -- say, 400 -- experiencing this disadvantage than to help one person cope a little better with the disadvantage? Especially if the 400 are living in a developing country and have far less social support than the one person who lives in a developed country?

Can someone explain to me what is wrong with this argument? If not, I plan to keep using the example.

Comment author: Telofy  (EA Profile) 17 May 2017 02:26:43PM *  2 points [-]

Here’s what I usually found most unfortunate about the comparison, but I don’t mean to compete with anyone who thinks that the math is more unfortunate or anything else.

  1. The decision to sacrifice the well-being of one person for that of others (even many others) should be hard. If we want to be trusted (and the whole point of GiveWell is that people don’t have the time to double-check all research no matter how accessible it is – plus, even just following a link to GiveWell after watching a TED Talk requires that someone trusts us with their time), we need to signal clearly that we don’t make such decisions lightly. It is honest signaling too, since the whole point of EA is to put a whole lot more effort into the decision than usual. Many people I talk to are so “conscientious” about such decisions that they shy away from them completely (implicitly making very bad decisions). It’s probably impossible to show just how much effort and diligence has gone into such a difficult decision in just a short talk, so I’d rather focus on cases where I am, or each listener is, the one to whose detriment we make the prioritization decision, just like in the Child in the Pond case. Few people would no-platform me because they think it’s evil of me to ruin my own suit.
  2. Sacrificing oneself, or rather some trivial luxury of oneself, also avoids the common objection of why a discriminated-against minority should have to pay when there are [insert all the commonly cited bad things like tax cuts for the most wealthy, military spending, inefficient health system, etc.]. It streamlines the communication a lot more.
  3. The group to whose detriment we need to decide should never be a known, discriminated-against minority in such examples, because these people are used to being discriminated against and their allies are used to seeing them being discriminated against, so when someone seems to be saying that they shouldn’t receive some form of assistance, they have just a huge prior for assuming that it’s just another discriminatory attack. I think their heuristic more or less fails in this case, but that is not to say that it’s not a very valid heuristic. I’ve been abroad in a country where pedestrian crosswalks are generally ignored by car drivers. I’m not going to just blindly walk onto the street there even if the driver of the only car coming toward me is actually one who would’ve stopped for me if I did. My heuristic fails in that case, but it generally keeps me safe.
  4. Discriminated-against minority groups are super few, especially the ones the audience will be aware of. Some people may be able to come up with a dozen or so, some with several dozen. But in my actual prioritization decisions for the Your Siblings charity, I had to decide between groups with such fuzzy reference classes that there must be basically arbitrarily many of such groups. Street children vs. people at risk of malaria vs. farmed animals? Or street children in Kampala vs. people at risk of malaria in the southern DRC vs. chickens farmed for eggs in Spain? Or street children of the lost generation in the suburbs of Kampala who were abducted for child sacrifice but freed by the police and delivered to the orphanage we’re cooperating with vs. …. You get the idea. If we’re unbiased, then what are the odds that we’ll draw a discriminated-against group from the countless potential examples in this urn? This should heavily update a listener toward thinking that there’s some bias against the minority group at work here. Surely, the real explanation is something about salience on our minds or ease of communication and not about discrimination, but they’d have to know us very well to have so much trust in our intentions.
  5. People with disabilities probably have distance “bias” at the same rates as anyone else, so they’ll perceive the blind person with the guide dog as in-group, the blind people suffering from cataracts in developing countries as a completely neutral foreign group, and us as attacking them, making us the out-group. Such controversy is completely avoidable and highly dangerous, as Owen Cotton-Barratt describes in more detail in his paper on movement growth. Controversy breeds an opposition (and one that is not willing to engage in moral trade with us) that destroys option value, particularly by depriving us of the highly promising option to draw on the democratic process to push for the most uncontroversial implications of effective altruism that we can find. Scott Alexander has written about this under the title “The Toxoplasma of Rage.” I don’t think publicity is worth sacrificing the political power of EA for, but that is just a great simplification of Owen Cotton-Barratt’s differentiated points on the topic.
  6. Communication is by necessity cooperative. If we say something, however true it may be, and important members of the audience understand it as something false or something else entirely (that may not have propositional nature), then we failed to communicate. When this happens, we can’t just stamp our collective foot on the ground and be like, “But it’s true! Look at the numbers!” or “It’s your fault you didn’t understand me because you don’t know where I’m coming from!” That’s not the point of communication. We need to adapt our messaging or make sure that people at least don’t misunderstand us in dangerous ways.

(I feel like you may disagree with some of these points for similar reasons that The Point of View of the Universe seemed to me to argue for a non-naturalist type of moral realism, while I “only” try to assume some form of non-cognitivist moral antirealism, maybe emotivism, which seems more parsimonious to me. Maybe you feel, or have good reasons to think, that there is a true language (albeit in a non-naturalist sense), so that it makes sense to say “Yes, you misunderstood me, but what I said is true, because …,” while I’m unsure. I might say, “Yes, you misunderstood me, but what I meant was something you’d probably agree with. Let me try again.”)

Comment author: Telofy  (EA Profile) 17 May 2017 07:00:50AM 0 points [-]

Love your writing style!

Comment author: [deleted] 24 April 2017 08:24:02PM 9 points [-]

I'm deeply sorry. I sometimes say something without catching context. Please understand that I am a newcomer to EA and this forum. I promise I will comment more carefully from now on.

In response to comment by [deleted] on Effective altruism is self-recommending
Comment author: Telofy  (EA Profile) 26 April 2017 11:44:59AM 3 points [-]

Surely your comment would’ve been very informative on its own.

Welcome to the forum! :-D

Comment author: redmoonsoaring 18 March 2017 05:38:04PM 14 points [-]

While I see some value in detailing commonly-held positions like this post does, and I think this post is well-written, I want to flag my concern that it seems like a great example of a lot of effort going into creating content that nobody really disagrees with. This sort of armchair, qualified writing doesn't seem to me like a very cost-effective use of EA resources, and I worry we do a lot of it, partly because it's easy to do and gets a lot of positive social reinforcement, to a much greater degree than bold empirical writing tends to get.

Comment author: Telofy  (EA Profile) 27 March 2017 02:14:15PM 3 points [-]

While enough people are skeptical about rapid growth and no one (I think) wants to sacrifice integrity, the warning to be careful about the politicization of EA is a timely and controversial one because well-known EAs have put a lot of might behind Hillary’s election campaign and the prevention of Brexit, to the point that the lines between private efforts and EA efforts may blur.

Comment author: RomeoStevens 25 February 2017 08:15:29PM *  6 points [-]

Thinking about what to call this phenomenon because it seems like an important aspect of discourse. Namely, making no claims but only distinctions, which generates no arguments. This gave a distinct flavor to Superintelligence, I think intentionally, to create a framework within which to have a dialog absent the usual contentious claims. This was good for that particular use case, but I think that, deployed indiscriminately, it leads to a kind of big tent approach inimical to real progress.

I think potentially it is the right thing for OpenPhil to currently be doing since they are first trying to figure out how the world actually is with pilot grants and research methodology testing etc. Good to not let it infect your epistemology permanently though. Suggested counter force: internal non-public betting market.

Comment author: Telofy  (EA Profile) 26 February 2017 08:13:00PM *  0 points [-]

Namely, making no claims but only distinctions

Or taxonomies. Hence: The Taxoplasma of Ra.

(Sorry, I should post this in DEAM, not here. I don’t even understand this Ra thing.)

But I really like this concept!
