Comment author: kbog 10 September 2018 12:08:35AM 7 points

Discord lets you separate servers into different channels for people to talk about different things. There is already an EA Discord, and of course new and near-term EAs are welcome there. I think it would be bad if we split things like this, because the more near-term EAs isolate themselves, the more "alienated" people will feel elsewhere, so it will be a destructive feedback loop. You're creating the problem that you are trying to solve.

Also, it would reinforce the neglect of mid-term causes, which have always gotten too little attention in EA.

I ask that far-future effective altruists and people whose priority cause area is AI risk or s-risks do not participate.

Yeah, this isn't good policy. It should be pretty clear that this is how groupthink happens, and you're establishing it as a principle. I get that you feel alienated because, what, 60% of people have a different point of view? (perish the thought!) And you want to help with the growth of the movement. But hopefully you can find a better way to do this than creating an actual echo chamber. It's clearly a poor choice as far as epistemology is concerned.

You're also creating the problem you're trying to solve in a different way. Whereas most "near-term EAs" enjoy the broad EA community perfectly well, as people hear about your server you're reinforcing an assumption that they can't get along, and that they should expect EA to "alienate" them. As soon as people are pointed towards a designated safe space, they're going to assume that everything on the outside is unfriendly to them, and that will bias their perceptions going forward.

You are likely to have a lighter version of the problem that Hatreon had with Patreon, Voat with Reddit, etc.: whenever a group of people has a problem with the "mainstream" option and someone tries to create an alternative space, the first people who jump ship to the alternative will be the highly motivated people on the extreme end of the spectrum, who are the most closed-minded and intolerant of the mainstream, and they are going to set the norms for the community henceforth. Don't get me wrong: it's good to expand EA with new community spaces and to be more appealing to new people, and it is always nice to see people put effort into new ideas for EA. But this plan is very flawed, and I strongly recommend that you revise it.

Comment author: ozymandias 10 September 2018 02:59:57AM 0 points

I do not intend Near-Term EAs to be participants' only space to talk about effective altruism. People can still participate on the EA forum, the EA Facebook group, local EA groups, Less Wrong, etc. There is not actually any shortage of places where near-term EAs can talk with far-future EAs.

Near-Term EAs has been in open beta for a week or two while I ironed out the kinks. So far, I have not found any issues with people being unusually closed-minded or intolerant of far-future EAs. In fact, we have several participants who identify as cause-agnostic and at least one who works for a far-future organization.


Near-Term Effective Altruism Discord

I have started a Discord server for near-term effective altruists. (If you haven't used Discord before, it's a pretty standard chat server. Most of its functions are fairly self-explanatory.) Most of my effective altruist friends focus on the far future. While far-future effective altruists are great, being around them all...
Comment author: ozymandias 25 April 2018 07:36:14PM 11 points

The EA community climate survey linked in the EA survey has some methodological problems. When academics study sexual harassment and assault, it is generally agreed that one should describe specific acts (e.g. "has anyone ever made you have vaginal, oral, or anal sex against your will using force or a threat of force?") rather than use vague terms like harassment or assault. People typically disagree on what harassment and assault mean, and many people choose not to conceptualize their experiences as harassment or assault. (This is particularly true for men, since many people believe that men by definition can't be victims of sexual harassment or assault.) Similarly, few people will admit to perpetrating harassment or assault, but more people will admit to (for example) touching someone on the breasts, buttocks, or genitals against their will.

I'd also suggest using a content warning before asking people about potentially traumatic experiences.

Comment author: TruePath 11 May 2017 07:37:31AM 0 points

This feels like nitpicking that gives the impression of undermining Singer's original claim when in reality the figures support it. I have no reason to believe Singer was claiming that, of all possible charitable donations, trachoma surgery is the most effective; he merely chose it to illustrate a stunningly large difference in cost-effectiveness between charitable donations used for comparable ends (both concern blindness, so there are no hard comparisons across kinds of suffering/disability).

I agree that within the EA community, and when presenting EA analyses of cost-effectiveness, it is important to be upfront about the full complexity of the figures. However, Singer's purpose at TED isn't to carefully pick the most cost-effective donations but to force people to confront the fact that cost-effectiveness matters. While those of us already in EA might find a statement like "We prevent 1 year of blindness for every 3 surgeries done, which on average cost..." perfectly compelling, audience members who aren't yet persuaded simply tune out. After all, it's just more math talk, and they are interested in emotional impact. The only way to convince them is to stop insisting on getting the numbers perfectly right and to focus on the emotional impact of choosing to help a blind person in the US get a dog rather than helping many people in poor countries avoid blindness.

Now, it's important that we don't simplify in misleading ways, but even with the qualifications here it is obvious that it still costs orders of magnitude more to train a dog than to prevent blindness via this surgery. Moreover, once one factors in considerations like pain, the imperfect replacement for eyes that a dog provides, etc., the original numbers are probably too favorable to dog training as far as relative cost-effectiveness goes.
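To spell out the arithmetic behind "orders of magnitude" (a minimal sketch using only the ratio quoted above; the surgery cost c and the dog-training figure below are placeholders, since the actual dollar amounts are elided here):

cost per year of blindness prevented = 3 × c (3 surgeries per year of blindness prevented)
relative cost of dog training = (cost of training one dog) / (3 × c)

If, purely for illustration, training one dog cost 1,000c, the ratio would be 1,000c / 3c ≈ 333, i.e. between two and three orders of magnitude.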

This isn't to say that your point here isn't important for people inside EA making estimates, or for GiveWell analyses, or the like. I'm just pointing out that it's important to distinguish the kind of thing being done at a TED talk like this from the kind of thing being done by GiveWell. So long as the big picture is still in place after people leave the TED talk and do their research (dogs >>>> trachoma surgery in cost), it's a victory.

Comment author: ozymandias 11 May 2017 10:11:58PM 13 points

If we're ignoring getting the numbers right and instead focusing on the emotional impact, we have no claim to the term "effective". This sort of reasoning is why epistemics around do-gooding are so bad in the first place.

In response to Why I left EA
Comment author: ozymandias 19 February 2017 07:42:09PM 9 points

I'd be interested in an elaboration on why you reject expected value calculations.

My personal feeling is that expected-value calculations with very small probabilities are unlikely to be helpful, because my calibration for these probabilities is very poor: a one in ten million chance feels identical to a one in ten billion chance for me, even though their expected-value implications are very different. But I expect to be better-calibrated on the difference between a one in ten chance and a one in a hundred chance, particularly if, as is true much of the time in career choice, I can look at data on the average person's chance of success in a particular career. So I think that high-risk high-reward careers are quite different from Pascal's muggings.
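To make the calibration point concrete (V below is an arbitrary payoff introduced only for illustration; it isn't a figure from the comment): the expected value of a gamble is EV = p × V, so

EV(one in ten million) = V / 10,000,000
EV(one in ten billion) = V / 10,000,000,000

The two differ by a factor of 1,000, even though the two probabilities feel subjectively identical.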

Can you explain why (and whether) you disagree?

Comment author: Fluttershy 09 February 2017 03:34:36AM 9 points

It’s not a coincidence that all the fund managers work for GiveWell or Open Philanthropy.

Second, they have the best information available about what grants Open Philanthropy are planning to make, so have a good understanding of where the remaining funding gaps are, in case they feel they can use the money in the EA Fund to fill a gap that they feel is important, but isn’t currently addressed by Open Philanthropy.

It makes some sense that there could be gaps which Open Phil isn't able to fill, even if Open Phil thinks they're no less effective than the opportunities they're funding instead. Was that what was meant here, or am I missing something? If not, I wonder what such a funding gap for a cost-effective opportunity might look like (an example would help).

There's a part of me that keeps insisting that it's counter-intuitive that Open Phil is having trouble making as many grants as it would like while also employing people who will manage an EA Fund. I'd naively think that there would be at least some sort of tradeoff between producing new suggestions for things the EA Fund might fund and new things that Open Phil might fund. I suspect you're already thinking closely about this, and I would be happy to hear everyone's thoughts.

Edit: I'd meant to express general confidence in those who had been selected as fund managers. Also, I have strong positive feelings about epistemic humility in general, which also seems highly relevant to this project.

Comment author: ozymandias 09 February 2017 02:48:50PM 4 points

IIRC, Open Phil often wants to not be a charity's only funder, which means they leave the charity with a funding gap that could maybe be filled by the EA Fund.
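As a purely hypothetical illustration of that logic (the 50% cap and the dollar figures are invented for the example, not taken from Open Phil's actual policy): if Open Phil limited itself to half of a charity's room for more funding, a charity that could productively absorb $1M would be left with a $500K gap that a smaller funder like the EA Fund could fill.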

Comment author: the_jaded_one 29 January 2017 04:35:12PM 1 point

reducing deportations of undocumented immigrants would reduce incarceration (through reducing the number of people in ICE detention)

That is true, but it is a politicized inference. You could also reduce the number of people in ICE detention at any given time by deporting them much more quickly. Or you could reduce the number of undocumented immigrants by making it harder for them to get in in the first place, for example by building a large wall on the southern US border.

So I would characterize this as a politically biased opinion first and foremost. It's not even an opinion that requires being informed: it's obvious that you could reduce incarceration by releasing people from detention and just letting them have whatever they were trying to illegally take. You don't need a law degree to make this inference, but you do need a political slant to claim that it's a good idea.

And the totality of policies espoused by people such as Chloe Cockburn would be to flood the US with even more immigrants from poorer countries, not just to grant legal status to existing ones. This is entryism, and it is a highly political move that many people are deeply opposed to because they see it as part of a plan to wipe them and their culture out. I don't think that's a good fit for an EA cause; even if you think it's a good idea, it makes sense to separate it from EA.

Comment author: ozymandias 29 January 2017 05:40:21PM 0 points

Well, yes, anyone can come up with all sorts of policy ideas. If a person has policy expertise in a particular field, it allows them to sort out good policies from bad ones, because they are more aware of possible negative side effects and unintended consequences than an uninformed person is. I don't think the fact that a person endorses a particular policy means that they haven't thought about other policies.

Is your claim that Chloe Cockburn has failed to consider policy ideas associated with the right-wing, and thus has not done her due diligence to know that what she recommends is actually the best course? If so, what is your evidence for this claim?

Comment author: the_jaded_one 29 January 2017 03:55:42PM 4 points

I think dividing these three claims more clearly would make it easier for me to follow your argument:

  • effective altruist charity suggestion lists should not endorse political charities.

This is a rather large topic; I don't think it would be wise to try to specify and defend that abstract claim in the same post as talking about a specific situation. I take it as given, at least here. Perhaps I will do a follow-up, but I think it would be hard to do the topic justice in, say, the 5-10 hours I realistically have.

Of course, an identical critique applies to animal welfare charities: many, many traditionalists/conservatives/non-social-justice-people are turned off by animal welfare activism.

Animal welfare activism is controversial, but it hasn't been subsumed into the culture war in the way immigration, race, and social justice have. Some parts of animal welfare activism, such as veganism, are left-associated, but other parts, like wild animal suffering and synthetic meat, most certainly are not. So in my mind, animal welfare activism is suitable for EA involvement.

And xrisk charities tend to turn off, to a first approximation, everyone.

The claim that AI risk is off-putting is becoming less true over time, but EA should not be aiming to appeal to everyone. Rather, I think that EA should be aiming not to take sides in tribal wars.

Is your belief that it is morally wrong to ever specifically help one group because you believe they are worse off than other groups?

No, but in the specific case of the US culture war I think it is a bad idea to move in the "Black lives matter" direction. In the case of the tradeoff between incarceration and public safety, I don't think there is any good reason to make it into a race issue, because that immediately sends the signal that you are interested in raising the status and outcomes of your "favorite" race at the cost of other races. This is a tradeoff situation where benefits targeted at a specific group will harm people who are not from that group in a fairly direct way.

On the other hand, if GiveDirectly gives cash to women in some third-world country, and that cash comes from voluntary payments in the West, it is going to be an improvement for everyone in the receiving community as their local economy is stimulated.

Comment author: ozymandias 29 January 2017 05:35:12PM 3 points

I don't think it would be wise to try to specify and defend that abstract claim in the same post as talking about a specific situation. I take it as given, at least here. Perhaps I will do a follow-up, but I think it would be hard to do the topic justice in, say, the 5-10 hours I realistically have.

I am confused. If you took it as given, why bother talking about whether Alliance for Safety and Justice and Cosecha are good charities? It surely doesn't matter if someone is good at doing something that you think they shouldn't be doing in the first place. Perhaps you intended to say that you mean to discuss the object-level issue of whether these charities are good and leave aside the meta-level issue of whether EA should be involved in politics, in which case I am puzzled about why you brought up the meta-level issue in your post.

Animal welfare activism is controversial, but it hasn't been subsumed into the culture war in the way immigration, race, and social justice have. Some parts of animal welfare activism, such as veganism, are left-associated, but other parts, like wild animal suffering and synthetic meat, most certainly are not. So in my mind, animal welfare activism is suitable for EA involvement.

I disagree that animal welfare activism hasn't been subsumed into the culture war. For instance, veganism is a much more central trait of the prototypical hippie than immigration opinions are. PETA is significantly more controversial than any equally prominent immigration charity.

I think that wild-animal suffering and synthetic meat are mostly not part of the culture war because they are obscure. I expect that they would become culture-war issues as soon as they become more prominent. Do you disagree? Or do you think that the appropriate role of EA is to elevate issues into culture-war prominence and then step aside? Or something else?

The claim that AI risk is off-putting is becoming less true over time, but EA should not be aiming to appeal to everyone. Rather, I think that EA should be aiming not to take sides in tribal wars.

Do you mean that EA shouldn't take sides in e.g. deworming, because that's a tribal war between economists and epidemiologists? Or do you mean that they shouldn't take sides in issues associated with the American left and right, even if they sincerely believe that one of those issues is the best way to improve the world? Or something else?

Comment author: the_jaded_one 29 January 2017 03:03:51PM 2 points

Informed opinions can still be biased, and we are being asked to "trust" her.

I am uncertain why someone would choose to figure out what other people's area of expertise is from Twitter.

Well, I am worried about political bias in EA. Her political opinions are supremely relevant.

On a strictly legal question such as "In situation X, does law Y apply?" I would definitely trust her more than I would trust myself. But that is not the question being asked. The question being asked is "Will the action of funding Cosecha reduce incarceration while maintaining public safety?", with the follow-up question of "Or is this about increasing illegal immigration by making it harder to deport illegals, opposing Trump, and generally supporting left-wing causes?"

I don't think that she can claim special knowledge or lack of bias in answering those questions. I think it's hard for anyone to.

Comment author: ozymandias 29 January 2017 04:00:24PM 3 points

I am perhaps confused about what your claim is. Do you mean to say "Chloe Cockburn does not have expertise except in the facts of the law and being a left-wing anti-Trump activist"? Or "Chloe Cockburn has a good deal of expertise in fields relevant to the best possible way to reduce mass incarceration, but her opinion is sadly biased because she has liberal political opinions"?

Regarding her Twitter, I think Chloe Cockburn might have an informed opinion that reducing deportations of undocumented immigrants would reduce incarceration (through reducing the number of people in ICE detention) while maintaining public safety. That would cause her both to recommend Cosecha and to advocate on her Twitter feed for reducing deportations. Indeed, it is very common for people to do awareness-raising on Twitter for causes they believe are highly effective: if your argument were taken to its endpoint, we ought not trust GiveWell because its employees sometimes talk about how great malaria nets and deworming are on social media.

Probably, like all people, Chloe Cockburn supports the causes she supports for both rational and irrational reasons. That is something to take into account when deciding how seriously to take her advice. But that is also a fully general counterargument against ever taking advice from anyone.

Comment author: ozymandias 29 January 2017 02:49:36PM 13 points

This post seems to me to move somewhat incoherently between:

  • effective altruist charity suggestion lists should not endorse political charities.
  • effective altruist charity suggestion lists should specifically not endorse anti-racist and pro-undocumented-immigrant charities.
  • there is not sufficient evidence to suggest that Alliance for Safety and Justice and Cosecha specifically are effective.

I think dividing these three claims more clearly would make it easier for me to follow your argument.

It would also be more persuasive, for me, if you elaborated more on what your arguments actually were. For instance, on the issue of whether 80,000 Hours should endorse political charities, you mention that it might turn off "traditionalists/conservatives and those who are uninitiated to Social Justice ideology." Of course, an identical critique applies to animal welfare charities: many, many traditionalists/conservatives/non-social-justice-people are turned off by animal welfare activism. And xrisk charities tend to turn off, to a first approximation, everyone. You might, of course, believe that effective altruists should only work on global poverty issues. But it seems like an odd oversight to me to not either address animal welfare and xrisk charities (to which far more money is moved than to Cosecha) or explain why you believe animal welfare and xrisk charities are different.

Similarly, your argument against Alliance for Safety and Justice appears to mostly be that they specialize in helping people of color. To me, this does not seem like an obvious point against them; the question is whether specializing in helping people of color causes more benefit to the world than helping both white people and people of color equally. There is a prima facie case that the former does; after all, many people believe that dysfunctional policing in black and Latino communities leads to both increased crime and mass incarceration. But you seem to disagree, and I'm not sure why. You seem to oppose selective release of black and Latino prisoners (which does not seem to be a policy ASJ is in favor of, although perhaps I'm wrong) and to believe that an organization specializing in helping men would be a reductio ad absurdum. I don't, actually, see any problems with donating to an organization that primarily helps men if that seems to be the best way to reduce mass incarceration. Is your belief that it is morally wrong to ever specifically help one group because you believe they are worse off than other groups? (If so, how do you feel about GiveDirectly targeting worse-off people with their cash transfers, and having considered the possibility of only transferring cash to women?)
