Comment author: Vidur_Kapur  (EA Profile) 09 July 2017 03:37:50PM *  2 points

Thank you for the interesting post; you make some strong arguments for moral inclusivity.

I'm less confident that the marketing gap, if it exists, is a problem, but there may be ways to sell the more 'weird' cause areas, as you suggest. However, even when those causes are mentioned, people may still get the impression that EA is mostly about poverty. It seems to me that the other causes would have to be explained in the same depth as poverty (looking at specific charities in those cause areas, as well as cost-effectiveness estimates where they exist, for instance) for that impression to fade.

While I do agree that a good number of newcomers likely perceive a marketing gap (based solely on my intuition), do we have any solid evidence that it is perceived by newcomers in particular?

Or is it mainly perceived by more 'experienced' EAs (many of whom may prioritise causes other than global poverty) who feel that sufficient weight isn't being given to other causes, or who feel guilty about giving an impression that is misleading relative to their own views (views formed from being around others who think like them)? If the latter, the marketing gap may be less problematic, and less likely to blow up in our faces.

Comment author: Peter_Hurford  (EA Profile) 09 July 2017 12:41:05AM *  19 points

It's worth noting that the 2017 EA Survey (data collected but not yet published), the 2015 EA Survey, and the 2014 EA Survey all have global poverty as the most popular cause (by plurality) among EAs in these samples, and by a good-sized margin. So it may not be the case that the EA movement is misrepresenting itself by focusing on global poverty (even if movement leaders may think differently from the movement as a whole).

Still, it is indeed the case that causes other than global poverty are, taken together, more popular than global poverty itself, which would potentially argue for a more diverse presentation of what EA is about, as you suggest.

Comment author: Vidur_Kapur  (EA Profile) 09 July 2017 03:12:32PM 0 points

And, as Michael says, even the perception that EA is misrepresenting itself could potentially be harmful.

In response to Why I left EA
Comment author: Vidur_Kapur  (EA Profile) 25 February 2017 10:29:40AM *  0 points

I agree with the characterization of EA here: it is, in my view, about doing the most good that you can do, and EA has generally defined "good" in terms of the well-being of sentient beings. It is cause-neutral.

People can disagree on whether potential beings (who would not exist if extinction occurred) have well-being (the total view vs. the prior-existence view), on whether non-human animals have well-being, and on how much well-being a particular intervention will produce, but they don't arbitrarily discount the well-being of sentient beings in a speciesist manner, or in a manner which discriminates against potential future beings. At least, that's the strong form of EA. It doesn't require one to be a moral realist, though it is very close to utilitarianism.

If I'm understanding this post correctly, the "weak form" of EA - donating more, and donating more effectively, to causes you already care about, or even just donating more effectively given the resources you're willing to commit - is not distinctive enough for Lila to stay. I suspect, though, that many EAs (particularly those who are only familiar with the global poverty aspect of EA) endorse only this weak form, while the more vocal EAs are the ones who endorse the strong form.

Comment author: Evan_Gaensbauer 30 January 2017 09:23:42PM 1 point

If effective altruists aren't perfect utilitarians because they're human, and humans can't be perfect utilitarians because they're human, maybe the problem is effective altruists trying to be perfect utilitarians despite their inability to do so, and that's why they make mistakes. What do you think of that?

Comment author: Vidur_Kapur  (EA Profile) 31 January 2017 04:48:12PM *  3 points

I don't think this gets us very far. You're making a utilitarian argument (or certainly one consistent with utilitarianism) in favour of not trying to be a perfect utilitarian. Paradoxically, this is exactly what a perfect utilitarian would do given the information they have about their own limits - they're human, as you put it. As someone who believes that utilitarianism is likely to be objectively true, therefore, I already know not to be a perfectionist.

Ultimately, Singer put it best: do the most good that you can do.

Comment author: Vidur_Kapur  (EA Profile) 10 December 2016 06:21:00PM *  2 points

The main problem with this post, in my view, is that in some places it's still trying to re-run the election debate. The relevant question is no longer which of Trump or Clinton poses the bigger risk or would cause more net suffering, but how bad Trump is on his own and what we can do to reduce the risks that arise from his Presidency.

I agree that Trump's views on Russia reduce global catastrophic risk (although his recent appointments seem fairly hawkish towards Russia). However, he'll likely increase tensions in Asia, and his views on climate change seem to me to be a major risk.

In terms of values, opinion polls suggest that immigrants to Western nations have better attitudes than people in their native countries. Furthermore, when immigrants return to their native countries, they often take back the values and norms of their host countries. I'm not saying this to pass judgement on whether immigration on this scale is good or bad, just to make the point that our aim is to make the world a better place, not to decrease crime rates in Europe.

That said, far-right extremists are on the rise in both the United States and Europe (thanks in part to irrational overreactions and hyperbolic claims that law and order is breaking down, which is patently false as others have said, and in part to a number of false beliefs about immigration and about immigrants themselves, Muslim or not). I think that one way to stop them from taking power in elections and from attacking immigrants, refugees and others is to give them the sense that they have control over 'their' borders; in other words, tactically retreating on the issue of immigration may well be a good thing. Did we need to elect Trump, with all of the risks that come with his Presidency, in order to do that?

I don't know. But I do know that Trump has been elected now, and that many of his stated policies are terrible. If individual EAs think that trying to change the policies of the Trump administration from the inside would be effective (as Peter Singer has suggested), then I'd say that's plausibly true for a small number of EAs.

In general, I think it's true that a small number of EAs going into party politics would be effective, over and above the policy-change focus which already exists in the EA community and some of its organisations, but this should be done on an individual basis: EA-affiliated groups and organisations should not get involved in party politics.

Comment author: Vidur_Kapur  (EA Profile) 14 November 2016 07:04:13PM *  3 points

Just a few thoughts.

Firstly, Trump's agricultural advisors seem to be very hostile to animal welfare. This may mean that we need more people working on farmed animal welfare, not fewer.

In terms of going into politics, the prospect of having a group of EAs, and perhaps even an EA-associated organisation, doing regular, everyday politics may put some people off the movement (depending on your view of whether EA is net positive or net negative overall, this may be bad or good).

While Sentience Politics, the Open Philanthropy Project and some others I may have missed do take part in political activities, they focus on specific policies, whereas I suspect that what some people are proposing would involve a systematic attempt to engage in party politics.

I think that even without Trump, the idea of having a very small number of individual EAs (maybe 1 in 1,000) going into politics to try to influence administrations, or even become politicians, was a good one.

But a systematic attempt to engage in party politics would not be a good idea: partly because, even in the EA community, focusing on party politics or even on controversial policies seems to lead to less willingness to consider other points of view; and partly because influencing administrations or becoming a politician on one's own is more likely to make a difference than regular party-political campaigning, even though it is harder to do.

Finally, I think that politics is very important, because it could potentially reduce existential risks as well as spread good values and help ensure that humanity is on the right course for the future; there is therefore no tension between reducing existential risks and values-spreading.

However, for any politicians or political advisors to be able to steer humanity in a positive direction, they need public and corporate support, which is why I believe that spreading anti-speciesism, working on farmed animal suffering, and so on remain highly important too.

Overall, Trump's election has not influenced my beliefs significantly.

Comment author: jasonk 20 September 2016 12:35:24AM 2 points

In contemporary ethics, Derek Parfit has tried to find convergence in his 'On What Matters' books.

Comment author: Vidur_Kapur  (EA Profile) 21 September 2016 03:53:47PM *  0 points

Yeah, I'd say Parfit is probably the leading figure when it comes to trying to find convergence. If I understand his work correctly, he initially sought convergence among normative ethical theories, while taking a more zero-sum approach to meta-ethics, but in the upcoming Volume Three I think he's trying to find convergence on meta-ethics too.

In terms of normative theories, I've heard that he's trying to resolve the differences between his Triple Theory (which is essentially Rule Utilitarianism) and the other theory he finds most plausible: the Act Utilitarianism of Singer and de Lazari-Radek.

Anyone trying to work on convergence should probably follow the fruitful debate surrounding 'On What Matters'.

Comment author: MichaelDickens  (EA Profile) 17 September 2016 04:50:41PM 5 points

I totally agree. I've had several in-person discussions about the expected sign of x-risk reduction, but hardly anybody writes about it publicly in a useful way. The people I've spoken to in person all had similar perspectives and I expect that we're still missing a lot of important considerations.

I believe we don't see much discussion of this sort because you have to accept a few uncommon (but true) beliefs before this question becomes interesting. If you don't seriously care about non-human animals (which is a pretty intellectually crappy position but still popular even among EAs) then reducing x-risk is pretty clearly net positive, and if you think x-risk is silly or doesn't matter (which is another obviously wrong but still popular position) then you don't care about this question. Not that many people accept both that animals matter and that x-risk matters, and even among people who do accept those, some believe that work on x-risk is futile or that we should focus on other things. So you end up with a fairly small pool of people who care at all about the question of whether x-risk reduction is net positive.

Comment author: Vidur_Kapur  (EA Profile) 18 September 2016 12:02:13PM 2 points

It's also possible that people don't even want to consider the notion that preventing human extinction could be bad, or they may conflate that idea with negative utilitarianism when it could also follow from classical utilitarianism.

For the record, I've thought about writing something about it, but I basically came to the same conclusions that you did in your blog post (I also subscribe to total hedonistic utilitarianism and its implications, i.e. anti-speciesism, concern for wild animals, etc.).

If everyone has similar perspectives, it could be a sign that we're on the right track, but, as you say, it could be that we're missing some important considerations, which is why I also think more discussion of this would be useful.

Comment author: kokotajlod 19 July 2016 11:25:05PM *  2 points

I agree that it's dangerous to generalize from fictional evidence, BUT I think it's important not to fall into the opposite extreme, which I will now explain...

Some people, usually philosophers or scientists, invent or find a simple, neat collection of principles that seems to more or less capture/explain all of our intuitive judgments about morality. They triumphantly declare "This is what morality is!" and go on to promote it. Then, they realize that there are some edge cases where their principles endorse something intuitively abhorrent, or prohibit something intuitively good. Usually these edge cases are described via science-fiction (or perhaps normal fiction).

The danger, which I think is the opposite danger to the one you identified, is that people "bite the bullet" and say "I'm sticking with my principles. I guess what seems abhorrent isn't abhorrent after all; I guess what seems good isn't good after all."

To my mind, this is almost always a mistake. In situations like this, we should revise or extend our principles to accommodate the new evidence, so to speak, even if this makes our total set of principles more complicated.

In science, simpler theories are believed to be better. Fine. But why should that be true in ethics? Maybe if you believe that the Laws of Morality are inscribed in the heavens somewhere, then it makes sense to think they are more likely to be simple. But if you think that morality is the way it is as a result of biology and culture, then it's almost certainly not simple enough to fit on a t-shirt.

A final, separate point: Generalizing from fictional evidence is different from using fictional evidence to reject a generalization. The former makes you subject to various biases and vulnerable to propaganda, whereas the latter is precisely the opposite. Generalizations often seem plausible only because of biases and propaganda that prevent us from noticing the cases in which they don't hold. Sometimes it takes a powerful piece of fiction to call our attention to such a case.

[Edit: Oh, and if you look at what the OP was doing with the Giver example, it wasn't generalizing based on fictional evidence, it was rejecting a generalization.]

Comment author: Vidur_Kapur  (EA Profile) 21 July 2016 12:15:53PM *  2 points

I disagree that biting the bullet is "almost always a mistake". In my view, it often occurs after people have reflected on their moral intuitions more closely than they otherwise would have. Our moral intuitions can be flawed. Cognitive biases can get in the way of thinking clearly about an issue.

Scientists have shown, for instance, that many people's intuitive rejection of entering the Experience Machine is due to the status quo bias: if people's current lives were already being lived inside an Experience Machine, 50% of people would want to stay in the Machine even if they could instead live the lifestyle of a multi-millionaire in Monaco. Similarly, many people's intuitive rejection of the Repugnant Conclusion could be due to scope insensitivity.

Moreover, revising our principles to accommodate each new piece of evidence may introduce inconsistencies into them. And if you're a moral realist, it almost never makes sense to change principles that you believe to be true.

In response to On Priors
Comment author: MichaelDickens  (EA Profile) 26 April 2016 10:35:57PM 3 points

I wasn't sure if this article was the sort of thing the EA Forum audience is interested in, so let me know. I figured it's better to post and get feedback than to not post and never know.

In response to comment by MichaelDickens  (EA Profile) on On Priors
Comment author: Vidur_Kapur  (EA Profile) 27 April 2016 04:00:02PM 2 points

I'm very interested in this sort of stuff, though a bit of the maths is beyond me at the moment!
