In response to comment by Lila on Why I left EA
Comment author: kbog  (EA Profile) 20 February 2017 03:49:27AM *  7 points [-]

I'm clearing up the philosophical issues here. It's fine if you don't agree, but I want others to have a better view of the issue. After all, you started your post by saying that EAs are overconfident and think their views are self evident. Well, what I'm doing here is explaining the reasons I have for believing these things, to combat such perceptions and improve people's understanding of the issues. Because other people are going to see this conversation, and they're going to make some judgement about EAs like me because of it.

But if you explicitly didn't want people to respond to your points... heck, I dunno what you were looking for. You shouldn't expect people not to respond with their own points of view, especially when you voice disagreement on a public forum.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Lila 20 February 2017 05:02:41AM 7 points [-]

You're free to offer your own thoughts on the matter, but you seemed to be trying to engage me in a personal debate, which I have no interest in doing. This isn't a clickbait title, I'm not concern trolling, I really have left the EA community. I don't know of any other people who have changed their mind about EA like this, so I thought my story might be of some interest to people. And hey, maybe a few of y'all were wondering where I went.

In response to Why I left EA
Comment author: kbog  (EA Profile) 20 February 2017 01:42:48AM *  10 points [-]

Like many EAs, I'm a moral anti-realist. This is why I find it frustrating that EAs act as if utilitarianism is self-evident and would be the natural conclusion of any rational person. (I used to be guilty of this.)

You can be a moral realist and be very skeptical of anyone who is confident in a moral system, and you can be an anti-realist and be really confident in a moral system. The metaethical question of realism can affect the normative question over moral theories, but it doesn't directly tell us whether to be confident or not.

Anti-realists reject the claim that any moral propositions are true. So they don't think there is a fact of the matter about what we morally ought to do. But this doesn't mean they believe that anyone's moral opinion is equally valid. The anti-realist can believe that our moral talk is not grounded in facts while also believing that we should follow a particular moral system.

Finally, it seems to me that utilitarians in EA have arguments for their view which are at least as well grounded as those of people with any other moral view in any other community, with the exception of actual moral philosophers. I, for instance, don't think utilitarianism is self-evident. But I think that debunking arguments against moral intuitions are very good, that subjective normativity is the closest pointer to objective normativity that we have, that this implies we ought to give equal respect to the subjective normativity experienced by others, that the von Neumann-Morgenstern axioms of decision-making are valuable for a moral theory and point us towards maximizing expected value, and that there is no reason to be risk averse. I think a lot of utilitarians in EA would say something vaguely like this, and the fact that they don't do so explicitly is no proof that they have no justification whatsoever or no respect for opposing views.

My view is that morality is largely the product of the whims of history, culture, and psychology.

Empirically, yes, this happens to be the case, but realists don't disagree with that. (Science is also the product of the whims of history, culture, and psychology.) They disagree over whether these products of history, culture, and psychology can be justified as true or not.

Any attempt to systematize such complex belief systems will necessarily lead to unwanted conclusions.

Plenty of realists have had this view. And you could be an anti-realist who believes in systematizing complex belief systems as well - it's not clear to me why you can't be both.

So I'm just not sure that your reasoning for your normative ideas is valid, because you're talking as if they follow from your metaethical assumptions when they really don't (at least not without some more details/assumptions).

Note that many people in EA take moral uncertainty seriously, something which is rarely done anywhere else.

Absurd expected value calculations/Pascal's mugging -> Valuing existential risk and high-risk, high-reward careers rely on expected value calculations

Pascal's Mugging is a thought experiment involving an exceptionally low-probability, arbitrarily high-stakes event. It poses a noteworthy counterargument to the standard framework of universally maximizing expected value, and is therefore philosophically interesting. That is, after all, why Nick Bostrom and Eliezer Yudkowsky, two famous effective altruists, developed the idea. But to say that it poses an argument against other expected value calculations is a bit of a non sequitur: nothing stops us from saying that in Pascal's Mugging we shouldn't maximize expected value, but that in existential risk, where the situation is not so improbable and counterintuitive, we should. The whole point of Pascal's Mugging is to show that some cases of maximizing expected value are obviously problematic, but I don't see what this says about all the cases where maximizing expected value is not obviously problematic. If there were a single parsimonious decision theory that was intuitive and worked well in all cases including Pascal's Mugging, then you might abandon maximizing expected value in favor of it, but there is no such theory.
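
To make the contrast concrete, here is a toy comparison. The probabilities and payoffs below are invented purely for illustration, not anyone's actual estimates:

    # Naive expected value, with numbers made up purely for illustration.
    def expected_value(probability, payoff):
        return probability * payoff

    # Pascal's Mugging: the mugger can always name a payoff huge enough to
    # swamp any level of skepticism, which is what makes the case degenerate.
    mugging_ev = expected_value(1e-50, 1e60)   # 1e10 "units of value"

    # An ordinary high-stakes case: a small-but-not-absurd probability of a
    # large-but-bounded payoff, the kind of case x-risk arguments rely on.
    ordinary_ev = expected_value(1e-3, 1e7)    # 1e4 "units of value"

Rejecting "just maximize expected value" for the first, degenerate case doesn't force any particular verdict on the second.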

There are actual reasons people like the framework of maximizing expected value, such as the fact that it's invulnerable to Dutch book arguments and doesn't lead to intransitivity. In Pascal's Mugging, maybe we can accept losing these properties, because it's such a problematic case. But in other scenarios we will want to preserve them.
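
As a rough sketch of the money-pump worry behind that point (the preference cycle and the fee are assumptions made up for the example):

    # An agent with cyclic (intransitive) preferences can be traded in a circle
    # and charged a small fee at each step: it never feels worse off on any
    # single trade, yet it ends up strictly poorer.
    prefers = {("B", "C"), ("A", "B"), ("C", "A")}   # cyclic: A > B > C > A
    fee = 0.01

    holding, cash = "C", 0.0
    for _ in range(3):                               # three full cycles of trades
        for offer in ("B", "A", "C"):
            if (offer, holding) in prefers:          # strictly prefers the offer...
                holding, cash = offer, cash - fee    # ...so accepts and pays the fee
    print(holding, round(cash, 2))                   # back at "C", 0.09 poorer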

It's also worth noting that many of those working on existential risk don't rely on formal mathematical calculations at all, or believe that their cause is very high in probability anyway, as people at MIRI for instance have made clear.

Prioritizing animals (particularly invertebrates) relied on total-view utilitarianism (for me). I value animals (particularly non-mammals) very little compared to humans

But you are wrong about that. Valuing animal interests comparably to humans' is not a uniquely utilitarian principle. Numerous non-utilitarian arguments for this have been advanced by philosophers such as Regan, Norcross, and Korsgaard, and they have been received very well in philosophy. In fact, they are received so well that there is hardly a credible position which involves rejecting the set of them.

You might think that lots of animals just don't experience suffering, but so many EAs agree with this that I'm a little puzzled as to what the problem is. Sure, there are far more people who take invertebrate suffering seriously in a group of EAs than in a group of other people. But there are so many who don't think that invertebrates are sentient that, to be quite honest, this looks less like "I'm surrounded by people I disagree with" and more like "I'm not comfortable in the presence of people I disagree with."

Also, just to be clear, although you never stated it explicitly: the idea that we should make serious sacrifices for others according to a framework of maximizing expected value does not imply utilitarianism. Choosing to maximize expected value is a question of decision theory on which many moral theories don't take a clear side, while the obligation to make significant sacrifices for the developing world has been advanced by non-utilitarian arguments from Cohen, Singer, Pogge, and others. These arguments, too, are considered compelling enough that there is hardly a credible position which involves rejecting the set of them.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Lila 20 February 2017 02:10:45AM 6 points [-]

I don't expect you to convince me to stay.

Maybe I should have said "I'd prefer if you didn't try to convince me to stay". Moral philosophy isn't a huge interest of mine anymore, and I don't really feel like justifying myself on this. I am giving an account of something that happened to me, not making an argument for what you should believe. I was very careful to say "in my view" for non-trivial claims. I explicitly said "Prioritizing animals (particularly invertebrates) relied on total-view utilitarianism (for me)." So I'm not interested in hearing why prioritizing animals does not necessarily rely on total-view utilitarianism.

In response to Why I left EA
Comment author: casebash 20 February 2017 12:59:55AM -1 points [-]

If morality isn't real, then perhaps we should just care about ourselves.

But suppose we do decide to care about other people's interests - maybe not completely, but at least to some degree. To the extent that we decide to devote resources to helping other people, it makes sense that we should do this to the maximal extent possible, and this is what utilitarianism does.

In response to comment by casebash on Why I left EA
Comment author: Lila 20 February 2017 01:03:10AM 3 points [-]

To the extent that we decide to devote resources to helping other people, it makes sense that we should do this to the maximal extent possible

I don't think I do anything in my life to the maximal extent possible.

In response to Why I left EA
Comment author: ozymandias 19 February 2017 07:42:09PM 9 points [-]

I'd be interested in an elaboration on why you reject expected value calculations.

My personal feeling is that expected-value calculations with very small probabilities are unlikely to be helpful, because my calibration for these probabilities is very poor: a one in ten million chance feels identical to a one in ten billion chance for me, even though their expected-value implications are very different. But I expect to be better-calibrated on the difference between a one in ten chance and a one in a hundred chance, particularly if, as is true much of the time in career choice, I can look at data on the average person's chance of success in a particular career. So I think that high-risk high-reward careers are quite different from Pascal's muggings.
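
For concreteness, here's the same point with arbitrary numbers; the payoff figure is purely an assumption for illustration:

    # Arbitrary illustrative payoff: suppose success would save 1,000,000 lives.
    payoff = 1_000_000

    # The pair that "feels identical" differs a thousandfold in expected value:
    print(1e-7 * payoff, 1e-10 * payoff)   # 0.1 vs 0.0001 expected lives

    # The pair we can actually check against base-rate data differs only tenfold,
    # and the data keep the estimate honest:
    print(1e-1 * payoff, 1e-2 * payoff)    # 100000.0 vs 10000.0 expected lives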

Can you explain why (and whether) you disagree?

In response to comment by ozymandias on Why I left EA
Comment author: Lila 19 February 2017 09:53:10PM 8 points [-]

That's a good point, though my main reason for being wary of EV is related to rejecting utilitarianism. I don't think that quantitative, systematic ways of thinking are necessarily well-suited to thinking about morality, any more than they'd be suited to thinking about aesthetics. Even in biology (my field), a priori first-principles approaches can be misleading. Biology is too squishy and context-dependent. And moral psychology is probably even squishier.

EV is one tool in our moral toolkit. I find it most insightful when comparing fairly similar actions, such as public health interventions. It's sometimes useful when thinking about careers. But I used to feel compelled to pursue careers that I hated and probably wouldn't be good at, just on the off chance it would work. Now I see morality as being more closely tied to what I find meaning in (again, anti-realism). And I don't find meaning in saving a trillion EV lives or whatever.


Why I left EA

I don't intend to convince you to leave EA, and I don't expect you to convince me to stay. But typical insider "steel-manned" arguments against EA lack imagination about other people's perspectives: for example, they assume that the audience is utilitarian. Outsider anti-EA arguments are often mean-spirited or misrepresent EA...
Comment author: Lila 18 August 2016 03:56:58AM -1 points [-]

"But I think supporting the continuation of humanity and the socialization of the next generation can be considered a pretty basic part of human life."

Maybe it's a good thing at the margins, but we have more than enough people breeding at this point. There's nothing particularly noble about it, any more than it's noble for an EA to become a sanitation worker. Sure, society would fall apart without sanitation workers, but still...

You're entitled to do what you want with your life, but there's no reason to be smug about it.

Comment author: John_Maxwell_IV 23 July 2016 04:23:44PM 1 point [-]

If someone at CEA reads a bunch of studies on a particular topic, and writes several well-cited paragraphs that summarize the literature, this would be appropriate for Wikipedia, no? (I agree other ways of interpreting "research" might not be.)

Comment author: Lila 23 July 2016 03:29:11PM 1 point [-]

You should probably explain what SODIS is.

Comment author: John_Maxwell_IV 23 July 2016 08:41:07AM *  13 points [-]

Exciting stuff!

We’ve already been experimenting with this project over the last six months. People we’ve provided advice for include: entrepreneurs who have taken the Founders’ Pledge and exited; private major donors who contacted us as a result of reading Doing Good Better; former Prime Minister Gordon Brown, for his International Commission on Financing Global Education Opportunity; and Alwaleed Philanthropies, a $30 billion foundation focused on global humanitarianism. This project is still very much in its infancy and we’ll assess its development on an ongoing basis.

Do you have plans to publish summaries of the research you do, e.g. on Wikipedia or the EA Wiki? If I remember correctly, GiveWell was originally "The Clear Fund", and their comparative advantage was supposed to be making the research behind their grants public, instead of keeping research to themselves like most foundations. Making research public lets people criticize it, or base their giving off of it even if they didn't request it. See also. There are certainly reasons to stay quiet in some cases, and I could understand why donors might not want their names announced, but it feels like the bias should be towards publishing.

I'd also challenge you to think about what CEA's "secret sauce" is for doing this research for donors in a way that's superior to whatever other group they would consult with in order to have it done. I'm not saying that you won't do a better job, I'm just saying it seems worth thinking about.

We think that policy is an important area for effective altruism to develop into

Some people have argued against this. I'm also skeptical. My sense is that

  1. This is an area where it plausibly does make sense to use a non-CEA label, since as soon as you step in to the political arena, you are inviting people to throw mud at you.

  2. The highest leverage interventions may be at the meta-level. For example, creation of a website whose discussion culture can stay friendly and level-headed even with many participants--I suggested how this might be done at the end of this essay. Or here's a proposal for fighting filter bubbles.

  3. I'm generally skeptical that the intuitions which have worked for EA thus far will transfer well to the political arena. It seems like a much different animal. Again, I'd challenge you to think about whether this is your comparative advantage. The main advantage that comes to mind is that CEA has a lot of brand capital to spend, but doing political stuff is a good way to accidentally spend a lot of brand capital very quickly if mud is thrown. As a flagship organization of the EA movement, there's also a sense in which CEA draws from a pool of brand capital that belongs to the community at large. If CEA does something to discredit itself (e.g. publicly recommends a controversial policy), it's possible for other EA organizations, or people who have identified publicly as EAs, to catch flak.

As a broad question: I understand it's commonly advised in the business world to focus on a few "core competencies" and outsource most other functions. I'm curious whether this also makes sense in the nonprofit world.

Comment author: Lila 23 July 2016 03:18:13PM 1 point [-]

Do you have plans to publish summaries of the research you do, e.g. on Wikipedia

Wikipedia's policies forbid original research. Publishing the research on the organization's website and then citing it on Wikipedia would also be discouraged, because of exclusive reliance on primary sources. (And the close connection to the subject would raise eyebrows.)

I think this is worth mentioning because I've seen some embarrassing violations of Wikipedia policy on EA-related articles recently.

Comment author: Marcus_Ogren 20 July 2016 12:35:21AM -1 points [-]

A. Basic proposal

Over two billion dollars were spent on the 2012 US presidential election. About half this total was spent to make Obama win and Mitt Romney lose; most of the other half was spent to make Romney win and Obama lose. In aggregate this was incredibly wasteful, and there should be a better way of influencing an election than throwing money into a zero-sum game. Instead of funding opposing advertisements, campaign money should support programs that everyone considers to be beneficial; that is to say, it should go to charity.

It should be possible to make a nonprofit that would implement this, and here's how it could work. The nonprofit, which I shall give the placeholder name of Altruistic Partisanship, would run a website on which people could make donations. For each donation, the donor would specify a political candidate (or party) she wishes to support as well as an apolitical charity. Altruistic Partisanship would hold onto the money until the end of the current month, at which point the total amount of money raised for each candidate would be tallied up. The candidate who has raised the most money this way would receive a donation equal to the amount of money that candidate has raised in excess of what the opposing candidate has raised. The remaining money (equal to twice the amount associated with the less-preferred candidate) would go to the charities specified by the donors.

Here's a worked example. Suppose that Clinton raises $1,000,000 through Altruistic Partisanship and Trump raises $800,000. For each Clinton donor, 1/5 of what they donated would go to the Clinton campaign and 4/5 would go to the charity or charities they specified. For each Trump donor, all of what they donated would go to their preferred charity. Clinton's campaign would receive $200,000, Trump's would receive nothing, and $1,600,000 would go to charity.

As for the marginal effects of a donation, suppose someone were to have donated $100 through Altruistic Partisanship and specified Hillary Clinton as her preferred candidate and AMF as her preferred charity. If Clinton raised more through Altruistic Partisanship, the marginal effect of this donation is that Clinton's campaign would have an additional $100 (just as if she donated directly to the Clinton campaign). If Trump had raised more, the marginal effects of this donation would be a $100 reduction in Trump's campaign funds, AMF receiving $100, and the (apolitical) charities preferred by Trump's supporters receiving $100.
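
For what it's worth, here is a minimal sketch of how the monthly settlement described above could be computed. The function name and data layout are my own assumptions, not part of the proposal:

    def settle_month(donations):
        """Split one month's donations between campaigns and charities.

        `donations` is a list of (candidate, charity, amount) tuples.
        """
        totals = {}
        for candidate, _, amount in donations:
            totals[candidate] = totals.get(candidate, 0) + amount

        winner = max(totals, key=totals.get)
        runner_up = max((v for k, v in totals.items() if k != winner), default=0)
        # Fraction of each winning-side donation that actually reaches the campaign.
        campaign_fraction = (totals[winner] - runner_up) / totals[winner]

        campaigns, charities = {}, {}
        for candidate, charity, amount in donations:
            to_campaign = amount * campaign_fraction if candidate == winner else 0.0
            campaigns[candidate] = campaigns.get(candidate, 0) + to_campaign
            charities[charity] = charities.get(charity, 0) + (amount - to_campaign)
        return campaigns, charities

    # The worked example above: Clinton raises $1,000,000 and Trump raises $800,000.
    campaigns, charities = settle_month([
        ("Clinton", "AMF", 1_000_000),
        ("Trump", "Some other charity", 800_000),
    ])
    print(campaigns)   # {'Clinton': 200000.0, 'Trump': 0.0}
    print(charities)   # {'AMF': 800000.0, 'Some other charity': 800000.0}
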

Comment author: Lila 20 July 2016 06:40:12PM *  0 points [-]

It feels like telling two rival universities to cut their football programs and donate the savings to AMF. "Everyone wins!"

Anyway, two billion dollars isn't that much in the scheme of things. I remember reading somewhere that Americans spend more money on Halloween candy than on politics.
