Comment author: Lila 18 August 2016 03:56:58AM -1 points [-]

"But I think supporting the continuation of humanity and the socialization of the next generation can be considered a pretty basic part of human life."

Maybe it's a good thing at the margins, but we have more than enough people breeding at this point. There's nothing particularly noble about it, any more than it's noble for an EA to become a sanitation worker. Sure, society would fall apart without sanitation workers, but still...

You're entitled to do what you want with your life, but there's no reason to be smug about it.

Comment author: John_Maxwell_IV 23 July 2016 04:23:44PM 1 point [-]

If someone at CEA reads a bunch of studies on a particular topic, and writes several well-cited paragraphs that summarize the literature, this would be appropriate for Wikipedia, no? (I agree other ways of interpreting "research" might not be.)

Comment author: Lila 23 July 2016 03:29:11PM 1 point [-]

You should probably explain what SODIS is.

Comment author: John_Maxwell_IV 23 July 2016 08:41:07AM *  13 points [-]

Exciting stuff!

We’ve already been experimenting with this project over the last six months. People we’ve provided advice for include: entrepreneurs who have taken the Founders’ Pledge and exited; private major donors who contacted us as a result of reading Doing Good Better; former Prime Minister Gordon Brown, for his International Commission on Financing Global Education Opportunity; and Alwaleed Philanthropies, a $30 billion foundation focused on global humanitarianism. This project is still very much in its infancy and we’ll assess its development on an ongoing basis.

Do you have plans to publish summaries of the research you do, e.g. on Wikipedia or the EA Wiki? If I remember correctly, GiveWell was originally "The Clear Fund", and their comparative advantage was supposed to be making the research behind their grants public, instead of keeping research to themselves like most foundations. Making research public lets people criticize it, or base their giving off of it even if they didn't request it. See also. There are certainly reasons to stay quiet in some cases, and I could understand why donors might not want their names announced, but it feels like the bias should be towards publishing.

I'd also challenge you to think about what CEA's "secret sauce" is for doing this research for donors in a way that's superior to whatever other group they would otherwise consult to have it done. I'm not saying that you won't do a better job; I'm just saying it seems worth thinking about.

We think that policy is an important area for effective altruism to develop into

Some people have argued against this. I'm also skeptical. My sense is that

  1. This is an area where it plausibly does make sense to use a non-CEA label, since as soon as you step into the political arena, you are inviting people to throw mud at you.

  2. The highest-leverage interventions may be at the meta-level. For example, the creation of a website whose discussion culture can stay friendly and level-headed even with many participants (I suggested how this might be done at the end of this essay). Or here's a proposal for fighting filter bubbles.

  3. I'm generally skeptical that the intuitions which have worked for EA thus far will transfer well to the political arena. It seems like a much different animal. Again, I'd challenge you to think about whether this is your comparative advantage. The main advantage that comes to mind is that CEA has a lot of brand capital to spend, but doing political work is a good way to accidentally spend a lot of brand capital very quickly if mud is thrown. As a flagship organization of the EA movement, there's also a sense in which CEA draws from a pool of brand capital that belongs to the community at large. If CEA does something to discredit itself (e.g. publicly recommends a controversial policy), it's possible for other EA organizations, or people who have identified publicly as EAs, to catch flak.

As a broad question: I understand it's commonly advised in the business world to focus on a few "core competencies" and outsource most other functions. I'm curious whether this also makes sense in the nonprofit world.

Comment author: Lila 23 July 2016 03:18:13PM 1 point [-]

Do you have plans to publish summaries of the research you do, e.g. on Wikipedia

Wikipedia's policies forbid original research. Publishing the research on the organization's website and then citing it on Wikipedia would also be discouraged, because of exclusive reliance on primary sources. (And the close connection to the subject would raise eyebrows.)

I think this is worth mentioning because I've seen some embarrassing violations of Wikipedia policy on EA-related articles recently.

Comment author: Marcus_Ogren 20 July 2016 12:35:21AM -1 points [-]

A. Basic proposal

Over two billion dollars were spent on the 2012 US presidential election. About half this total was spent to make Obama win and Mitt Romney lose; most of the other half was spent to make Romney win and Obama lose. In aggregate this was incredibly wasteful, and there should be a better way of influencing an election than throwing money into a zero-sum game. Instead of funding opposing advertisements, campaign money should support programs that everyone considers to be beneficial; that is to say, it should go to charity.

It should be possible to create a nonprofit that would implement this, and here's how it could work. The nonprofit, which I shall give the placeholder name of Altruistic Partisanship, would run a website on which people could make donations. For each donation, the donor would specify a political candidate (or party) she wishes to support, as well as an apolitical charity. Altruistic Partisanship would hold onto the money until the end of the current month, at which point the total amount of money raised for each candidate would be tallied up. The candidate who raised more this way would receive a donation equal to the amount raised in excess of what the opposing candidate raised. The remaining money (equal to twice the amount associated with the less-preferred candidate) would go to the charities specified by the donors.

Here's a concrete example. Suppose that Clinton raises $1,000,000 through Altruistic Partisanship and Trump raises $800,000. For each Clinton donor, 1/5 of what they donated would go to the Clinton campaign and 4/5 would go to the charity or charities they specified. For each Trump donor, all of what they donated would go to their preferred charity. Clinton's campaign would receive $200,000, Trump's would receive nothing, and $1,600,000 would go to charity.

As for the marginal effects of a donation, suppose someone donated $100 through Altruistic Partisanship and specified Hillary Clinton as her preferred candidate and AMF as her preferred charity. If Clinton raised more through Altruistic Partisanship, the marginal effect of this donation would be an additional $100 for Clinton's campaign (just as if she had donated directly to the Clinton campaign). If Trump raised more, the marginal effects would be a $100 reduction in Trump's campaign funds, $100 to AMF, and $100 to the (apolitical) charities preferred by Trump's supporters.
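To make the settlement rule concrete, here's a minimal Python sketch of the month-end netting. The function name, the Trump-side charity, and the tie handling are illustrative assumptions on my part, not part of the proposal:

    from collections import defaultdict

    def settle_month(donations):
        """Settle one month of pledges, given as (candidate, charity, amount) tuples.

        The leading candidate's campaign receives only the surplus over the
        trailing side; everything else goes to the donor-specified charities,
        pro rata on the leading side and in full on the trailing side.
        """
        totals = defaultdict(float)
        for candidate, _, amount in donations:
            totals[candidate] += amount

        leader = max(totals, key=totals.get)
        runner_up = max((v for k, v in totals.items() if k != leader), default=0.0)
        surplus = totals[leader] - runner_up

        # Fraction of each leading-side donation that reaches the campaign.
        campaign_share = surplus / totals[leader] if totals[leader] else 0.0

        campaigns, charities = defaultdict(float), defaultdict(float)
        for candidate, charity, amount in donations:
            if candidate == leader:
                campaigns[candidate] += amount * campaign_share
                charities[charity] += amount * (1.0 - campaign_share)
            else:
                charities[charity] += amount  # trailing side: everything to charity

        return dict(campaigns), dict(charities)

    # The example above: Clinton $1,000,000 vs. Trump $800,000.
    # (GiveDirectly stands in for whatever the Trump donors chose.)
    pledges = [("Clinton", "AMF", 1000000), ("Trump", "GiveDirectly", 800000)]
    print(settle_month(pledges))
    # ({'Clinton': 200000.0}, {'AMF': 800000.0, 'GiveDirectly': 800000.0})

Note that in an exact tie the surplus is zero and every dollar goes to charity, which seems like the right behavior for this mechanism.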

Comment author: Lila 20 July 2016 06:40:12PM *  0 points [-]

It feels like telling two rival universities to cut their football programs and donate the savings to AMF. "Everyone wins!"

Anyway, two billion dollars isn't that much in the scheme of things. I remember reading somewhere that Americans spend more money on Halloween candy than on politics.

Comment author: kbog  (EA Profile) 16 July 2016 02:22:05AM 0 points [-]

Interesting. Well, if opiates simply aren't that pleasurable, then it doesn't say anything about utilitarianism either way. If people experienced things that were really pleasurable but still felt it would be bad to keep experiencing them, that would be a strike against utilitarianism. If people experienced total pleasure and preferred sticking with it after total reflection and introspection, then that would be a point in favor of utilitarianism.

In response to comment by kbog  (EA Profile) on EA != minimize suffering
Comment author: Lila 16 July 2016 03:13:26AM 1 point [-]

My point was that opiates are extremely pleasurable but I wouldn't want to experience them all the time, even with no consequences. Just sometimes.

Comment author: kbog  (EA Profile) 13 July 2016 03:21:23PM *  1 point [-]

Have you tried opiates?

If not, it doesn't seem right to make a judgement on the matter!

In response to comment by kbog  (EA Profile) on EA != minimize suffering
Comment author: Lila 15 July 2016 04:49:44AM 0 points [-]

I've had vicodin and china white and sometimes indulge in an oxy. They're quite good, but it hasn't really changed my views on morality. Despite my opiate experience, I'm much less utilitarian than the typical EA.

Comment author: ClaireZabel 14 July 2016 01:56:53AM 4 points [-]

I think your argument is actually two: 1) It is not obvious how to maximize happiness, and some obvious-seeming strategies to maximize happiness will not in fact maximize happiness. 2) You shouldn't maximize happiness.

(1) is true; I think most EAs agree with it, most people in general agree with it, I agree with it, and it's pretty unrelated to (2). It means maximizing happiness might be difficult, but it says nothing about whether it's theoretically the best thing to do.

Relatedly, I think a lot of EAs agree that to maximize happiness we must sometimes incur some suffering. To obtain good things, we must endure some bad. Not realizing that, and always avoiding suffering, would indeed have bad consequences. But the fact that it is true, and important, says nothing about whether it is good. It is the case now that eating the food I like most would make me sick, but that doesn't tell me whether I should modify myself to enjoy healthier foods more, if I were able to do so.

Put differently: is the fact that we must sometimes endure suffering to get happiness good in itself, or is it an inconvenient truth we should (remember, but) change, if possible? That's a hard question, and I think it's easy to slip into the trap of telling people they are ignoring a fact about the world in order to avoid hard ethical questions about whether the world can and should be changed.

Comment author: Lila 15 July 2016 04:43:36AM 2 points [-]

I agree that points 1 and 2 are unrelated, but I think most people outside EA would agree that a universe of happy bricks is bad. (As I argued in a previous post, it's pretty indistinguishable from a universe of paperclips.) This is one problem that I (and possibly others) have with EA.

Comment author: cdc482 14 July 2016 10:34:41PM 0 points [-]

EA is an evolving movement, but the reasons for prioritizing violence and poor governance in the developing world seem weak. It's certainly altruistic, and the amount of suffering it addresses is enormous. However, the world is in such a sad state of affairs that I don't think such a complex and unexplored cause will compete with charities addressing basic needs like alleviating poverty, or even with OpenPhil's current agenda of prison reform and factory-farm suffering. That said, you could start exploring. Isn't that how the other causes became mainstream within the EA movement?

Comment author: Lila 15 July 2016 04:40:26AM 0 points [-]

I'd be happy if the EA movement became interested in this, just as I'd be happy if the Democratic Party did. But my point was that the label EA means nothing to me. I follow my own views, and it doesn't matter to me what this community thinks of them. Just as you're free to follow your own views, regardless of EA.

Comment author: Ant_Colony 13 July 2016 07:24:52PM 2 points [-]

EAs are pushing for a very specific agenda and have very specific values

Uh, what? Since when?

Comment author: Lila 14 July 2016 02:55:06PM 1 point [-]

Yeah, it's confusing, because the general description is very vague: do the most good in the world. EAs are often reluctant to be more specific than that. But in practice EAs tend to make arguments from a utilitarian perspective, and the cause areas have been well-defined for a long time: GiveWell-recommended charities (typically global health), existential risk (particularly AI), factory farming, and self-improvement (e.g. CFAR). There's nothing terribly wrong with these causes, but I've become interested in violence and poor governance in the developing world. EA just doesn't have much to offer there.
