Giving What We Can Pledge thoughts

The Center for Effective Altruism commissioned me to write an essay on Giving What We Can’s pledge to give 10% of your income to the most effective charities. They did not ask that the essay have any particular contents, but I expect that I would not have agreed to write...
Comment author: KelseyPiper 07 April 2016 02:50:53AM 4 points

All your posts on cause prioritization have been really valuable for me, but I think this is my favorite so far. It clearly motivates what you're doing and the circumstances under which we'll end up being forced to do it. It compares the result you got from using a formal mathematical model to the result you got when you tried to use your intuitions and informal reasoning on the same problem, which both helps sanity-check the mathematical model and helps make the case that it's a useful endeavor for people who already have estimates they've arrived at through informal reasoning. And it spells out the framework explicitly enough that other people can use it.

I'm curious why you don't think the specific numbers can be trusted. My instincts are that the cage-free numbers are dubious as to how much they improve animal lives, that your choice of prior will affect this so much it's probably worth having a table of results given different reasonable priors, and that "the value of chickens relative to humans" is the wrong way to think about how good averting chicken suffering is compared to making people happier or saving their lives (chickens are way less valuable to me than humans, but chicken-torture is probably nearly as morally bad as human-torture; I am not sure that the things which make torture bad vary between chickens and people). Are those the numbers that you wanted us to flag as dubious, or were you thinking of different ones?

Comment author: Bernadette_Young 19 August 2015 08:57:19AM 6 points

I think anybody wanting to raise a potentially divisive or negative discussion should think carefully about how likely a given discussion is to be self-defeating, or to yield negative results that outweigh the benefits.

The setting matters a lot here: if you post on Facebook, the discussion gets published in lots of people's feeds in a manner that posters don't control (I find 'likes' on comments I make in the EA FB group from friends I know are not members of that group). Also, the FB policy of only allowing 'upvoting' means that the degree to which people's statements are well or badly received is not well reflected. Finally, listing threads by order of most recent comment keeps threads with pile-ons at the top of the current discussion.

(This also creates an important asymmetry: those who don't care about the discussion being damaging are more likely to continue it, while those who disagree might avoid voicing their disagreement in the hopes that the thread will die away.)

This forum doesn't suffer any of those drawbacks, so I believe it is a better arena for raising these issues for discussion if you reasonably believe there is something important at stake.

Comment author: KelseyPiper 21 August 2015 04:47:53AM 3 points

I really agree here. Other factors that make Facebook conversations particularly inflammatory include Facebook's lack of threading (so you can't easily see who a person is responding to, or whether the tone of the response is appropriate to the original post), the way Facebook comment threads rapidly stack up with hundreds of comments, some only tangentially related to the original post, and the wide variance in moderation schemes. I've been disillusioned by some of the conversations on Facebook, but this comment made me more optimistic that this is a platform issue, not a problem with open discussion of EA concerns.

Comment author: tomstocker 19 August 2015 08:49:13AM 2 points

Those quotes aren't real quotes, right? I recognize the offensive one about 'autistic white nerds' from a weird article by one of the Vox founders, but I would have bet against many of the others having actually been said by anyone.

Comment author: KelseyPiper 19 August 2015 09:05:11PM 4 points

No, sorry, they are not real. And not all of these are pitfalls I've witnessed specifically in EA outreach; the atheist/skeptic community and campus conservative/libertarian groups are where I watched a lot of these mistakes get made.


Pitfalls in Diversity Outreach

(This is an adaptation of a post on my blog.) EA is one of several movements I have seen which have tried to address the problem of a lack of diversity, either demographic diversity (i.e. too many men, too many white people) or ideological diversity (too many programmers, too many...
Comment author: riceissa 05 July 2015 03:48:19AM 3 points

Check whether your school explicitly desires or opposes affiliation with an outside organization.

Do you happen to know why Stanford didn't like the affiliation with external organizations?

Comment author: KelseyPiper 05 July 2015 07:07:40PM 2 points

I think they were concerned that the Stanford brand name would be used for publicity and/or fundraising by organizations outside their control.

Comment author: Marcus_A_Davis 14 April 2015 05:12:15PM 5 points

This is super practical advice that I can definitely see myself applying in the future. The introductions on the sheets seem particularly well-suited to getting people engaged.

Also, "What is the first thing you would do if appointed dictator of the United States?" likely just entered my list of favorite questions to ask in ice-breaker scenarios, many of which have nothing to do with EA.

Comment author: KelseyPiper 16 April 2015 05:45:10AM 3 points

The question we've had the most success with for a regular weekly meetup is "what is something interesting you've learned/read/thought about recently?" The advantage of keeping the question consistent is that people know what to expect, and this particular question avoids most of the disadvantages of consistency (namely that people repeat themselves and get bored). It also tends to provoke fascinating answers.


Meetup : Stanford THINK

Discussion article for the meetup: Stanford THINK
WHEN: 05 October 2014 04:30:00PM (-0700)
WHERE: Stanford University, Old Union rm 121
Stanford's EA group meets here weekly; please stop by!