Comment author: Linch 16 January 2017 06:22:15AM 0 points

UPDATE: I now have my needed number of volunteers, and intend to launch the experiment tomorrow evening. Please email, PM, or otherwise contact me in the next 12 hours if you're interested in participating.

Comment author: mmKALLL 14 January 2017 07:48:33AM 0 points

Yes, of course, but I haven't seen spambots in the comments so far, and thus would guess that they wouldn't become a problem at the top level either.

Comment author: Linch 15 January 2017 08:37:35AM 2 points

I often see spambots in the comments.

Comment author: BenHoffman 12 January 2017 06:58:34AM * 1 point

I don't think that Ben Todd is proposing (5). I think he's proposing (4), and that this proposed norm would effectively be a tax on criticism. Taxes aren't as costly as bans, and can be good if they pay for something good enough, but in this case I don't think it's worth it.

In particular, applying journalistic standards to criticism of, but not praise of, EA orgs' behavior seems like a weird position to take if what you're interested in is improving the quality of public information.

Comment author: Linch 12 January 2017 07:42:40AM * 0 points

Ah, I'm so sorry for misunderstanding you! I came here from another post which quoted Ben T's comment:

https://srconstantin.wordpress.com/

"In other words: the CEO of 80,000 Hours thinks that people should “run critical posts past the organization concerned before posting”, but also thinks that it might not be worth it for GWWC to address such criticisms because they don’t directly contribute to growth or fundraising, and addressing criticisms publicly might “make the organization look bad.”

This cashes out to saying “we don’t want to respond to your criticism, and we also would prefer you didn’t make it in public.”

It’s normal for organizations not to respond to every criticism — the Coca-Cola company doesn’t have to respond to every internet comment that says Coke is unhealthy — but Coca-Cola’s CEO doesn’t go around shushing critics either."

I think, upon reflection, that while my statement of 5) is too strong, it's a plausible reading even there, and one that these comments point to. I.e., "shushing critics" isn't the same thing as explicit censorship, but it's not that far away.

(Also, Benquo's comment implies this more directly)

"In particular, applying journalistic standards to criticism of, but not praise of, EA orgs' behavior seems like a weird position to take if what you're interested in is improving the quality of public information."

Ah, that's a good point about my inconsistency. I will need to think about this more clearly.

Comment author: BenHoffman 11 January 2017 08:22:39PM * 0 points

That's good to hear. But I didn't think you were saying that criticism is generally harmful - I thought you were saying that failing to check in with GWWC first is harmful in expectation. If so, I'm curious what the most important scenarios are in which it could cause harm to start this sort of conversation in public rather than in private. If not, when do you think this advice does help?

It additionally seemed like you thought that this advice should be applied, not just to criticism of GWWC's own conduct, but to criticism of the idea of the pledge itself - which is already public, and not entirely specific to GWWC, as organizations like The Life You Can Save and REG promote similar pledges. I got this impression because Alyssa's post is limited to discussion of the public pledge itself.

Comment author: Linch 11 January 2017 09:48:12PM * 2 points

EDIT: Ben H's comment below convincingly showed that I misunderstood him. I apologize for contributing to any misinformation.

EDIT 2: Looking further up the comment chain, I think this is a very reasonable reading of Benquo's comment:

"I'm having a hard time reconciling these. In particular, it seems like if you make both these claims, you're basically saying that EAs shouldn't publicly criticize the pledge without GWWC's permission because that undercuts GWWC's goals. That seems very surprising to me. Am I misunderstanding you?"

I think my mistake is that, in haste, I confused the different Bens, who have similar (but far from identical) opinions, and formed an inaccurate model.

Original post:

Reposted from FB; I apologize if the language here is less polished than desired.

1) It's a common courtesy for journalists (and GiveWell) to message the organizations they're writing about for a response.

2) Sometimes said organizations are too busy, etc. to respond to said criticisms.

3) Ben Todd suggested that we have this norm in EA as well.

4) My interpretation of 1)+2) is that you give people a chance to respond/comment on your criticism before airing it, especially if context is missing.

5) Most others have taken 1), 2) and 3) to necessarily imply that orgs should have the right to veto criticism before it airs.

I believe 5) is incorrect because it is very different from the base cases I am aware of (GiveWell asking charities to comment before publishing its charity reports, journalists asking for comment from the people they write about).

Why are people taking 5) as the default interpretation here?

Comment author: cafelow 09 January 2017 07:26:32PM 0 points

I guess we'll find out :)

Comment author: Linch 10 January 2017 01:20:58AM 0 points

I'm not sure we can find out even after the experiment. Since the base rate is at best 10-20%, I would expect random variance to swamp person-to-person differences in skill, credibility, etc.
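To make the intuition concrete, here is a minimal simulation sketch. The numbers are purely illustrative assumptions (a 10% vs. 20% conversion rate and ten conversations per volunteer), not figures from the actual experiment:

```python
import random

# Hypothetical rates, assumed only for illustration: a "typical" pitcher
# converts 10% of conversations, a "skilled" one converts 20%.
def pledges(rate, n_conversations=10):
    """Number of pledges from n_conversations, each succeeding with probability rate."""
    return sum(random.random() < rate for _ in range(n_conversations))

random.seed(0)
trials = 10_000
# How often does pure chance let the typical pitcher tie or beat the skilled one?
ties_or_upsets = sum(pledges(0.10) >= pledges(0.20) for _ in range(trials))
print(f"Typical ties or beats skilled in {ties_or_upsets / trials:.0%} of runs")
```

With samples this small, the tie-or-upset rate is substantial under these assumptions, so observed differences between volunteers would say little about underlying skill.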

Comment author: Peter_Hurford 09 January 2017 10:35:44PM 1 point

Not if Linch can't get more volunteers!

Comment author: Linch 10 January 2017 01:15:53AM 1 point

I have three so far, including yourself!

Comment author: Linch 10 January 2017 01:07:59AM * 10 points

I think this is an important article, and it highlights a critical, nontrivial distinction between altruistic and for-profit work. However, I also think seeing everything completely globally misses an important part of what incentivizes individuals to do good work.

Whenever, e.g., people tell me that after talking to me they've donated to an effective charity, taken the pledge, traded their vote, written an article about EA, etc., I feel a large spike of joy that my actions have quite possibly produced counterfactual good in the world. (Less joy than if I had earned and donated the money myself, but still very significant joy.) I do not feel nearly as much of this joy when learning that, e.g., Will has done the same thing, or that Good Ventures' investments performed slightly better than the market would suggest.

Now, no doubt part of this problem is just a classic issue of scope insensitivity. You can say that if I should feel happy that I raised $X, I should be even happier that Good Ventures or CEA raised $100X or $1000X, and that it's wrong for me not to update in that direction.

But I think there's a different issue here too. My second-order preference for myself is that my own emotions should not serve just as an objective observer, looking from outside to get an impartial view of the world. I am also an agent acting upon the world, and it's very relevant to me that my own emotions accurately and acutely inspire me to do the most important/impactful things.

Thus, it makes sense for me to feel, on a visceral, System One level, directly moved by my own opportunities to have a personal impact, since the work of Good Ventures or CEA is less directly relevant to my own space of actions.

I am open to the idea that seeing myself as a coherent and individual agent is silly (esp. in light of the late Derek Parfit's great works). But most people see themselves as agents, so I feel like the presumption should be that this is a useful approximation. Likewise, perhaps other people (including yourself, Peter) are not significantly inspired to act by their own past emotional states and predictions of future ones, and can do good work regardless of whether they're happy or sad. In that case, I commend you and am working towards being more stoic generally. But ultimately I don't know anybody else's motivations as well as my own, and I know that my own happiness is a very relevant reinforcement and feedback mechanism for my own ability to be altruistically impactful.

TL;DR: while I agree with you that a) seeing the general progress of the EA movement is a good motivational factor and b) interpersonal comparisons are very suboptimal for inspiration, I disagree with the larger thrust of this argument, which seems to imply that I should be inspired more by the efforts of Team EA than by my own counterfactual impact. That in turn seems to imply that the primary use of S1 emotion is to accurately reflect the world, whereas I would argue that it's more important for my own emotions to be used to train my behavior.

Comment author: mmKALLL 06 January 2017 05:37:17AM 0 points

I was wondering why there is a 5 karma requirement for publishing an article. Were there problems before that was put into place?

It would also be handy to be able to get a weekly digest of top-voted posts or something similar in my email.

Comment author: Linch 09 January 2017 06:39:22AM 1 point

One reason this might be good is to prevent spambots from making top-level posts.

Comment author: Michael_S 08 January 2017 04:54:57PM * 4 points

This sounds really great to me. I love the idea of having more RCTs in the EA sphere. I would definitely record how much they are giving 1 year later.

I also think it's worth having a holdout set. People can pre-register the list of friends, then a random number generator can be used to randomly select some friends not to make an explicit GWWC pitch to. It's possible that many of the friends/contacts who join GWWC and start donating are those who have already been exposed to EA ideas over a long period of time, and that the effect size of the direct GWWC pitch isn't as large as it would appear. A holdout set would account for this; CEA wouldn't have to worry about whom they contact, and the randomization would make the estimate of the treatment effect unbiased.
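For concreteness, a minimal sketch of what such a pre-registered assignment could look like. The friend names, seed, and 50/50 split are all hypothetical:

```python
import random

# Hypothetical pre-registered list of friends (names are placeholders).
preregistered_friends = ["Alice", "Bob", "Carol", "Dave", "Eve", "Frank"]

# Record the seed at pre-registration time so the assignment is reproducible.
random.seed(2017)
shuffled = preregistered_friends[:]
random.shuffle(shuffled)

holdout_fraction = 0.5  # assumed 50/50 split, purely for illustration
cut = int(len(shuffled) * holdout_fraction)
holdout = shuffled[:cut]    # receives no explicit GWWC pitch
treatment = shuffled[cut:]  # receives the pitch

print("Pitch:", treatment)
print("Holdout:", holdout)
```

Comparing pledge rates between the two groups a year later would then estimate the causal effect of the pitch itself, rather than of prior exposure to EA ideas.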

Comment author: Linch 09 January 2017 12:13:14AM 1 point

I don't think there will be enough data points to do this. But if enough people are willing to participate in this experiment, and they think they have a lot of friends they'll be willing to contact, I will include a holdout set.

11 points

Proposal for a Pre-registered Experiment in EA Outreach

Can talking about GWWC for 90 minutes actually get somebody to take the Pledge? Note: I timed myself so I will not take more than 30 mins on the first draft (and hopefully less than an hour on this post overall). Surface analysis of Giving What We Can...
