Comment author: jimrandomh 13 January 2018 11:01:35PM 2 points [-]

In this model, what is the probability that the initiative (which I see is modeled as costing $6-39M) is successful? Or is it assumed that in the case where it isn't going to succeed, the cost is limited to the cost of polling ($50-300k)?
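For illustration only (the model in question isn't reproduced here), the two readings the comment contrasts can be sketched with made-up numbers; the probability and the use of range midpoints are assumptions, not figures from the model:

```python
# Hypothetical sketch of the comment's question: does expected cost depend
# mostly on the probability of proceeding, or is the downside capped at the
# polling cost? All numbers are assumptions (midpoints of the quoted ranges,
# plus an invented probability).
p_proceed = 0.5                       # assumed chance polling looks good enough to proceed
full_cost = (6e6 + 39e6) / 2          # midpoint of the $6-39M initiative estimate
polling_cost = (50e3 + 300e3) / 2     # midpoint of the $50-300k polling estimate

# If polling acts as a cheap gate, the expected spend sits between the
# polling cost and the full campaign cost, dominated by p_proceed:
expected_cost = p_proceed * full_cost + (1 - p_proceed) * polling_cost
```

Under this reading, the downside in the no-go case really is limited to the polling cost, which is what the comment is asking the model's authors to confirm.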

Comment author: Kerry_Vaughan 09 February 2017 09:30:29PM 4 points [-]

Right now we're trying to give fund managers lots of latitude on what to do with the money. If they think there's an argument for saving the money and donating later we'll allow that (but ask for some communication about why saving the money makes sense).

I'd be interested in whether people would prefer a different policy.

Comment author: jimrandomh 10 February 2017 06:32:02PM 4 points [-]

My concern is that the marginal effect of donating to one of these funds on the amount of money actually reaching charities might be zero. Given that OpenPhil spent below its budget, and these funds are managed by OpenPhil staff, it appears as though these funds put money on the wrong side of a bottleneck. One of the major constraints on OpenPhil's giving has been wanting charities to have diverse sources of funding; this appears to reduce funding diversity, by converting donations from individual small donors into donations from OpenPhil. What reason do donors have to think they aren't just crowding out donations from OpenPhil's main fund?

Comment author: jimrandomh 09 February 2017 12:55:03AM *  7 points [-]

What will be these funds' policy on rolling funds over from year to year, if the donations a fund gets exceed the funding gaps the managers are aware of?

(This seems particularly important for funds whose managers are also involved with OpenPhil, given that OpenPhil did not spend its entire budget last year.)

Comment author: RobBensinger 07 February 2017 10:44:10PM *  3 points [-]

Anonymous #16:

Level of involvement: Most of my friends are involved in effective altruism and talk about it regularly.

The extent to which AI topics and MIRI seem to have increased in importance in effective altruism worries me. The fact that this seems to have happened more in private among the people who run key organizations than in those organizations' public faces is particularly troubling. This is also a noticeable red flag for groupthink. For example, Holden's explanation of why he has become more favorably disposed to MIRI was pretty unconvincing.

Other Open Phil links about AI: 2015 cause report, 2016 background blog post.

Comment author: jimrandomh 08 February 2017 01:03:43AM 2 points [-]

The fact that this seems to have happened more in private among the people who run key organizations than in those organizations' public faces is particularly troubling.

I'm confused by the bit about this not being reflected in organizations' public faces? Early in 2016 OpenPhil announced they would be making AI risk a major priority.

Comment author: jimrandomh 06 February 2017 05:09:48PM 5 points [-]

The questions about diet before and after the change seem to push people strongly toward claiming to be, or to have been, some sort of vegetarian; the only option there that isn't somehow anti-meat is "Other", which requires typing.

A better version of this question would have a no-dietary-restrictions option first, and a few options that aren't animal-welfare related like "low carb" and "Mediterranean".

Comment author: jimrandomh 06 February 2017 05:05:34PM 5 points [-]

Statistics nitpick: I believe you should be using a two-sided test, as it is also possible for leafleting to reduce the rate of people going vegetarian if the leaflets alienate people somehow.
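To make the nitpick concrete, here is a minimal pooled two-proportion z-test sketch; the vegetarian counts are invented for illustration and are not from the study. For any positive z, the two-sided p-value is exactly twice the one-sided one, so a result that looks significant one-sided can fail the two-sided test:

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns z and one-/two-sided p-values."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal tail probabilities via the complementary error function.
    p_one_sided = 0.5 * math.erfc(z / math.sqrt(2))   # H1: leafleted rate is higher
    p_two_sided = math.erfc(abs(z) / math.sqrt(2))    # H1: rates differ either way
    return z, p_one_sided, p_two_sided

# Hypothetical counts: 30/1000 leafleted vs 20/1000 controls went vegetarian.
z, p1s, p2s = two_prop_ztest(30, 1000, 20, 1000)
```

With these made-up counts the one-sided p-value is roughly half the two-sided one, so the choice of test can flip a borderline conclusion.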

Comment author: kbog  (EA Profile) 25 October 2016 01:32:07PM *  2 points [-]

To be clearer: I'm against both (a) witch hunts and (b) formal procedures for evicting people. The fact that one of these can happen without the other does not change the fact that both of them are still stupid on their own.

we could have very explicit and contained rules, such as "If you do X, Y or Z then you're out" and this would be different from the generic approach of "if anyone tries to outgrip them then support that effort".

As a counterexample to the dichotomy, sure. As something to be implemented... haha no. The more rules you make up the more argument there will be over what does or doesn't fall under those rules, what to do with bad actions outside the rules, etc.

Or if we feel that it is too hard to put into a clear list, perhaps we could outsource our decision-making to a small group of trusted 'community moderators' 

Maybe you shouldn't outsource my decision about who is kosher to "trusted community moderators". Why are people not smart enough to figure it out on their own?

And is this supposed to save time, the hundreds of hours that people are bemoaning here? A formal group with formal procedures processing random complaints and documenting them every week takes up at least as much time.

Comment author: jimrandomh 25 October 2016 06:16:16PM 4 points [-]

Maybe you shouldn't outsource my decision about who is kosher to "trusted community moderators". Why are people not smart enough to figure it out on their own?

The issue in this case is not that he's in the EA community, but that he's trying to act as the EA community's representative to people outside the community who are not well placed to make that judgment themselves.

Comment author: Elizabeth 25 October 2016 03:35:45PM 10 points [-]

I agree that it's important that EA stay open to weird things and not exclude people solely for being low status. I see several key distinctions between early SI/early MIRI and Intentional Insights:

* SI was cause focused; II is a fundraising org. Causes can be argued on their merits. For fundraising, "people dislike you for no reason" is in and of itself evidence that you are bad at fundraising and should stop.

* I think this is an important general lesson. Right now a fundraising org seems to be the default thing for people to start, but it's actually one of the hardest things to do right and has the worst consequences if it goes poorly. With the exception of local groups, I'd like to see community norms shift to discourage inexperienced people from starting fundraising groups.

* AFAIK, SI wasn't trying to use the credibility of the EA movement to bolster itself. Gleb is, both explicitly (by repeatedly and persistently listing endorsements he did not receive) and implicitly. As long as he is doing that, the proportionate response is criticizing him and distancing him from EA enough to cancel out the benefits.

* The effective altruism name wasn't worth as much when MIRI was getting started. There was no point in faking an endorsement because no one had heard of us. Now that EA has some cachet with people outside the movement, there exists the possibility of trying to exploit that cachet, and it makes sense for us to raise the bar on who gets to claim endorsement.

Comment author: jimrandomh 25 October 2016 05:49:38PM 2 points [-]

Chronological nitpick: SingInst (which later split into MIRI and CFAR) is significantly older than the EA name and the EA movement, and the movement's birth and growth are attributable in significant part to SingInst and CFAR projects.

Comment author: Jeff_Kaufman 24 October 2016 05:02:19PM *  7 points [-]

I'm going to guess that none of GW, TLYCS, ACE, or GWWC worked with InIn on this video, saw it before it was published, or consented to the use of their trademarks in it.

The video description does say "All the organizations involved in the video reviewed the script and provided a high-resolution copy of their logo. Their collaboration in the production of this video does not imply their specific support for any other organizations involved in the video."

Comment author: jimrandomh 24 October 2016 05:13:16PM *  8 points [-]

You're right, I missed that. I'll edit the parent post to fix the error.

(Given the history, I'm curious to find out what "reviewed the script and provided a high-resolution copy of their logo" means, and in particular whether they saw the entire script, and therefore knew they were being featured next to InIn, or whether they only reviewed the portion that was about themselves.)

Comment author: jimrandomh 24 October 2016 05:11:55PM *  9 points [-]

Gleb, Intentional Insights board meeting, 9/21/16 at 22:05:

"We certainly are an EA meta-charity. We promote effective giving, broadly. We will just do less activities that will try to influence the EA movement itself. This would include things like writing articles for the EA forum about how to do more effective marketing. We will still do some of that, but to a lesser extent because people are right now triggered about Intentional Insights. There's a personalization of hostility associated with Intentional Insights, so we want to decrease some of our visibility in central EA forums, while still doing effective altruism. We are still an effective altruist meta-charity. So focusing more on promoting effective giving to a broad audience."

