Comment author: Larks 11 February 2017 12:44:02AM 3 points

It seems strange to have the funds run by people who also direct money on behalf of big grant-making organizations. Under what circumstances will the money end up going somewhere different? I can see the motivation for having EA Funds if they were managed by someone independent - say Carl or Paul - but the current incarnation seems basically equivalent to just giving GiveWell or OPP money with a cause-based restriction.

Comment author: RobBensinger 11 February 2017 02:50:57AM 7 points

A lot of people have been asking for a way to donate to (/ be funged by) OPP, so if this only enables people to do that, I'd still expect it to be quite popular. Some relevant reasons OPP staff gave in their donor suggestions for wanting more money to go to charities than OPP was willing and able to provide:

  • [re Cosecha and ASJ] "Given the amount we’re aiming to allocate to criminal justice reform as a whole, my portfolio has too many competing demands for us to offer more." [I don't know how much this is a factor for the four areas above.]

  • "I see value to ACE having a broad support base to (a) signal to groups that donors care about its recommendations, (b) raise its profile and attract more donors, and (c) allow it to invest in longer-term development, e.g. higher salaries (i.e. without fear of expanding with a fragile support base)".

  • [re CIWF] "we’re limited in our ability to provide all of them by the public support test and a desire to avoid being the overwhelming funder of any group".

  • [re MIRI] "the ultimate size of our grant was fairly arbitrary and put high weight on accurate signaling about our views". [Note I work at MIRI, though I'm just citing them as an example here.]

  • "we would not want to be more than 50% of 80,000 Hours’ funding in any case (for coordination/dependence reasons)."

  • [re 80K] "my enthusiasm for supporting specific grants to support the effective altruism community has been higher than other decision-makers’ at Open Phil, and we’ve given less than I’ve been inclined to recommend in some other cases."

  • [in general] "There’s an internal debate about how conservative vs. aggressive to be on grants supporting organizations like these with, I think, legitimate arguments on both sides. I tend to favor larger grants to organizations in these categories than other decision-makers at Open Phil."

The slowness of OPP's grant process might also be an advantage for non-OPP funders. (E.g., ACE, FHI, CEA, 80K, and the Ploughshares Fund were informally promoted by OPP staff to third parties before OPP had reached the end of its own organizational decision process.)

The EA Funds strike me as unlikely to capture all the advantages of donor diversification, but they capture some of them.

Comment author: Kerry_Vaughan 10 February 2017 11:28:32PM 3 points

"I could imagine this being more useful if the EA Funds are administered by OPP staff in their non-official capacity"

I think Nick's administration of the EA Giving DAF is done in his non-official capacity. However, this can only go so far: if one of the fund managers donates to something very controversial, that probably still harms OpenPhil even if they were funding it as a private individual.

We'll need to have non-OpenPhil staff manage funds in the future to get the full benefits of diversification.

"it does seem to me like you could costlessly get more of those advantages if the four funds were each co-run by a pair of people (one from OPP, one from elsewhere) so it's less likely that any one individual or organization will suffer the fallout for controversial choices."

I like this idea. The only potential problem is that the more people you add to the fund, the more they need to reach consensus on where to donate. The need to reach consensus often results in safe options instead of interesting ones.

I'd be very interested in other ideas for how we can make it easier to donate to unusual or controversial options via the fund. Diversification is really critical to the long-term impact of this project.

Comment author: RobBensinger 11 February 2017 12:26:07AM 2 points

"I like this idea. The only potential problem is that the more people you add to the fund, the more they need to reach consensus on where to donate."

That's true, though you could also give one individual more authority than the others -- e.g., have the others function as advisers, but give the current manager the last word. This is already presumably going to happen informally, since no one makes their decisions in a vacuum; but formalizing it might diffuse perceived responsibility a bit. It might also encourage diversification a bit.

Comment author: jimrandomh 10 February 2017 06:32:02PM 4 points

My concern is that the marginal effect of donating to one of these funds on the amount of money actually reaching charities might be zero. Given that OpenPhil spent below its budget, and these funds are managed by OpenPhil staff, it appears as though these funds put money on the wrong side of a bottleneck. One of the major constraints on OpenPhil's giving has been wanting charities to have diverse sources of funding; this appears to reduce funding diversity, by converting donations from individual small donors into donations from OpenPhil. What reason do donors have to think they aren't just crowding out donations from OpenPhil's main fund?

Comment author: RobBensinger 10 February 2017 07:30:36PM 2 points

The Funds do have some donor-diversifying effect, if only because donors can change whether they give to the Fund based on whether they like its recent beneficiaries, though this doesn't capture all the benefits of diversification.

I could imagine this being more useful if the EA Funds are administered by OPP staff in their non-official capacity, and they have more leeway to run risky experiments or fund things that are hard to publicly explain/justify or might not reflect well on OPP. (This would work best if small EA Funds donors were less concerned than Good Ventures about 'wasting' their donation, though, which is maybe unrealistic.)

I haven't thought much about the tradeoffs, but it does seem to me like you could costlessly get more of those advantages if the four funds were each co-run by a pair of people (one from OPP, one from elsewhere) so it's less likely that any one individual or organization will suffer the fallout for controversial choices.

Comment author: RobBensinger 07 February 2017 10:02:25PM 6 points

Anonymous #12:

I feel that people involved in effective altruism are not very critical of the ways that confirmation bias and hero-of-the-story biases slip into their arguments. It strikes me as... convenient... that one of the biggest problems facing humanity is computers and that a movement popular among Silicon Valley professionals says people can solve it by getting comfortable professional jobs in Silicon Valley and donating some of the money to AI risk groups.

This is obviously not the whole story, as the arguments for taking AI risk seriously are not at all transparently wrong -- though I think EA folks are often overconfident regarding the assumptions they make about the future of AI. Still, it seems worth looking into why this community's agenda ended up meshing so neatly with its members' hobbies. In my more uncharitable moments, I can't help but feel that if the trendy jobs were in potato farming, some in EA would be imploring me to deal with the growing threat of tubers.

(I'm EA-adjacent. I seem to know a lot of you, and I'm sympathetic, but I've never been completely sold. Also, I notice that anonymous commentator #3 said something similar.)

Comment author: RobBensinger 09 February 2017 11:21:44PM 3 points

Three points worth mentioning in response:

  1. Most of the people best-known for worrying about AI risk aren't primarily computer scientists. (Personally, I've been surprised by the number of physicists.)

  2. 'It's self-serving to think that earning to give is useful' seems like a separate thing from 'it's self-serving to think AI is important.' Programming jobs obviously pay well, so no one objects to people following the logic from 'earning to give is useful' to 'earning to give via programming work is useful'; the question there is just whether earning to give itself is useful, which is a topic that seems less related to AI. (More generally, 'technology X is a big deal' will frequently imply both 'technology X poses important risks' and 'knowing how to work with technology X is profitable', so it isn't surprising to find those beliefs going together.)

  3. If you were working in AI and wanted to rationalize 'my current work is the best way to improve the world', then AI risk is really the worst way imaginable to rationalize that conclusion: accelerating general AI capabilities is very unlikely to be a high-EV way to respond to AI risk as things stand today, and the kinds of technical work involved in AI safety research often require skills and background that are unusual for CS/AI. (Ryan Carey wrote in the past: "The problem here is that AI risk reducers can't win. If they're not computer scientists, they're decried as uninformed non-experts, and if they do come from computer scientists, they're promoting and serving themselves." But the bigger problem is that the latter doesn't make sense as a self-serving motive.)

Comment author: Fluttershy 09 February 2017 04:14:08AM 1 point

"It updates me in the direction that the right queries can produce a significant amount of valuable material if we can reduce the friction to answering such queries (esp. perfectionism) and thus get dialogs going."

Definitely agreed. In this spirit, is there any reason not to make an account with (say) a username of "username" and a password of "password", for anonymous EAs to use when commenting on this site?

Comment author: RobBensinger 09 February 2017 04:56:35AM 4 points

I think this would be too open to abuse; see the concerns I raised in the OP.

An example of a variant on this idea that might work is to take 100 established+trusted community members, give them all access to the same forum account, and forbid sharing that account with any additional people.

Comment author: RobBensinger 07 February 2017 11:04:48PM 6 points

Anonymous #40:

I'm the leader of a not-very-successful EA student group. I don't get to socialize with people in EA that much.

I wish the community were better at supporting its members in accomplishing things they normally couldn't. I feel like almost everyone just does the things that they normally would. People that enjoy socializing go to meetups (or run meetups); people that enjoy writing blog posts write blog posts; people that enjoy commenting online comment online; etc.

Very few people actually do things that are hard for them, which means that, for example, most people aren't founding new EA charities or thinking original thoughts about charity or career evaluation or any of the other highly valuable things that come out of just a few EA people. And that makes sense; it doesn't work to just force yourself to do this sort of thing. But maybe the right forms of social support and reward could help.

Comment author: RobBensinger 07 February 2017 11:04:24PM 8 points

Anonymous #39:

Level of involvement: I'm not an EA, but I'm EA-adjacent and EA-sympathetic.

EA seems to have picked all the low-hanging fruit and doesn't know what to do with itself now. Standard global health and poverty work feels like trying to fill a bottomless pit. It's hard to get excited about GiveWell Report #3543 about how we should be focusing on a slightly different parasite and that the cost of saving a life has gone up by $3. Animal altruism is in a similar situation, and is also morally controversial and tainted by culture war. The benefits of more long-shot interventions are hard to predict, and some of them could also have negative consequences. AI risk is a target for mockery by outsiders, and while the theoretical arguments for its importance seem sound, it's hard to tell whether an organization is effective in doing anything about it. And the space of interventions in politics is here-be-dragons.

The lack of salient progress is a cause of some background frustration. Some of those who think their cause is best try to persuade others in the movement, but to little effect, because there's not much new to say to change people's minds; and that contributes to the feeling of stagnation. This is not to say that debate and criticism are bad; being open to them is much better than the alternative, and the community is good at being civil and not getting too heated. But the motivation for them seems to draw more from ingrained habits and compulsive behavior than from trying to expose others to new ideas. (Because there aren't any.)

Others respond to the frustration by trying to grow the movement, but that runs into the real (and in my opinion near-certain) dangers of mindkilling politics, stifling PR, dishonesty (Sarah Constantin's concerns), and value drift.

And others (there's overlap between these groups) treat EA as a social group, whether that means house parties or memes. Which is harmless fun in itself, but hardly an inspiring direction for the movement.

What would improve the movement most is a wellspring of new ideas of the quality that inspired it to begin with. Apart from that, it seems quite possible that there's not much room for improvement; most tradeoffs seem to not be worth the cost. That means that it's stuck as it is, at best -- which is discouraging, but if that's the reality, EAs should accept it.

Comment author: RobBensinger 07 February 2017 11:03:25PM 3 points

Anonymous #37:

I would like to see more humility from people involved in effective altruism regarding metaethics, or at least better explanations for why EAs' metaethical positions are what they are. Among smart friends and family members of mine whom I've tried to convince of EA ideas, the most common complaint is, 'But that's not what I think is good!' I think this is a reasonable complaint, and I'd like it if we acknowledged it in more introductory material and in more of our conversations.

More broadly, I think that rather than having a 'lying problem,' EA has an 'epistemic humility problem' -- both around philosophical questions and around empirical ones, and on both the community level and the individual level.

Comment author: RobBensinger 07 February 2017 11:02:13PM -1 points

Anonymous #36:

I'd like to see more information from the EA community about which organizations are most effective at addressing environmental harm, and at reducing greenhouse gas emissions in particular. More generally, I'd like to see more material from the EA community about which organizations or approaches are most effective in the category in which they fall.

Many EA supporters doubtless accept a broadly utilitarian ethical framework, according to which all activities can be ranked in order of their effect on aggregate welfare. I think the notion of aggregate welfare is incoherent. For that reason, I'm not interested in anyone's opinion about whether reducing CO2 emissions is as cost-effective as saving children from malaria, or whether enabling people to buy better roofs is as cost-effective as reducing the risk of an intelligence explosion in AI.

When I decide that I want to reduce CO2 emissions, however, I would like to know which organizations are reducing emissions the most per dollar. That is a comparison that makes sense! If I am interested in helping to distribute malaria nets, I would like to have some sense of what impact my donation is likely to have. I suspect that there are a lot of people like me out there: not interested in ranking the importance of possible altruistic goals, but interested in information about how to pursue a given altruistic goal effectively.

Level of involvement: I have donated to GiveWell-endorsed charities for several years, though not at the level Peter Singer would recommend. I would not identify myself as a member of the EA movement.

Comment author: RobBensinger 07 February 2017 11:01:40PM 2 points

Anonymous #35:

I would not feel like people in the EA community would backstab me if the benefit to them outweighed the harm. (Where benefit to them often involves their lofty goals, so it can go under the guise of 'effective altruism.')
