Comment author: HaydnBelfield 24 February 2017 01:27:49PM 16 points

Thanks for this! It's mentioned in the post, and James and Fluttershy have made the point, but I just wanted to emphasise the benefits to others of Open Philanthropy continuing to engage in public discourse, especially as this article seems to focus mostly on the costs/benefits to Open Philanthropy itself (rather than to others) of engaging in public discourse.

The analogy of academia was used. One of the reasons academics publish is to get feedback, improve their reputation, and clarify their thinking. But another, perhaps more important, reason academics publish academic papers and popular articles is to spread knowledge.

As an organisation/individual becomes more expert and established, I agree that the benefits to itself decrease and the costs increase. But the benefit to others of their work increases. It might be argued that when one is starting out the benefits of public discourse go mostly to oneself, and when one is established the benefits go mostly to others.

So in Open Philanthropy’s case it seems clear that the benefits to itself (feedback, reputation, clarifying ideas) have decreased and the costs (time and risk) have increased. But the benefits to others of sharing knowledge have increased, as it has become more expert and better at communicating.

For example, speaking personally, I have found Open Philanthropy’s shallow investigations on Global Catastrophic Risks a very valuable resource in getting people up to speed – posts like Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity have also been very informative and useful. I’m sure people working on global poverty would agree.

Again, just wanted to emphasise that others get a lot of benefit from Open Philanthropy continuing to engage in public discourse (in the quantity and quality at which it does so now).

Comment author: Richard_Batty 24 February 2017 04:44:22PM 8 points

Yes! The conversations and shallow reviews are the first place I start when researching a new area for EA purposes. They've saved me lots of time and blind alleys.

OpenPhil might not see these benefits directly themselves, but without information sharing, individual EAs and EA orgs would keep re-researching the same topics and would not be able to build on each other's findings.

It may be possible to share information through people's networks, but this becomes increasingly difficult as the EA network grows, and it excludes competent people who might not know the right people to get information from.

Comment author: BenHoffman 11 February 2017 02:54:16AM *  3 points

Relevant resources:

Fact Posts: How and Why

The Open Philanthropy Project's Shallow Investigations provide nice template examples.

The Neglected Virtue of Scholarship

Scholarship: How to Do It Efficiently

I'm fairly new to the EA Forum; maybe someone who's been here longer knows of other resources on this site.

Comment author: Richard_Batty 11 February 2017 11:25:09AM 6 points

Even simpler than fact posts and shallow investigations would be skyping experts in different fields and writing up the conversation. Total time per expert is about 2 hours - 1 hour for the conversation, 1 hour for writing up.

Comment author: Kerry_Vaughan 09 February 2017 09:13:51PM *  9 points

Hi Richard,

Thanks a lot for the feedback. I work at CEA on the EA Funds project. My thoughts are below although they may not represent the views of everyone at CEA.

Funding new projects

I think EA Funds will improve funding for new projects.

As far as I know, small donors (in the ~$10K or below range) have traditionally not played a large role in funding new projects. This is because the time it takes to evaluate a new project is substantial, and because finding good new projects requires developing good referral networks. It generally doesn't make sense for a small donor to undertake this work.

Some of the best donors I know of at finding and supporting new projects are private individuals with budgets in the hundreds of thousands or low millions range. For these donors, it makes more sense to do the work required to find new projects and it makes sense for the projects to find these donors since they can cover a large percentage of the funding need. I think the funds will roughly mimic this structure. Also, I think Nick Beckstead has one of the better track records at helping to get early-stage projects funded and he's a fund manager.

Donor centralization

I agree with this concern. I think we should aim to not have OpenPhil program officers be the only fund managers in the future and we should aim for a wider variety of funds. What we have now represents the MVP, not the long-term goal.

EA Ventures

I was in charge of EA Ventures and it is no longer in operation. The model was that we sourced new projects and then presented them to our donors for potential funding.

We shut down EA Ventures because 1) the number of exciting new projects was smaller than we expected; 2) funder interest in new projects was smaller than expected and 3) opportunity cost increased significantly as other projects at CEA started to show stronger results.

My experience at EA Ventures updated me away from the view that there are lots of promising new projects in need of funding. I now think the pipeline of new projects is smaller than would be ideal, although I'm not sure what to do to solve this problem.

Comment author: Richard_Batty 10 February 2017 12:00:52AM 4 points

Thanks, that clarifies things.

I think I was confused by 'small donor' - I was including in that category friends who donate £50k-£100k and who fund small organisations in their network after a lot of careful analysis. If the fund is targeted more at <$10k donors that makes sense.

OpenPhil officers make sense for the MVP.

On EA Ventures, points 1 and 2 seem particularly surprising when put together. You found too few exciting projects but even they had trouble generating funder interest? So are you saying that even for high-quality new projects, funder interest was low, suggesting risk-aversion? If so, that seems to be an important problem to solve if we want a pipeline of new potentially high-impact projects.

On creating promising new projects, Michael Peyton Jones and I have been thinking a lot about this recently. This thinking is for the Good Technology Project - how can we create an institution that helps technology talent to search for and exploit new high-social-impact startup opportunities? But a lot of our thinking will generalise to working out how to help EA get better at exploration and experimentation.

Comment author: Richard_Batty 09 February 2017 10:50:59AM *  11 points

Small donors have played a valuable role by providing seed funding to new projects in the past. They can often fund promising projects that larger donors like OpenPhil can't because they have special knowledge of them through their personal networks and the small projects aren't established enough to get through a large donor's selection process. These donors therefore act like angel investors. My concern with the EA fund is that:

  • By pooling donations into a large fund, you increase the minimum grant that it's worth the fund managers' time to make, thus making the fund unable to support small opportunities
  • By centralising decision-making in a handful of experts, you reduce the variety of projects that get funded, because those experts have more limited networks, knowledge, and diversity of values than the population of small donors.

Also, what happened to EA Ventures? Wasn't that an attempt to pool funds to make investments in new projects?

Comment author: RobBensinger 07 February 2017 10:58:05PM 7 points

Anonymous #32(b):

The high-value people from the early days of effective altruism are disengaging, and the high-value people who might join are not engaging. There are people who were once quite crucial to the development of EA 'fundamentals' who have since parted ways, and have done so because they are disenchanted with the direction in which they see us heading.

More concretely, I've heard many reports to the effect of: 'EA doesn't seem to be the place where the most novel/talented/influential people are gravitating, because there aren't community quality controls.' While inclusivity is really important in most circumstances, it has a downside risk here that we seem to be experiencing. I believe we are likely to lose the interest and enthusiasm of those who are most valuable to our pursuits, because they don't feel like they are around peers, and/or because they don't feel that they are likely to be socially rewarded for their extreme dedication or thoughtfulness.

I think that the community's dip in quality comes in part from the fact that you can get most of the community benefits without being a community benefactor -- e.g. invitations to parties and likes on Facebook. At the same time, one incurs social costs for being more tireless and selfless (e.g., skipping parties to work), for being more willing to express controversial views (e.g., views that conflict with clan norms), or for being more willing to do important but low-status jobs (e.g., office manager, assistant). There's a lot that we'd need to do in order to change this, but as a first step we should be more attentive to the fact that this is happening.

Comment author: Richard_Batty 08 February 2017 01:52:10AM *  6 points

What communities are the most novel/talented/influential people gravitating towards? How are they better?

Comment author: ea247 05 February 2017 08:48:40PM 5 points

Great post. Completely agree with the general concept and have a few positive updates on the Charity Entrepreneurship front.

We are working with another team to get one of the other promising ideas from our initial CE research founded. A public post on this will come out sometime in the next month or so.

Additionally, we are in fact working on expanding the model we used at Charity Entrepreneurship for health to a much wider subset of causes and crucial considerations, so as to end up with some charities we/others can start in broader areas. Our first post on this, which is going up publicly very soon, is on explore/exploit and optimal stopping in the context of starting charities. We also talk about multi-armed bandit problems in it.
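(A minimal illustrative sketch of the multi-armed bandit framing mentioned above, assuming an epsilon-greedy strategy: each "arm" stands for a candidate charity idea and each "reward" for a noisy estimate of its impact. This is not from CE's forthcoming post; all names and numbers here are made-up assumptions.)

    import random

    def epsilon_greedy(true_means, epsilon=0.1, rounds=1000):
        """Try charity ideas ('arms') and learn which has the highest impact."""
        n_arms = len(true_means)
        counts = [0] * n_arms        # times each idea has been tried
        estimates = [0.0] * n_arms   # running mean of observed impact
        for _ in range(rounds):
            if random.random() < epsilon:
                arm = random.randrange(n_arms)          # explore: pick a random idea
            else:
                arm = estimates.index(max(estimates))   # exploit: pick the best so far
            reward = random.gauss(true_means[arm], 1.0) # noisy impact signal
            counts[arm] += 1
            estimates[arm] += (reward - estimates[arm]) / counts[arm]  # update mean
        return estimates, counts

    # Three hypothetical charity ideas with unknown true impact:
    estimates, counts = epsilon_greedy(true_means=[0.3, 0.5, 0.8])
    print(estimates, counts)

Running this typically concentrates most pulls on the highest-impact arm while still occasionally sampling the others, which is the explore/exploit trade-off in miniature.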

Comment author: Richard_Batty 06 February 2017 10:04:12AM 7 points

This is really exciting, looking forward to these posts.

The Charity Entrepreneurship model is interesting to me because you're trying to do something analogous to what we're doing at the Good Technology Project - cause new high-impact organisations to exist. Whereas we started meta (trying to get other entrepreneurs to work on important problems), you started at the object level (setting up a charity and only later trying to get other people to start other charities). Why did you go for this depth-first approach?

Comment author: Richard_Batty 06 February 2017 09:53:08AM 4 points

Exploration through experimentation might also be neglected because it's uncomfortable and unintuitive. EAs traditionally make a distinction between 'work out how to do the most good' and 'do it'. We like to work out whether something is good through careful analysis first, and once we're confident enough of a path we then optimise for exploitation. This is comforting because we only have to do work when we're fairly confident it's the right path. But perhaps we need to get more psychologically comfortable with mixing the two together in an experimental approach.

Comment author: Richard_Batty 03 February 2017 04:17:41PM *  6 points

Is there an equivalent to 'concrete problems in AI' for strategic research? If I was a researcher interested in strategy I'd have three questions: 'What even is AI strategy research?', 'What sort of skills are relevant?', 'What are some specific problems that I could work on?' A 'concrete problems'-like paper would help with all three.

Comment author: kbog  (EA Profile) 12 January 2017 06:49:35AM *  9 points

I like your thoughts and agree with reframing it as epistemic virtue generally instead of just lying. But I think EAs are always too quick to think about behavior in terms of incentives and rational action, especially when talking about each other, since almost no one around here is rationally selfish: some people are rationally altruistic, and most people are probably some combination of altruism, selfishness, and irrationality. Yet here people are thinking that it's some really hard problem where rational people are likely to be dishonest, and so we need to make it rational for people to be honest, and so on.

We should remember all the ways that people can be primed or nudged to be honest or dishonest. This might be a hard aspect of an organization to evaluate from the outside but I would guess that it's at least as internally important as the desire to maximize growth metrics.

For one thing, culture is important. Who is leading? What is their leadership style? I'm not in the middle of all this meta stuff, but it's weird (coming from the Army) that I see so much talk about organizations, yet I don't think I've ever seen someone even mention the word "leadership."

Also, who is working at EA organizations? How many insiders and how many outsiders? I would suggest that ensuring that a minority of an organization is composed of identifiable outsiders or skeptical people would compel people to be more transparent just by making them feel like they are being watched. I know that some people have debated various reasons to have outsiders work for EA orgs - well here's another thing to consider.

I don't have much else to contribute, but all you LessWrong people who have been reading behavioral econ literature since day one should be jumping all over this.

Comment author: Richard_Batty 17 January 2017 04:01:09PM 0 points

What sort of discussion of leadership would you like to see? How was this done in the Army?

Comment author: Richard_Batty 05 January 2017 11:16:56PM *  10 points

I know some effective altruists who see EAs like Holden Karnofsky or what not do incredible things, and feel a little bit of resentment at themselves and others; feeling inadequate that they can’t make such a large difference.

I think there's a really harmful belief that people often have when looking at successful people: the belief that "I am fundamentally not like them - not the type of person who can be successful." I've regularly had this thought, sometimes explicitly and sometimes as a hidden assumption behind other thoughts and behaviours.

It's easy to slip into believing it when you hear the bios of successful people. For example, William MacAskill's bio includes being one of the youngest associate professors of philosophy in the world, co-founder of CEA, co-founder of 80,000 Hours, and a published author. Or you can read profiles of Rhodes Scholars and come across lines like "built an electric car while in high school and an electric bicycle while in college".

When you hear these bios it's hard to imagine how these people achieved these things. Cal Newport calls this the failed simulation effect: we feel someone is impressive if we can't simulate the steps by which they achieved their success. But even if we can't immediately see the steps, they're still there. These people achieved their success through a series of non-magic practical actions, not because they're fundamentally a different sort of person.

So a couple of suggestions:

If you're feeling like you fundamentally can't be as successful as some of the people you admire, start by reading Cal Newport's blog post. It gives the backstory behind a particularly impressive student, showing the exact (non-magical) steps he took to achieve an impressive bio. Then, when you hear an impressive achievement, remind yourself that there is a messy practical backstory to this that you're not hearing. Maybe read full biographies of successful people to see their gradual rise. Then go work on the next little increment of your plan, because that's the only consistent way anyone gets success.

If you're a person others look up to as successful, start communicating some of the details of how you achieved what you did. Show the practicalities, not just the flashy bio-worthy outcomes.
