Comment author: casebash 27 March 2017 01:56:51PM 0 points [-]

Like a local university group or local city meetup.

Comment author: Richard_Batty 27 March 2017 02:33:33PM 0 points [-]

Not sure; it's really hard to make volunteer-run projects work, and often a small core team ends up doing all the work anyway.

This half-written post of mine contains some small project ideas: https://docs.google.com/document/d/1zFeSTVXqEr3qSrHdZV0oCxe8rnRD8w912lLw_tX1eoM/edit

Comment author: casebash 27 March 2017 04:44:12AM 0 points [-]

Many of those seem like individual projects. Does anyone have any suggestions for projects that would be particularly good for EA groups?

Comment author: Richard_Batty 27 March 2017 01:37:27PM 1 point [-]

A lot of these would be good for a small founding team rather than for individuals. What do you mean by 'good for an EA group'?

Comment author: John_Maxwell_IV 04 March 2017 06:08:25AM 3 points [-]

£100... sounds tasty! I'll add it to my calendar :D

Comment author: Richard_Batty 25 March 2017 06:18:18PM 3 points [-]

Concrete project lists

There are lots of important project ideas in EA that people could work on, and I'd like to encourage people to explore more. When I was looking for projects to work on, I had difficulty thinking of what needed doing apart from obvious projects like raising money for GiveWell-recommended...
Comment author: Richard_Batty 19 March 2017 11:52:19AM 6 points [-]

I was just looking at the EA Funds dashboard. To what extent do you think the money coming into EA Funds is EA money that was already going to be allocated to similarly effective charities?

I saw the EA Funds post on Hacker News. Are you planning to continue promoting EA Funds outside the existing EA community?

Comment author: Austen_Forrester 15 March 2017 01:54:55AM 0 points [-]

LOL. Typical of my comments: they get almost no upvotes, but I never receive any sensible counterarguments! People use the forum vote system to persuade (by social proof) without having a valid argument. I have yet to vote on a comment (up or down), because I think people should think for themselves.

Comment author: Richard_Batty 16 March 2017 08:51:45PM *  6 points [-]

You can understand some of what people are downvoting you for by looking at which of your comments are most downvoted: ones where you're very critical without much explanation and where you suggest that people in the community have bad motives. For example:
http://effective-altruism.com/ea/181/introducing_ceas_guiding_principles/ah7
http://effective-altruism.com/ea/181/introducing_ceas_guiding_principles/ah6
http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8p9

Well-explained criticisms won't get downvoted this much.

Comment author: HaydnBelfield 02 March 2017 06:39:30PM 2 points [-]

Whatever happened to EA Ventures?

Comment author: Richard_Batty 02 March 2017 06:52:02PM 3 points [-]

Comment author: John_Maxwell_IV 01 March 2017 10:11:01PM *  6 points [-]

Brainstorming why this might be the case:

  • Lack of visibility. For example, I'm pretty into EA, but I didn't realize OpenPhil wanted to see a new science policy think tank. Just having a list of open projects could help with visibility.

  • Bystander effects. It's not clear who has a comparative advantage to work on this stuff. And many neglected projects aren't within the purview of existing EA organizations.

  • Risk aversion. Sometimes I wonder if the "moral obligation" frame of EA causes people to shy away from high-risk do-gooding opportunities. Something about wanting to be sure that you've fulfilled your obligation. Earning to give and donating to AMF or GiveDirectly becomes a way to certify yourself as a good person in the eyes of as many people as possible.

  • EA has strong mental handles for "doing good with your donations" and "doing good with your career". "Doing good with your projects" is a much weaker handle, and it competes for resources with the other handles. Speculative projects typically require personal capital, since it's hard to get funding for a speculative project, especially if you have no track record. But if you're a serious EA, you might not have a lot of personal capital left over after making donations. And such speculative projects typically require time and focus. But many careers that are popular among serious EAs are not going to leave much time and focus for personal projects. I don't see any page on the 80K website for careers that leave you time to think so you can germinate new EA organizations in your spare time. Arguably, the "doing good with your career" framing is harmful because it causes you to zoom out excessively instead of making a series of small bets.

  • Lack of accountability. Maybe existing EA organizations are productive because the workers feel accountable to the leaders, and the leaders feel accountable to their donors. In the absence of accountability, people default to browsing Facebook instead of working on projects. Under this model, using personal capital to fund projects is an antipattern because it doesn't create accountability the way donations do. Another advantage of EAs donating money to each other is that charitable donations can be deducted from your taxes, but savings intended for altruistic personal projects cannot be. But note that accountability can have downsides.

  • It's not that there is some particular glitch in the process of turning ideas into work. Rather, there is no process in the first place. We can work to identify and correct glitches once we actually have a pipeline.

If someone made it their business to fix this problem, how might they go about it? Brainstorming:

  • Secure seed funding for the project, then have a competitive application process to be the person who starts the organization. Advantages: Social status goes to the winner of the application process. Comparing applicants side-by-side, especially using a standard application, should result in better hiring decisions/better comparative advantage mapping. Project founders can be selected more on the basis of project-specific aptitude and less on the basis of connections/fundraising ability. If the application process is open and widely advertised (the way e.g. Y Combinator does with their application), there's the possibility of selecting talented people outside the EA community and expanding our effective workforce. Disadvantages: Project founders less selected for having initiative/being agentlike/being really passionate about this particular project?

  • Alternatively, one can imagine more of a "headhunter" type approach. Maybe someone from the EA funds looks through the EA rolodex and gets in contact with people based on whether they seem like promising candidates.

  • Both the competitive application approach and the headhunter approach could also be done with organizations being the unit that's being operated on rather than individuals. E.g. publicize a grant that organizations can apply for, or contact organizations with a related track record and see if they'd be interested in working on the project if given the funding. Another option is to work through universities. In general, I expect that you're able to attract higher quality people if they're able to put the name of a prestigious university on their resume next to the project. The university benefits because they get to be associated with anything cool that comes out of the project. And the project has an easier time getting taken seriously due to its association with the university's brand. So, wins all around.

  • Some of these projects could work well as thesis topics. I know there was a push a little while ago to help students find EA-related thesis topics that ended up fading out. But this seems like a really promising idea to me.

Comment author: Richard_Batty 02 March 2017 09:56:16AM 11 points [-]

This is really helpful, thanks.

Whilst I could respond in detail, I think it would be better to take action instead. I'm going to put together an 'open projects in EA' spreadsheet and publish it on the EA Forum by March 25th, or I owe you £100.

Comment author: John_Maxwell_IV 28 February 2017 09:06:52AM *  6 points [-]

A trending school of thought is "AI Alignment needs careful, clever, agenty thinkers. 'Having the correct opinion' is not that useful. There is nobody who can tell you what exactly to do, because nobody knows. We need people who can figure out what to do, in a very messy, challenging problem."

In some cases, such 'agentlike' people may have more ideas for things to do than they have time in which to do them. See e.g. this list of AI strategy research projects that Luke Muehlhauser came up with.

Broadly speaking, it seems like generating ideas for things to do, evaluating the likely usefulness of tasks, and executing tasks could in principle all be done by different people. I'm not sure I know of any distributed volunteer organizations that work this way in practice, though. Perhaps we could have a single person whose job it is to collect ideas for things to do, run them by people who seem like they ought to be able to evaluate the ideas, and get in touch with people who want to contribute.

People might also be more motivated to work on ideas they came up with themselves.

In terms of influencing top AI companies, I'd be interested to hear thoughts on the best way to handle groups like Facebook/Baidu where the lab's leader has publicly expressed skepticism about the value of AI safety research. One possible strategy is to practice talking to AI safety research skeptics in a lower-stakes context (e.g. at AI conferences) and focus on people like Andrew Ng only when you're relatively sure your advocacy won't backfire.

Comment author: Richard_Batty 28 February 2017 04:04:46PM 8 points [-]

I think we have a real problem in EA of turning ideas into work. There have been great ideas sitting around for ages (e.g. Charity Entrepreneurship's list of potential new international development charities, OpenPhil's desire to see a new science policy think tank, Paul Christiano's impact certificate idea) but they just don't get worked on.

Comment author: HaydnBelfield 24 February 2017 01:27:49PM 18 points [-]

Thanks for this! It's mentioned in the post, and James and Fluttershy have made the point, but I just wanted to emphasise the benefits to others of Open Philanthropy continuing to engage in public discourse. Especially as this article seems to focus mostly on the costs and benefits to Open Philanthropy itself (rather than to others) of engaging in public discourse.

The analogy of academia was used. One reason academics publish is to get feedback, improve their reputation, and clarify their thinking. But another, perhaps more important, reason they publish academic papers and popular articles is to spread knowledge.

As an organisation or individual becomes more expert and established, I agree that the benefits to itself decrease and the costs increase. But the benefit to others of its work increases. It might be argued that when one is starting out, the benefits of public discourse go mostly to oneself, and when one is established, the benefits go mostly to others.

So in Open Philanthropy’s case it seems clear that the benefits to itself (feedback, reputation, clarifying ideas) have decreased and the costs (time and risk) have increased. But the benefits to others of sharing knowledge have increased, as it has become more expert and better at communicating.

For example, speaking personally, I have found Open Philanthropy’s shallow investigations on Global Catastrophic Risks a very valuable resource in getting people up to speed – posts like Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity have also been very informative and useful. I’m sure people working on global poverty would agree.

Again, just wanted to emphasise that others get a lot of benefit from Open Philanthropy continuing to engage in public discourse (in the quantity and quality at which it does so now).

Comment author: Richard_Batty 24 February 2017 04:44:22PM 9 points [-]

Yes! The conversations and shallow reviews are the first place I start when researching a new area for EA purposes. They've saved me lots of time and blind alleys.

OpenPhil might not see these benefits directly themselves, but without information sharing, individual EAs and EA orgs would keep re-researching the same topics over and over and would not be able to build on each other's findings.

It may be possible to share information through people's networks, but this becomes increasingly difficult as the EA network grows, and it excludes competent people who might not know the right people to get information from.
