Comment author: Milan_Griffes 09 February 2018 01:27:23AM 0 points

When is the next round of EA grants opening?

Are you considering accepting applications on a rolling basis?

Comment author: Kerry_Vaughan 11 February 2018 05:27:33PM 1 point

We're currently planning to open EA Grants applications by the end of the month. I plan for the application to remain open so that I can accept applications on a rolling basis.

Comment author: weeatquince (EA Profile) 22 December 2017 03:19:17PM 1 point

This is fantastic. Thank you for writing it up. Whilst reading, I jotted down a number of thoughts, comments, questions and concerns.

.

ON EA GRANTS

I am very excited about this and very glad that CEA is doing more of this. How to best move funding to the projects that need it most within the EA community is a really important question that we have yet to solve. I saw a lot of people with some amazing ideas looking to apply for these grants.

1

"with an anticipated budget of around £2m"

I think it is quite plausible that £2m is too low for the year. Not having enough funding increases the costs to applicants (time spent applying) and to you (time spent assessing) relative to the benefits (funding moved), especially if there are applicants above the bar for funding whom you cannot afford to fund. Also, I had this thought before I read that one of your noted mistakes was that you "underestimated the number of applications"; it feels like you might still be making this mistake.

2

"mostly evaluating the merits of the applicants themselves rather than their specific plans"

Interesting decision. Seems reasonable. However, I think it does risk reducing diversity, and I would be concerned that applicants would be judged on their ability to philosophise in an academic, Oxford manner, etc.

Best of luck with it.

.

OTHER THOUGHTS

3

"encouraging more people to use Try Giving,"

Could CEA comment or provide advice to local group leaders on whether they would want local groups to promote the GWWC Pledge or Try Giving, or on when one might be better than the other? To date the advice seems to have been to push the Pledge as much as possible rather than Try Giving.

4

"... is likely to be the best way to help others."

I do not like the implication that there is a single answer to this question regardless of individuals' moral frameworks (utilitarian / non-utilitarian / religious / etc.) or skills and background. Where the mission is to have an impact as "a global community of people...", the research should focus on supporting those people to do whatever has the biggest impact given their positions.

5 Positives

"Self-sorting: People tend to interact with others who they perceive are similar to themselves"

This is a good thing to have picked up on.

"Community Health"

I am glad this is a team.

"CEA’s Mistakes"

I think it is good to have this written up.

6

"Impact review"

It would have been interesting to see estimates of the costs (time/money) as well as the outputs of each team.

.

WELL DONE FOR 2017. GOOD LUCK FOR 2018!

Comment author: Kerry_Vaughan 02 January 2018 09:58:25PM 2 points

I think it is quite plausible that £2m is too low for the year. Not having enough funding increases the costs to applicants (time spent applying) and to you (time spent assessing) relative to the benefits (funding moved), especially if there are applicants above the bar for funding whom you cannot afford to fund. Also, I had this thought before I read that one of your noted mistakes was that you "underestimated the number of applications"; it feels like you might still be making this mistake.

That's fair. My thinking in choosing £2m was that we would want to fund more projects than we had money to fund last year, but that we would have picked much of the low-hanging fruit, so there'd be less to fund.

In any case, I'm not taking that number too seriously. We should fund all the projects worth funding and raise more money if we need it.

Comment author: callum_calvert 23 December 2017 06:27:32PM 2 points

Thanks for writing this.

On EA Grants: Will you allow individuals to fund EA Grants in the future? This could mean letting individuals add to CEA's pot of funding for grants, publishing the rejected grants so that individuals can fund them independently, or putting the applications on EA Funds.

On EA Funds:

"Potential expansion of EA Funds on offer and investigation of different models for running and >using funds"

What types of funds and models might this investigation include?

Comment author: Kerry_Vaughan 02 January 2018 09:52:53PM 1 point

Will you allow individuals to fund EA Grants in the future?

We probably won't raise EA Grants money from more than a handful of donors. I think we can secure funding from CEA's existing donor base, and the overhead of raising money from multiple funders probably isn't worth the cost.

That said, there are two related things that we will probably do:

  1. We'll probably refer some promising projects to other funders. We did this last round for projects that we couldn't fund for legal reasons and for projects where existing funders had more expertise in the project than we did.
  2. We'll probably refer applicants who were close to getting funding, but didn't receive it, to other funders that might be interested in the project.
Comment author: MichaelPlant 19 December 2017 07:58:14PM 10 points

Thanks for the update, much appreciated.

I only have a question in one area: could you say a bit more about how the individual outreach team will find people and how it might try to help them? Maybe I'm misreading this, but there's something worryingly mysterious and opaque about there being someone in CEA who reaches out to 'pick winners' (in comparison to, say, having a transparent, formal application process for grants which seems unobjectionable).

One worry (which I'm perhaps overstating) is that this might lead to accidental social/intellectual conformism, because people start to watch what they do/say in the hope of the word getting out and them getting 'picked' for special prizes.

Comment author: Kerry_Vaughan 19 December 2017 09:20:15PM 6 points

Good question. I agree that the process for Individual outreach is mysterious and opaque. My feeling is that this is because the approach is quite new, and we don't yet know how we'll select people or how we'll deliver value (although we have some hypotheses).

That said, there are two answers to this question depending on the timeline we're talking about.

In the short run, the primary objective is to learn more about what we can do to be helpful. My general heuristic is that we should focus on the people/activity combinations that seem to us to be likely to produce large effects so that we can get some useful results, and then iterate. (I can say more about why I think this is the right approach, if useful).

In practice, this means that in the short-run we'll work with people that we have more information on and easier access to. This probably means working with people that we meet at events like EA Global, people in our extended professional networks, EA Grants recipients, etc.

In the future, I'd want something much more systematic to avoid the concerns you've raised and to avoid us being too biased in favor of our preexisting social networks. You might imagine something like 80K coaching where we identify some specific areas where we think we can be helpful and then do broader outreach to people that might fall into those areas. In any case, we'll need to experiment and iterate more before we can design a more systematic process.

Comment author: Kerry_Vaughan 07 July 2017 10:55:00PM 2 points

3c. Other research, especially "learning to reason from humans," looks more promising than HRAD (75%?)

I haven't thought about this in detail, but whether the evidence in this section justifies the claim in 3c might depend, in part, on what you think the AI Safety project is trying to achieve.

On first pass, the "learning to reason from humans" project seems like it may be able to quickly and substantially reduce the chance of an AI catastrophe by introducing human guidance as a mechanism for making AI systems more conservative.

However, it doesn't seem like a project that aims to do either of the following:

(1) Reduce the risk of an AI catastrophe to zero (or near zero)
(2) Produce an AI system that can help create an optimal world

If you think either (1) or (2) are the goals of AI Safety, then you might not be excited about the "learning to reason from humans" project.

You might think that "learning to reason from humans" doesn't accomplish (1) because a) logic and mathematics seem to be the only methods we have for stating things with extremely high certainty, and b) you probably can't rule out AI catastrophes with high certainty unless you can "peer inside the machine" so to speak. HRAD might allow you to peer inside the machine and make statements about what the machine will do with extremely high certainty.

You might think that "learning to reason from humans" doesn't accomplish (2) because it makes the AI human-limited. If we want an advanced AI to help us create the kind of world that humans would want "if we knew more, thought faster, were more the people we wished we were" etc. then the approval of actual humans might, at some point, cease to be helpful.

Comment author: Kerry_Vaughan 07 July 2017 05:45:21AM 19 points

This was the most illuminating piece on MIRIs work and on AI Safety in general that I've read in some time. Thank you for publishing it.

Comment author: MichaelDickens (EA Profile) 03 June 2017 06:46:47AM 4 points

Now that you mention it, I think this would be a much more interesting way to divide up funds. I have basically no idea whether AI safety or anti-factory farming interventions are more important; but given the choice between a "safe, guaranteed to help" fund and a "moonshot" fund I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense). I would much rather donate to a fund run by a person who shares my values and beliefs than a fund for a specific cause area, because I'm likely to change my mind about which cause area is best, and perhaps the fund manager will, too, and that's okay.

Some possible axes:

  1. life-improving vs. life-saving (or, similarly, total view vs. person-affecting view)
  2. safe bets vs. moonshots
  3. suffering-focused vs. "classical"
  4. short-term vs. far future

Although having all possible combinations along just these axes would require 16 funds, so in practice this won't work exactly as I've described.
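
For concreteness, the 16 comes from treating each of the four axes above as a binary choice: 2^4 = 16 combinations. A minimal sketch that just enumerates them (the labels are taken from the list above and are purely illustrative, not proposed fund names):

```python
from itertools import product

# The four axes from the list above, each with its two ends (illustrative labels only).
axes = [
    ("life-improving", "life-saving"),
    ("safe bets", "moonshots"),
    ("suffering-focused", "classical"),
    ("short-term", "far future"),
]

# One hypothetical fund per combination of axis choices: 2**4 = 16 in total.
combinations = list(product(*axes))
print(len(combinations))  # 16
for combo in combinations:
    print(" / ".join(combo))
```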

Comment author: Kerry_Vaughan 07 June 2017 04:02:29PM 2 points

I have basically no idea whether AI safety or anti-factory farming interventions are more important; but given the choice between a "safe, guaranteed to help" fund and a "moonshot" fund I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense).

Great idea. This makes sense to me.

Comment author: MichaelDickens (EA Profile) 03 June 2017 06:39:32AM 1 point

RE #1, organizations doing cause prioritization and not EA community building: Copenhagen Consensus Center, Foundational Research Institute, Animal Charity Evaluators, arguably Global Priorities Project, Open Philanthropy Project (which would obviously not be a good place to donate, but still fits the criterion).

RE #2: if the point is to do what Nick wants, it should really be a "Nick Beckstead fund", not an EA Community fund.

Comment author: Kerry_Vaughan 07 June 2017 04:00:28PM 0 points

RE #2: if the point is to do what Nick wants, it should really be a "Nick Beckstead fund", not an EA Community fund.

The fund is for whatever he thinks is best in EA Community building. If he wanted to fund other things, the EA Community fund would not be a good option.

Comment author: MichaelPlant 02 June 2017 12:13:34PM 7 points

Thanks for this Kerry, very much appreciate the update.

Three funds I'd like to see:

  1. The 'life-improving' or 'quality of life'-type fund that tries to find the best way to increase the happiness of people whilst they are alive. My view on morality leads me to think that is what matters most. This is the area I do my research on too, so I'd be very enthusiastic to help whoever the fund manager was.

  2. A systemic change fund. Part of this would be reputational (i.e. no one could then complain EAs don't take systemic change seriously); another part is that I'd really like to see what the fund manager would choose to give money to if it had to go to systemic change. I feel that would be a valuable learning experience.

  3. A 'moonshots' fund that supported high-risk, potentially high-reward projects. For reasons similar to 2 I think this would be a really useful way for us to learn.

My general thought is the more funds the better, presuming you can find qualified enough people to run them. It has the positive effect of demonstrating EA's openness and diversity, which should mollify our critics. As mentioned, it provides chances to learn stuff. And it strikes me as unlikely new funds would divert much money away from the current options. Suppose we had an EA environmentalism fund. I assume people who would donate to that wouldn't have been donating to, say, the health fund already. They'd probably be supporting green charities instead.

Comment author: Kerry_Vaughan 02 June 2017 05:02:58PM 1 point

Hey Michael, great ideas. I'd like to see all of these as well. My concern would just be whether there are charities available to fund in those areas. Do you have some potential grant recipients for these funds in mind?

Comment author: RandomEA 01 June 2017 08:23:37PM 5 points

One option is to split the EA Community Fund into a Movement/Community Building Fund (which could fund organizations that engage in outreach, support local groups, build online platforms etc.) and a Cause/Means Prioritization Fund (which could fund organizations that engage in cause prioritization, explore new causes, research careers, study the policy process etc.).

Comment author: Kerry_Vaughan 02 June 2017 05:00:58PM 3 points

This is an interesting idea. I have a few hesitations about it, however:

  1. The number of organizations which are doing cause prioritization and not also doing EA Community Building is very small (I can't think of any off the top of my head).
  2. My sense is that Nick wants to fund both community building and cause prioritization, so splitting these might place artificial constraints on what he can fund.
  3. The EA Community fund has received the least in donations so far ($83,000). Splitting might make the resulting funds too small to do much.
