Comment author: Kerry_Vaughan 07 July 2017 10:55:00PM 2 points [-]

3c. Other research, especially "learning to reason from humans," looks more promising than HRAD (75%?)

I haven't thought about this in detail, but whether the evidence in this section justifies the claim in 3c may depend, in part, on what you think the AI Safety project is trying to achieve.

On first pass, the "learning to reason from humans" project seems like it may be able to quickly and substantially reduce the chance of an AI catastrophe by introducing human guidance as a mechanism for making AI systems more conservative.

However, it doesn't seem like a project that aims to do either of the following:

(1) Reduce the risk of an AI catastrophe to zero (or near zero)

(2) Produce an AI system that can help create an optimal world

If you think either (1) or (2) is a goal of AI Safety, then you might not be excited about the "learning to reason from humans" project.

You might think that "learning to reason from humans" doesn't accomplish (1) because a) logic and mathematics seem to be the only methods we have for stating things with extremely high certainty, and b) you probably can't rule out AI catastrophes with high certainty unless you can "peer inside the machine" so to speak. HRAD might allow you to peer inside the machine and make statements about what the machine will do with extremely high certainty.

You might think that "learning to reason from humans" doesn't accomplish (2) because it makes the AI human-limited. If we want an advanced AI to help us create the kind of world that humans would want "if we knew more, thought faster, were more the people we wished we were" etc. then the approval of actual humans might, at some point, cease to be helpful.

Comment author: Kerry_Vaughan 07 July 2017 05:45:21AM 18 points [-]

This was the most illuminating piece on MIRIs work and on AI Safety in general that I've read in some time. Thank you for publishing it.

Comment author: MichaelDickens  (EA Profile) 03 June 2017 06:46:47AM 4 points [-]

Now that you mention it, I think this would be a much more interesting way to divide up funds. I have basically no idea whether AI safety or anti-factory farming interventions are more important; but given the choice between a "safe, guaranteed to help" fund and a "moonshot" fund I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense). I would much rather donate to a fund run by a person who shares my values and beliefs than a fund for a specific cause area, because I'm likely to change my mind about which cause area is best, and perhaps the fund manager will, too, and that's okay.

Some possible axes:

  1. life-improving vs. life-saving (or, similarly, total view vs. person-affecting view)
  2. safe bets vs. moonshots
  3. suffering-focused vs. "classical"
  4. short-term vs. far future

Having all possible combinations along just these four axes would require 16 funds, though, so in practice this won't work exactly as I've described.

Comment author: Kerry_Vaughan 07 June 2017 04:02:29PM 2 points [-]

I have basically no idea whether AI safety or anti-factory farming interventions are more important; but given the choice between a "safe, guaranteed to help" fund and a "moonshot" fund I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense).

Great idea. This makes sense to me.

Comment author: MichaelDickens  (EA Profile) 03 June 2017 06:39:32AM 1 point [-]

RE #1, organizations doing cause prioritization and not EA community building: Copenhagen Consensus Center, Foundational Research Institute, Animal Charity Evaluators, arguably Global Priorities Project, Open Philanthropy Project (which would obviously not be a good place to donate, but still fits the criterion).

RE #2: if the point is to do what Nick wants, it should really be a "Nick Beckstead fund", not an EA Community fund.

Comment author: Kerry_Vaughan 07 June 2017 04:00:28PM 0 points [-]

RE #2: if the point is to do what Nick wants, it should really be a "Nick Beckstead fund", not an EA Community fund.

The fund is for whatever he thinks is best in EA Community building. If he wanted to fund other things, the EA Community fund would not be a good option.

Comment author: MichaelPlant 02 June 2017 12:13:34PM 7 points [-]

Thanks for this Kerry, very much appreciate the update.

Three funds I'd like to see:

  1. The 'life-improving' or 'quality of life'-type fund that tries to find the best way to increase the happiness of people whilst they are alive. My view on morality leads me to think that is what matters most. This is the area I do my research on too, so I'd be very enthusiastic to help whoever the fund manager was.

  2. A systemic change fund. Part of this would be reputational (i.e. no one could then complain EAs don't take systemic change seriously); another part is that I'd really like to see what the fund manager would choose to give money to if it had to go to systemic change. I feel that would be a valuable learning experience.

  3. A 'moonshots' fund that supported high-risk, potentially high-reward projects. For reasons similar to 2 I think this would be a really useful way for us to learn.

My general thought is the more funds the better, presuming you can find qualified enough people to run them. It has the positive effect of demonstrating EA's openness and diversity, which should mollify our critics. As mentioned, it provides chances to learn stuff. And it strikes me as unlikely new funds would divert much money away from the current options. Suppose we had an EA environmentalism fund. I assume people who would donate to that wouldn't have been donating to, say, the health fund already. They'd probably be supporting green charities instead.

Comment author: Kerry_Vaughan 02 June 2017 05:02:58PM 1 point [-]

Hey Michael, great ideas. I'd like to see all of these as well. My concern would just be whether there are charities available to fund in the areas. Do you have some potential grant recipients for these funds in mind?

Comment author: RandomEA 01 June 2017 08:23:37PM 5 points [-]

One option is to split the EA Community Fund into a Movement/Community Building Fund (which could fund organizations that engage in outreach, support local groups, build online platforms etc.) and a Cause/Means Prioritization Fund (which could fund organizations that engage in cause prioritization, explore new causes, research careers, study the policy process etc.).

Comment author: Kerry_Vaughan 02 June 2017 05:00:58PM 3 points [-]

This is an interesting idea. I have a few hesitations about it, however:

  1. The number of organizations which are doing cause prioritization and not also doing EA Community Building is very small (I can't think of any off the top of my head).
  2. My sense is that Nick wants to fund both community building and cause prioritization, so splitting these might place artificial constraints on what he can fund.
  3. EA Community building has the least donations so far ($83,000). Splitting might make the resulting funds too small to be able to do much.

Comment author: casebash 01 June 2017 11:59:56PM 4 points [-]

I do see some advantages to keeping the number of funds low while this amount of money is moving through, because it increases the chance that any one particular fund will be able to support a particularly promising project that isn't appreciated by other donors.

Comment author: Kerry_Vaughan 02 June 2017 04:46:39PM *  0 points [-]

Great point.

A different option for handling this concern would be for us to let fund managers email the EA Funds users if they have a good opportunity, but lack funding.

Comment author: BenHoffman 08 May 2017 03:59:55PM *  4 points [-]


I think that in a writeup for the two funds Nick is managing, CEA has done a fine job making it clear what's going on. The launch post here on the Forum was also very clear.

My worry is that this isn't at all what someone attracted by EA's public image would be expecting, since so much of the material is about experimental validation and audit.

I think that there's an opportunity here to figure out how to effectively pitch far-future stuff directly, instead of grafting it onto existing global-poverty messaging. There's a potential pitch centered around: "Future people are morally relevant, neglected, and extremely numerous. Saving the world isn't just a high-minded phrase - here are some specific ways you could steer the course of the future a lot." A lot of Nick Bostrom's early public writing is like this, and a lot of people were persuaded by this sort of thing to try to do something about x-risk. I think there's a lot of potential value in figuring out how to bring more of those sorts of people together, and - when there are promising things in that domain to fund - help them coordinate to fund those things.

In the meantime, it does make sense to offer a fund oriented around the far future, since many EAs do share those preferences. I'm one of them, and think that Nick's first grant was a promising one. It just seems off to me to aggressively market it as an obvious, natural thing for someone who's just been through the GWWC or CEA intro material to put money into. I suspect that many of them would have valid objections that are being rhetorically steamrollered, and a strategy of explicit persuasion has a better chance of actually encountering those objections, and maybe learning from them.

I recognize that I'm recommending a substantial strategy change, and it would be entirely appropriate for CEA to take a while to think about it.

Comment author: Kerry_Vaughan 08 May 2017 05:33:28PM 3 points [-]

Hey, Ben. Just wanted to note that I found this very helpful. Thank you.

Comment author: BenHoffman 28 April 2017 02:48:27AM 1 point [-]

The featured event was the AI risk thing. My recollection is that there was nothing else scheduled at that time so everyone could go to it. That doesn't mean there wasn't lots of other content (there was), nor do I think centering AI risk was necessarily a bad thing, but I stand by my description.

Comment author: Kerry_Vaughan 28 April 2017 05:58:28PM *  4 points [-]

We didn't offer any alternative events during Elon's panel because we (correctly) perceived that there wouldn't be demand for going to a different event and putting someone on stage with few people in the audience is not a good way to treat speakers.

We had to set up an overflow room for people that didn't make it into the main room during the Elon panel, and even the overflow room was standing room only.

I think this is worth pointing out because of the preceding sentence:

However, EA leadership tends to privately focus on things like AI risk.

The implication is that we aimed to bias the conference towards AI risk and against global poverty because of some private preference for AI risk as a cause area.[1]

I think we can be fairly accused of aiming for Elon as an attendee and not some extremely well known global poverty person.

However, with the exception of Bill Gates (who we tried to get), I don't know of anyone in global poverty with anywhere close to the combination of a) general renown and b) reachability. So, I think trying to get Elon was probably the right call.

Given that Elon was attending, I don't see what reasonable options we had for more evenly distributing attention between plausible causes. Elon casts a big shadow.

[1] Some readers contacted me to let me know that they found this sentence confusing. To clarify, I do have personal views on which causes are higher impact than others, but the program design of EA Global was not an attempt to steer EA on the basis of those views.

Comment author: Kerry_Vaughan 27 April 2017 08:56:51PM 6 points [-]

Two years ago many attendees at the EA Global conference in the San Francisco Bay Area were surprised that the conference focused so heavily on AI risk, rather than the global poverty interventions they’d expected.

EA Global 2015 had one panel on AI (in the morning, on day 2) and one talk triplet on Global Poverty (in the afternoon, on day 2). Most of the content was not cause-specific.

People remember EA Global 2015 as having a lot of AI content because Elon Musk was on the AI panel, which made it loom very large in people's minds. So, while it's fair to say that more attention ended up on AI than on global poverty, it's not fair to say that the content focused more on AI than on global poverty.
