Comment author: satvikberi 28 September 2016 05:15:58PM 15 points

These are both good points worth addressing! My understanding on (2) is that any proposed method of slowing down AGI research would likely antagonize the majority of AI researchers while producing relatively little actual slowdown. It seems more valuable to build alliances with current AI researchers and get them to care about safety, in order to increase the amount of safety-concerned research relative to safety-agnostic research.

Comment author: Ben_Kuhn 20 May 2015 12:56:45PM 2 points

I wouldn't say that New Incentives has "a lot of evidence and aren't really exploring the space of possible interventions." But again, this is just dueling anecdata for now.

GiveWell-style research seems very trainable, and it is plausible that GiveWell could hire less experienced people and provide more training if they had significantly more money.

GiveWell already hires and trains a number of people with 0 experience (perhaps most of their hires).

The right way to learn organization-starting skills might be to start an organization; Paul Graham suggests that this is the right way to learn startup-building skills. In that case we'd want to fund more people running experimental EA projects.

Ah, good point. This seems like a pretty plausible mechanism.

Comment author: satvikberi 20 May 2015 09:19:20PM 1 point

GiveWell already hires and trains a number of people with 0 experience (perhaps most of their hires).

Oh, cool! I definitely didn't realize this.

Comment author: Ben_Kuhn 20 May 2015 01:05:14PM 4 points
  • Donors discuss publicly whether they feel the pool of giving opportunities is deep;
  • Charities talk publicly about whether they are more funding- or talent-constrained;

You mean like they are in the comments of this post? ;-)

(This reminds me of a similar conversation we had on this post...)

  • Charities raise or lower salaries, making direct work more or less appealing to people using that as a heuristic in choosing a career.

Do we know how sensitive recruiting is to salaries? I would have thought not very sensitive for direct work, since many people aren't using salary as a heuristic there.

Comment author: satvikberi 20 May 2015 05:01:56PM 7 points

I get the (purely anecdotal) impression that recruiting is sensitive to salaries, in the sense that some people who would be good fits for EA charities automatically rule them out because the salaries are low enough that they would have to make undesirable time/money tradeoffs. However, it's a bit of a tricky problem: most nonprofits want to pay everyone roughly the same amount, so hiring one marginal person at, say, 20% more really means increasing all salaries by that much.

Another relevant factor is how much of a salary cut you're looking at when moving from EtG to direct work. In for-profit organizations, the most competent people frequently get paid 3-10x as much as average. I don't think a 3-10x disparity would be culturally acceptable in EA charities, which means that someone at the top essentially has to forgo a much higher percentage of their salary to do direct work.

Comment author: Ben_Kuhn 20 May 2015 04:59:46AM 2 points

in practice many high-leverage opportunities are still (in my opinion) available to marginal EtGers — at least, if those EtGers are willing to be at least 1/5th as proactive about finding good opportunities as, say, Matt Wage is.

Interesting! Are you able to be more concrete about those opportunities? (Or how proactive Matt is?)

And finally, it's also possible that individual EtGers might have different values or world-models than the public faces of GW/GV have, and for that reason those marginal EtGers could have good opportunities available to them that are not likely to be met by GW-directed funding anytime soon, if ever.

Yeah, definitely agree that this is the case. On the other hand, it seems like there are a lot of EtGers with a fairly diverse set of values/world-models in place already. I'm worried specifically about marginal EtGers; I think the average EtGer is doing super useful stuff.

Comment author: satvikberi 20 May 2015 07:02:26AM *  4 points

From talking to Matt Wage a few times, I got the impression that he spends the equivalent of a few full-time work weeks per year figuring out where to donate. Requiring potential donors to spend that much time seems like a flaw in the system, and EA Ventures seems to be addressing it.

Comment author: Ben_Kuhn 20 May 2015 04:43:09AM 3 points

This doesn't necessarily mean much, because fundraising targets have a lot to do with how much money EA orgs believe they can raise.

I agree that this could confound the result, but it's still some evidence!

The general problem I see is a lack of "angel investing" or its equivalent: the idea of putting money into small, experimental organizations and funding them further as they grow. (As a counter-counterpoint, EA Ventures seems well poised to function as an angel investor in the nonprofit world.)

It's hard to say for sure without knowing the fraction of solicited EA startups that get funding, but GiveWell has made some angel-esque investments in the past (e.g. New Incentives), and I think some large individual donors have as well.

the problem might be that there are very few people with the skills needed, and more funding can be used to train people, like MIRI is doing with the summer fellows program.

This is pretty plausible for AI risk, but not so obvious for generic organization-starting, IMO. Are there specific skills you can think of that might be a factor here?

Comment author: satvikberi 20 May 2015 07:00:09AM 3 points

It's hard to say for sure without knowing the fraction of solicited EA startups that get funding, but GiveWell has made some angel-esque investments in the past (e.g. New Incentives), and I think some large individual donors have as well.

I get the impression that these are going mostly to programs that already have a lot of evidence and aren't really exploring the space of possible interventions. I tend to believe that the effectiveness of projects probably follows a power law, and that therefore the most effective interventions are probably ones people haven't tried yet, so funding variants on existing programs doesn't help us find those interventions.

This is pretty plausible for AI risk, but not so obvious for generic organization-starting, IMO. Are there specific skills you can think of that might be a factor here?

GiveWell-style research seems very trainable, and it is plausible that GiveWell could hire less experienced people and provide more training if they had significantly more money (I have no information on this, though).

The right way to learn organization-starting skills might be to start an organization; Paul Graham suggests that this is the right way to learn startup-building skills. In that case we'd want to fund more people running experimental EA projects.

Comment author: satvikberi 19 May 2015 09:05:19PM *  15 points

To play devil's advocate (these don't actually represent my beliefs):

I can’t remember any EA orgs failing to reach a fundraising target.

This doesn't necessarily mean much, because fundraising targets have a lot to do with how much money EA orgs believe they can raise.

Open Phil has recently posted about an org they wish existed but doesn’t and funder-initiated startups.

It's pretty hard to get funding for a new organization; for example, Spencer and I put a lot of effort into it without much success. The general problem I see is a lack of "angel investing" or its equivalent: the idea of putting money into small, experimental organizations and funding them further as they grow. (As a counter-counterpoint, EA Ventures seems well poised to function as an angel investor in the nonprofit world.)

Also, to address the general point that EA is talent-constrained: the problem might be that there are very few people with the skills needed, and more funding can be used to train people, like MIRI is doing with the summer fellows program. In that case, earning to give is still a good solution to the talent constraint.