Comment author: Gregory_Lewis 11 July 2018 03:05:26PM 4 points [-]

One key challenge I see is something like a 'grant-making talent constraint'. The skills needed to make good grants (e.g. good judgement, domain knowledge, maybe tacit knowledge, maybe a relevant network, possibly commissioning/governance/operations skill) are not commonplace, and are hard to explicitly 'train' outside of i) having a lot of money of your own to practise with, or ii) working in a relevant field (so people might approach you for advice). (Open Philanthropy's recent hiring round might provide another route, but places were limited and extraordinarily competitive.)

Yet the talents needed to end up at (i) or (ii) are somewhat different from grant-making talent, as are the skills one acquires along the way: neither (e.g.) having a lot of money and being interested in AI safety, nor being an AI safety researcher oneself, guarantees making good AI safety grants; time spent doing either of these things is time one cannot dedicate to gaining grant-making experience.

Dividing this labour (as the suggestions in the OP point towards) seems the way to go. Yet this can only get you so far if 'grant-making talent' is limited not only among people with the opportunity to make grants, but across the EA population in general. Further, good grant-makers will gravitate to the largest pools of funding (reasonably enough, as this is where their contribution has the greatest leverage). This predictably leads to gaps in the funding ecosystem where 'good projects from the point of view of the universe' and 'good projects from the point of view of the big funders' subtly differ. I'm not sure I agree with the suggestions in the OP (i.e. upskilling people, new orgs), but I find Carl Shulman's remarks here persuasive.

Comment author: Denise_Melchin 11 July 2018 04:53:52PM *  2 points [-]

+1. I didn’t spell it out this explicitly, but what I found slightly odd about this post is that the bottleneck on more grant-making is not infrastructure, but qualified grant-makers.

Comment author: Brendon_Wong 10 July 2018 04:09:39PM *  5 points [-]

Thanks for the information.

The "ideas" were listed more to break down possible implementations than to propose executing all of them in their exact forms. #1 could perhaps be incorporated into the new EA Hub, and I am aware Dony Christie is exploring EA Peer Funding, but perhaps you are referring to other people. I am not familiar with anyone who is working on #2, but I'm happy to hear that this is being worked on in some capacity. Yeah, I agree that improving EA Grants would be a good way to make something like #3 possible, and that will likely end up happening.

I believe the exact form of each listed idea contains problems, and my intended proposal is an attempt to fuse all of the ideas and eliminate the weaknesses of implementing any listed idea on its own. I don't know of any attempts at a fused approach, but please correct me if I'm wrong. For example, if all three listed ideas were implemented in their exact forms, centralizing this sort of grant funding, much as EA Grants has done, could cause many problems. There is currently no grant transparency. A lot of possibly useful projects may have applied, not gotten funded, and then given up; or, as Remmelt mentioned, other donors may not support a project because it was not funded by CEA. There is no way for other donors in the community who are uniquely equipped to evaluate, contribute to, or fund projects to actually see what projects exist in EA and do so. Basically, not only is centralization potentially inefficient, it may have already led to a large number of project failures, some of which might have evolved into successful, high-impact projects under a different grant-making model.

Seconding alexherwix: unless there are privacy concerns, sharing information about other people working in this space and their ideas would be useful for coordination. Also, early-stage projects often don't work out, so if the project is important enough, it is worth coordinating efforts, or perhaps even building the same broad idea with different teams and very different implementations, in case one team-implementation pairing succeeds where the others fare poorly or prove highly suboptimal.

Comment author: Denise_Melchin 11 July 2018 09:41:10AM 1 point [-]

I agree collaboration between the various implementations of the different ideas is valuable, and it can be good to help out technically. I'm less convinced of starting a fused approach as an outsider. As Ryan Carey said, the most important things for good work in this field are i) people good at grant-making, i.e. making funding decisions, and ii) the actual money.

Thinking about how to ideally handle grant-making without having either strikes me as putting the cart before the horse. While it might be great to have a fused approach, whether to collaborate further will largely be up to the projects that have i) and ii), though other people might be able to help with technical aspects.

Comment author: Denise_Melchin 10 July 2018 10:26:43AM 4 points [-]

All of the ideas you listed are already being worked on by some people. Just yesterday I talked to someone who is intending to implement #1 soon, #3 will likely be achieved by handling EA Grants differently in the future, and there are already a couple of people working on #2, though there is further room for improvement.

Comment author: remmelt  (EA Profile) 04 July 2018 03:54:25PM 1 point [-]

Hi Denise, can you give some examples of superfluous language? I tried to explain it as simply as possible (though sometimes jargon and links are needed to avoid having to explain concepts in long paragraphs) but I’m sure I still made it too complicated in places.

Comment author: Denise_Melchin 08 July 2018 07:55:31PM 2 points [-]

It is still not clear to me how your model differs from what EAs usually call different levels of meta. What is it adding? Using terms like 'construal level' complicates the matter further.

I'm happy to elaborate more via PM if you like.

Comment author: Denise_Melchin 04 July 2018 02:53:37PM 6 points [-]

I think you're making some valuable points here (e.g. making sure information is properly fed into the 'higher levels'), but I think your posts would have been a lot better if you had skipped all the complicated modelling and difficult language. It strikes me as superfluous: its main effect seems to be making your post harder to read without adding any content.

Comment author: Denise_Melchin 04 July 2018 10:37:53AM *  0 points [-]

(Denise as mod)

The EA Forum is a place for high-level discussion of EA matters which are often too long for, or otherwise unsuited to, other spaces like Facebook. Ideas that are not yet fully fledged or thoroughly argued are better placed in those other spaces, since the EA Forum gets too crowded otherwise.

Therefore I'll delete your post. You can modify it and repost, or alternatively, post it elsewhere (like the EA Hangout Facebook group).

Edit: All further comments will be deleted.

Comment author: Khorton 09 June 2018 10:28:04AM 3 points [-]

What are the forum norms around advertising?

Comment author: Denise_Melchin 11 June 2018 01:48:48PM 7 points [-]

Usually advertising is not welcome, but in this case Lynette asked us (the EA Forum moderators) for permission in advance. Lynette got an EA Grant to do her work, and it's complementary to other EA community services.

Comment author: Denise_Melchin 06 June 2018 10:38:18PM *  15 points [-]

I’m really curious which description of EA you used in your study; could you post it here? What kinds of attitudes towards EA did you ask about?

I can imagine there might be very different results depending on the framing.

My take on this is that while many more people than now might agree with EA ideas, fewer of them will find the lived practice and community to be a good fit. I think that’s a pretty unfortunate historical lock-in.

Comment author: Denise_Melchin 29 May 2018 01:07:00PM 3 points [-]

Where are you actually disagreeing with Joey and the conclusions he is drawing?

Joey is arguing that the EA movement might accidentally overcount its impact by adding together each individual actor's counterfactual impact. You point out a scenario in which various individual actors' actions are all necessary for the counterfactual impact to happen, so it is legitimate for each actor to claim the full counterfactual impact. This seems tangential to Joey's point, which is fundamentally about the practical implications of this problem. The questions of who is responsible for the counterfactual impact and who should get credit are being asked because, as a movement, we have to decide how to allocate our resources between the different actors. We also need to be careful not to overcount impact as a movement in our outside communications, and not to get the wrong impression ourselves.

Comment author: Denise_Melchin 29 May 2018 12:27:00PM 2 points [-]

I think it would have been better for you to post this as a comment on your own or Joey’s post. Having a discussion in three different places makes the discussion hard to follow. Two are more than enough.
