Comment author: Benito 14 September 2018 01:39:59AM 2 points [-]

It is remarkable what humans can do when we think carefully and coordinate.

This short essay inspires me to work harder for the things I care about. Thank you for writing it.

Comment author: Jan_Sz 10 September 2018 10:51:26AM 3 points [-]

Cerasoli, C. P., Nicklin, J. M., & Ford, M. T. (2014). Intrinsic motivation and extrinsic incentives jointly predict performance: A 40-year meta-analysis.

In general, our most important theoretical and empirical contribution is that incentives and intrinsic motivation are not of necessity antagonistic; we found that incentives coexist with intrinsic motivation, depending on the type of performance and the contingency of the incentive. http://psycnet.apa.org/record/2014-03897-001

Comment author: Benito 12 September 2018 06:24:58AM 1 point [-]

Yeah, this matches my personal experience a bunch. I'm planning to look into this literature sometime soon, but I'd be interested to know if anyone has strong opinions about what first-principles model best fits with the existing work in this area.

Comment author: kbog  (EA Profile) 10 September 2018 10:26:03PM *  0 points [-]

Sure! Which is why I've been exchanging arguments with you.

And, therefore, you would be wise to treat Open Phil in the same manner, i.e. something to disagree with, not something to attack as not being Good Enough for EA.

Now what on earth is that supposed to mean? What are you trying to say with this? You want references, is that it? I have no idea what this claim is supposed to stand for :-/

It means that you haven't argued your point with the rigor and comprehensiveness required to convince every reasonable person. (No, stating "experts in my field agree with me" does not count here, even though it's a big part of it.)

Sure, and so far you haven't given me a single good reason.

Other people have discussed and linked Open Phil's philosophy; I see no point in rehashing it.

Comment author: Benito 10 September 2018 10:33:54PM 3 points [-]

I don't have the time to join the debate, but I'm pretty sure Dunja's point isn't "I know that OpenPhil's strategy is bad" but rather "Why does everyone around here act as though it is knowable that their strategy is good, given their lack of transparency?" It seems like people act as though OpenPhil's strategy is good, without being explicitly clear that they don't have the info required to assess it.

Dunja, is that accurate?

(Small note: I'd been meaning to try to read the two papers about continental drift and whatnot that you linked me to above a couple of months ago, but I couldn't get non-paywalled versions. If you have them, or could send them to me at gmail.com preceded by 'benitopace', I'd appreciate that.)

Comment author: RandomEA 23 July 2018 04:43:08PM *  6 points [-]

What do people think of the idea of having multiple funds (each run by a different manager) for those two areas (with donors allowed to choose a specific manager)?

Benefits would include:

  • a greater incentive for managers to spend money promptly and transparently

  • greater choice for donors (if managers have different worldviews, e.g. long-term future with AI safety as a priority, long-term future with less emphasis on AI safety, community with a focus on community building, community with a focus on cause prioritization)

  • an increase in the chance that good projects are funded

Costs could include:

  • creating tension between the fund managers (and perhaps in the community at large)

  • no fund manager having enough money for bigger grants (though perhaps large grants could be left to Open Phil)

  • an increase in the chance that harmful projects are funded

Note: The idea of multiple fund managers has been proposed before.

Comment author: Benito 23 July 2018 07:46:12PM 4 points [-]

I think one of the constraints faced here is a lack of experienced grantmakers who have good knowledge of x-risk and the EA community.

I'm not sure I agree that this constraint is real; I think I probably know a lot of good people whom I'd trust to be competent EA / x-risk grantmakers. But I certainly haven't spent 10 hours thinking about what the key qualities for the role are, and it's plausible that I'd find there are far fewer people competent enough than I currently think.

But if there are more grant managers, I think I disagree with your costs. Two or more grantmakers acting on their own, different, first-principles models seems great to me, and likely to increase the chance of good grantmaking occurring rather than to create tension or anything. Competition is really rare and valuable in domains like this.

Comment author: SiebeRozendal 23 July 2018 12:31:04PM *  1 point [-]

Feature request: show reading times

It would be useful to show approximate reading times for posts, so readers can decide whether or not to commit to a long article. This saves EAs valuable time and improves engagement with posts.

Comment author: Benito 23 July 2018 12:50:21PM *  5 points [-]

Yup, we actually already built this for LessWrong 2.0 (check it out on the frontpage, where each post shows how many minutes of reading it is), and so you'll get it when the CEA team launches the new EA Forum 2.0.
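(For the curious: estimates like this are typically just a word count divided by an assumed reading speed. Below is a minimal sketch of the idea - the ~250 words-per-minute figure and the function name are illustrative assumptions, not LessWrong's actual implementation.)

```typescript
// Estimate a post's reading time from its plain-text body.
// The ~250 words-per-minute figure is a commonly cited average for
// adult readers; it is an assumption here, not a measured value.
const WORDS_PER_MINUTE = 250;

function estimateReadingMinutes(body: string): number {
  // Count words by splitting on runs of whitespace.
  const wordCount = body.trim().split(/\s+/).filter(Boolean).length;
  // Round up, and show at least 1 minute even for very short posts.
  return Math.max(1, Math.ceil(wordCount / WORDS_PER_MINUTE));
}

// Example: a 1,200-word post displays as a "5 min" read.
console.log(estimateReadingMinutes("word ".repeat(1200))); // -> 5
```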

Comment author: cole_haus 22 July 2018 06:08:09PM 7 points [-]

I've not yet read it myself, but I'm curious if anyone involved in this project has read "Building Successful Online Communities: Evidence-Based Social Design" (https://mitpress.mit.edu/books/building-successful-online-communities). Seems quite relevant.

Comment author: Benito 22 July 2018 07:18:00PM *  7 points [-]

I actually have made detailed notes on the first 65% of the book, and hope to write up some summaries of the chapters.

It’s a great work. Doing the relevant literature reviews myself would likely have taken hundreds of hours, rather than the tens it took to study the book. As with all social science, the conclusions from most of the individual studies are suspect, but I think it sets out some great, concrete models to start from and test against other data we have.

Added: I’m Ben Pace, from LessWrong.

Added2: I finished the book. Not sure when my priorities will allow me to turn it into blogposts, alas.

Comment author: Benito 24 April 2018 11:16:09AM 6 points [-]

I’m such a big fan of “outreach is an offer, not persuasion”.

In general, my personal attitude to outreach in student groups is not to ‘get’ the best people via attraction and sales, but to just do something awesome that seems to produce value (e.g. build a research group around a question, organise workshops around a thinking tool, write a talk on a topic you’re confused about and want to discuss), and then the best people will join you on your quest. (Think quests, not sales.)

If your quest involves sales as a side-effect (e.g. you’re running an EAGx) then that’s okay, as long as the core of what you’re doing is trying to solve a real problem and make progress on an open question you have. Run EAGxes around a goal of moving the needle forward on certain questions, on making projects happen, solving some coordination problem in the community, or some other concrete problem-based metric. Not just “get more EAs”.

I think the reason this post (and all other writing on the topic) has had difficulty suggesting particular quests is that they tend to be deeply tied up in someone’s psyche. Nonetheless I think this is what’s necessary.

Comment author: itaibn 05 April 2018 12:13:20PM 13 points [-]

On this very website, clicking the link "New to Effective Altruism?" and a little browsing quickly leads to recommendations to give to EA funds. If EA funds really is intended to be a high-trust option, CEA should change that recommendation.

Comment author: Benito 05 April 2018 07:22:51PM *  1 point [-]

Yup. I suppose I wrote down my assessment of the information available about the funds and the sort of things that would cause me to donate to them, not the marketing used to advertise them - which does indeed feel disconnected. It seems there's a confusing attempt to make the funds seem reasonable to everyone whilst in fact not offering the sort of evidence that would make them so.

The evidence for them is not the 'evidence-backed charities' that made GiveWell famous/trustworthy, but rather "here is a high-status person in a related field who has a strong connection to EA", which seems not that different from the way other communities ask their members for funding - it's based on trust in the leaders of the community, not on metrics objectively verifiable to outsiders. So you should ask yourself what causes you to trust CEA and then use that, as opposed to the objective metrics associated with the EA funds (of which there are far fewer than with GiveWell). For example, if CEA has generally made good philosophical progress in this area and also made good hiring decisions, that would make you trust the grant managers more.

Comment author: Ervin 04 April 2018 10:58:49PM 18 points [-]

Looking at the EA Community Fund as an especially tractable example (due to the limited field of charities it could fund):

  • Since its launch in early 2017 it appears to have collected $289,968, and not to have regranted any of it until an $83k grant to EA Sweden currently in progress. I am basing this on https://app.effectivealtruism.org/funds/ea-community - it may not be precisely right.

  • On the one hand, it's good that some money is being disbursed. On the other hand, the only info we have is https://app.effectivealtruism.org/funds/ea-community/payouts/1EjFHdfk3GmIeIaqquWgQI . All we're told about the idea and why it was funded is that it's an "EA community building organization in Sweden" and that Will MacAskill recommended Nick Beckstead fund it "on the basis of (i) Markus's track record in EA community building at Cambridge and in Sweden and (ii) a conversation he had with Markus." Putting it piquantly (and over-strongly I'm sure, for effect), this sounds concerningly like an old boys' network: Markus > Will > Nick. (For those who don't know, Will and Nick were both involved in creating CEA.) It might not be, but the paucity of information doesn't let us reassure ourselves that it's not.

  • With $200k still unallocated, one would hope that the larger and more reputable EA movement-building projects out there would have been funded, or that we could at least see they've been diligently considered. I may be leaving some out, but these would at least include the non-CEA movement-building charities: EA Foundation (for their EA outreach projects), Rethink Charity, and EA London. As best as I could get an answer from Rethink Charity at http://effective-altruism.com/ea/1ld/announcing_rethink_priorities/dir?context=3 , this is not true in their case at least.

  • Meanwhile, these charities can't make their case directly to the movement-building donors whose money has gone to the fund since its creation.

This is concerning, and sounds like it may have done harm.

Comment author: Benito 05 April 2018 12:52:10AM *  5 points [-]

Note: EA is totally a trust network. I don't think the funds are trying to be anything like GiveWell, whom you're supposed to trust based on the publicly-verifiable rigour of their research. EA Funds is much further toward the end of the spectrum of "have you personally seen CEA make good decisions in this area" or "do you specifically trust one of the re-granters". Which is fine; trust is how tightly-knit teams and communities often get made. But if you gave to it thinking "this will be like giving to Oxfam, and will have the same accountability structure", then you'll correctly be surprised to find out it works significantly via personal connections.

In the same way you'd only fund a startup if you knew the founders and how they worked, you should probably only fund EA Funds for similar reasons - and if a startup tried to write its business plan such that anyone would have reason to fund it, the business plan probably wouldn't be very good. I think that EA should continue to be a trust-based network, and so on the margin I guess people should give less to EA Funds rather than EA Funds make grants that are more defensible.

Comment author: Jan_Kulveit 04 April 2018 10:27:46AM *  24 points [-]

From my observations, the biggest problem in the current EA funding ecosystem is structural bottlenecks.

It seems difficult to get relatively modest funding for a promising project if you are not well connected in the network, and perhaps for early-stage projects in general.

Why?

While OpenPhil has an abundance of resources, they are at the moment staff-limited, unlikely to grant to subjects they don't know directly, and unlikely to grant to small projects ($10k).

EA Funds seems similarly staff-limited, and also not capable of giving small grants.

In theory, EA Grants should fill this gap, but the program also seems staff-limited (I'm familiar with one grant application where, since Nov 2017, the date when the grant program will open has been pushed into the future at a rate of one month per month).

Part of the early-stage project grant support problem is that it generally means investing in people. Investing in people needs either trust or a lot of resources to evaluate them (which is in some aspects more difficult than evaluating projects that are up and running).

Trust in our setting usually comes via links in the social network, which is quite a limited resource.

So my conclusion is that efficient allocation is structurally limited by 1] a lack of staff in grant-making organizations and 2] the insufficient size of the "trust network" that would allow investment in promising projects based on their founders.

Individual EAs have good opportunities to get more impact from their donations than by donating to EA Funds, if their funding overcomes these structural bottlenecks. That may mean:

a] donating to projects which are under the radar of OpenPhil and EA Funds

b] using their personal knowledge of people to support early-stage efforts

Comment author: Benito 04 April 2018 06:46:26PM *  12 points [-]

On trust networks: these are very powerful and effective. Y Combinator, for example, says it gets most of its best companies via personal recommendation, and the top VCs say that the best way to get funded by them is an introduction from someone they trust.

(Btw, I got an EA Grant last year, I expect in large part because CEA knew me through my successfully running an EAGx conference. I think the above argument is strong on its own, but my guess is many folks around here would like me to mention this fact.)

On things you can do with your money that are better than EA funds: personally I don’t have that much money, but with my excess I tend to do things like buy flights and give money to people I’ve made friends with who seem like they could get a lot of value from it (e.g. buy a flight to a CFAR workshop, fund them living somewhere to work on a project for 3 months, etc). This is the sort of thing only a small donor with personal connections can do, at least currently.

On EA grants:

Part of the early-stage project grant support problem is that it generally means investing in people. Investing in people needs either trust or a lot of resources to evaluate them (which is in some aspects more difficult than evaluating projects that are up and running).

Yes. If I were running EA Grants I would continually be in contact with the community, finding out people’s project ideas, discussing them with people for 5 hours, getting to know them and how much I could trust them, and then handing out money as I saw fit. This is one of the biggest funding bottlenecks in the community. The people who seem to have addressed it best have actually been the winners of the donor lotteries, who seemed to take it seriously and use the personal information they had.

I haven’t even heard about EA Grants this time around, which seems like a failure on all the obvious axes (including that of letting grantees know that the EA community is a reliable source of funding that you can make multi-year plans around - this makes me mostly update toward EA Grants being a one-off thing that I shouldn’t rely on).
