remmelt comments on Update on Effective Altruism Funds - Effective Altruism Forum


Comment author: remmelt  (EA Profile) 20 April 2017 11:51:05PM 18 points [-]

While this way of gauging feedback is far from perfect, our impression is that community feedback has been largely positive. Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.

As much as I admire the care that has been put into EA Funds (e.g. the 'Why might you choose not to donate to this fund?' heading for each fund), this sentence came across as 'too easy' to me. To be honest, it made me wonder whether the analysis was self-critical enough (I admit to having only scanned it), as I'd be surprised if the trusted people you spoke with couldn't think of any significant risks. I also don't think a 'largely positive' reception is a good indicator. If a person like Eliezer were to stand out as the sole person in disagreement, that should give pause for thought.

Even though the article is an update, I'm somewhat concerned that it goes into little detail on possible long-term risks. One that seems especially important is the consequences of centralising fund allocation (mostly to managers connected to OP) for the diversity of views and decentralised correction mechanisms within our community. Please let me know where you think I might have made mistakes or missed important aspects.

I especially want to refer to Rob Wiblin's earlier comment: http://effective-altruism.com/ea/17v/ea_funds_beta_launch/aco

I love EA Funds, but my main concern is that as a community we are getting closer and closer to a single point of failure. If OPP reaches the wrong conclusion about something, there's now fewer independent donors forming their own views to correct them. This was already true because of how much people used the views of OPP and its staff to guide their own decisions.

We need some diversity (or outright randomness) in funding decisions for robustness.

Comment author: Peter_Hurford  (EA Profile) 21 April 2017 02:05:13AM 11 points [-]

One that seems especially important is the consequences of centralising fund allocation (mostly to managers connected to OP)

This is my largest concern as well. As someone who looks for funding for projects, I've noticed a lot of donors centralizing around these funds. This is good for them, because it saves them the time of having to evaluate, and good for me, because it gives me a single place to request funding. But if I can't convince them to fund me for some reason and I think they're making a mistake, there are no other donors to appeal to anymore. It's all or nothing.

Comment author: Kerry_Vaughan 21 April 2017 05:11:07PM 8 points [-]

But if I can't convince them to fund me for some reason and I think they're making a mistake, there are no other donors to appeal to anymore. It's all or nothing.

The upside of centralization is that it helps prevent the unilateralist curse for funding bad projects. As the number of funders increases, it becomes increasingly easy for the bad projects to find someone who will fund them.

That said, I share the concern that EA Funds will become a single point of failure for projects such that if EA Funds doesn't fund you, the project is dead. We probably want some centralization but we also want worldview diversification. I'm not yet sure how to accomplish this. We could create multiple versions of the current funds with different fund managers, but that is likely to be very confusing to most donors. I'm open to ideas on how to help with this concern.
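[Editor's note] The dynamic Kerry describes can be made concrete with a little arithmetic: if each of n independent funders would mistakenly approve a given bad project with probability p, and one "yes" is enough, the chance the project gets funded is 1 − (1 − p)^n, which grows quickly with n. A minimal sketch (the 5% error rate is purely an illustrative assumption):

```python
# Illustrative sketch of the unilateralist's curse in funding.
# Assumption: each of n independent funders mistakenly approves a
# given bad project with probability p, and a single "yes" suffices.

def p_bad_project_funded(n: int, p: float) -> float:
    """Probability that at least one of n independent funders says yes."""
    return 1 - (1 - p) ** n

for n in (1, 5, 20):
    print(f"{n:>2} funders: {p_bad_project_funded(n, 0.05):.1%}")
```

With p = 5%, one funder approves the bad project 5% of the time, but twenty independent funders approve it roughly two-thirds of the time, which is the sense in which more funders makes it "increasingly easy" for bad projects to find backing.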

Comment author: Benito 21 April 2017 08:45:14PM 4 points [-]

Quick (thus likely wrong) thought on solving the unilateralist's curse: put multiple people in charge of each fund, each representing a different worldview, and give each of them 3 grant vetoes per year (so they can block grants that are awful under their worldview). You could also give them control of a percentage of funds in proportion to CEA's / the donors' confidence in that worldview.
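[Editor's note] Benito's proposal can be sketched in code. Everything here is hypothetical: the three-veto budget, the confidence weights, and the proportional split are placeholders for whatever CEA and donors would actually choose, not a description of any real EA Funds mechanism.

```python
# Hypothetical sketch of the veto-plus-worldview-weights proposal.
# Several managers share a fund; each can veto a limited number of
# grants per year, and each controls a slice of the fund proportional
# to the confidence placed in their worldview.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Manager:
    name: str
    confidence_weight: float  # CEA's / donors' confidence in this worldview
    vetoes_left: int = 3      # grant vetoes remaining this year

@dataclass
class Fund:
    total: float
    managers: List[Manager] = field(default_factory=list)

    def share(self, manager: Manager) -> float:
        """Money this manager controls, proportional to confidence."""
        total_weight = sum(m.confidence_weight for m in self.managers)
        return self.total * manager.confidence_weight / total_weight

    def grant_proceeds(self, vetoers: List[Manager]) -> bool:
        """A grant goes through unless some manager spends a veto on it."""
        for m in vetoers:
            if m.vetoes_left > 0:
                m.vetoes_left -= 1
                return False
        return True
```

Under this sketch, a manager with weight 2 alongside a manager with weight 1 controls two-thirds of the fund, and any grant survives only if nobody objecting to it is willing to spend one of their scarce vetoes.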

Comment author: Peter_Hurford  (EA Profile) 22 April 2017 02:08:29AM 5 points [-]

Or maybe allocate grants according to a ranked preference vote of the three fund managers, plus have them all individually and publicly write up their reasoning and disagreements? I'd like that a lot.
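[Editor's note] Peter's suggestion could be implemented as a simple Borda count: each manager ranks the candidate grants, ranks convert to points, and the budget is split in proportion to total points. The point scheme and the proportional split below are my illustrative choices, not anything Peter specified.

```python
# Illustrative Borda-count allocation across candidate grants.
# Each fund manager submits a ranking (best first); a grant in place k
# of an n-grant ranking earns n - k points, and the budget is split
# in proportion to each grant's total points.

from collections import defaultdict
from typing import Dict, List

def borda_allocate(rankings: Dict[str, List[str]], budget: float) -> Dict[str, float]:
    scores: Dict[str, float] = defaultdict(float)
    for ranked in rankings.values():
        n = len(ranked)
        for place, grant in enumerate(ranked):
            scores[grant] += n - place  # top choice earns n points
    total = sum(scores.values())
    return {grant: budget * s / total for grant, s in scores.items()}
```

The appeal of a deterministic rule like this is that the resulting split is easy to publish alongside each manager's written reasoning and disagreements, which was the other half of the proposal.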

Comment author: DonyChristie 23 April 2017 07:04:02PM 0 points [-]

Or maybe allocate grants according to a ranked preference vote of the three fund managers, plus have them all individually and publicly write up their reasoning and disagreements?

Serious question: What do you think of N fund managers in your scenario?

Comment author: Peter_Hurford  (EA Profile) 23 April 2017 07:16:06PM 1 point [-]

I don't understand the question.

Comment author: DonyChristie 24 April 2017 09:38:50PM 0 points [-]

Allocating grants according to a ranked preference vote of an arbitrary number of people (and having them write up their arguments): what is the optimal number here? Where is the inflection point at which adding more people decreases the quality of the grants?

On a third reading, I see I somewhat misconstrued "three fund managers" as "three fund managers per fund" rather than "the three fund managers we have right now (Nick, Elie, Lewis)", but the possibility is still interesting in any variation.

Comment author: Peter_Hurford  (EA Profile) 25 April 2017 02:11:02AM 0 points [-]

That's a good question. I did intend "three fund managers" to mean "the three fund managers we have right now", but I could also see the optimal number of people being 2-3.

Comment author: ChristianKleineidam 23 April 2017 07:30:23AM 2 points [-]

As the number of funders increases, it becomes increasingly easy for the bad projects to find someone who will fund them.

I'm not sure that's true. There are a lot of venture funds in the Valley but that doesn't mean it's easy to get any venture fund to give you money.

Comment author: Daniel_Eth 24 April 2017 01:32:39AM 1 point [-]
Comment author: MichaelDickens  (EA Profile) 24 April 2017 03:11:23AM 4 points [-]

There's no shortage of bad ventures in the Valley

Every time in the past week or so that I've seen someone talk about a bad venture, they've given the same example. That suggests that there is indeed a shortage of bad ventures--or at least, ventures bad enough to get widespread attention for how bad they are. (Most ventures are "bad" in a trivial sense because most of them fail, but many failed ideas looked like good ideas ex ante.)

Comment author: Daniel_Eth 24 April 2017 04:32:46AM 2 points [-]

Or that there's one recent venture that's so laughably bad that everyone is talking about it right now...

Comment author: ChristianKleineidam 24 April 2017 08:13:49AM 0 points [-]

It's not clear that Juicero is actually a bad venture in the sense that it doesn't return money for its investors.

Even if that were the case, VCs make most of their money with a handful of companies. A VC can have a good fund even if 90% of their investments don't return their money.

I would guess that the same is true for high-risk philanthropic investments. It's okay if some high-risk investments don't provide value, as long as you are also betting on some investments that deliver.

Comment author: Kerry_Vaughan 23 April 2017 07:32:52PM 1 point [-]

I'm not sure that's true. There are a lot of venture funds in the Valley but that doesn't mean it's easy to get any venture fund to give you money.

I don't have the precise statistics handy, but my understanding is that VC returns are very good for a small number of firms and break-even or negative for most VC firms. If that's the case, it suggests that as more VCs enter the market, more bad companies are getting funded.

Comment author: Ben_Todd 24 April 2017 03:28:44AM *  2 points [-]

This is a huge digression, but:

I'm not sure it's obvious that current VCs fund all the potentially top companies. If you look into the history of many of the biggest wins, many of them nearly failed multiple times and could easily have shut down if a key funder hadn't existed (e.g. Airbnb and YC).

I think a better approximation is an efficient market, in which the risk-adjusted returns of VC at the margin are equal to the market. This means that the probability of funding a winner for a marginal VC is whatever it would take for their returns to equal the market.

Then, to first order, becoming a VC has no effect on the cost of capital (which is fixed by the market), and so no effect on the number of startups formed. So you're right that additional VCs aren't helpful, but for a different reason.

To a second order, there probably are benefits, depending on how skilled you are. The market for startups doesn't seem very efficient and requires specialised knowledge to access. If you develop the VC skill-set, you can reduce transaction costs and make the market for startups more efficient, which enables more to be created.

Moreover, the more money that gets invested rather than consumed, the lower the cost of capital in the economy, which lets more companies get created.

The second order benefits probably diminish as more skilled VCs enter, so that's another sense in which extra VCs are less useful than those we already have.

Comment author: ChristianKleineidam 24 April 2017 07:46:39AM -1 points [-]

I don't think the argument that many VC firms don't get good returns suggests that centralisation into one VC firm would be good. Different successful VC firms have different preferences in how to invest.

Having one central hub of decision making is essentially the model used in the Soviet Union. I don't think that's a good model.

Decentralised decision making usually beats central planning by a single decision-making authority in domains with a lot of spread-out information.

Comment author: John_Maxwell_IV 22 April 2017 07:50:55AM 1 point [-]

The upside of centralization is that it helps prevent the unilateralist curse for funding bad projects.

This is an interesting point.

It seems to me like mere veto power is sufficient to defeat the unilateralist's curse. The curse doesn't apply in situations where 99% of the thinkers believe an intervention is useless and 1% believe it's useful, only in situations where the 99% think the intervention is harmful and would want to veto it. So technically speaking we don't need to centralize power of action, just power of veto.

That said, my impression is that the EA community has such a strong allergic reaction to authority that anything that looks like an official decision-making group with an official veto would be resisted. So it seems like the result is that we go past centralization of veto into centralization of action, because (ironically) it seems less authority-ish.

Comment author: John_Maxwell_IV 22 April 2017 07:55:12AM *  1 point [-]

On second thought, perhaps it's just an issue of framing.

Would you be interested in an "EA donors league" that tried to overcome the unilateralist's curse by giving people in the league some kind of power to collectively veto the donations made by other people in the league? You'd get the power to veto the donations of other people in exchange for giving others the power to veto your donations (details to be worked out).

(I guess the biggest detail to work out is how to prevent people from simply quitting the league when they want to make a non-kosher donation. Perhaps a cash deposit of some sort would work.)

Comment author: Peter_Hurford  (EA Profile) 23 April 2017 01:41:53PM 2 points [-]

Every choice to fund has false positives (funding something that should not have been funded) and false negatives (not funding something that should have been funded). Veto power only guards against the first one.

Comment author: kbog  (EA Profile) 22 April 2017 01:31:09PM 0 points [-]

The unilateralist's curse does not apply to donations, since funding a project can be done at a range of levels and is not a single, replaceable decision.

Comment author: Owen_Cotton-Barratt 23 April 2017 08:54:29AM 2 points [-]

The basic dynamic applies. I think it's pretty reasonable to use the name to point loosely at such cases, even if the original paper didn't discuss this extension.

Comment author: remmelt  (EA Profile) 27 April 2017 12:22:58AM *  0 points [-]

I hadn't considered the unilateralist's curse and I'll keep this in mind.

To what extent do you think it's sustainable to

a) advocate for a centralised system run by trusted professionals, vs.

b) build up the capacity of individual funders to recognise activities that are generally seen as problematic/negative-EV by cause prioritisation researchers?

Put simply, I wonder if going for a) centralisation would make the 'system' fragile because EA donors would be less inclined to build up their awareness of big risks. For those individual donors who'd approach cause selection with rigour and epistemic humility, I can see b) being antifragile. But for those approaching it amateurishly/sloppily, it makes sense to me that they're much better off handing over their money and employing their skills elsewhere.

I admit I don't have a firm grasp of unilateralist's curse scenarios.

Comment author: Kerry_Vaughan 21 April 2017 04:53:04PM 8 points [-]

As much as I admire the care that has been put into EA Funds (e.g. the 'Why might you choose not to donate to this fund?' heading for each fund), this sentence came across as 'too easy' to me. To be honest, it made me wonder whether the analysis was self-critical enough (I admit to having only scanned it), as I'd be surprised if the trusted people you spoke with couldn't think of any significant risks. I also don't think a 'largely positive' reception is a good indicator.

I agree. This was a mistake on my part. I was implicitly thinking about some of the recent feedback I'd read on Facebook and was not thinking about responses to the initial launch post.

I agree that it's not fair to say that the criticisms have been predominantly about website copy. I've changed the relevant section in the post to include links to some of the concerns we received on the launch post.

I'd like to develop some content for the EA Funds website that goes into potential harms of EA Funds that are separate from the question of whether EA Funds is the best option right now for individual donors. Do you have a sense of what concerns seem most compelling or that you'd particularly like to see covered?

Comment author: remmelt  (EA Profile) 27 April 2017 12:52:21AM *  0 points [-]

I haven't looked much into this, but basically I'm wondering if simple, uniform promotion of EA Funds would undermine the capacity of community members in, say, the upper quartile of rationality/commitment to build robust idea-sharing and collaboration networks.

In other words, whether it would decrease their collective intelligence pertaining to solving cause-selection problems. I'm really interested in getting practical insights on improving the collective intelligence of a community (please send me links: remmeltellenis[at]gmail.dot.com)

My earlier comment seems related to this:

Put simply, I wonder if going for a) centralisation would make the 'system' fragile because EA donors would be less inclined to build up their awareness of big risks. For those individual donors who'd approach cause selection with rigour and epistemic humility, I can see b) being antifragile. But for those approaching it amateurishly/sloppily, it makes sense to me that they're much better off handing over their money and employing their skills elsewhere.

(Btw, I admire your openness to improving analysis here.)

Comment author: nonzerosum 21 April 2017 01:15:56AM 2 points [-]

Excellent point.

My suggestion for increasing robustness:

Diverse fund managers, and willingness to have funds for less-known causes. A high diversity of backgrounds/personal social networks amongst fund managers, and a willingness to have EA Funds for causes not currently championed by OPP or other well-known orgs in the EA sphere, could be a good way to increase robustness.

Do you agree? And what are your thoughts in general on increasing robustness?