In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Fluttershy 21 February 2017 06:30:06AM 5 points [-]

I agree with your last paragraph, as written. But this conversation is about kindness, about trusting people to be competent altruists, and about epistemic humility. That's because acting indifferent to whether people who care about the same things we do waste time figuring things out is cold in a way that disproportionately drives away certain kinds of skilled people who would otherwise feel welcome in EA.

But this is about optimal marketing and movement growth, a very empirical question. It doesn't seem to have much to do with personal experiences.

I'm happy to discuss optimal marketing and movement-growth strategies, but I don't think the question of how to optimally grow EA is best answered as an empirical question at all. I'm generally highly supportive of trying to quantify and optimize things, but in this case, treating movement growth as something suited to empirical analysis may be harmful on net, because the underlying factors actually responsible for how, and to what extent, movement growth translates into eventual impact are impossible to meaningfully track. Intersectionality comes into the picture because, due to their experiences, people from certain backgrounds are much, much likelier to easily grasp how these underlying factors mean that not all movement growth is equal.

The obvious-to-me way in which this could be true is if traditionally privileged people (especially first-worlders with testosterone-dominated bodies) either don't understand or don't appreciate that unhealthy conversation norms subtly but surely drive away valuable people. I'd expect the effect of unhealthy conversation norms to be mostly unnoticeable; for one thing, A/B-testing EA's overall conversation norms isn't possible. If you're the sort of person who doesn't use particularly friendly conversation norms in the first place, you're likely to underestimate how important friendly conversation norms are to the well-being of others, and to overestimate the willingness of others to consider themselves part of a movement with poor conversation norms.

"Conversation norms" might seem like a dangerously broad term, but I think it's pointing at exactly the right thing. When people speak as if dishonesty is permissible, as if kindness is optional, or as if dominating others is ok, this makes EA's conversation norms worse. There's no reason to think that a decrease in quality of EA's conversation norms would show up in quantitative metrics like number of new pledges per month. But when EA's conversation norms become less healthy, key people are pushed away, or don't engage with us in the first place, and this destroys utility we'd have otherwise produced.

It may be even worse than this: if the counterfactual EAs who care a lot about having healthy conversation norms are a somewhat homogeneous group with skill sets distinct from our own, losing them could leave EA disproportionately short of certain classes of talented people.

In response to comment by Fluttershy on Why I left EA
Comment author: Owen_Cotton-Barratt 21 February 2017 09:43:53AM 3 points [-]

Really liked this comment. Would be happy to see a top level post on the issue.

Comment author: Owen_Cotton-Barratt 17 February 2017 09:37:35PM 5 points [-]

Awesome, strongly pro this sort of thing.

You don't mention covering travel expenses. Do you intend to? If not, would you consider donations to let you do so? (I haven't thought about it much, but my heuristics suggest this would be a good use of marginal funds.)

Comment author: Owen_Cotton-Barratt 17 February 2017 09:41:03PM 3 points [-]

Actually, that's probably overridden by a heuristic of not trying to second-guess decisions as a donor. What I mean is more like: please say if you thought this was a good idea but were budget-constrained.

Comment author: CalebWithers  (EA Profile) 12 February 2017 03:43:35AM 1 point [-]

Second, we should generally focus safety research today on fast takeoff scenarios. Since there will be much less safety work in total in these scenarios, extra work is likely to have a much larger marginal effect.

Does this assumption depend on how pessimistic or optimistic one is about our chances of achieving alignment in different takeoff scenarios, i.e. where on a curve something like this we expect to be for a given takeoff scenario?

Comment author: Owen_Cotton-Barratt 12 February 2017 10:45:52AM 1 point [-]

I think you get an adjustment from that, but it should be modest. None of the arguments we have so far about how difficult we should expect the problem to be seem very robust, so I think it's appropriate to have a somewhat broad prior over possible difficulties.

I think the picture you link to is plausible if the horizontal axis is interpreted as a log scale. But this changes the calculation of marginal impact quite a lot, so that you probably get more marginal impact towards the left than in the middle of the curve. (I think it's conceivable to end up with well-founded beliefs that look like that curve on a linear scale, but that this requires (a) a very good understanding of what the problem actually is, and (b) justified confidence that you have the correct understanding.)
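
To make the log-scale point concrete, here is a minimal sketch; the sigmoid shape, its midpoint, and the effort numbers are illustrative assumptions invented for the example, not anything claimed in the comment:

```python
import numpy as np

def p_success(effort, midpoint=3.0, width=1.0):
    """Illustrative assumption: success probability is a sigmoid in log10(effort),
    steepest around 10**midpoint units of total safety work."""
    return 1.0 / (1.0 + np.exp(-(np.log10(effort) - midpoint) / width))

efforts = np.array([10.0, 100.0, 1_000.0, 10_000.0])
extra = 1.0  # one additional unit of safety work

# Marginal effect of one extra unit of work at each total effort level
marginal = (p_success(efforts + extra) - p_success(efforts)) / extra
for e, m in zip(efforts, marginal):
    print(f"total effort {e:>7.0f}: marginal gain per extra unit ~ {m:.1e}")

# With this log-scale reading, the same S-shaped picture implies that the marginal
# impact per unit of work is highest at the left (low-effort) end of the curve,
# not in the middle -- the adjustment described above.
```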

Comment author: Kerry_Vaughan 11 February 2017 06:31:32PM 1 point [-]

No cost. In fact, we think we can get lower donation processing fees than might be available to people elsewhere. However, CEA is a plausible recipient for the movement building fund.

Comment author: Owen_Cotton-Barratt 11 February 2017 11:20:23PM *  5 points [-]

Presumably there's an operational cost to CEA in setting up / running the funds? I'd thought this was what Tom was asking about.

Comment author: William_MacAskill 11 February 2017 12:07:00AM 4 points [-]

One thing to note, re diversification (which I do think is an important point in general), is that it's easy to think of Open Phil as a single agent when it's really a collection of agents; and because Open Phil is a collective entity, there are gains from diversification even with the funds.

For example, there might be a grant that a program officer wants to make, but there's internal disagreement about it, and the program officer doesn't have the time (given the opportunity cost) to convince others at Open Phil why it's a good idea. (This has historically been true for, say, the EA Giving Fund.) Having a separate pool of money would allow them to fund things like that.

Comment author: Owen_Cotton-Barratt 11 February 2017 11:23:57AM 3 points [-]

I think this is an important point. But it's worth acknowledging there's a potential downside to this too -- perhaps the bar of getting others on board is a useful check against errors of individual judgement.

Comment author: Telofy  (EA Profile) 31 December 2016 05:42:16PM 2 points [-]

Thank you for all the interesting thoughts! Though the general thesis confirmed my prior on the topic, there were many insightful nuggets in it that I need to remember.

One question though. Either I’m parsing this sentence wrong, the “in expectation” is not meant to be there, or it’s supposed to be something along the lines of “per time investment”:

In light of the availability of donor lotteries the rest of this post will be assuming that large donation sizes and time investments are accessible for small donors in expectation.

Comment author: Owen_Cotton-Barratt 31 December 2016 05:50:49PM 2 points [-]

I think "in expectation" is meant to mean that they can access a probability of having a large donation size and time investment. You might say "stochastically".
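
A minimal sketch of that "in expectation" reading: each entrant's chance of directing the whole pot is proportional to their contribution, so their expected allocation equals what they put in, while the realised allocation is either the full pot or nothing. The donors and amounts below are hypothetical.

```python
import random

# Hypothetical entrants and contributions (illustrative numbers only)
contributions = {"donor_a": 1_000, "donor_b": 4_000, "donor_c": 5_000}
pot = sum(contributions.values())  # 10_000

for name, amount in contributions.items():
    win_prob = amount / pot
    expected_allocation = win_prob * pot  # equals the donor's own contribution
    print(f"{name}: gives {amount}, directs the full {pot} with probability "
          f"{win_prob:.2f}, expected allocation {expected_allocation:.0f}")

# A single weighted draw decides who actually allocates the pot: the winner gets
# a large donation size (and a reason to invest research time); everyone else
# allocates nothing -- i.e. large donation size only in expectation / stochastically.
winner = random.choices(list(contributions), weights=list(contributions.values()), k=1)[0]
print("winner:", winner)
```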

Comment author: Owen_Cotton-Barratt 31 December 2016 05:18:28PM 5 points [-]

Thanks for such a thorough exploration of the advantages of scaling up, and why small donors may be able to beat larger institutions at the margin. I'd previously (prior to the other thread) thought that there was typically not that much to gain (or lose) from entering a lottery, but I'm now persuaded that it's probably a good idea for many small donors.

I still see a few reasons one might prefer not to commit to a lottery:

1) If you see significant benefit to the community from more people seriously thinking through donation decisions, you might prefer to reserve enough of your donation to allocate personally that you will take the process seriously (even if you give something else to a donor lottery). Jacob Steinhardt discusses this in his recent donation post. I'm sympathetic to this for people who actively want to take some time to think through where to give (but I don't think that's everyone).

2) If you prefer giving now over giving later, you may wish to make commitments about future donations to help charities scale up faster. This is much harder to do with donor lotteries. If you trusted other lottery entrants enough, you could all commit to donating to the lottery in future years, with the right to decide next year's allocation of funds being randomised today. But that's a much higher bar of trust than the current lottery requires. Alternatively, you could borrow money to donate more (via the lottery) today. If you think there are significant advantages both to the lottery and to giving earlier, this strategy might be correct, even if borrowing to give to a particular charity is often beaten by making commitments about future donations. But if you think you're only getting a small edge from entering the lottery, that edge might be smaller than the benefit of being able to make commitments, and so not worthwhile.

3) If you think you might be in a good position to recognise small giving opportunities which are clearly above the bar for the community as a whole to fund, it could make sense to reserve some funds to let you fill these gaps in a low-friction manner. I think this is most likely to be the case for people doing direct work in high-priority areas. Taking such opportunities directly avoids having to draw on the attention of large or medium-sized funders. This is similar to the approach of delegating to another small donor, where the small donor is future-you.

Comment author: Mac- 31 December 2016 02:57:00PM 1 point [-]

I think this is a very good idea. Unfortunately, I don't really know any of you, and I don't think it's worth the time to thoroughly research your reputations and characters, so I'm not going to contribute.

However, I would be interested in a registered charitable organization whose sole purpose is to run a donation lottery annually. In fact, I would donate to the operations of such a charity if the necessary safeguards and/or reputation were in place. Seems like an easy "bolt-on" project for GiveWell, no?

If anyone else would like to see a permanent donor lottery from GiveWell, let me know how much you're willing to contribute to start it (via private message if you prefer). I'll total the amounts in a few weeks and present them to GiveWell. Maybe it will pique their interest.

Comment author: Owen_Cotton-Barratt 31 December 2016 04:32:30PM *  0 points [-]

This seems like a reasonable concern, and longer term building good institutions for donor lotteries seems valuable.

However, I suspect there may be more overhead (and possible legal complications) associated with trying to run it as part of an existing charity. In the immediate term, I wonder if there are enough people whom you do trust who might give character references that would work for this? (You implied trust in GiveWell, and I believe Paul and Carl are fairly well known to several GiveWell staff; on the other hand, you might think that the institutional reputation of GiveWell is more valuable than the individual reputations of the people who work there, and so be more inclined to trust a project it backs not because you know more about it, but because it has more at stake.)

Comment author: carneades 23 December 2016 03:45:51PM -1 points [-]

I agree, and I would go even further: I would claim that AMF should not simply be ranked below the top spot, but that it in fact does more harm than good. I live and work in international development in West Africa. Bednet distributions consistently provide short-term benefits that organizations diligently document, and cause long-term economic and social harm that they conveniently ignore, because if they did not, they would be out of a job. They save lives at the expense of economic growth, freedom to choose, and community independence. Here's the full argument: Stop Giving Well.

Comment author: Owen_Cotton-Barratt 23 December 2016 07:12:19PM 2 points [-]

Is there a written version of this anywhere? I'm interested in the content of the argument, but I don't like video.
