Comment author: RyanCarey 25 March 2017 10:53:14PM *  3 points [-]

I doubly agree here. The title "Hard-to-reverse decisions destroy option value" is hard to disagree with because it is pretty tautological.

Over the last couple of years, I've found it to be a widely held view among researchers interested in the long-run future that the EA movement should, on the margin, be doing less philosophical analysis. It seems to me that it would be beneficial for more work to be done on the margin on i) writing proposals for concrete projects, ii) reviewing empirical literature, and iii) analyzing technological capabilities and fundamental limitations.

Philosophical analysis, such as much of EA Concepts and these characterizations of how to think about counterfactuals and optionality, is less useful than (i-iii) because it does not very strongly change how we will try to affect the world. Suppose I want to write some EA project proposals. In such cases, I am generally not very interested in citing these generalist philosophical pieces. Rather, I usually want to build from a concrete scientific/empirical understanding of related domains and similar past projects. Moreover, I think "customers" like me who are trying to propose concrete work are usually not asking for this kind of philosophical analysis and are more interested in (i-iii).

Comment author: Owen_Cotton-Barratt 26 March 2017 11:13:12AM 2 points [-]

> Over the last couple of years, I've found it to be a widely held view among researchers interested in the long-run future that the EA movement should on the margin be doing less philosophical analysis.

I agree with some versions of this view. For what it's worth, I think there may be a selection effect in the people you're talking to, though (perhaps via the organisations they've chosen to work with): I don't think there's anything like consensus about this among the researchers I've talked to.

Comment author: redmoonsoaring 18 March 2017 05:38:04PM 13 points [-]

While I see some value in detailing commonly-held positions like this post does, and I think this post is well-written, I want to flag my concern that it seems like a great example of a lot of effort going into creating content that nobody really disagrees with. This sort of heavily qualified armchair writing doesn't seem to me like a very cost-effective use of EA resources, and I worry we do a lot of it, partly because it's easy to do and gets a lot of positive social reinforcement, to a much greater degree than bold empirical writing tends to get.

Comment author: Owen_Cotton-Barratt 25 March 2017 03:10:04PM *  6 points [-]

I think that the value of this type of work comes from: (i) making it easier for people entering the community to come up to the frontier of thought on different issues; (ii) building solid foundations for our positions, which makes it easier to go take large steps in subsequent work.

Cf. Olah & Carter's recent post on research debt.

In response to comment by kbog  (EA Profile) on Why I left EA
Comment author: Fluttershy 21 February 2017 06:30:06AM 7 points [-]

I agree with your last paragraph, as written. But this conversation is about kindness, about trusting people to be competent altruists, and about epistemic humility. That's because acting indifferent to whether people who care about the same things we do waste time figuring things out is cold in a way that disproportionately drives away certain types of skilled people who'd otherwise feel welcome in EA.

> But this is about optimal marketing and movement growth, a very empirical question. It doesn't seem to have much to do with personal experiences.

I'm happy to discuss optimal marketing and movement growth strategies, but I don't think the question of how to optimally grow EA is best answered as an empirical question at all. I'm generally highly supportive of trying to quantify and optimize things, but in this case, treating movement growth as something suited to empirical analysis may be harmful on net, because the underlying factors actually responsible for the way & extent to which movement growth maps to eventual impact are impossible to meaningfully track. Intersectionality comes into the picture when, due to their experiences, people from certain backgrounds are much, much likelier to be able to easily grasp how these underlying factors impact the way in which not all movement growth is equal.

The obvious-to-me way in which this could be true is if traditionally privileged people (especially first-worlders with testosterone-dominated bodies) either don't understand or don't appreciate that unhealthy conversation norms subtly but surely drive away valuable people. I'd expect the effect of unhealthy conversation norms to be mostly unnoticeable; for one, A/B testing EA's overall conversation norms isn't possible. If you're the sort of person who doesn't use particularly friendly conversation norms in the first place, you're likely to underestimate how important friendly conversation norms are to the well-being of others, and overestimate the willingness of others to consider themselves a part of a movement with poor conversation norms.

"Conversation norms" might seem like a dangerously broad term, but I think it's pointing at exactly the right thing. When people speak as if dishonesty is permissible, as if kindness is optional, or as if dominating others is ok, this makes EA's conversation norms worse. There's no reason to think that a decrease in quality of EA's conversation norms would show up in quantitative metrics like number of new pledges per month. But when EA's conversation norms become less healthy, key people are pushed away, or don't engage with us in the first place, and this destroys utility we'd have otherwise produced.

It may be worse than this, even: if counterfactual EAs who care a lot about having healthy conversational norms are a somewhat homogeneous group of people with skill sets that are distinct from our own, this could cause us to disproportionately lack certain classes of talented people in EA.

In response to comment by Fluttershy on Why I left EA
Comment author: Owen_Cotton-Barratt 21 February 2017 09:43:53AM 4 points [-]

Really liked this comment. Would be happy to see a top level post on the issue.

Comment author: Owen_Cotton-Barratt 17 February 2017 09:37:35PM 5 points [-]

Awesome, strongly pro this sort of thing.

You don't mention covering travel expenses. Do you intend to? If not, would you consider donations to let you do so? (Haven't thought about it much, but my heuristics suggest good use of marginal funds.)

Comment author: Owen_Cotton-Barratt 17 February 2017 09:41:03PM 3 points [-]

Actually that's probably overridden by a heuristic of not trying to second-guess decisions as a donor. I rather mean something like: please say if you thought this was a good idea but were budget-constrained.

Comment author: CalebWithers  (EA Profile) 12 February 2017 03:43:35AM 1 point [-]

> Second, we should generally focus safety research today on fast takeoff scenarios. Since there will be much less safety work in total in these scenarios, extra work is likely to have a much larger marginal effect.

Does this assumption depend on how pessimistic/optimistic one is about our chances of achieving alignment in different take-off scenarios, i.e. what our position on a curve something like this is expected to be for a given takeoff scenario?

Comment author: Owen_Cotton-Barratt 12 February 2017 10:45:52AM 1 point [-]

I think you get an adjustment from that, but that it should be modest. None of the arguments we have so far about how difficult to expect the problem to be seem very robust, so I think it's appropriate to have a somewhat broad prior over possible difficulties.

I think the picture you link to is plausible if the horizontal axis is interpreted as a log scale. But this changes the calculation of marginal impact quite a lot, so that you probably get more marginal impact towards the left than in the middle of the curve. (I think it's conceivable to end up with well-founded beliefs that look like that curve on a linear scale, but that this requires (a) very good understanding of what the problem actually is, & (b) justified confidence that you have the correct understanding.)
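To make the log-scale point concrete, here's a toy numerical sketch (my own illustration, not from the comment or the linked picture; the logistic shape, `midpoint`, and `width` are all assumptions): if the probability of solving the problem is a logistic curve in log(total safety work), then the marginal impact of an extra unit of work is larger when total work is small, which is the fast-takeoff case.

```python
import math

def success_prob(work, midpoint=100.0, width=1.0):
    # Toy model (assumption): probability of solving alignment as a
    # logistic curve in log(total safety work). "midpoint" is the amount
    # of work at which success probability reaches 50%.
    return 1.0 / (1.0 + math.exp(-(math.log(work) - math.log(midpoint)) / width))

def marginal_impact(work, eps=1e-6):
    # Numerical derivative: extra success probability bought per
    # additional unit of work at the current level.
    return (success_prob(work + eps) - success_prob(work - eps)) / (2 * eps)
```

Under this model, `marginal_impact(10)` (a low-total-work, fast-takeoff world) exceeds `marginal_impact(100)` (the middle of the curve), matching the claim that on a log scale you get more marginal impact towards the left.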

Comment author: Kerry_Vaughan 11 February 2017 06:31:32PM 2 points [-]

No cost. In fact, we think we can get lower donation processing fees than might be available to people elsewhere. However, CEA is a plausible recipient for the movement building fund.

Comment author: Owen_Cotton-Barratt 11 February 2017 11:20:23PM *  5 points [-]

Presumably there's an operational cost to CEA in setting up / running the funds? I'd thought this was what Tom was asking about.

Comment author: William_MacAskill 11 February 2017 12:07:00AM 4 points [-]

One thing to note, re diversification (which I do think is an important point in general): it's easy to think of Open Phil as a single agent when it's really a collection of agents; and because Open Phil is a collective entity, there are gains from diversification even with the funds.

For example, there might be a grant that a program officer wants to make, but there's internal disagreement about it, and the program officer doesn't have time (given opportunity cost) to convince others at Open Phil why it's a good idea. (This has been historically true for, say, the EA Giving Fund). Having a separate pool of money would allow them to fund things like that.

Comment author: Owen_Cotton-Barratt 11 February 2017 11:23:57AM 4 points [-]

I think this is an important point. But it's worth acknowledging there's a potential downside to this too -- perhaps the bar of getting others on board is a useful check against errors of individual judgement.

Comment author: Telofy  (EA Profile) 31 December 2016 05:42:16PM 2 points [-]

Thank you for all the interesting thoughts! Though the general thesis confirmed my prior on the topic, there were many insightful nuggets in it that I need to remember.

One question though. Either I’m parsing this sentence wrong, the “in expectation” is not meant to be there, or it’s supposed to be something along the lines of “per time investment”:

> In light of the availability of donor lotteries the rest of this post will be assuming that large donation sizes and time investments are accessible for small donors in expectation.

Comment author: Owen_Cotton-Barratt 31 December 2016 05:50:49PM 2 points [-]

I think "in expectation" is meant to mean that they can access a probability of having a large donation size and time investment. You might say "stochastically".
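A minimal sketch of that reading (my own illustration; the function and names are hypothetical, not from the post): in a donor lottery, each entrant's chance of directing the whole pot is proportional to their contribution, so the expected amount they direct equals exactly what they put in, while the time cost of researching where to give is only incurred by the winner.

```python
import random

def donor_lottery(contributions, rng=None):
    # Pick one entrant to direct the entire pot, with probability
    # proportional to their contribution. Expected allocation for
    # entrant i is (c_i / pot) * pot = c_i, i.e. their contribution.
    rng = rng or random.Random()
    donors = list(contributions)
    pot = sum(contributions.values())
    winner = rng.choices(donors, weights=[contributions[d] for d in donors], k=1)[0]
    return winner, pot
```

So a $1,000 donor in a $100,000 lottery gets a 1% chance of allocating $100,000: the same donation size "in expectation", but with a 1% chance of it being worth doing serious research.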

Comment author: Owen_Cotton-Barratt 31 December 2016 05:18:28PM 5 points [-]

Thanks for such a thorough exploration of the advantages of scaling up, and why small donors may be able to beat larger institutions at the margin. I'd previously (prior to the other thread) thought that there was typically not that much to gain (or lose) from entering a lottery, but I'm now persuaded that it's probably a good idea for many small donors.

I still see a few reasons one might prefer not to commit to a lottery:

1) If you see significant benefit to the community of more people seriously thinking through donation decisions, you might prefer to reserve enough of your donation to be allocated personally that you will take the process seriously (even if you give something else to a donor lottery). Jacob Steinhardt discusses this in his recent donation post. I'm sympathetic to this for people who actively want to take some time to think through where to give (but I don't think that's everyone).

2) If you prefer giving now over giving later, you may wish to make commitments about future donations to help charities scale up faster. This is much harder to do with donor lotteries. If you trusted other lottery entrants enough you could all commit to donating to it in the future, with the ability to make commitments about next year's allocation of funds being randomised today. But that's a much higher bar of trust than the current lottery requires. Alternatively you could borrow money to donate more (via the lottery) today. If you think that there are significant advantages to the lottery and to giving earlier, this strategy might be correct, even if borrowing to give to a particular charity is often beaten by making commitments about future donations. But if you think you're only getting a small edge from entering the lottery, this might be smaller than the benefit of being able to make commitments, and so not worthwhile.

3) If you think you might be in a good position to recognise small giving opportunities which are clearly above the bar for the community as a whole to fund, it could make sense for you to reserve some funds to let you fill these gaps in a low-friction manner. I think this is most likely to be the case for people who are doing direct work in high-priority areas. Taking such opportunities directly can avoid having to pull the attention of large or medium-sized funders. This is similar to the approach of delegating to another small donor, where the small donor is future-you.
