Comment author: RyanCarey 25 April 2017 06:19:18PM 3 points

"I expect that if anything it is broader than lognormally distributed."

It might depend on what we're using the model for.

In general, it does seem reasonable that the direct (expected) net impact of interventions should be broader than lognormal, as Carl argued in 2011. On the other hand, it seems like the expected net impact all things considered shouldn't be broader than lognormal. For one thing, most charities probably funge against each other by at least 1/10^6. For another, you can imagine that funding global health improves the quality of research a bit, which does a bit of the work that you'd have wanted done by funding a research charity. These kinds of indirect effects are hard to map. Maybe people should think more about them.
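
A toy sketch of the funging point (my own illustration, with made-up numbers, not anything from Tom's post): if every charity captures at least a fraction 1/10^6 of the best charity's value through indirect effects, then however heavy-tailed the direct impacts are, the all-things-considered impacts can only span roughly six orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)

funge = 1e-6  # assumed minimum funging fraction (illustrative)
direct = rng.lognormal(mean=0, sigma=4, size=100_000)  # heavy-tailed direct impacts

# All-things-considered impact: each charity also does at least `funge`
# of the work of the best charity, via indirect effects.
best = direct.max()
all_things_considered = direct + funge * best

print(direct.max() / direct.min())                                 # spans a huge range
print(all_things_considered.max() / all_things_considered.min())   # capped near 1/funge = 10^6
```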

AFAICT, the basic thing for a post like this one to get right is to compare apples with apples. Tom is trying to evaluate various charities, some of which are evaluators. If he's evaluating the other charities on direct estimates, and is not smoothing the results over by assuming indirect effects, then he should use a broader-than-lognormal assumption for the evaluators too (and they will be competitive). If he's taking into account that each of the other charities will indirectly support one another's causes (or at least that the best ones will), then he should assume the same for the charity evaluators.

I could be wrong about some of this. A couple of final remarks: it gets more confusing if you think lots of charities have negative value, e.g. because of the value of technological progress. Also, all of this makes me think that if you're so convinced that flow-through effects give many charities astronomical benefits, perhaps you ought to be studying those effects intensely and directly, although that admittedly seems counterintuitive to me compared with working directly on problems of known astronomical importance.

Comment author: Owen_Cotton-Barratt 26 April 2017 11:45:34AM 1 point

I largely agree with these considerations about the distribution of net impact of interventions (although with some possible disagreements, e.g. I think negative funging is also possible).

However, I actually wasn't trying to comment on this at all! I was talking about the distribution of people's estimates of impact around the true impact for a given intervention. Sorry for not being clearer :/

Comment author: Owen_Cotton-Barratt 25 April 2017 02:59:15PM 4 points

The fact that sometimes people's estimates of impact are subsequently revised down by several orders of magnitude seems like strong evidence against evidence being normally distributed around the truth. I expect that if anything it is broader than lognormally distributed. I also think that extra pieces of evidence are likely to be somewhat correlated in their error, although it's not obvious how best to model that.
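
As a toy illustration of why order-of-magnitude revisions point towards multiplicative (lognormal or broader) error rather than additive normal error (the parameters here are arbitrary and purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 1.0
n = 1_000_000

# Multiplicative error: estimate = truth * exp(noise), i.e. lognormal around the truth.
lognormal_estimates = truth * np.exp(rng.normal(0.0, 3.0, n))
# Additive error on the raw scale: estimate = truth + noise, i.e. normal around the truth.
normal_estimates = truth + rng.normal(0.0, 3.0, n)

# Fraction of estimates that start out 100x or more too high, i.e. would later
# have to be revised down by two or more orders of magnitude.
print("lognormal:", np.mean(lognormal_estimates >= 100 * truth))  # a few percent
print("normal:   ", np.mean(normal_estimates >= 100 * truth))     # essentially zero
```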

Comment author: RyanCarey 22 April 2017 09:13:00PM 2 points

"Writing this made me cry, a little. It's late, and I should have gone to bed hours ago, but instead, here I am being filled with sad determination and horror that it feels like I can't trust anyone I haven't personally vetted to communicate honestly with me."

There is a range of reasons why this is not really an appropriate way to communicate. It's socially inappropriate, it could be interpreted as emotional blackmail, and it could encourage trolling.

It's a shame you've been upset. Still, one can call others' writing upsetting, immoral, mean-spirited, and so on; there is a lot of leeway to make other reasonable conversational moves.

Comment author: Owen_Cotton-Barratt 23 April 2017 09:10:49AM * 8 points

Ryan, I substantially disagree and actually think all of your suggested alternatives are worse. The original is reporting on a response to the writing, not staking out a claim to an objective assessment of it.

I think that reporting honest responses is one of the best tools we have for dealing with emotional inferential gaps -- particularly if it's made explicit that this is a function of the reader and writing, and not the writing alone.

Comment author: kbog 22 April 2017 01:31:09PM 0 points

The unilateralist's curse does not apply to donations, since funding a project can be done at a range of levels and is not a single, replaceable decision.

Comment author: Owen_Cotton-Barratt 23 April 2017 08:54:29AM 2 points

The basic dynamic applies. I think it's pretty reasonable to use the name to point loosely in such cases, even if the original paper didn't discuss this extension.

Comment author: RyanCarey 25 March 2017 10:53:14PM * 4 points

I doubly agree here. The title "Hard-to-reverse decisions destroy option value" is hard to disagree with because it is pretty tautological.

Over the last couple of years, I've found it to be a widely held view among researchers interested in the long-run future that the EA movement should on the margin be doing less philosophical analysis. It seems to me that it would be beneficial for more work to be done on the margin on i) writing proposals for concrete projects, ii) reviewing empirical literature, and iii) analyzing technological capabilities and fundamental limitations, and for less to be done on philosophical analysis.

Philosophical analysis, such as much of EA Concepts and these characterizations of how to think about counterfactuals and optionality, is less useful than (i-iii) because it does not very strongly change how we will try to affect the world. Suppose I want to write some EA project proposals. In such cases, I am generally not very interested in citing these generalist philosophical pieces. Rather, I usually want to build from a concrete scientific/empirical understanding of related domains and similar past projects. Moreover, I think "customers" like me who are trying to propose concrete work are usually not asking for these kinds of philosophical analysis and are more interested in (i-iii).

Comment author: Owen_Cotton-Barratt 26 March 2017 11:13:12AM 2 points

"Over the last couple of years, I've found it to be a widely held view among researchers interested in the long-run future that the EA movement should on the margin be doing less philosophical analysis."

I agree with some versions of this view. For what it's worth, I think there may be a selection effect in the people you're talking to (perhaps via the organisations they've chosen to work with): I don't think there's anything like consensus about this among the researchers I've talked to.

Comment author: redmoonsoaring 18 March 2017 05:38:04PM 15 points

While I see some value in detailing commonly held positions, as this post does, and I think this post is well written, I want to flag my concern that it seems like a great example of a lot of effort going into creating content that nobody really disagrees with. This sort of qualified, armchair writing doesn't seem to me like a very cost-effective use of EA resources, and I worry that we do a lot of it, partly because it's easy to do and gets a lot of positive social reinforcement, to a much greater degree than bold, empirical writing tends to get.

Comment author: Owen_Cotton-Barratt 25 March 2017 03:10:04PM * 5 points

I think that the value of this type of work comes from: (i) making it easier for people entering the community to come up to the frontier of thought on different issues; and (ii) building solid foundations for our positions, which makes it easier to take large steps in subsequent work.

Cf. Olah & Carter's recent post on research debt.

In response to comment by kbog on Why I left EA
Comment author: Fluttershy 21 February 2017 06:30:06AM 7 points

I agree with your last paragraph, as written. But this conversation is about kindness, trusting people to be competent altruists, and epistemic humility. That's because acting indifferent to whether people who care about similar things to us waste time figuring things out is cold in a way that disproportionately drives away certain types of skilled people who'd otherwise feel welcome in EA.

"But this is about optimal marketing and movement growth, a very empirical question. It doesn't seem to have much to do with personal experiences."

I'm happy to discuss optimal marketing and movement growth strategies, but I don't think the question of how to optimally grow EA is best answered as an empirical question at all. I'm generally highly supportive of trying to quantify and optimize things, but in this case, treating movement growth as something suited to empirical analysis may be harmful on net, because the underlying factors actually responsible for the way and extent to which movement growth maps to eventual impact are impossible to meaningfully track. Intersectionality comes into the picture because, due to their experiences, people from certain backgrounds are much, much likelier to easily grasp how these underlying factors mean that not all movement growth is equal.

The obvious-to-me way in which this could be true is if traditionally privileged people (especially first-worlders with testosterone-dominated bodies) either don't understand or don't appreciate that unhealthy conversation norms subtly but surely drive away valuable people. I'd expect the effect of unhealthy conversation norms to be mostly unnoticeable; for one thing, A/B-testing EA's overall conversation norms isn't possible. If you're the sort of person who doesn't use particularly friendly conversation norms in the first place, you're likely to underestimate how important friendly conversation norms are to the well-being of others, and to overestimate the willingness of others to consider themselves part of a movement with poor conversation norms.

"Conversation norms" might seem like a dangerously broad term, but I think it's pointing at exactly the right thing. When people speak as if dishonesty is permissible, as if kindness is optional, or as if dominating others is ok, this makes EA's conversation norms worse. There's no reason to think that a decrease in quality of EA's conversation norms would show up in quantitative metrics like number of new pledges per month. But when EA's conversation norms become less healthy, key people are pushed away, or don't engage with us in the first place, and this destroys utility we'd have otherwise produced.

It may be even worse than this: if the counterfactual EAs who care a lot about healthy conversational norms are a somewhat homogeneous group with skill sets distinct from our own, this could leave EA disproportionately lacking certain classes of talented people.

In response to comment by Fluttershy on Why I left EA
Comment author: Owen_Cotton-Barratt 21 February 2017 09:43:53AM 4 points

Really liked this comment. Would be happy to see a top-level post on the issue.

Comment author: Owen_Cotton-Barratt 17 February 2017 09:37:35PM 5 points

Awesome, strongly pro this sort of thing.

You don't mention covering travel expenses. Do you intend to? If not, would you consider donations to let you do so? (I haven't thought about it much, but my heuristics suggest it would be a good use of marginal funds.)

Comment author: Owen_Cotton-Barratt 17 February 2017 09:41:03PM 3 points

Actually, that's probably overridden by a heuristic of not trying to second-guess decisions as a donor. Rather, I mean something like: please say if you thought this was a good idea but were budget-constrained.

Comment author: CalebWithers 12 February 2017 03:43:35AM 1 point

"Second, we should generally focus safety research today on fast takeoff scenarios. Since there will be much less safety work in total in these scenarios, extra work is likely to have a much larger marginal effect."

Does this assumption depend on how pessimistic or optimistic one is about our chances of achieving alignment in different takeoff scenarios, i.e. where on a curve something like this we expect to be for a given takeoff scenario?

Comment author: Owen_Cotton-Barratt 12 February 2017 10:45:52AM 2 points

I think you get an adjustment from that, but that it should be modest. None of the arguments we have so far about how difficult we should expect the problem to be seem very robust, so I think it's appropriate to have a somewhat broad prior over possible difficulties.

I think the picture you link to is plausible if the horizontal axis is interpreted as a log scale. But this changes the calculation of marginal impact quite a lot, so that you probably get more marginal impact towards the left than in the middle of the curve. (I think it's conceivable to end up with well-founded beliefs that look like that curve on a linear scale, but that this requires (a) a very good understanding of what the problem actually is, and (b) justified confidence that you have the correct understanding.)
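
As a toy illustration of the log-scale point (my own model, not the linked picture): if the probability of solving the problem is logistic in the logarithm of total safety work, then an extra unit of work buys more towards the left of the curve than in the middle.

```python
import numpy as np

def p_success(total_work):
    """Toy model: P(success) is logistic in log10(total safety work), midpoint at 10^3 units."""
    return 1.0 / (1.0 + np.exp(-(np.log10(total_work) - 3.0)))

# Marginal value of one extra unit of work at different levels of total effort.
for work in [10, 100, 1_000, 10_000]:
    marginal = p_success(work + 1) - p_success(work)
    print(f"{work:>6} units of work: marginal gain ~ {marginal:.1e}")
```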
