
trammell

1458 karma · Joined Sep 2018

Bio

Econ PhD student at Oxford and research associate at the Global Priorities Institute. I'm slightly less ignorant about economic theory than about everything else.

https://philiptrammell.com/

Comments (127)

Hey David, I've just finished a rewrite of the paper, which I'm hoping to submit soon; I hope it does a decent job both of simplifying it and of making clearer what the applications and limitations are: https://philiptrammell.com/static/Existential_Risk_and_Growth.pdf

Presumably the referees will be experts on the growth front at least (if it's not desk rejected everywhere!), though the new version is general enough that it doesn't really rely on any particular claims about growth theory.

Hold on, just to try wrapping up the first point--if by "flat" you meant "more concave", why do you say "I don't see how [uncertainty] could flatten out the utility function. This should be in 'Justifying a more cautious portfolio'"?

Did you mean in the original comment to say that you don't see how uncertainty could make the utility function more concave, and that it should therefore also be filed under "Justifying a riskier portfolio"?

I can't speak for Michael of course, but as covered throughout the post, I think that the existing EA writing on this topic has internalized the pro-risk-tolerance points (e.g. that some other funding will be coming from uncorrelated sources) quite a bit more than the anti-risk-tolerance points (e.g. that some of the reasons that many investors seem to value safe investments so much, like "habit formation", could apply to philanthropists to some extent as well). If you feel you and some other EAs have already internalized the latter more than the former, then that's great too, as far as I'm concerned--hopefully we can come closer to consensus about what the valid considerations are, even if from different directions.

By flattening here, I meant "less concave" - hence more risk averse. I think we agree on this point?

Less concave = more risk tolerant, no?
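For concreteness, here's a minimal sketch using the standard isoelastic (CRRA) utility function; the CRRA parametrization is just a textbook assumption on my part, not anything specific to the post:

% CRRA utility and its coefficient of relative risk aversion
u(c) = \frac{c^{1-\eta}}{1-\eta} \quad (\eta \neq 1),
\qquad
R(c) \equiv -\frac{c\,u''(c)}{u'(c)} = \eta

Under this benchmark, a less concave u means a lower η, hence lower relative risk aversion, i.e. more risk tolerance, not less.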

I think I'm still confused about your response on the second point too. The point of this section is that since there are no good public estimates of the curvature of the philanthropic utility function for many top EA cause areas, like x-risk reduction, we don't know if it's more or less concave than a typical individual utility function. Appendix B just illustrates a bit more concretely how it could go either way. Does that make sense?

Thanks! As others have commented, the strength of this consideration (and of many of the other considerations) is quite ambiguous, and I’d love to see more research on it. But at least qualitatively, I think it’s been underappreciated by existing discussion.

Thanks! Hardly the first version of an article like this (or the most clearly written), but hopefully a bit more thorough…!

I agree! As noted under Richard’s comment, I’m afraid my only excuse is that the points covered are scattered enough that writing a short, accessible summary at the top was a bit of a pain, and I ran out of time to write this before I could make it work. (And I won’t be free again for a while…)

If you or anyone else reading this manages to write one in the meantime, send it over and I’ll stick it at the top.

Thanks! I agree that would be helpful. My only excuse is that the points covered are scattered enough that writing a short, accessible summary at the top was a bit of a pain, and I ran out of time to write this before I could make it work…

Hi Peter, thanks again for your comments on the draft! I think it improved it a lot. And sorry for the late reply here—just got back from vacation.

I agree that the cause variety point includes what you might call “sub-cause variety” (indeed, I changed the title of that bit from “cause area variety” to “cause variety” for that reason). I also agree that it’s a really substantial consideration: one of several that can single-handedly swing the conclusion. I hope you/others find the simple model of Appendix C helpful for starting to quantify just how substantial it is. My own current feeling is that it’s more substantial than I thought when I first started thinking about this question, though not enough to unambiguously outweigh countervailing considerations, like the seemingly unusually high beta of EA-style philanthropic funding.

I also agree that the long-run correlation between asset returns and the consumption of the global poor seems like an important variable to look into more, insofar as we’re thinking about the global poverty context, and that it could turn out to be weak enough that using an effective eta<1 is warranted even if we’re operating on a long time horizon.
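To make "an effective eta<1" a bit more concrete, here's the standard Merton benchmark for the optimal risky-asset share under CRRA utility; the symbols μ, r, and σ² (the risky asset's expected return, the safe return, and the return variance) are illustrative placeholders of mine, not estimates from the post or the paper:

% Merton's rule: optimal share of wealth in the risky asset under CRRA curvature \eta
\text{risky share} = \frac{\mu - r}{\eta\,\sigma^{2}}

Holding the return parameters fixed, an effective η < 1 recommends a larger risky share than even log utility (η = 1) would, which is the sense in which a weak correlation there would push toward more risk tolerance.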

Hi, sorry for the late reply--just got back from vacation.

As with most long posts, I expect this post has whatever popularity it has not because many people read it all, but because they skimmed parts, thought they made sense, and felt the overall message resonated with their own intuitions. Likewise, I expect your comment has whatever popularity it has because those readers have different intuitions, and because it looks on a skim as though you’ve shown that a careful reading of the post validates those intuitions instead…! But who knows.

Since there are hard-to-quantify considerations both for and against philanthropists being very financially risk tolerant, if your intuitions tend to put more weight on the considerations that point in the pro-risk-tolerance direction, you can certainly read the post and still conclude that a lot of risk tolerance is warranted. E.g. my intuition differs from yours at the top of this comment. As Michael Dickens notes, and as I say in the introduction, I think the post argues on balance against adopting as much financial risk tolerance as existing EA discourse tends to recommend.

Beyond an intuition-based re-weighting of the considerations, though, you raise questions about the qualitative validity of some of the points I raise. And though your comment is long, I think the post does already address essentially all of these questions. (Indeed, addressing them in advance is largely why the post is as long as it is!) For example, regarding “arguments from uncertainty”, you say

I don't see how this could flatten out the utility function. This should be in "Justifying a more cautious portfolio".

But to my mind, the way this flattening could work is explained in the “Arguments from uncertainty” section:

“one might argue that philanthropists have a hard time distinguishing between the value of different projects, and that this makes the “ex ante philanthropic utility function”, the function from spending to expected impact, less curved than it would be under more complete information…”

Or, in response to my point that “The philanthropic utility function for any given “cause” could exhibit more or less curvature than a typical individual utility function”, you say

I don't find any argument convincing that philanthropic utility functions are more curved than typical individuals'. (As I've noted above where you've attempted to argue this.) This should be in "Justifying a riskier portfolio".

Could you point me to what you're referring to, when you say you note this above? To my mind, one way that a within-cause philanthropic utility function could exhibit arbitrarily more curvature than a typical individual utility function is detailed in Appendix B.

So that I can better understand what might be going on with all these evident failures of communication on my end more generally, and rather than producing an ever-lengthening series of point-by-point replies, could you say more about why you don’t feel your questions are answered in these cases?
