MichaelDickens

4222 karma · Joined Sep 2014

Bio

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.

I have a website: https://mdickens.me/ Most of the content on my website gets cross-posted to the EA Forum.

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.

Sequences (1)

Quantitative Models for Cause Selection

Comments (664)

It's really hard to judge whether a life is net positive. I'm not even sure when my own life is net positive—sometimes if I'm going through a difficult moment, as a mental exercise I ask myself, "if the rest of my life felt exactly like this, would I want to keep living?" And it's genuinely pretty hard to tell. Sometimes it's obvious, like right at this moment my life is definitely net positive, but when I'm feeling bad, it's hard to say where the threshold is. If I can't even identify the threshold for myself, I doubt I can identify it in farm animals.

If I had to guess, I'd say the threshold is something like

  • if the animals spend most of their time outdoors, their lives are net positive
  • if they spend most of their time indoors (in crowded factory farm conditions, even if "free range"), their lives are net negative

it seems important for my own decision making and for standing on solid ground while talking with others about animal suffering.

To this point, I think the most important things are

  1. whatever the threshold is, factory-farmed animals clearly don't meet it
  2. 99% of animals people eat are factory-farmed (in spite of people's insistence that they only eat meat from their uncle's farm where all of the animals are treated like their own children etc)

If we're talking about financial risk, I enjoyed Deep Risk, a short book by William Bernstein.

The use of quantitative impact estimates by EAs can mislead audiences into overestimating the quality of quantitative empirical evidence supporting these estimates.

In my experience, this is not a winnable battle. Regardless of how many times you repeat that your quantitative estimates are based on limited evidence / embed a lot of assumptions / have high margins of error / etc., people will say you're taking your estimates too seriously.

Thanks for linking your paper! I'll check it out. It sounds pretty good from the abstract.

Do you have some estimate of the cost-effectiveness of helping slaughterhouse workers as compared to, say, cage-free campaigns?

I came up with a few problems that pose challenges for ergodicity economics (EE).

First I need to explicitly define what we're talking about. I will take the definition of the ergodic property as Peters defines it:

The expected value of the observable is a constant (independent of time), and the finite-time average of the observable converges to this constant with probability one as the averaging time tends to infinity.

More precisely, it must satisfy

$$\lim_{T \to \infty} \frac{1}{T} \int_0^T f(W(t)) \, dt = \int_{-\infty}^{\infty} f(W) \, P(W, t) \, dW$$

where W(t) is wealth at time t, f(W) is a transformation function that produces the "observable", P(W, t) is the probability density of wealth at time t, and the second integral is taking the expected value of wealth across all bet outcomes at a single point in time. (Peters specifically talked about wealth, but W(t) could be a function describing anything we care about.)

According to EE, a rational agent ought to maximize the expected value of some observable f(W) such that the observable satisfies the ergodic property.
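To make the definition concrete, here is a small numeric sketch of my own, using a multiplicative bet with a 50% chance of a +10% return and a 50% chance of a –5% return: the log-growth observable has a time-independent expected value, and the finite-time average along a single path converges to it.

```python
import math
import random

random.seed(0)

# Ensemble expectation of the log-growth observable; it doesn't depend on t.
ensemble_ev = 0.5 * math.log(1.10) + 0.5 * math.log(0.95)

# Finite-time average of the same observable along one long wealth path.
total, steps = 0.0, 100_000
for _ in range(steps):
    factor = 1.10 if random.random() < 0.5 else 0.95
    total += math.log(factor)

print(ensemble_ev, total / steps)  # both ~0.022: the two averages agree
```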

Problem of choosing a transformation function

According to EE, making decisions in a non-ergodic system requires applying a transformation function to make it ergodic. For example, if given a series of bets with multiplicative payouts, those bets are non-ergodic, but you can transform them with $f(W(t), W(t+\Delta t)) = \log \frac{W(t+\Delta t)}{W(t)}$, and the output of $f$ now satisfies the ergodic property.

(Peters seems confused here because in The ergodicity problem in economics he defines the transformation function f as a single-variable function, but in Evaluating gambles using dynamics, he uses a two-variable function of wealth at two adjacent time steps. I will continue to follow his second construction where f is a function of two variables, but it appears his definition of ergodicity is under-specified or possibly contradictory.)

The problem: There are infinitely many transformation functions that satisfy the ergodic property.

The function "f(x) = 0 for all x" is ergodic: its expected value is constant with respect to time (because the expected value is 0), and the finite-time average converges to the expected value (because the finite-time average is 0). There is nothing in EE that says f(x) = 0 is not a good function to optimize over, and EE has no way of saying that (e.g.) maximizing geometric growth rate is better than maximizing f(x) = 0.

Obviously there are infinitely many constant functions with the ergodic property. You can also always construct an ergodic piecewise function for any given bet ("if the bet outcome is X, the payoff is A; if the bet outcome is Y, the payoff is B; ...")

Peters does specifically claim that

  1. A rational agent faced with an additive bet (e.g.: 50% chance of winning $2, 50% chance of losing $1) ought to maximize the expected value of $f(W(t), W(t+\Delta t)) = W(t+\Delta t) - W(t)$
  2. A rational agent faced with a multiplicative bet (e.g.: 50% chance of a 10% return, 50% chance of a –5% return) ought to maximize the expected value of $f(W(t), W(t+\Delta t)) = \log \frac{W(t+\Delta t)}{W(t)}$

These assumptions are not directly entailed by the foundations of EE, but I will take them as given. They're certainly more reasonable than f(x) = 0.

Problem of incomparable bets

Consider two bets:

Bet A: 50% chance of winning $2, 50% chance of losing $1

Bet B: 99% chance of 100x'ing your money, 1% chance of losing 0.0001% of your money

EE cannot say which of these bets is better. It doesn't evaluate them using the same units: Bet A is evaluated in dollars, Bet B is evaluated in growth rate. I claim Bet B is clearly better.

There is no transformation function that satisfies Peters' requirement of maximizing geometric growth rate for multiplicative bets (Bet B) while also being ergodic for additive bets (Bet A). Maximizing growth rate specifically requires using the exact function $f(W(t), W(t+\Delta t)) = \log \frac{W(t+\Delta t)}{W(t)}$, which does not satisfy ergodicity for additive bets (its expected value is not constant with respect to t).

In fact, multiplicative bets cannot be compared to any other type of bet, because $\log \frac{W(t+\Delta t)}{W(t)}$ is only ergodic when W(t) grows at a constant long-run exponential rate.

More generally, I believe any two bets are incomparable if they require different transformation functions to produce ergodicity, although I haven't proven this.
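As a quick numeric check of the claim above (my own illustration; the wealth levels are arbitrary): for Bet A, the expected value of the log-growth observable depends on current wealth, so it cannot be constant over time, violating the first clause of the ergodic property.

```python
import math

def expected_log_growth(w: float) -> float:
    """E[log(W'/W)] for Bet A (+$2 or -$1, 50/50) starting from wealth w > 1."""
    return 0.5 * math.log((w + 2) / w) + 0.5 * math.log((w - 1) / w)

for w in [10, 100, 1000]:
    print(w, expected_log_growth(w))
# The expectation shrinks toward zero as wealth grows, so it changes as
# wealth changes over time: log growth is not ergodic for additive bets.
```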

Problem of risk

This is relevant to Paul Samuelson's article that I linked earlier. EE presumes that all rational agents have identical appetite for risk. For example, in a multiplicative bet, EE says all agents must bet to maximize expected log wealth, regardless of their personal risk tolerance. This defies common sense—surely some people should take on more risk and others should take on less risk? Standard finance theory says that people should change their allocation to stocks vs. bonds based on their risk tolerance; EE says everyone in the world should have the same stock/bond allocation.
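To make the contrast concrete, here is a sketch of the standard-theory side (the market parameters are illustrative assumptions of mine): under Merton's rule for CRRA utility, the optimal stock fraction is (μ − r)/(γσ²), which scales with the investor's risk aversion γ, whereas log utility (γ = 1), the EE prescription under multiplicative dynamics, is a single point on that spectrum.

```python
# Merton's rule: optimal fraction of wealth in stocks = (mu - r) / (gamma * sigma^2).
mu, r, sigma = 0.07, 0.02, 0.16  # assumed expected return, risk-free rate, volatility

for gamma in [1, 2, 4]:
    fraction = (mu - r) / (gamma * sigma**2)
    print(f"gamma = {gamma}: invest {fraction:.0%} of wealth in stocks")

# gamma = 1 (log utility, the EE prescription) says to lever up to ~195%;
# more risk-averse investors (higher gamma) hold much less, which standard
# theory permits but EE does not.
```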

Problem of multiplicative-additive bets

Consider a bet:

Bet C: 50% chance of doubling your money, 50% chance of losing $1

More generally, consider the class of bets:

50% chance of multiplying your money by a, 50% chance of losing b dollars

Call these multiplicative-additive bets.

EE does not allow for the existence of any non-constant evaluation function for multiplicative-additive bets. In other words, EE has no way to evaluate these bets.

Proof.

Consider Bet C above. By the first clause of the ergodic property, the transformation must satisfy (for every wealth value $x$)

$$\frac{1}{2} f(x, 2x) + \frac{1}{2} f(x, x - 1) = c$$

for some constant $c$. This equation says $f$ must have a constant expected value.

Now consider what happens at x = -1. There we have $2x = x - 1 = -2$ and therefore $\frac{1}{2} f(-1, -2) + \frac{1}{2} f(-1, -2) = f(-1, -2) = c$.

That is, f(-1, -2) must equal the expected value of f for any x.

We can generalize this to all multiplicative-additive bets to show that the transformation function must be a constant function.

Consider the class of all multiplicative-additive bets. The transformation function must satisfy

$$\frac{1}{2} f(x, ax) + \frac{1}{2} f(x, x - b) = c_{a,b}$$

for some constants a, b which define the bet (in Bet C, a = 2 and b = 1). (Note: It is not required that a and b be positive.)

The transformation function must equal $c_{a,b}$ when $x = \frac{b}{1-a}$. To see this, observe that at that value of x we have $ax = x - b$, so $\frac{1}{2} f(x, ax) + \frac{1}{2} f(x, x - b) = f(x, ax) = c_{a,b}$.

For any pair $(a, b)$ defining a particular multiplicative-additive bet, it must be the case that $f\left(\frac{b}{1-a}, \frac{ab}{1-a}\right)$ is a constant (and, specifically, it equals the expected value of the transformation function for the bet with parameters $(a, b)$).

Next I will show that, for (almost) any pair $(x, y)$ in $\mathbb{R}^2$, there exists some pair $(a, b)$ that forces $f(x, y)$ to be a constant.

Solving for $(a, b)$ in terms of $(x, y)$, we get $a = \frac{y}{x}$ and $b = x - y$. This is well-defined for all pairs of real numbers except where $x = 0$. So for any pair of values $(x, y)$ we care to choose, there is some bet parameterized by $(a, b)$ such that $f(x, y)$ is a constant.

Therefore, if there is a function that is ergodic for all multiplicative-additive bets, then that function must be constant everywhere (except on the line $x = 0$, i.e., where your starting wealth is 0, which isn't relevant to this model anyway). A constant-everywhere function says that every multiplicative-additive bet is equally good.
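As a sanity check on the algebra (my own verification sketch, using sympy): for any target point (x, y) with x ≠ 0, the bet with a = y/x and b = x − y really does have its two outcomes coincide at starting wealth x, pinning f(x, y) to a constant.

```python
import sympy as sp

x = sp.symbols('x', real=True, nonzero=True)
y = sp.symbols('y', real=True)

# The bet that forces f(x, y) to be constant, per the construction above.
a = y / x
b = x - y

# Starting wealth at which the bet's two outcomes coincide: a*W = W - b.
pinch = b / (1 - a)

assert sp.simplify(pinch - x) == 0       # the pinch point is at wealth x
assert sp.simplify(a * pinch - y) == 0   # and both outcomes land on y
print("construction verified")
```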

EDIT: I thought about this some more and I think there's a way to define a reasonable ergodic function over a subset of multiplicative-additive bets, namely where a > 1 and b > 0. Let

This gives the average value, and I think it's also the long-run expected value, but I'm not sure about the math. This function is impossible to define using the single-variable definition of the ergodic property, so I'm not sure what to do with that.

(Sorry this comment is kind of rambly)

I looked through the first two pages of Google Scholar for economics papers that cite Peters' work on ergodicity. There were a lot of citations but almost none of the papers were about economics. The top relevant(ish) papers on Google Scholar (excluding other papers by Peters himself) were:

  • Economists’ views on the ergodicity problem, a short opinion piece which basically says Peters misrepresents mainstream economics, e.g. that expected utility theory doesn't implicitly assume ergodicity (which is what I said I thought in my parent comment). They have a considerably longer supplemental piece with detailed explanations of their claims, which I mostly did not read. They gave an interesting thought experiment: "Would a person ever prefer a process that, after three rounds, diminishes wealth from US$10,000 to 0.5 cents over one that yields a 99.9% chance of US$10,000,000 and otherwise US$0? Ergodic theory predicts that this is so because the former has a higher average growth rate." (This supplemental piece looks like the most detailed analysis of ergodicity economics out of all the articles I found.)
  • The influence of ergodicity on risk affinity of timed and non-timed respondents, which is about economics, but it's a behavioral experiment so it's not super relevant; my main concern is that the theory behind ergodicity doesn't seem to make sense.
  • 'Ergodicity Economics' is Pseudoscience (Toda 2023) which, uh, takes a pretty strong stand that you can probably infer from the title. It says "[ergodicity economics] has not produced falsifiable implications", which is true AFAICT (edit: actually I don't think this is true). This paper's author admits some confusion about what ergodicity economics prescribes and interprets it as prescribing maximizing the geometric growth rate, which wasn't my interpretation. I think this version is in fact falsifiable, and indeed falsified—it implies investors should take much more risk than they actually do, and that all investors should have identical risk tolerance, which sounds pretty wrong to me. But I read the Peters & Gell-Mann article as saying not to maximize the geometric growth rate, but to maximize the expected value of an observable that has the ergodic property (and the geometric growth rate of wealth is one such observable). I think that's actually a worse prescription because there are many functions that satisfy the ergodic property, so it's not a usable optimization criterion (although Peters claims it provides a unique criterion, I don't see how that's true?). Insofar as ergodicity economics recommends maximizing the geometric growth rate, it's false because that's not a good criterion in all situations, as discussed in the Samuelson article linked in my previous comment (and, for a longer, multi-syllabic treatment, see Risk and Uncertainty: A Fallacy of Large Numbers, where Samuelson proves that the decision criterion "choose the option that maximizes the probability of coming out ahead in the long run" doesn't work because it's intransitive; and an even longer take from Merton & Samuelson in Fallacy of the log-normal approximation to optimal portfolio decision-making over many periods). Anyway, that was a bit of a tangent, but the Toda paper basically says nobody has explained how ergodicity economics can provide prescriptions in certain fairly simple and common situations even though it's been around for >10 years.
  • Ergodicity Economics in Plain English basically just rephrases Peters' papers; there's no further analysis.
  • A comment on ergodicity economics (Kim 2019) claims that, basically, mainstream economists think ergodicity economics is silly but they don't care enough to publicly rebut it. It says Peters' rejection of expected utility theory doesn't make sense because for an agent to not have a utility function, it must reject one of the von Neumann-Morgenstern axioms, and it is not clear which axiom Peters rejects; in fact he hasn't discussed them at all. (Which I independently noticed when I read Peters & Gell-Mann, although I didn't think about it much.) And Kim claims that none of Peters' demonstrative examples contradict expected utility theory (which also sounds right to me).
  • Ergodicity Economics and the High Beta Conundrum says that ergodicity economics implies that investors should invest with something like 2:1 leverage, which is way more risk than most people are comfortable with. (This is also implied by a logarithmic utility function.) The author appears sympathetic to ergodicity economics and presents this as a conundrum; I take it as evidence that ergodicity economics doesn't make sense (it's not a definitive falsification, but it's evidence). This is not a conundrum for expected utility theory: the solution is simply that most investors don't have logarithmic utility; they have utility functions that are more risk-averse than that.
  • A letter to economists and physicists: on ergodicity economics (Kim, unspecified date). Half the text is about how expected utility theory is unfalsifiable; I think the thesis is something like "we should throw out expected utility theory because it's bad, and it doesn't matter whether ergodicity economics is a good enough theory to replace it". The article doesn't really say anything in favor of or against ergodicity economics.
  • What Work is Ergodicity Doing in Economics? (Ford, unspecified date), which I mostly didn't read because it's long, but it appears to be attempting to resolve confusion around ergodicity. Ford appears basically okay with ergodicity as the term is used by some economists, but he has a long critique of Peters' version of it. I only skimmed the critique; it seemed decent, but it was based on a bunch of wordy arguments with not much math, so I can't evaluate it quickly. He concludes with:

    Peters makes a similar mistake to Davidson by making ergodicity the centre of his work, rather than a supporting concept where relevant. The metaphysical baggage accompanying EE [ergodicity economics] is supposed to clarify the problem. In practice it has obscured the observation that EE is essentially no more than a mechanical claim that stochastic processes, when iterated many times, are very likely to give certain outcomes. One does not have to accept [expected utility theory] as a good model of decision making to see that it is nonetheless more reasonable than EE.

So basically, I found a few favorable articles but they were shallow, and all the other articles were critiques. Some of the critiques were harsh (calling ergodicity pseudoscience or confused) but AFAIK the harshness is justified. From what I can tell, ergodicity economics doesn't have anything to contribute.

For example, consider playing a game where you flip a coin, and if it's heads, you increase your wealth by 50%, but if it's tails, you lose 40%. Mathematically, the average outcome looks positive. But, if you play this game repeatedly, because of the multiplicative nature of wealth (losing 40% can't just be "averaged out" by gaining 50% later), you're likely to end up with less money over time. This game is non-ergodic - the long-term outcome for an individual doesn't match the seemingly positive average outcome.

The long-term outcome in this game is only the correct thing to optimize for under specific circumstances (namely, the circumstances where you have a logarithmic utility function). Paul Samuelson discussed this in Why we should not make mean log of wealth big though years to act are long. For a more modern explanation, see Kelly is (just) about logarithmic utility on LessWrong.
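To make the quoted game concrete, here is a minimal simulation sketch (my own; the 100 rounds and 10,000 players are arbitrary choices) showing the gap between the ensemble average and the typical individual outcome:

```python
import random

random.seed(0)

def play(rounds: int) -> float:
    """One player's wealth after repeatedly betting: +50% on heads, -40% on tails."""
    wealth = 1.0
    for _ in range(rounds):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

paths = sorted(play(100) for _ in range(10_000))
print(f"theoretical ensemble mean: {1.05 ** 100:.3g}")  # ~132
print(f"sample mean: {sum(paths) / len(paths):.3g}")    # noisy; carried by a few lucky paths
print(f"median path: {paths[len(paths) // 2]:.3g}")     # ~0.949^100 ~ 0.005: most players lose
```

The median player ends up with about half a percent of their starting wealth even though expected wealth grows 5% per round.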

Taking ergodicity seriously can strengthen the EA longtermist movement both from a theoretical and a practical perspective.

Are you saying that it immediately produces solutions? Or that it hasn't produced solutions yet, but it might with more research? For example, what does an ergodicity framework say is the correct amount to bet in the St. Petersburg game? Or the correct decision in Cowen's game where you have a 90% chance to double the world's happiness and a 10% chance to end it?

(I should say that my questions are motivated by the fact that I'm pretty skeptical of ergodicity as a useful framework, but I don't understand it very well so I could be missing something. I skimmed the Peters & Gell-Mann paper and I see it makes some relevant claims like "expectation values are only meaningful in the presence of [...] systems with ergodic properties" and "Maximizing expectation values of observables that do not have the ergodic property [...] cannot be considered rational" but I don't see where it justifies them, and they're both false as far as I can tell.)

I was originally going to write an essay based on this prompt but I don't think I actually understand the Epicurean view well enough to do it justice. So instead, here's a quick list of what seem to me to be the implications. I don't exactly agree with the Epicurean view, but I do tend to believe that death in itself isn't bad; it's only bad in that it prevents you from having future good experiences.

  1. Metrics like "$3000 per life saved" don't really make sense.
    • I avoid referencing dollars-per-life-saved when I'm being rigorous. I might use them when speaking casually—it's an easy way to introduce EA or GiveWell to new people.
  2. Interventions that focus on preventing deaths are not good purely because they prevent deaths. Preventing a person's death is good if that person then gets to experience a good life, and the goodness of preventing the death exactly equals the goodness of the life (minus the goodness of any life that would have existed otherwise).
    • This is most obviously relevant for life-saving global poverty charities such as the Against Malaria Foundation (AMF). Some people (including Michael Plant and me) have criticized GiveWell's recommendation of AMF on this basis—my post doesn't explicitly discuss the Epicurean view, but Michael Plant's post does (under "4. Epicureanism").
  3. One's view of death isn't relevant to most of the popular EA charities:
    • Many popular global poverty charities, like GiveDirectly, don't prevent deaths (much). Any reasonable philosophical view should agree that improving people's welfare is good, all else equal.
    • Factory farming interventions such as cage-free campaigns improve animals' welfare but don't affect death.
    • Vegetarian/vegan advocacy causes animals not to exist (by reducing demand for meat). This neither causes nor prevents deaths so it's also not affected by the Epicurean view.
    • People who prioritize preventing existential risk rarely do so because it cost-effectively prevents deaths. Instead, they want to preserve the value of the long-run future, which again applies equally well whether you adopt the Epicurean view or not.
      • One could argue that existential risk is indeed cost-effective at preventing deaths, as Carl Shulman does here. In that case, your view of the badness of death becomes relevant. But I think Carl Shulman's argument still works even under the Epicurean view.

RE #2, I helped develop CCM as a contract worker (I'm not contracted with RP currently) and I had the same thought while we were working on it. The reason we didn't do it is that implementing good numeric integration is non-trivial and we didn't have the capacity for it.

I ended up implementing analytic and numeric methods in my spare time after CCM launched. (Nobody can tell me I'm wasting my time if they're not paying me!) Doing analytic simplifications was pretty easy, numeric methods were much harder. I put the code in a fork of Squigglepy here: https://github.com/michaeldickens/squigglepy/tree/analytic-numeric

Numeric methods are difficult because if you want to support arbitrary distributions, you need to handle a lot of edge cases. I wrote a bunch of comments in the code (mainly in this file) about why it's hard.

I did get the code to work on a wide variety of unit tests and a couple of integration tests but I haven't tried getting CCM to run on top of it. Refactoring CCM would take a long time because a ton of CCM code relies on the assumption that distributions are represented as Monte Carlo samples.
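To illustrate the general idea (this is my own minimal sketch, not the API of the Squigglepy fork): numeric methods represent each distribution as probability masses on a grid and combine grids directly, instead of drawing Monte Carlo samples.

```python
import numpy as np

def numeric_sum(xs1, ps1, xs2, ps2, n_bins=200):
    """Distribution of X + Y from gridded marginals (assumes independence)."""
    sums = xs1[:, None] + xs2[None, :]   # every pairwise sum of grid points
    probs = ps1[:, None] * ps2[None, :]  # and its joint probability mass
    # Re-bin the 2-D outer product back onto a 1-D grid.
    hist, edges = np.histogram(sums.ravel(), bins=n_bins, weights=probs.ravel())
    centers = (edges[:-1] + edges[1:]) / 2
    return centers, hist

# Example: the sum of two standard normals should have mean ~0 and variance ~2.
grid = np.linspace(-5, 5, 401)
p = np.exp(-grid**2 / 2)
p /= p.sum()
xs, ps = numeric_sum(grid, p, grid, p)
mean = np.sum(xs * ps)
print(mean, np.sum(ps * (xs - mean)**2))  # ~0 and ~2
```

The hard part is choosing grids that can handle heavy tails, point masses, and distributions at wildly different scales; a fixed linspace like this one breaks down quickly, which is the kind of edge case the comments in the fork discuss.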
