MichaelDickens

4172 karma · Joined Sep 2014

Bio

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.

I have a website: https://mdickens.me/. Most of the content there gets cross-posted to the EA Forum.

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.

Sequences (1)

Quantitative Models for Cause Selection

Comments (654)

Agreed. I disagree with the general practice of capping the probability distribution over animals' sentience at 1x that of humans. (I wouldn't put much mass above 1x, but it should definitely be more than zero mass.)

It seems to me that the naive way to handle the two envelopes problem (and I've never heard of a better approach) is to diversify your donations across its two possible resolutions:

  • donate half your (neartermist) money on the assumption that you should hold human value fixed and express animals' moral weights as ratios of it
  • donate half your money on the assumption that you should fix values the other way around (e.g., hold fruit flies' value fixed and express humans' value as a ratio of it)

That would suggest donating half to animal welfare and probably half to global poverty. (If you let moral weights be linear with neuron count, I think that would still favor animal welfare, but you could get global poverty outweighing animal welfare if moral weight grows super-linearly with neuron count.)
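To make that concrete, here's a rough sketch of the 50/50 split with entirely made-up numbers (the moral-weight hypotheses, probabilities, and welfare-per-dollar figures are illustrative assumptions, not estimates I'd defend):

```python
# Illustrative sketch of splitting donations across the two normalizations.
# All numbers below (probabilities, moral weights, welfare per dollar) are
# made up for illustration, not estimates I endorse.

# Two hypotheses about a chicken's moral weight relative to a human
p_low, p_high = 0.5, 0.5
weight_low, weight_high = 0.001, 0.5

# Welfare produced per dollar, measured in each species' own units
human_welfare_per_dollar = 1.0      # e.g. a global poverty charity
chicken_welfare_per_dollar = 50.0   # e.g. corporate welfare campaigns

# Normalization A: hold human value fixed at 1, convert chickens into human units
ev_animal_A = chicken_welfare_per_dollar * (p_low * weight_low + p_high * weight_high)
ev_poverty_A = human_welfare_per_dollar

# Normalization B: hold chicken value fixed at 1, convert humans into chicken units
ev_animal_B = chicken_welfare_per_dollar
ev_poverty_B = human_welfare_per_dollar * (p_low / weight_low + p_high / weight_high)

print("Fixed-human units:   animal =", ev_animal_A, " poverty =", ev_poverty_A)
print("Fixed-chicken units: animal =", ev_animal_B, " poverty =", ev_poverty_B)

# Naive diversification: give half the budget to whichever option wins
# under each normalization. With these numbers, animal welfare wins under A
# and global poverty wins under B, so the split comes out 50/50.
share_to_animal = 0.5 * (ev_animal_A > ev_poverty_A) + 0.5 * (ev_animal_B > ev_poverty_B)
print("Share of budget to animal welfare:", share_to_animal)
```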

Plausibly there are other neartermist worldviews you might include that don't relate to the two envelopes problem, e.g. a "only give to the most robust interventions" worldview might favor GiveDirectly. So I could see an allocation of less than 50% to animal welfare.

If my goal is to help other people make their donations more effective, and I can either:

  1. move $1 million from a median charity to AMF
  2. move $10 million from a median art-focused charity to the most effective art-focused charity

I would prefer to do #1 because AMF is >10x better (maybe even >1000x better) than the best art charity. So while in theory I would encourage an art-focused foundation to make more effective donations within their area, I don't think trying to do that would be a good use of my time.
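As a back-of-the-envelope version of that comparison (the multipliers are illustrative assumptions, not careful estimates):

```python
# Rough comparison of the two options, in "median-charity units" of good per dollar.
# The multipliers are illustrative assumptions, not careful estimates.

median_charity = 1.0    # baseline
amf = 1000.0            # assume AMF is ~1000x a median charity
median_art = 0.5        # assume a median art charity is below the baseline
best_art = 5.0          # assume the best art charity is ~10x the median art charity

value_option_1 = 1_000_000 * (amf - median_charity)    # move $1M to AMF
value_option_2 = 10_000_000 * (best_art - median_art)  # move $10M within art

print(f"Option 1: {value_option_1:,.0f} units")   # 999,000,000
print(f"Option 2: {value_option_2:,.0f} units")   # 45,000,000
```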

Getting 1.5 points by 2.7x'ing GDP actually sounds like a lot to me? It predicts that the United States should be 1.9 points ahead of China and China should be 2.0 points ahead of Kenya. It's very hard to get a 1.9 point improvement in satisfaction by doing anything.
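Roughly how I'd reconstruct those numbers, assuming the 1.5-points-per-2.7x relationship compounds logarithmically and plugging in ballpark PPP GDP-per-capita figures (my own approximations, not exact data):

```python
import math

# ~1.5 life-satisfaction points per 2.7x (roughly per e-fold) of GDP per capita
points_per_efold = 1.5 / math.log(2.7)

# Ballpark GDP per capita (PPP, USD) -- my approximations, not exact data
gdp = {"United States": 75_000, "China": 21_000, "Kenya": 5_500}

def predicted_gap(richer, poorer):
    return points_per_efold * math.log(gdp[richer] / gdp[poorer])

print(round(predicted_gap("United States", "China"), 1))  # ~1.9
print(round(predicted_gap("China", "Kenya"), 1))          # ~2.0
```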

IMO if a forecasting org does manage to make money selling predictions to companies, that's a good positive update, but if they fail, that's only a weak negative update—my prior is that the vast majority of companies don't care about getting good predictions even when those predictions would be valuable. (Making explicit predictions risks exposing execs as bad forecasters; good predictions should increase the stock price, but an individual exec captures only a small % of that upside while bearing ~100% of the downside of looking stupid.)
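To put that asymmetry in made-up numbers (every figure below is hypothetical):

```python
# Hypothetical payoff to an individual exec from committing to public forecasts.
# Every number here is made up purely to illustrate the asymmetry.

p_forecasts_help = 0.6          # chance better forecasting creates value
value_to_company = 10_000_000   # value created if it does
exec_share_of_upside = 0.001    # exec captures ~0.1% via equity/bonus

p_look_stupid = 0.3             # chance the exec is exposed as a bad predictor
personal_cost = 50_000          # reputational/career cost, borne entirely by the exec

ev_to_company = p_forecasts_help * value_to_company
ev_to_exec = (p_forecasts_help * value_to_company * exec_share_of_upside
              - p_look_stupid * personal_cost)

print(ev_to_company)  # 6,000,000: clearly worth it for the company
print(ev_to_exec)     # 6,000 - 15,000 = -9,000: not worth it for the exec
```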

I was just thinking about this a few days ago when I was flying for the holidays. Outside the plane was a sign that said something like

Warning: Jet fuel emits chemicals that may increase the risk of cancer.

And I was thinking about whether this was a justified double-hedge. The author of that sign has a subjective belief that exposure to those chemicals increases the probability that you get cancer, so you could say "may give you cancer" or "increases the risk of cancer". On the other hand, perhaps the double-hedge is reasonable in cases like this: there's uncertainty about whether a dangerous thing will actually cause harm, and there's also uncertainty about whether a particular thing is dangerous at all, so I suppose it's reasonable to say "may increase the risk of cancer". It means "there is some probability that this increases the probability that you get cancer, but also some probability that it has no effect on cancer rates."
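One way to make the double-hedge precise, with placeholder probabilities just to show the structure:

```python
# The double-hedge has two layers: uncertainty about whether the chemicals are
# carcinogenic at all, and uncertainty about harm given that they are.
# The probabilities below are placeholders, not real risk estimates.

p_carcinogenic = 0.3              # P(these chemicals raise cancer risk at all)
p_cancer_if_carcinogenic = 0.02   # P(cancer from exposure | they do)

# "may increase the risk of cancer" ~ with probability 0.3, your cancer
# probability goes up by 2 percentage points; otherwise nothing changes.
expected_added_risk = p_carcinogenic * p_cancer_if_carcinogenic
print(expected_added_risk)  # 0.006
```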

I may be misinterpreting your argument, but it sounds like it boils down to:

  1. Given that we don't know much about qualia, we can't be confident that shrimp have qualia.
  2. [implicit] Therefore, shrimp have an extremely low probability of having qualia.
  3. Therefore, it's ok to eat shrimp.

The jump from step 1 to step 2 looks like a mistake to me.

You also seemed to suggest (although I'm not quite sure whether you were actually suggesting this) that if a being cannot in principle describe its qualia, then it does not have qualia. I don't see much reason to believe this to be true—it's one theory of how qualia might work, but it's not the only theory. And it would imply that, e.g., human stroke victims who are incapable of speech do not have qualia because they cannot, even in principle, talk about their qualia.

(I think there is a reasonable chance that I just don't understand your argument, in which case I'm sorry for misinterpreting you.)

I have only limited resources with which to do good. If I'm not doing good directly through a full-time job, I budget 20% of my income toward doing as much good as possible, and then I don't worry about it after that. If I spend time and money on advocating for a ceasefire, that's time and money that I can't spend on something else.

If you ask my opinion about whether Israel should attack Gaza, I'd say it shouldn't. But I don't know enough about the issue to say what should be done about it, and I doubt advocacy on this issue would be very effective: "Israel and Palestine should stop fighting" has been more or less the consensus position among the general public for ~70 years, and it still hasn't happened. And I doubt anything I do will have an impact on the same scale as a donation to a GiveWell top charity.

To convince me to advocate for a ceasefire, you have to argue not just that it's good, but that it's the best thing I could be doing. All you've said is that it's good. Why is it the best thing that I could be doing? I'd like this post better if you said more about why it's the best thing. (I doubt I'd end up agreeing, but I appreciate when people make the argument.)

the results here don't depend on actual infinities (infinite universe, infinitely long lives, infinite value)

This seems pretty important to me. You can handwave away standard infinite ethics by positing that everything is finite with 100% certainty, but you can't handwave away the implications of a finite-everywhere distribution with infinite EV.

(Just an offhand thought: I wonder if there's a way to fix infinite-EV distributions by positing that utility is bounded, but that you don't know what the bound is. My subjective belief is something like: utility is bounded, I don't know the bound, and the expected value of the upper bound is infinite. If the upper bound is guaranteed to be finite but has infinite EV, does that still cause problems?)
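To make "finite everywhere but infinite EV" concrete, here's a St. Petersburg-style sketch (the specific distribution is just an illustration):

```python
# A St. Petersburg-style lottery: pays 2^n with probability 2^-n for n = 1, 2, ...
# Every outcome is finite, but the truncated expected value grows without bound.

def truncated_ev(n_terms):
    return sum((2 ** n) * (2 ** -n) for n in range(1, n_terms + 1))  # each term adds 1

for n in (10, 100, 1000):
    print(n, truncated_ev(n))  # prints 10.0, 100.0, 1000.0 -- diverges as n grows

# The same construction applies to the "bounded utility with unknown bound" idea:
# if the bound B equals 2^n with probability 2^-n, then B is finite with
# certainty, yet E[B] is infinite, so it's not obvious the puzzle goes away.
```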

I think this subject is very important and underrated, so I'm glad you wrote this post; you raised some points I wasn't aware of, and I would like to see people write more posts like this one. That said, the post didn't do as much for me as it could have, because I found two of its three main arguments hard to understand:

  1. For your first argument ("Unbounded utility functions are irrational"), the post spends several paragraphs setting up a specific function that I could have easily constructed myself (for me it's pretty obvious that there exist finite utility functions with infinite EV), and then ends by saying utilitarianism "lead[s] to violations of generalizations of the Independence axiom and the Sure-Thing Principle", which I take to be the central argument, but I don't know what the Sure-Thing Principle is. I think I know what Independence is, but I don't know what you mean by "generalizations of Independence". So it feels like I still have no idea what your actual argument is.
  2. I had no difficulty following your money pump argument.
  3. For the third argument, the post claims that some axioms rule out expectational total utilitarianism, but the axioms aren't defined and I don't know what they mean, and I don't know how they rule out expectational total utilitarianism. (I tried to look at the cited paper, but it's not publicly available and it doesn't look like it's on Sci-Hub either.)