MichaelDickens

4290 karma · Joined Sep 2014

Bio

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.

I have a website: https://mdickens.me/ Most of the content on my website gets cross-posted to the EA Forum.

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.

Sequences (1)

Quantitative Models for Cause Selection

Comments (671)

I primarily prioritize animal welfare in my personal donations since I think that on the margin, it is greatly neglected compared to other EA priorities and leads to orders of magnitude more suffering reduction compared to GHP charities.

Could you say more about your thoughts on animal welfare vs. x-risk? I agree that animal welfare is relatively neglected, but it also seems to me that x-risk needs a lot more funding and marginal dollars are still really valuable. (I don't have a strong opinion about which to prioritize but those two considerations seem relevant.)

I'm not particularly knowledgeable about this but my take is:

  1. Yes, enlightenment is real, for some understanding of what "enlightenment" means.
  2. As I understand it, enlightenment doesn't free you from all suffering. Enlightenment is better described as "ego death", where you stop identifying with your experiences. There is a sense in which you still suffer, but you don't identify with your suffering.
  3. Enlightenment is extremely hard to achieve (it requires spending >10% of your waking life meditating for many years) and doesn't appear to make you particularly better at anything. Like if I could become enlightened and then successfully work 80 hours a week because I stop caring about things like motivation and tiredness, that would be great, but I don't think that's possible.
  • Example 1 references a post that's sitting at a score of –6. It was not a well-received post.
  • Example 2 is a very popular post denouncing Richard Hanania.

I would not interpret that as the community being complacent.

I had an idea for a different way to evaluate meta-options. A meta-option behaves like a call option where the price equals the current value of the equity and the strike price equals the cash salary you'd be able to get instead.[1]

If I compare an equity package worth $100K per year versus a counterfactual cash salary of $100K and assume a volatility of 70% (my research suggests that small companies have a volatility around 70–100%), the call option for the equity that vests in the first year is worth $29K, and the call option for the equity that vests in the fourth year is worth $56K (which is equivalent to a 12% annual return). So on average, a meta-option on a 4-year equity package is worth somewhere in the ballpark of an 18% annual return.

(But if the equity has a lower face value than the counterfactual cash salary, it pretty quickly becomes not worth it.)

[1] This is kind of wrong because with a normal stock option you don't have to pay the strike until you exercise, but with an employee meta-option, you have to give up your counterfactual salary as soon as you start working, and you don't vest for the first year so you have to give up a full year of cash salary no matter what. If you have monthly vesting, the fact that you have to pay at the beginning of the month instead of the end doesn't matter much.
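The valuation above can be sketched with the standard Black-Scholes formula for a European call, setting the "spot" to the equity package's face value and the strike to the counterfactual cash salary. This is a rough illustration under assumptions not stated in the comment: I'm using a risk-free rate of 4% (my guess; the original rate isn't given), and it ignores the timing caveat in the footnote.

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot: float, strike: float, vol: float, years: float, rate: float) -> float:
    """Black-Scholes value of a European call option."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol**2) * years) / (vol * math.sqrt(years))
    d2 = d1 - vol * math.sqrt(years)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * years) * norm_cdf(d2)

# Meta-option: "spot" = $100K/yr equity package, strike = $100K counterfactual salary,
# 70% volatility, assumed 4% risk-free rate (my assumption, not from the comment).
for year in (1, 4):
    value = bs_call(100_000, 100_000, 0.70, year, 0.04)
    print(f"equity vesting in year {year}: option worth ~${value:,.0f}")
```

With these inputs the year-1 tranche comes out around $29K and the year-4 tranche around $55K, roughly matching the figures quoted above.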

(edited to make the numbers make more sense)

I disagree-voted to indicate that I did not donate my mana because of this post. (I use Manifold sometimes, but I have only a trivial amount of mana.)

I feel your pain. I hope the amount of upvotes and hearts you're getting helps you feel better, but I know brains don't always work that way (mine doesn't).

I believe this sort of thing doesn't get much attention from EAs because there's not really a strong case for it being a global priority in the same way that existential risk from AI is.

It's really hard to judge whether a life is net positive. I'm not even sure when my own life is net positive—sometimes if I'm going through a difficult moment, as a mental exercise I ask myself, "if the rest of my life felt exactly like this, would I want to keep living?" And it's genuinely pretty hard to tell. Sometimes it's obvious, like right at this moment my life is definitely net positive, but when I'm feeling bad, it's hard to say where the threshold is. If I can't even identify the threshold for myself, I doubt I can identify it in farm animals.

If I had to guess, I'd say the threshold is something like

  • if the animals spend most of their time outdoors, their lives are net positive
  • if they spend most of their time indoors (in crowded factory farm conditions, even if "free range"), their lives are net negative

it seems important for my own decision making and for standing on solid ground while talking with others about animal suffering.

To this point, I think the most important things are

  1. whatever the threshold is, factory-farmed animals clearly don't meet it
  2. 99% of animals people eat are factory-farmed (in spite of people's insistence that they only eat meat from their uncle's farm where all of the animals are treated like their own children etc)

If we're talking about financial risk, I enjoyed Deep Risk, a short book by William Bernstein.

The use of quantitative impact estimates by EAs can mislead audiences into overestimating the quality of quantitative empirical evidence supporting these estimates.

In my experience, this is not a winnable battle. Regardless of how many times you repeat that your quantitative estimates are based on limited evidence / embed a lot of assumptions / have high margins of error / etc., people will say you're taking your estimates too seriously.
