Ah, I definitely saw your post before, but it looks like I forgot about it. Thanks for the reminder.

(I started building the table here for another piece that uses it, and decided to spin off a separate piece with the table. That next post should be up in the next few days.)

I guess I'll add humans for comparison, like you.

> Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but who we’ll never meet, is an extraordinary act of empathy and compassion — one that’s way harder to access than the empathy and warmth we might feel for our neighbors by default. It’s the ultimate act of care. And it’s definitely concerned with justice.

If we go extinct, they won't exist, so they won't be real people or have any valid moral claims. I also take compassion, by definition, to be concerned with suffering, harms or losses. People who never come to exist don't experience suffering or harm and have lost nothing. Nor do they experience injustice.

Longtermists seem to focus mostly on ensuring future moral patients exist, i.e. through extinction risk reduction. But, as above, ensuring moral patients come to exist is not a matter of compassion or justice for those moral patients. Still, doing so may help (or harm!) other moral patients, including other humans who would exist anyway, animals, aliens or artificial sentience.

On the other hand, longtermism is compatible with a special concern for compassion and justice, including through asymmetric person-affecting views and wide person-affecting views (e.g. Thomas, 2019; these would probably focus on s-risks and quality improvements), negative utilitarianism (focus on s-risks) and perhaps even narrow person-affecting views. However, utilitarian versions of most of these views still seem prone, at least in principle, to endorsing killing everyone to replace us and our descendants with better-off individuals, even if each of us and our descendants would have had an apparently good life and would object. I think some (symmetric, and perhaps asymmetric) narrow person-affecting views can avoid this, and maybe those are the ones that fit best with compassion and justice. See my post here.

That being said, empathy could mean more than just compassion or justice and could endorse bringing happy people into existence for their own sake, e.g. Carlsmith, 2021. I disagree that we should create people for their own sake, though, and my intuitions are person-affecting.

Other issues people have with longtermism are fanaticism and ambiguity: the probability that any individual averts an existential catastrophe is usually quite low (e.g. 1 in a million), and the numbers involved are also pretty speculative.
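To make the fanaticism worry concrete, here's a minimal sketch of the expected-value arithmetic. All the numbers are illustrative assumptions (the 1-in-a-million figure from above, plus a made-up count of future lives), not estimates from any source:

```python
# Illustrative expected-value comparison; every number here is an assumption,
# not an estimate from any particular source.
p_avert = 1e-6               # assumed chance one individual averts an existential catastrophe
future_lives = 1e16          # assumed number of future lives at stake (highly speculative)
certain_lives_saved = 1_000  # a hypothetical sure-thing alternative

ev_longtermist = p_avert * future_lives       # = 1e10 lives in expectation
print(ev_longtermist > certain_lives_saved)   # True: the tiny probability still dominates
```

The conclusion is driven almost entirely by the speculative magnitude of `future_lives` (the ambiguity worry), and acting on a 1-in-a-million chance at all is the fanaticism worry.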

He could have said different nice things or just left out the bit about safety. Do you think he's straightforwardly lying to the public about what he believes?

Or maybe he's just being (probably knowingly) misleading? "confident that OpenAI will build AGI that is both safe and beneficial" might mean 95% confidence in safe and beneficial AGI from OpenAI, and 5% that it kills everyone.

Worth noting he said he's "confident that OpenAI will build AGI that is both safe and beneficial under [current leadership]".

There are descriptions of and opinions on some animal welfare certifications here and here. It seems Animal Welfare Approved, Certified Humane and Animal Welfare Certified (level 5 and up, maybe level 4, too?) should be pretty good.

GAP was funded by Open Phil for its Animal Welfare Certified program back in 2016, and this was one of the first grants Open Phil made in farm animal welfare.

Bitcoin is only up around 20% from its peaks in March and November 2021. Over longer time frames, it seems far riskier in general than just Nvidia (or SMH). Nvidia has been hit hard in the past, but not as often, and usually not as hard.
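If you want to check this kind of claim yourself, here's a minimal sketch comparing maximum drawdowns. It assumes the `yfinance` package is installed; the tickers and start date are just examples:

```python
import yfinance as yf  # assumes the yfinance package is installed

# Daily closing prices for Bitcoin and Nvidia (example tickers and start date).
prices = yf.download(["BTC-USD", "NVDA"], start="2018-01-01")["Close"]

# Drawdown: how far each series sits below its running peak.
drawdown = prices / prices.cummax() - 1.0

# Maximum drawdown per asset; more negative means a deeper crash at some point.
print(drawdown.min())
```

Max drawdown is only one lens on risk, of course; volatility and how often the crashes happen matter too.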

Smaller cap cryptocurrencies are even riskier.

I also think the case for outperformance of crypto in general is much weaker than for AI stocks, and it has gotten weaker as institutional investment has increased, which should increase market efficiency. I think the case for crypto has mostly been greater fool theory (and partly its use as an inflation hedge), because it's not a productive asset in the formal sense and its actual uses seem overstated to me. And even if crypto were better, you could substantially increase (risk-adjusted) returns by also including AI stocks in your portfolio.

I'm less sure about private investments in general, and they need to be judged individually.

I don't really see why your point about the S&P500 should matter. If I buy 95% AI stocks and 5% other stuff and don't rebalance between them, AI will also have a relatively smaller share if it does relatively badly, e.g. due to regulation.

Maybe there's a sense in which market cap-weighting from across sectors and without specifically overweighting AI/tech is more "neutral", but it really just means deferring to market expectations, market time discount rates and market risk attitudes, which could differ from your own. Equal-weighting (securities above a certain market cap or asset classes) and rebalancing to maintain equal weights seems "more neutral", but also pretty arbitrary and probably worse for risk-adjusted returns.
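Here's a minimal sketch of both points (the no-rebalancing drift above, and equal-weighting with rebalancing), with made-up return distributions standing in for "AI" and "everything else"; nothing here is calibrated to real data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up annual returns: "AI" higher mean and variance than "other". Illustrative only.
# Clipped so no single year loses more than 90%.
ai_r = np.clip(rng.normal(0.15, 0.40, size=20), -0.9, None)
other_r = np.clip(rng.normal(0.06, 0.15, size=20), -0.9, None)

# Buy-and-hold from 95% AI / 5% other: weights drift with relative performance.
ai_val, other_val = 0.95, 0.05
for ra, ro in zip(ai_r, other_r):
    ai_val *= 1 + ra
    other_val *= 1 + ro
print("final AI weight without rebalancing:", ai_val / (ai_val + other_val))

# Equal-weighting with annual rebalancing: reset to 50/50 every year.
total = 1.0
for ra, ro in zip(ai_r, other_r):
    total *= 0.5 * (1 + ra) + 0.5 * (1 + ro)
print("final value of rebalanced 50/50 portfolio:", total)
```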

Furthermore, I can increase my absolute exposure to AI with leverage on the S&P500, like call options, margin or leveraged ETFs. Maybe I assume non-AI stocks will perform roughly neutrally or in line with the past, or that the market as a whole will do so if AI progress slows. Then leverage on the S&P500 could really just be an AI play.
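The arithmetic behind that is simple. As a sketch, assuming AI/big tech is roughly 30% of the index (the rough figure mentioned in the next comment) and the rest is expected to do roughly neutrally:

```python
# Effective AI exposure from a leveraged index position (illustrative numbers).
ai_weight_in_index = 0.30  # assumed AI/big-tech share of the S&P500 (rough figure)
leverage = 2.0             # e.g. a 2x leveraged ETF, margin, or call options

ai_exposure_per_dollar = leverage * ai_weight_in_index
print(ai_exposure_per_dollar)  # 0.6: 60 cents of AI exposure per portfolio dollar
```

So under that assumption, most of the variance in the leveraged position's excess return comes from the AI slice.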

How much impact do you expect such a COI to have compared to the extra potential donations?

For reference:

  1. You could have more than doubled your investments over the past year by investing in the right AI companies, e.g. Nvidia, which seemed like a predictably good investment based on its market share and % exposure to AI, and is up +200% (3x). SMH is up +77%.
  2. Even the S&P500 is around 30% Microsoft, Apple (maybe not much of an AI play now), Nvidia, Amazon, Meta, Google/Alphabet and Broadcom, and these big tech companies have driven most of its gains recently (e.g. this and this).

And how far do you go in recommending divestment from AI to avoid COIs?

  1. Do you think people should avoid the S&P500, because its exposure to AI companies is so high? (Maybe equal-weight ETFs, or specific ETFs missing these companies, or other asset classes.)
  2. Do you think people should short or buy put options on AI companies? This way they're even more incentivized to see them do badly.

You could invest in AI stocks through a donor-advised fund or private foundation to reduce the potential for personal gain, and hence COIs.

What impact do you expect a marginal demand shift of $1 million (or $1 billion) in AI stocks to have on AI timelines? And why?

(Presumably the impact on actual investment in AI is much lower, because of elasticity, price targets for public companies, and limits on how much private companies intend to raise at a time.)
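For a sense of scale, here's a back-of-the-envelope sketch; the market cap is an assumed round number purely for illustration:

```python
# Scale of a demand shift relative to a large AI company's market cap.
market_cap = 2e12  # assumed ~$2T market cap of a large AI company, for illustration
for demand_shift in (1e6, 1e9):
    print(f"${demand_shift:,.0f} is {demand_shift / market_cap:.6%} of market cap")
```

And that's just the size of the order relative to the company; the lasting price impact per dollar traded should be smaller still, for the elasticity reasons above.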

Or is the concern only really COIs?

On 7

> Aquaculture also uses fished animals to feed its animals; therefore, bans that lower fishing effort could also make aquaculture less attractive to pursue?

I think this would be true for species caught primarily for fishmeal. While those caught for direct human consumption also contribute to fishmeal/fish oil/feed (through byproducts/processing waste, e.g. OECD/FAO, 2023, Figure 8.4), they seem more likely to compete with rather than support aquaculture overall (World Bank, 2013, Table E.2, scenario 5 Capture growth vs Baseline).

On the other hand, shrimp are major fishmeal consumers, so a decrease in fishmeal, even alongside a decrease in overall fish and invertebrate catch, could reduce shrimp farming in particular and so the number of animals farmed, even if aquaculture increases by tonnage. That increase in tonnage could come from more herbivorous species, like carps, tilapias, catfishes and bivalves. That being said, I'm not confident that it would decrease the number of animals farmed.

Then again, banning fishing, especially fishing for fishmeal, could also promote insect farming for aquafeed. But we could work on that, too.

So, it seems pretty messy.
