
Cross-posted to my blog.

There are a few cause areas that are plausibly highly effective, but as far as I know, no one is working on them. If there existed a charity working on one of these problems, I might consider donating to it.

Happy Animal Farm

The closest thing we can make to universal eudaimonia with current technology is a farm of many small animals that are made as happy as possible. Presumably the animals are cared for by people who know a lot about their psychology and welfare and can make sure they’re happy. One plausible species choice is rats: they are small (and therefore easy to take care of and don’t consume a lot of resources), definitively sentient, and well enough understood that we have a reasonable idea of how to make them happy.

I am not aware of any public discussion on this subject, so I will perform a quick ad-hoc effectiveness estimate.

(Most of the figures below come from a personal communication with Emily Cutts Worthington, who is more knowledgeable about taking care of rats than I am. These figures are not robust but are based on her best guesses.)

A rat curator working a few hours a week can probably support 100 happy rats. I have a lot of uncertainty about how brain size affects sentience, but say a happy rat is half as happy as a happy human. Suppose the rats are euthanized when their health starts to deteriorate, so they get close to 1 QALY per year. This would cost about $5 per rat per month plus an opportunity cost of maybe $500 per month for the time spent, which works out to another $5 per rat per month. Thus creating 1 rat QALY costs about $120 ($10 per rat per month for twelve months), which works out to roughly $240 per human-equivalent QALY.

Deworming treatments cost about $30 per DALY. Thus a rat farm looks like a fairly expensive way of producing utility. It may be possible to decrease costs by scaling up the rat farm operation, but it would have to be about an order of magnitude cheaper to rival deworming treatments.
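To make the arithmetic above easy to check, here is the same estimate as a short script. This is just a sketch: every input is one of the guessed figures above, not measured data.

```python
# Back-of-the-envelope rat farm cost estimate; all inputs are the rough guesses above.
rats = 100                        # rats one part-time curator can support
care_cost_per_rat_month = 5       # direct cost per rat per month, in dollars
opportunity_cost_per_month = 500  # value of the curator's time per month, in dollars
rat_to_human_happiness = 0.5      # guess: a happy rat is half as happy as a happy human

cost_per_rat_month = care_cost_per_rat_month + opportunity_cost_per_month / rats  # $10
cost_per_rat_qaly = cost_per_rat_month * 12                                       # ~$120
cost_per_human_qaly = cost_per_rat_qaly / rat_to_human_happiness                  # ~$240

deworming_cost_per_daly = 30  # rough deworming figure cited above
print(f"Rat farm: ${cost_per_human_qaly:.0f} per human-equivalent QALY")
print(f"Roughly {cost_per_human_qaly / deworming_cost_per_daly:.0f}x the cost of deworming per DALY")
```

On these inputs the farm comes out around 8x as expensive as deworming, which is where the "about an order of magnitude" gap above comes from.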

This is just a rough back-of-the-envelope calculation so it should not be taken literally, but I’m still surprised by how cost-inefficient this looks. I expected rat farms to be highly cost-effective based on the fact that most people don’t care about rats, and generally the less people care about some group, the easier it is to help that group. (It’s easier to help developing-world humans than developed-world humans, and easier still to help factory-farmed animals.) Again, I could be completely wrong about these calculations, but rat farms look less promising than I had expected.

Humane Insecticides

http://reducing-suffering.org/humane-insecticides/

I know very little about humane insecticides but it’s a cause that’s plausibly highly cost-effective and virtually no one is working on it. I’m inclined to want to focus more on high-learning-value or far-future interventions; supporting humane insecticides probably only has short-term effects (albeit extremely large ones). But the overwhelming importance of reducing insect suffering (if insects feel pain, which seems sufficiently likely to be a concern) and the extreme neglectedness of this cause could possibly make it the best thing to work on.

High-Leverage Values Spreading

In On Values Spreading, I discuss the possibility of focusing values-spreading efforts on high-leverage individuals:

Probably, some sorts of values spreading matter much, much more than others. Perhaps convincing AI researchers to care more about non-human animals could substantially increase the probability that a superintelligent AI will also care about animals. This could be highly impactful and may even be the most effective thing to do right now. (I am much more willing to consider donating to MIRI now that its executive director values non-human animals.) I don’t see anyone doing this, and if I did, I would want to see evidence that they were doing a good job; but this would plausibly be a highly effective intervention.

I’d like to see an organization that focuses specifically on seeking out and implementing extremely high-leverage values spreading interventions. Perhaps this could mean trying to persuade AI researchers or geoengineering researchers to care about non-human animals; the results of their research could have drastic effects on animals and we want to make sure those effects are positive. I’m sure there are other high-leverage values spreading interventions that no one is currently doing (targeting high-impact researchers is just the first one I came up with off the top of my head); a dedicated organization could explore this space and try to find other highly effective strategies.

Promoting Universal Eudaimonia

Right now, shockingly few people are concerned with filling the universe with tiny beings whose minds are specifically geared toward feeling as much pure happiness as possible. I’d like to see more efforts to promote this outcome. Maybe the best way to do this is to start a Pay David Pearce to Do Whatever He Wants Fund, but I don’t know if David Pearce is funding-constrained.

Comments

Michael, I like your blog and enjoyed the post.

I agree there are no good charities for hedonistic utilitarians at the moment, because existing charities are either not very aligned with hedonistic utilitarian goals or their cost-effectiveness is not tractable. (You can still donate if you have so much money that your alternative spending would be "bigger car/yacht"; otherwise it doesn't make much sense.)

Your ideas are all interesting, but values spreading and promoting universal eudaimonia are non-starters. You get downvoted on an EA forum, and you are not going to find a more open-minded, amicable target group than this.

Happy animals are problematic because their feedback is limited; you don't know when they are suffering unless you monitor them with unreasonable effort. Their minds are not optimized for high pleasure and low suffering. Perhaps with future technology this sort of thing will be trivial, but that is not certain, and investing in the necessary research would give too much harmful knowledge to non-value-aligned people. Even if it were net good to fund such research, it would probably be done for other reasons anyway (commercial applications, publicly funded neurology, etc.), so again it's something you should only fund if you have too much money.

I don't know enough about insect biology to judge humane insecticides; the idea is certainly not unrealistic. But remember real people would have to use it preferentially, so even if such a charity existed, there's no guarantee anyone would use it instead of laughing you out of the room.

Given the recent post on the marketability of EA (which went so far as to suggest excluding MIRI from the EA tent to make EA more marketable, or maybe that was a comment in response to the post; I don't remember), here is a brief reaction from someone who is excited about effective altruism but has various reservations. (My main reservation, so you have a feel for where I'm coming from, is that my goal in life is not to maximize the world's utility but, roughly speaking, to maximize my own utility and end-of-life satisfaction. I therefore find it hard to get excited about theoretically utility-maximizing causes rather than donating to things I viscerally care about. I know this will strike most people here as incredibly squishy, but I'd bet that much of the public outside the EA community has a similar reaction, though few would actually come out and say it.)

  • I like your idea about high-leverage values spreading.
  • The ideas about Happy Animal Farm / Promoting Universal Eudaimonia seem nuts to me, so much so that I actually reread the post to see if this was a parody. If they gain widespread popularity within the EA movement, I will move from being intrigued by EA and excited to bring it up in conversation to never raising it in conversations with all but the most open-minded / rationalist people, or raising it in the tone of "yeah these guys are crazy but this one idea they have about applying data analysis to charity has some merit... Earning to Give is intriguing too...." I could be wrong, but I'd strongly suspect that most people who are not thoroughgoing utilitarians find it incredibly silly to argue that creating more beings who experience utility is a good cause, and this would quickly push EA away from being taken seriously in the mainstream.
  • The humane insecticides idea doesn't seem AS silly as those two above, but it places EA in the same mental category as my most extreme caricature of PETA (and I'm someone who eats mostly vegan, and some Certified Humane, because I care about animal welfare). I don't think insects are a very popular cause.

Just my 2 cents.

The ideas about Happy Animal Farm / Promoting Universal Eudaimonia seem nuts to me, so much so that I actually reread the post to see if this was a parody.

Yeah, I definitely understand that reaction, which is why I was not sure it was a good idea to post this. It looks like it probably wasn't. Thanks for the feedback.


It looks like it probably wasn't.

Please don't be discouraged! I very much appreciate this post.

I am a negative-leaning hedonistic utilitarian who has thought a little about the effectiveness of intense pleasure production. Like you, I estimate that donating to MIRI, or to ACE and its recommended charities, is more efficient than a wireheading operation at this time.

That being said, I wish more people would at least consider the possibility that wireheading is altruistic. If relative costs change in the future, it may prove to be effective. Unfortunately, the concept of intense pleasure production causes most people, even many EAs, to recoil in disgust.

I would enjoy discussing cost estimates of intense pleasure production/wireheading more with you, please send me a message if you're interested. :)

You don’t have to be concerned about somewhat outré ideas (more outré than AI risk I guess) becoming popular among EAs since their tractability – how easily someone can gain widespread support for scaling them up – will necessarily be very limited. That will make them vastly inferior to causes for whose importance there is such widespread support. There may be exceptions to this rule, but I think by and large it holds.

I think there are also a lot of non-selfish reasons for not wanting to breed a load of rats and protect insects that even entomologists think don't have a concept of suffering or pain that's in any way equivalent to what we consider morally valuable.

Not all entomologists think that insects don't have suffering or pain.

Great to know. Can you point me to an entomologist who thinks, or a paper that argues (that isn't philosophy), that insects have suffering that is in any way equivalent to what we would understand it as, please?

Concept of suffering != experience of suffering.

Human babies don't have such concepts either, but experience of suffering is still realistic.

Reading Tom charitably, I'm not sure he meant to talk about whether insects themselves have an idea of suffering?

Sorry, I was using "suffering" loosely. The quality of suffering largely determines its value in my eyes. I've seen entomologists argue there is no possible way insects can feel suffering. I don't necessarily go along with that; we deny suffering at every opportunity: Black people under apartheid denied painkillers, animals thought not to feel pain, fish, the mentally ill, etc. But really, a little system of chemicals resembling something simpler than electronic systems we've built? The point I'm trying to make is that this seems like a rabbit hole. Get out of it and wake up to what really matters. There are literally millions of things anyone can be getting on with that are more pressing than the imaginary plight of insects.


I don't understand what's off-putting about optimizing far-future outcomes. This is a good sketch of what we are talking about: http://www.abolitionist.com/

But apparently even people who call themselves "effective altruists" would rather downvote than engage in rational discussion.

FYI I believe you're getting downvoted because your second paragraph comes across as mean-spirited.

Upvoted, because although I disagree with much of this on object level, I think the post is totally legit and I think we should encourage original thinking.

Perhaps we need to find a time and place to start a serious discussion of ethics. I think hedonistic utilitarianism is wrong already on the level of meta-ethics. It seems to assume the existence of universal morals, which from my point of view is a non-sequitur. Basically all discussions of universal morals are games with meaningless words, maps of non-existing territories.

The only sensible meta-ethics I know is equating ethics with preferences. It seems that there is such a thing as intelligent agents with preferences (although we have no satisfactory mathematical definition yet). Of course each agent has its own preferences, and the space of possible preferences is quite big (orthogonality thesis). Hence ethical subjectivism. Human preferences don't seem to differ much from human to human once you take into account that much of the difference in instrumental goals is explained by different beliefs rather than different terminal goals (= preferences). Therefore it makes sense in certain situations to use approximate models of ethics that don't explicitly mention the reference human, like utilitarianism.

On the other hand, there is no reason the precise ethics should have a simple description (complexity of value). It is a philosophical error to think ethics should be low-complexity like physical law, since ethics (= preferences) is a property of the agent and has quite a bit of complexity put in by evolution. In other words, ethics is in the same category as the shape of Africa rather than Einstein's equations. Taking simplified models which take only one value into account (e.g. pleasure) to the extreme is bound to lead to abhorrent conclusions as all other values are sacrificed.

Happiness and suffering in the utilitarian sense are both extraordinarily complicated concepts and encompass a lot of different experiences. They're shorthand for "things conscious beings experience that are good/bad."

Meta-ethically I don't disagree with you that much.

This strikes me as a strange choice of words since e.g. I think it is good to occasionally experience sadness. But arguing over words is not very fruitful.

I'm not sure this interpretation is consistent with "filling the universe with tiny beings whose minds are specifically geared toward feeling as much pure happiness as possible."

First, "pure happiness" sounds like a raw pleasure signal rather than "things conscious beings experience that are good" but ok, maybe it's just about wording.

Second, "specifically geared" sounds like wireheading. That is, it sounds like these beings would be happy even if they witnessed the holocaust which again contradicts my understanding of "things conscious beings experience that are good." However I guess it's possible to read it charitably (from my perspective) as minds that have superior ability to have truly valuable experiences i.e. some kind of post-humans.

Third, "tiny beings" sounds like some kind of primitive minds rather than superhuman minds as I would expect. But maybe you actually mean physical size in which case I might agree: it seems much more efficient to do something like running lots of post-humans on computronium than allocating for each the material resources of a modern biological human (although at the moment I have no idea what volume of computronium is optimal for running a single post-human: on the one hand, running a modern-like human is probably possible in a very small volume, on the other hand a post-human might be much more computationally expensive).

So, for a sufficiently charitable (from my perspective) reading I agree, but I'm not sure to which extent this reading is aligned with your actual intentions.

Here is a paper that argues that the money saved by being vegan can be used to give lots of mice happy lives (overcoming the "logic of the larder" argument that eating meat creates lives worth living).

I can see why it’s been downvoted, but marketing aside, I had some nerdy fun reading it. I think we need a forum (or does it exist already?) that clearly proclaims that everything posted there makes sense in the author’s personal morality, and they wouldn’t post it there if they thought these conclusions were shared widely. That way people could discuss unusual ideas (and signal their readiness to consider them ^^) without the risk of others mistaking them for mainstream opinions.

My brand of utilitarianism is more focused on reducing extreme suffering or preference frustration, not to the extent of complete negative utilitarianism but somewhat. Hence the risk of something going wrong and some of the tiny beings suddenly experiencing significant pain, for me, quickly outweighs the expected benefits.

Value-spreading and humane insecticides seem genuinely interesting to me, especially since the latter might be turned into a social enterprise through which I could ETG effectively at the same time. The success of the enterprise would hinge on demand, though, which is probably where the plan breaks down.

Rat happiness is HALF as good as human happiness? I'm not so sure about that.

I would be willing to support high leverage value spreading though. And I'd like to know what Pearce is up to these days. Many people are strangely skeptical or dismissive of universal eudaimonia scenarios, and it's an important idea to establish.

Rat happiness is HALF as good as human happiness? I'm not so sure about that.

Reasons to believe rat happiness and human happiness are roughly comparable:

  • The brain structures that make humans happy look similar to the brain structures that make rats happy.
  • Rats behave in similar ways to humans in response to pleasurable or painful stimuli.
  • Most of the parts of the human brain that other animals don't possess have to do with high-level cognitive processing, which doesn't seem to have much to do with happiness or suffering.

Reasons to believe human happiness is substantially greater than rat happiness:

  • Sentience may increase rapidly as the number of neurons increases. (But I don't expect that most human neurons are involved in producing conscious experiences.)
  • High-level cognitive abilities may increase capacity for happiness or suffering. (I find this implausible because subjectively when I feel very unhappy it usually doesn't have much to do with my cognitive abilities.)

High-level cognitive abilities may increase capacity for happiness or suffering. (I find this implausible because subjectively when I feel very unhappy it usually doesn't have much to do with my cognitive abilities.)

Because it can be useful to report disagreement on these things: I disagree. I find it very plausible that high-level cognitive abilities increase capacity for happiness or suffering, and my subjective experience is different from yours.

Yeah I had definitely considered that we might have different subjective experiences. I wonder how much of this comes from the fact that we introspect differently and how much is just that our brains work in different ways. Introspection is hard.

The difference in size between a human brain and a rat brain is significant. An average adult human brain is 1300-1400 g, and the average rat brain is 2 g. There's no reason to peg the latter's capability to generate vivid mental states as being within one, or in my opinion even two, orders of magnitude of the former's.

The brain structures that make humans happy look similar to the brain structures that make rats happy.

Yes, but one is much larger and more powerful than the other.

Rats behave in similar ways to humans in response to pleasurable or painful stimuli.

So do all sorts of non-conscious entities.

Most of the parts of the human brain that other animals don't possess have to do with high-level cognitive processing, which doesn't seem to have much to do with happiness or suffering.

But the difference in size and capacity is altogether too large to be handwaved in this way. Besides, many components of human happiness do depend on higher level cognitive processing. What constitutes brute pain is simple, but what makes someone truly satisfied and grateful for their life is not.

Yes, I'd treat the ratio of brain masses as a lower bound on the ratio of moral patient-ness.
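For a rough sense of what this heuristic implies, here is an illustrative calculation. It is only a sketch combining the brain weights cited above with the post's $120-per-rat-QALY estimate; the mass-proportional weighting is an assumption of the sketch, not a considered moral-weight estimate.

```python
# Illustrative only: replace the post's 0.5 happiness weight with a brain-mass weighting.
human_brain_g = 1350   # midpoint of the 1300-1400 g range cited above
rat_brain_g = 2        # figure cited above
mass_ratio = human_brain_g / rat_brain_g   # ~675, i.e. nearly three orders of magnitude

cost_per_rat_qaly = 120  # from the original post's estimate
print(f"Brain-mass ratio: about {mass_ratio:.0f}x")
print(f"Implied cost per human QALY if moral weight scaled with brain mass: "
      f"${cost_per_rat_qaly * mass_ratio:,.0f}")
```

On that (contested) weighting, the rat farm would run to tens of thousands of dollars per human QALY rather than the $240 estimated in the post.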

Wouldn't farming bees be better than farming rats? They are even smaller, and you could support the operation by selling the honey. (If bees don't win because they are smaller, why not go bigger and create a happy farm full of egg laying chickens or something? Same advantage of being able to support the operation by selling animal products.)

That's not a bad idea. The main problem is we know less about what makes bees' lives good, or if their lives are even capable of being good or bad.

Seems reasonable.

Such research might be very high leverage if bee happiness correlates positively with honey production and you can develop and market a relatively cheap product to bee farmers that increases bee happiness and thus honey production.

Upvoted as these are interesting ideas, particularly the humane pesticide one. On the other hand, the happy animal farm idea is too far outside the mainstream and would be too damaging to the reputation of EA for me to want it to get too much support from the EA movement at the current time.

I would suggest: 1) the happy animal farm idea shouldn't be explored too much, at least until EA has established itself; 2) if these ideas are explored, they should be framed more as philosophical discussion than as a solid policy proposal; and 3) if, in the future, someone decides to pursue this, it shouldn't be directly supported by mainstream EA organisations. Even AI safety research is enough to make people skeptical of EA.

I'm pretty skeptical of the whole "we will ignore the vastly more important subjects for the sake of PR, and instead only talk about the vastly less important subjects" approach. It's possible that the most ethical thing we could do is fill the universe with small happy animals, and doing this could be the most important decision we ever make, vastly outweighing relatively trivial problems like malaria and factory farming. It's also conceivable that filling the universe with happy animals is massively worse than something else we could do with the universe's resources, and doing it would be a monumental error. I understand that it sounds weird, and we probably shouldn't introduce EA with "I'm trying to improve the world in the most effective way possible, which is why I want to make a rat farm" (I definitely don't say that to people—I talk about malaria and GiveWell and more mainstream topics). But that doesn't mean no one should ever publicly discuss weird-but-important issues. I believe that reading Brian Tomasik's (public) essays on wild animal suffering was more important for me than learning about GiveWell or Giving What We Can, and I wouldn't have considered this critically important topic if Brian had been too concerned about PR to write about it.

My concern with working to achieve universal eudaimonia is that, technologically, we are quite far from being able to achieve this. There is a possibility that if we focus solely on that, society in general will suffer, perhaps to the point where we collapse and are never able to achieve eudaimonia. We might also miss out on the chance of stopping some x-risk because we put too many resources into eudaimonia. Also, I believe that by helping the world overcome poverty and working on short-term technology boosts, we get closer to a point where working on universal eudaimonia is more achievable.

I don't have any values or thought experiments to back this up, these are just my initial concerns. That's not to say that I don't think the concept should be worked on. I met David Pearce this year and he convinced me that such a concept should be the 'final goal' of humanity and EA.

That's certainly something worth worrying about. But we could also worry that if we successfully eliminate x-risks, we still need to ensure that the far future has lots of happiness and minimal suffering, and this might not happen by default. It's not clear which is more important. I lean a little toward x-risk reduction but it's hard to say.
