My personal religion involves two gods – the god of humanity (who I sometimes call "Humo") and the god of the robot utilitarians (who I sometimes call "Robutil"). 

When I'm facing a moral crisis, I query my shoulder-Humo and my shoulder-Robutil for their thoughts. Sometimes they say the same thing, and there's no real crisis. For example, some naive young EAs try to be utility monks – donate all their money, never take breaks, only do productive things – but Robutil and Humo both agree that quality intellectual work requires slack and psychological health. (Both to handle crises and to notice subtle things, which you might need, even in emergencies.)

If you're an aspiring effective altruist, you should definitely at least be doing all the things that Humo and Robutil agree on. (i.e. get to the middle point of Tyler Alterman's story here).

But Humo and Robutil in fact disagree on some things, and disagree on emphasis. 

They disagree on how much effort you should spend to avoid accidentally recruiting people you don't have much use for.

They disagree on how many people it's acceptable to accidentally fuck up psychologically, while you experiment with new programs to empower and/or recruit them.

They disagree on how hard to push yourself to grow better/stronger/wiser/faster, and how much you should sacrifice to do so.

Humo and Robutil each struggle to understand different things. Robutil eventually acknowledges that you need slack, but it didn't occur to him initially. His understanding was born in the burnout and tunnel-vision of thousands of young idealists, with Humo eventually (patiently, kindly) saying "I told you so." (Robutil responds "but you didn't provide any arguments about how that maximized utility!". Humo responds "but I said it was obviously unhealthy!" Robutil says "wtf does 'unhealthy' even mean? taboo unhealthy!")

It took Robutil longer still to consider that humans (with their current level of self-awareness) not only need to prioritize their own wellbeing and their friendships, but that it can be valuable to prioritize those things for their own sake, not just as part of a utilitarian calculus – because trying to justify them in utilitarian terms may be a subtly wrong step in the dance, one that leaves them hollow and burned out for years.

(Though Robutil notes that this is likely a temporary state of affairs. A human with sufficiently nuanced self-knowledge can probably wring more utilons out of their wellbeing activities.)

Humo struggles to acknowledge that if you spend all your time upholding deontological commitments to avoid harming the people in your care, that effort has a cost measured in real human beings who suffer and die because you took longer to scale up your program.

In my headcanon, Humo and Robutil are gods who are old and wise, and they got over their naive struggles long ago. They respect each other as brothers. They understand that each of their perspectives is relevant to the overall project of human flourishing. They don't disagree as much as you'd naively expect, but they speak different languages and emphasize things differently. 

Humo might acknowledge that I can't take care of everyone, or even respond compassionately to all the people who show up in my life who I don't have time to help. But he says so with a warm, mournful compassion, whereas Robutil says it with brief, efficient ruthlessness.

I find it useful to query them independently, and to imagine the wise version of each of them as best I can – even if my imagining is but a crude shadow of their idealized platonic selves.
