I’m currently writing a fantasy novel to encourage people to join the EA community.

I’ve made a living self-publishing novels for years (mainly Pride and Prejudice fan fiction, but also two LitRPG-esque works), and I’ve been interested in writing something to directly promote Effective Altruism for a while.

I applied for one of Scott Alexander’s ACX grants to fund taking three to five months off from other projects to write this, and I received an ACX Plus grant. Now that I’m actively developing the story, I’m hoping to get thoughts from the community on which ideas (especially about AI safety and regulation) are important to signal boost, and which should be treated as info hazards. I also think it would be cool to brainstorm and bounce my ideas off people who are interested in this sort of fiction outreach project. I’ve never really shown my work to people who might meaningfully critique it while it was still in the outline phase, and I’d like to see whether I get a lot of value from doing that.

I’ve set up a Discord server to talk about the story in, and there is a shared Google Doc with my outline. If you are quick, you might be the first person there who isn’t me. Come say hi!

Here is a link to my current outline if you want to look at it and leave comments: 

Also, I’m planning a discussion event on the Discord server on Tuesday, April 5 at 6 pm Central European Time (which is, I’m afraid, 9 am in California). The plan is for me to talk a bit more about the project, and then discuss any questions or ideas people have. It would be awesome if you joined; even if you are just vaguely curious, I’d love for you to come.

The theory behind why writing a novel like this is a good idea has two components:

  1. HPMOR, Atlas Shrugged, and a variety of books that did not influence me personally (such as The Alchemist, Who Moved My Cheese?, or Ishmael) show the value of works of fiction in getting people to take ideas seriously, start talking about issues, and engage with a community.
  2. A good novel for exposing people to these ideas needs to be successful as a novel, and hence writing something that targets an existing market niche makes sense.

Two years ago I wrote a whole essay about using fiction to change the world, which I cross-posted to the EA Forum and LessWrong.

I’m looking for some help with which ideas to fill the novel with:

Are there ideas around AI risk that people think are robustly likely to make the long-term future better, despite the problem of cluelessness and missing crucial considerations?

If we don’t have ideas that are robustly positive despite cluelessness, are there things where, even though there are plausible worlds in which they make things worse, the balance of the evidence suggests they will improve the odds of a good long-term future, or make a good long-term future even better than it would otherwise be?

While I don’t have any ideas that I think can robustly meet a cluelessness constraint, I do think certain things are sufficiently likely to meet this requirement that they are worth promoting, for example:

  • Strongly promoting basic income and some form of universal ownership of space and deep sea resources
  • Successfully increasing collaboration and peaceful feelings among major powers
  • Convincing many AI researchers and the top business people in companies researching AI that AI could be really dangerous (this seems more likely than not to be good, even though there are info hazard ways it can backfire)
  • Passing Luddite-style regulations designed to burden AI research teams with enormous safety reporting requirements (again, there are definitely plenty of ways this could backfire)
  • Expanding the Effective Altruist community’s size and influence

If you have ideas that you think are robustly likely to do good, please tell me (along with the robustness argument).

Or do you have ideas that are likely to be good for the long term, even if the argument for their goodness is not robust?

I intend to keep the novel neutral, in the sense of not taking stands on arguments within the EA community, but to have different characters express versions of the most popular views.

I also want to have sympathetic characters express the most common objections to EA thinking, including:

  • Why aren’t you pursuing systemic change?
  • I feel vastly more certain that I’m doing good when I help people who are nearby in time or space.
  • I have a particular duty towards my own community and projects rather than other communities.
  • Donating ‘that’ much is just crazy.
  • What is important is being a kind person, rather than making the biggest difference.
  • I just can’t think that way about choosing a career or a cause; I want to spend my time doing the things that matter to me personally, even if they aren’t the best possible thing to do.

What are other common criticisms or arguments that are worth having someone express?

The list below covers the majority of the things I’m currently trying to include in the novel, and most of them already have a place in the current (fairly tentative and not very detailed) outline.

  1. Donating a larger portion of one’s income (e.g. 10% giving pledges)
    1. The book will have characters act and think about their actions in ways that model behavior that a normal person can use.
    2. Conversations and constructing a fictional society designed to normalize donating ten percent as something that ordinary people just decide to do.
    3. It will try to minimize guilt and perfectionism appeals that might make someone feel really bad or filled with regrets after reading it.
  2. Evaluating giving opportunities to improve effectiveness
    1. Cause neutrality
    2. Helping distant people is just as important, and just as much a thing for normal people to do, as helping nearby people
      1. A discussion of longtermism, without having the text necessarily side one way or the other
    3. Expected Value Calculations
      1. Some interventions are vastly more effective than others
    4. The Importance/Tractability/Neglectedness (ITN) framework
  3. Thinking about direct work in EA terms
    1. Replaceability
    2. Are talent gaps or funding gaps bigger?
    3. Leveraging existing skills in new ways
  4. Awareness of the basic cause areas
    1. X-risk / Longtermism
      1. AI safety and biosecurity 
      2. Cluelessness and crucial considerations 
      3. Should potential future humans dominate our decision-making?
        1. Carl Shulman’s argument that longtermism isn’t necessary to motivate treating x-risk as an important issue
        2. Uncertainty criticisms
        3. Phil Torres’s messianism and extreme behavior criticism
    2. Animal welfare
      1. Include the weird ideas like wild animal suffering and insect suffering
    3. Global Health and welfare
  5. Self-care for effective altruists
    1. We are not altruistic good-maximizing machines
    2. Any step in the right direction is an improvement, and worth doing
    3. Any good policy is one that actual human beings can use.
  6. Secondary Cause areas:
    1. Institutional decision-making
    2. Improvements in the scientific process
  7. Other possibly important ideas?
    1. Actual things that have been done to enormously improve the world, and the ability of individuals to make new good things happen

Do people think I’ve left out anything important that is essential to include in an introduction to effective altruism? Reversing the question: are there things people think are essential not to include in an introductory discussion of effective altruism?

I want to note that I don’t think having more advanced topics intermixed with everything else is a bad thing or a problem. The goal very much will be for everything to flow in the text, and to be discussed through arguments and dialogues that are supposed to be fun to read (and that will therefore be forced to simplify the actual ideas and arguments, perhaps occasionally too far).

The other area where I’m looking for help is the publication plan. When the novel is ready, I’m currently planning to upload it chapter by chapter to Royal Road and the SpaceBattles forums, both of which are popular places to serially publish fantasy/sci-fi fiction for free. Around when the regular posting of chapters reaches the middle of the novel/first book, I plan to publish the whole thing on Amazon.com and other ebook retailers while continuing to post new chapters on the free sites every few days. I’d also probably keep a version on my own website, updated at the same time as the Royal Road and SpaceBattles versions.

I know there are several other popular free fiction sites I’d like to post to, but I’m not sure which ones will be worth the time and effort to keep updated. If people have ideas about that, please tell me.


 

Here’s the Discord link again

And here is my current outline

Also, I’d like to thank Milan, Richard Horvath, and Gergő Gáspár for reading the draft of this post and offering suggestions and corrections.
 

Comments

While not directly EA, I think fiction is also great for promoting some ideas from the Progress Movement, namely humanism, agency, and the idea that "problems are hard, but solvable".

I also feel that the idea that "actually, humans are good, and humanity is worth protecting" needs some promotion. We are not just some greedy, aggressive, nature-destroying species that the earth would probably be better off without!
