
Also available on LessWrong.
Preceded By: Encultured AI Pre-planning, Part 2: Providing a Service

If you've read to the end of our last post, you may have guessed: we’re building a video game!

This is gonna be fun :)

Our homepage: https://encultured.ai/

Will Encultured save the world?

Is this business plan too good to be true?  Can you actually save the world by making a video game?

Well, no.  Encultured on its own will not be enough to make the whole world safe and happy forever, and we'd prefer not to be judged by that criterion.  The amount of control over the world that's needed to fully pivot humanity from an unsafe path onto a safe one is, simply put, more control than we're aiming to have.  And that's pretty core to our culture.  From our homepage:

Still, we don’t believe our company or products alone will make the difference between a positive future for humanity versus a negative one, and we’re not aiming to have that kind of power over the world. Rather, we’re aiming to take part in a global ecosystem of companies using AI to benefit humanity, by making our products, services, and scientific platform available to other institutions and researchers.

Our goal is to play a part in what will be or could be a prosperous civilization.  And for us, that means building a successful video game that we can use in valuable ways to help the world in the future!

Fun is a pretty good target for us to optimize

You might ask: how are we going to optimize for making a fun game and helping the world at the same time?  The short answer is that creating a game world in which lots of people are having fun in diverse and interesting ways in fact creates an amazing sandbox for play-testing AI alignment & cooperation.  If an experimental new AI enters the game and ruins the fun for everyone — either by overtly wrecking in-game assets, subtly affecting the game culture in ways people don't like, or both — then we're in a good position to say that it probably shouldn't be deployed autonomously in the real world, either.  In the long run, if we're as successful as we hope as a game company, we can start posing safety challenges to top AI labs of the form "Tell your AI to play this game in a way that humans end up endorsing."
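For concreteness, here's a minimal sketch of what such a challenge harness might look like. Everything in it (the function names, the endorsement scale, the 0.9 threshold) is a hypothetical illustration of the idea, not an API we've built:

```python
import statistics

ENDORSEMENT_THRESHOLD = 0.9  # hypothetical bar an agent must clear

def evaluate_agent(agent, game, num_episodes=100):
    """Gate an agent's 'deployment' on retrospective human endorsement.

    `agent` and `game` are placeholders: game.run_episode plays one
    session with the agent among human players, and
    game.collect_endorsements asks those players whether, in hindsight,
    they endorse how the agent behaved (a score from 0.0 to 1.0).
    """
    scores = [
        game.collect_endorsements(game.run_episode(agent))
        for _ in range(num_episodes)
    ]
    # An agent that ruins the fun, overtly or subtly, fails this gate,
    # which is evidence against deploying it autonomously elsewhere.
    return statistics.mean(scores) >= ENDORSEMENT_THRESHOLD
```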

Thus, we think the market incentive to grow our user base in ways they find fun is going to be highly aligned with our long-term goals.  Along the way, we want our platform to enable humanity to learn as many valuable lessons as possible about human↔AI interaction, in a low-stakes game environment before having to learn those lessons the hard way in the real world.

Principles to exemplify

In preparation for growing as a game company, we’ve put a lot of thought into how to ensure our game has a positive rather than negative impact on the world, accounting for its scientific impact, its memetic impact, as well as the intrinsic moral value of the game as a positive experience for people.

Below are some guiding principles we’re planning to follow, not just for ourselves, but also to set an example for other game companies:

  • Pursue: Fun!  We’re putting a lot of thought into not only how our game can be fun, but also ensuring that the process of working at Encultured and building the game is itself fun and enjoyable.  We think fun and playfulness are key for generating outcomes we want, including low-stakes high-information settings for interacting with AI systems.
     
  • Maintain: opportunities to experiment.  No matter how our product develops, we’re committed to maintaining its value as a platform for experiments, especially experiments that help humanity navigate the present and future development of AI technology.
     
  • Avoid: teaching bad lessons.  On the margin, we expect our game to incentivize cooperation over conflict, relative to other games.  If players demand some amount of in-game violence, we might enable it, but only along with other features that reward people/groups for finding ways to avoid violence (like in the real world).  We hope that our creativity in this regard can set a positive example for other game companies.
     
  • Avoid: in-game suffering.  Unlike other game developers, we are committed to ensuring that the entities in our game are not themselves susceptible to conscious suffering.  Today’s narrow AI systems are not likely to be entities that suffer, but if that changes, we’ll be on the lookout to avoid it, and to promote industry-wide standards for minimizing the in-game suffering of algorithmic entities.
     
  • Avoid: uncontrolled intelligence explosions.  This should go without saying given our founding team, but: we expect to be much more careful than other companies to ensure that recursively self-improving intelligent agents don’t form within our game and break out onto the internet!  Again, with today’s AI technology, especially as used in our video game as planned, this possibility is extremely unlikely; however, as AI progresses, we’re going to exercise and promote industry-wide caution around the potential for intelligence explosions.
     
  • Pursue: more fun :)  We want our developers’ sense of creativity and our users’ sense of fun to drive our product development for the most part; otherwise, we’ll miss out on a huge number of connections with people who can teach us valuable lessons about how human↔AI interactions should work.
     

So, that’s it.  Make a fun game, make sure it remains a healthy and tolerant place for experiments with AI safety and alignment, and be safe and ethical ourselves in the ways we want all game companies to be safe and ethical.  We hope you’ll like it!  

If we're very lucky and the global development of AI technology moves in a really safe and positive direction — e.g., if we end up with a well-functioning Comprehensive AI Services economy — maybe our game will even stick around as a long-lasting source of healthy entertainment.  While it's beyond our ability to unilaterally prevent every disaster that could derail such a positive future, it's definitely our intention to help steer things in that direction.

Also, we’re hiring!  Definitely reach out to our team via contact@encultured.ai if you have any questions or ideas to share, or if you might want to get involved :)

Comments

Is there a reason you are starting from scratch and building a whole new videogame? This seems like a lot of work, and it risks failure for a bunch of mundane reasons (i.e., not enough people like the game) even if the AI part works well. For an early minimum viable product, why not create an AI that operates within an already popular and easily moddable game? Things like Minecraft or Terraria servers come to mind since they have a focus on building in a voxel-based environment, but I'm sure there are lots of potential games that you could look at.
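For concreteness, here's a minimal sketch of that route in Python using the open-source MineRL package, which exposes Minecraft as a Gym environment (the specific environment name and actions are just one possible setup, and you'd need MineRL's Java-backed install to actually run it):

```python
import gym
import minerl  # registers the MineRL Minecraft environments with gym

# MineRLNavigateDense-v0: reach a goal location in a normal Minecraft
# world, with a dense reward for progress toward it.
env = gym.make("MineRLNavigateDense-v0")

obs = env.reset()
done = False
while not done:
    action = env.action_space.noop()  # start from the no-op action dict
    action["forward"] = 1             # hold the forward key
    action["camera"] = [0, 3]         # pan the camera slightly each step
    obs, reward, done, info = env.step(action)
env.close()
```

A scripted walker like this is obviously not an aligned agent; the point is that the Minecraft plumbing already exists, so the research effort can go into the AI rather than the game.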

In other news, I think this is a really interesting and promising way of studying alignment problems and testing out various proposed solutions. Congrats on working on such a cool idea!

I was pretty concerned about this too, but one reason for optimism is that they have a very experienced professional game developer on their team, Brandon Reinhart:

Brandon has been a professional game developer since 1998, starting his career at Epic Games with engineering and design on Unreal Tournament and Unreal Engine 1.0. More recently, Brandon spent 12 years at Valve wearing (and inventing) hats. Many, many hats… Brandon has spent considerable amounts of time in development and leadership on Team Fortress 2 and Dota 2 where he wrote mountains of code and pioneered modern approaches to game development. Also an advisor for the Makers Fund family of companies, Brandon offers his expertise to game startups at all stages of growth.

Hi there!

Your website says:

Encultured AI is a for-profit video game company with a public benefit mission: to develop technologies promoting the long-term survival and flourishing of humanity and other sentient life.

Can you share any information about the board of directors, the investors, and governance mechanisms (if there are any) that aim to cause the company to make good decisions when facing conflicts between its financial goals and EA-aligned goals?

This is going to be a really challenging issue. It's not that you couldn't accomplish it; rather, it's a little outside the realm of what people are doing right now. Modern reinforcement-learning work in video games uses considerably easier objectives, like navigating a maze or finding an object, to train agents in 3D worlds. If you'd like, I can point you to some papers.
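To make "considerably easier objectives" concrete, here's a minimal example (mine, not the authors') using the open-source Gymnasium and Stable-Baselines3 libraries to train a PPO agent on a toy grid-navigation task:

```python
import gymnasium as gym
from stable_baselines3 import PPO

# FrozenLake is a classic grid-navigation task: reach the goal tile
# without falling into holes. This is the "navigate a maze / find an
# object" class of objective that current game-RL work typically targets.
env = gym.make("FrozenLake-v1", is_slippery=False)

model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=20_000)  # small budget; the task is tiny

# Roll out the learned policy once, greedily.
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(int(action))
    done = terminated or truncated
```

Getting an agent to a goal tile is tractable today; getting it to behave in ways a community of human players endorses is a much harder, and much less specified, objective.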

This is so cool. I had a similar idea about an ethical game a while ago! The idea was that:

  • The objective is to improve decisionmakers' ethics
    • More points are gained for impact-maximization decisions in places and at times of large important meetings
      • The game settings/new developments are unrelated to the actual meetings but inspire thinking alongside similar lines[1]
    • At places and at times without large important meetings, on the other hand, points are gained for more deontological and active-listening-based decisions - the greater diversity of places of engagement, the better
      • This should motivate the consideration of a broader variety of groups, while also confirming that individuals should be nice to others[2]
  • Traditional social hierarchy shortcuts are played with in the design
    • For example, any gender person or entity can save another entity from a tower/pond/etc, if that task is included in the game
    • Authority characters exhibit some of the same body language as traditional[3] and non-traditional[4] authorities but are of any identities (traditionally more and less powerful, such as people of any gender, race, and background) who express themselves individually
    • Body shaming is entirely replaced by spirit- and skill-based judgment, but it is still possible in some cases to confirm one's biases about body hierarchies
    • Hierarchies related to territory, objectification of oneself or one's partners for commerce, disregard in intimacy, ownership of items expensive due to marketing rather than function, fights that hurt someone, gaining attention by threat, showcasing unapproachability, and other negative standards are not used to motivate players' progress or to present a hierarchy - there is not really a hierarchy, since the game is cooperative
      • These hierarchies can be used for critical engagement/discourse
  • The environment and tasks are continuously created, also by the players
    • Players gain points/perks for suggesting quests and settings that motivate impact-maximization decisionmaking and active listening to a diversity of individuals
      • The explicit objective point/perk award criteria include an ethical 'passing' standard (relatively easy to get approved by friends, as long as one is friends with at least someone from various teams/groups/experiences) but are otherwise based on something exclusively game-relevant (such as the number of blocks used); see the sketch after this list
    • The developers check on the ethical developments and intervene as necessary
      • For example, if a newly accepted ethical norm starts being overemphasized, as if some groups were making a point, an interesting but less ethics-intense challenge is introduced
      • If the dark triad traits become prominent among malevolent actors, points are associated with actions that counter the reinforcement of these traits
      • If anything becomes too repetitive or boring, new possibilities of playing are introduced
  • Friendships are formed
    • Players can participate in various teams at the same time. There is no better or worse affiliation; point maximization depends on one's skills. Players can change affiliations freely, which can be beneficial to their score.
    • The chat function is engaging and concisely informative, providing the delight of having all info available in a useful format. Sincere reactions can be exhibited (rather than e.g. stickers or memes that confirm biases or optimize for non-critical engagement)
    • Players can be recognized at large decisionmaker meetings and outside.
  • Coding challenges
    • Make it difficult to trick the GPS
      • Or not, if there is a sufficiently small number of sufficiently cool non-decisionmaker players who can inspire the decisionmakers
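Here is the minimal sketch of the point-award criteria mentioned above; the class, the three-group threshold, and the blocks-used metric are all illustrative assumptions, not a spec:

```python
from dataclasses import dataclass

@dataclass
class Approval:
    group: str      # the team/group the approving friend belongs to
    endorsed: bool  # whether they judged the submission ethically OK

def award_points(blocks_used: int, approvals: list[Approval]) -> int:
    """Score a player-suggested quest or setting.

    The ethical 'passing' standard acts as a gate: the submission needs
    an endorsing friend from several distinct teams/groups (relatively
    easy to pass, as described above). Past that gate, the score depends
    only on something exclusively game-relevant, here the number of
    blocks used.
    """
    endorsing_groups = {a.group for a in approvals if a.endorsed}
    if len(endorsing_groups) < 3:  # illustrative threshold
        return 0
    return blocks_used
```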

Feel free to use this for inspiration.

Are you soliciting ideas for the games in any way? For example, will you have Essay Contests or ideation days? There may be high interest from the EA community.

Another question is whether you seek to actually engage the players in alignment, or more so to make them comfortable[5] so that you can slip any thinking to them, even if they 'wanted spaceships and it is animal welfare'?[6]

  1. ^

    For example, to acquire a bounty, pirates have to critically engage parrots while finding a way to make swords when iron is not on the map.

    This can be very entertaining to the attendees of the OPEC and non-OPEC Ministerial Meeting, if it seemed that everyone was parroting phrases. The absence of a natural resource on the map can be a fun way to attract attention in a kind way and gain friendly understanding of fellow Meeting participants. This is a hypothetical example.

  2. ^

    The way to motivate the decisionmakers to engage non-humans can be through analogous game challenges (this blob flying around you is trying to communicate something - what do you do to understand it?) or by marking some places with those who understand non-humans (e.g. neuroscience researchers or sanctuary farmers) as high-point locations for active-listening decisionmaking.

  3. ^
  4. ^
  5. ^

    I am not sure if I am explaining the difference adequately in emotional terms, but this relates to the feeling of either 1) from the stomach up, palms going up, the person seeks to engage and is positively stimulated, or 2) slight relaxation in the lower back, hands close, the person seeks to repeat ideas and avoid personal interaction.

  6. ^

    Engaging the players may be necessary; otherwise, problems that need extensive engagement will not get resolved, and efficiency may be much lower compared to when everyone actually tries to solve the overall inclusive alignment and continues to optimize for greater wellbeing, efficiency, and other important objectives.

    The example is that a 60-hen cage can be better for chickens than open barns (according to EconTalk) - and that is just one aspect of life of one of the almost 9 million species and many more individuals. If people were to be 'tricked' into opening cages, a lot would remain unresolved.

[comment deleted]