Scenes from the video game Source of Madness

How do you control the creatures in a video game that generates a host of unique monsters for every match? With machine learning, naturally.

What’s new: The otherworldly creatures in Source of Madness learn how to target players through reinforcement learning, the developers told The Batch.

How it works: Players battle an infestation of fiends in a procedurally generated, side-scrolling wasteland.

  • At the start of each level, the game uses non-neural computation to slap together a menagerie of unique monsters, each an assemblage of spidery legs, fireball-spitting tentacles, and bulbous carapaces. The monsters become more powerful as the game progresses.
  • The endless variety of monsters makes traditional, hand-coded control techniques impractical. Instead, each monster is driven by a feed-forward network trained via reinforcement learning in a sandbox simulation of the game; the network receives a reward for every step the monster takes toward a player (see the sketch after this list).
  • The reinforcement learning environment comes from Unity, maker of a widely used 3D game engine and development tools.
  • The game’s developer, Carry Castle, is still fine-tuning it. The release date hasn’t been set, but you can request a test version here.
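
The reward scheme described above amounts to distance-based reward shaping. The toy environment below is a minimal sketch of that idea, not Carry Castle’s actual code: the class name SandboxEnv, the 2D arena, and the reward scale are all illustrative assumptions.

```python
import numpy as np


class SandboxEnv:
    """Toy 2D sandbox: a monster learns to close in on a player.

    Illustrative only; the real game trains feed-forward policies in a
    far richer, procedurally generated Unity environment.
    """

    def __init__(self, arena_size=50.0):
        self.arena_size = arena_size
        self.reset()

    def reset(self):
        # Random starting positions for the monster and the player.
        self.monster = np.random.uniform(0.0, self.arena_size, size=2)
        self.player = np.random.uniform(0.0, self.arena_size, size=2)
        self.prev_dist = np.linalg.norm(self.player - self.monster)
        return self._observation()

    def _observation(self):
        # The policy sees the player's position relative to the monster.
        return self.player - self.monster

    def step(self, action):
        # `action` is a 2D movement vector proposed by the policy network.
        self.monster = self.monster + np.clip(np.asarray(action, dtype=float), -1.0, 1.0)
        dist = np.linalg.norm(self.player - self.monster)

        # Reward shaping: positive reward for closing the distance to the
        # player, negative for moving away; this is "a reward for every
        # step toward a player" in its simplest form.
        reward = self.prev_dist - dist
        self.prev_dist = dist

        done = dist < 1.0  # the monster has reached the player
        return self._observation(), reward, done
```

In practice, a feed-forward policy network would be trained over many such episodes; Unity’s ML-Agents toolkit provides machinery for running this kind of training loop against a game built in its engine.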

Behind the news: Most commercial titles use rules-based systems to control non-player characters. But some games have had success experimenting with neural networks.

  • Supreme Commander 2, a war game similar to StarCraft, uses neural networks to decide whether the computer’s land, airborne, and naval units will fight or flee.
  • The racing series Forza trains networks to imitate a human player’s driving style, such as how they take corners or how quickly they brake. These agents then race against other human players, earning points for the player they mimic.

Why it matters: Machine learning is infiltrating games as developers seek to build virtual worlds as variable and surprising as the real one.

We’re thinking: To all monsters, we say: keep learning!
