Bridge to Explainable AI
AI System Outplays Human Bridge Champions

DeepMind’s AlphaGo famously dominated Go, a game in which players can see the state of play at all times. A new AI system demonstrated similar mastery of bridge, in which crucial information remains hidden.  

What’s new: NooK, built by Jean-Baptiste Fantun, Véronique Ventos, and colleagues at the French startup NukkAI, recently beat eight world champions at bridge, or rather, at a core aspect of the game.

Rules of the game: Bridge is played by four players grouped into teams of two. Each player is dealt a hand of cards, after which the game is played in two phases:

  • Bidding, in which an auction determines a suit (spades, hearts, diamonds, clubs, or neither), called trump, that’s more valuable than other suits.
  • Play, in which each player shows one card and the team that played the most valuable card wins the trick (a simplified sketch of trick resolution follows this list).
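
The article simplifies here. Under standard bridge rules, a trick is won by the highest trump played or, if no trump was played, by the highest card of the suit that was led. Below is a minimal Python sketch of that trick resolution; the card encoding and function names are ours, purely for illustration.

```python
from dataclasses import dataclass

RANK_ORDER = "23456789TJQKA"  # ace high


@dataclass
class Card:
    rank: str  # one of "23456789TJQKA"
    suit: str  # one of "SHDC" (spades, hearts, diamonds, clubs)


def trick_winner(cards, leader, trump=None):
    """Return the seat index (0-3) of the player who wins the trick.

    `cards` holds the four cards in play order, starting with the card led by
    `leader`. The highest trump wins; if no trump was played, the highest card
    of the suit that was led wins.
    """
    led_suit = cards[0].suit

    def strength(i):
        card = cards[i]
        if trump is not None and card.suit == trump:
            return (2, RANK_ORDER.index(card.rank))  # trumps beat everything else
        if card.suit == led_suit:
            return (1, RANK_ORDER.index(card.rank))  # else highest card of the led suit
        return (0, RANK_ORDER.index(card.rank))      # off-suit discards can't win

    return (leader + max(range(4), key=strength)) % 4


# Example: with hearts as trump, the third player wins the trick by ruffing a spade.
trick = [Card("K", "S"), Card("A", "S"), Card("2", "H"), Card("7", "S")]
print(trick_winner(trick, leader=0))             # -> 1 (ace of spades, no trump suit)
print(trick_winner(trick, leader=0, trump="H"))  # -> 2 (two of hearts ruffs)
```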

This study focused on the play phase, pitting NooK and human champions against previous automated bridge-playing systems, none of which had proven superior to an excellent human player. Each deal had a preassigned bid and trump suit, and the competitors played the same 800 deals, divided into 80 sets of 10. The player with the highest average score in the most sets won.
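
A minimal Python sketch of that scoring scheme (the function and variable names are ours; the article doesn’t say how tied sets were handled, so ties below count for neither side):

```python
from statistics import mean


def set_wins(scores_a, scores_b, set_size=10):
    """Count how many sets each competitor wins.

    `scores_a` and `scores_b` are per-deal scores on the same deals (here,
    800 deals grouped into 80 sets of 10). A competitor wins a set by posting
    the higher average score on that set's deals; ties count for neither side.
    """
    assert len(scores_a) == len(scores_b) and len(scores_a) % set_size == 0
    wins_a = wins_b = 0
    for start in range(0, len(scores_a), set_size):
        avg_a = mean(scores_a[start:start + set_size])
        avg_b = mean(scores_b[start:start + set_size])
        if avg_a > avg_b:
            wins_a += 1
        elif avg_b > avg_a:
            wins_b += 1
    return wins_a, wins_b
```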

How it works: The developers didn’t reveal the mechanisms behind NooK, but we can offer a guess based on press reports and the company’s research papers.

  • Human experts came up with a list of situations to model separately, taking into account variables like the number of cards the player held in each suit, current bid, and number and value of high cards.
  • For each of these situations, the developers generated groups of four hands. They played those hands using a computer solver that knew which cards all players held and assumed they would be played perfectly. Then they trained a vanilla neural network to copy the solver’s decisions without knowing which cards its opponents held, resulting in a separate model for each situation.
  • At inference, NooK used the vanilla neural networks for the first few tricks of a deal, when the search space is too large for Monte Carlo sampling to pick the best card. After that, it used probabilistic logic programming to estimate the probability that each of its own cards would win the current trick and Monte Carlo sampling to estimate how many tricks it could win afterward, then chose which card to play based on those two statistics (a rough sketch of this step follows this list).
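
NukkAI hasn’t released NooK’s code, so the following Python sketch of the card-selection step is speculative: the function names, the weighted combination of the two statistics, and the sampling scheme are all assumptions based on the description above.

```python
def choose_card(legal_cards, win_prob, rollout_tricks, n_samples=200, weight=1.0):
    """Choose a card by combining the two statistics described above.

    `win_prob(card)` is an estimated probability that `card` wins the current
    trick (in NooK, reportedly from probabilistic logic programming), and
    `rollout_tricks(card)` returns the number of later tricks won in one
    sampled continuation of the deal after playing `card`.
    """
    def score(card):
        # Average the Monte Carlo rollouts to estimate future tricks.
        expected_future = sum(rollout_tricks(card) for _ in range(n_samples)) / n_samples
        # Combine the two estimates; a weighted sum is one plausible (assumed) rule.
        return win_prob(card) + weight * expected_future

    # Play the legal card with the highest combined score.
    return max(legal_cards, key=score)
```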

Results: Pitted against the previous systems, NooK scored higher than the human champions in 67 out of 80 sets, or roughly 84 percent of the time.

Why it matters: Neural networks would be more useful in many situations if they were more interpretable; that is, if they could tell us why they classified a cat as a cat, or misclassified a cat as an iguana. This work’s approach offers one way to build more interpretable systems: a neurosymbolic hybrid that combines rules (symbolic AI, also known as good old-fashioned AI) describing various situations with neural networks trained to handle specific cases of each situation.

We’re thinking: In bridge, bidding is a way to hint to your partner (and deceive your opponents) about what you hold, which makes it a vital strategic element. NooK is impressive as far as it goes, but mastering bidding and teamwork lies ahead.
