The Poker Bot has long been an internet rumor, a feared monster under the bed of online poker. A bot (short for "robot") is a computer program designed to play poker, one that would always make the "correct" move, never believing for a minute that poker is sometimes about "making the wrong move at the right time." Players have typically fallen into two camps: the conspiracy theorists who see bots everywhere in internet poker, and the scoffers who insist a computer could never be taught all the intricate moves necessary to succeed against human opponents.
But what if the opponent wasn't human? What if the bot played another bot, and then another bot, in a series of round-robin, heads-up matches, to determine the virtual bracelet winner in a "World Series of PokerBots?"
That's just what took place this summer in Boston, right as the World Series of Poker was kicking off in Las Vegas. Under the more academically friendly auspices of the American Association for Artificial Intelligence (AAAI), a team from the University of Alberta defeated all comers in two different types of Limit Texas Hold 'Em tournaments in which none of the players had a pulse to register, or a single physical tell. They'd never read a book on poker, and not one of them had ever watched an episode of the World Poker Tour.
Five poker bot programs were submitted from around the world, with names like Bluffbot, Hyperborean, and Teddy. Hyperborean was created by a group of students and professors at the University of Alberta, Monash by a group from Monash University in Victoria, Australia, and GS2 by a student-professor team from Carnegie Mellon University in Pittsburgh. Teddy and Bluffbot were created by individuals from Denmark and Irvine, CA, respectively.
Bluffbot was designed primarily for the competition as a plug-in for the poker training software Poker Academy. Bluffbot designer Teppo Salonen says on the Bluffbot website that, because of the computer's ability to play practically mistake-free poker, Bluffbot is "a much better player than I am." Poker Academy is a training tool created by BioTools, an Edmonton-based company with close links to the University of Alberta team. BioTools licensed a version of its poker A.I. for the Daniel Negreanu video game Stacked, released earlier this year, but its best A.I. is available only in Poker Academy.
The competition consisted of two formats. In the "bankroll tournament," the bots played 240,000 hands, and the winner was the bot with the highest total amount of (fake) money won. The "series tournament" was a round-robin event in which each pair of bots played a series of 12,000 hands, the winner of each series being the bot that won more (fake) money; the overall winner was the bot that won the most series.
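The difference between the two formats can be made concrete with a small sketch. The bot names and dollar amounts below are purely illustrative, not the actual tournament results; the point is that the same head-to-head numbers can crown different champions depending on whether you count total money or series won.

```python
# Hypothetical head-to-head results: results[a][b] is bot a's total
# (fake) money won across its 12,000-hand series against bot b.
# Mirrored entries sum to zero, since one bot's win is the other's loss.
results = {
    "A": {"B": 1000, "C": -100},
    "B": {"A": -1000, "C": -50},
    "C": {"A": 100, "B": 50},
}

# Bankroll format: rank bots by total money won across all opponents.
bankroll = {bot: sum(vs.values()) for bot, vs in results.items()}
bankroll_winner = max(bankroll, key=bankroll.get)

# Series format: rank bots by how many head-to-head series they won.
series_wins = {
    bot: sum(1 for amount in vs.values() if amount > 0)
    for bot, vs in results.items()
}
series_winner = max(series_wins, key=series_wins.get)

print("Bankroll winner:", bankroll_winner)  # A's one huge win carries it
print("Series winner:", series_winner)      # C wins both of its series
```

In this toy example bot A tops the bankroll standings on the strength of one lopsided series, while bot C, which narrowly beat everyone, takes the series title.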
Dr. Michael Littman states: "Slightly more interesting, perhaps, is the idea that the bots played the same series of cards, reversing roles. This trick allowed us to get more reliable statistical results. And, since, unlike people, we could perform complete 'mind wipes' between each series, there was no danger that a bot would recognize that it had been in that situation before."
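The duplicate-deal trick Littman describes can be sketched with a toy model. Everything below is an assumption for illustration — the `play_hand` function, the payoff model, and the numbers are invented, not the AAAI tournament code — but it shows why replaying the same deals with seats reversed makes the luck of the cards cancel out of the final tally.

```python
import random

def play_hand(card_strength, skill_edge, rng):
    # Toy payoff model: profit per hand is mostly the card luck dealt to
    # this seat, plus a small skill edge and some play-to-play noise.
    return card_strength + skill_edge + rng.gauss(0, 0.1)

def duplicate_match(n_hands, skill_edge, seed):
    rng = random.Random(seed)
    deals = [rng.uniform(-1, 1) for _ in range(n_hands)]  # fixed card luck
    noise = random.Random(seed + 1)
    # Play every deal once from each seat: the second pass sees the same
    # cards with the luck reversed (-d), so summing both passes cancels
    # the luck term exactly and leaves a clean estimate of the edge.
    seat1 = [play_hand(d, skill_edge, noise) for d in deals]
    seat2 = [play_hand(-d, skill_edge, noise) for d in deals]
    return sum(seat1 + seat2) / (2 * n_hands)

# The average winnings per hand land very close to the true skill edge
# (0.05 here), far sooner than a single non-duplicate run would manage.
print(duplicate_match(1000, skill_edge=0.05, seed=42))
```

In a single run of the same length, the card-luck term (which swings between -1 and 1 per hand) would dwarf a 0.05 edge; with mirrored deals, only the small residual noise remains.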
But how would these virtual players fare against human competition? According to Robert Holte of the University of Alberta, "Most bots are not designed to play poker 'like a human.' The strategies used by Hyperborean and GS2 were derived by solving a set of mathematical formulae encoding the game of poker. It's true these programs bluff and slow-play, as humans do, but that's because the mathematics of the game requires bluffing etc. to play well. I am not all that familiar with the inner workings of Teddy, Bluffbot, and Monash, but I suspect that none is programmed to mimic human play."
Holte adds, "...none of these bots plays flawlessly, although Hyperborean, we know from matches against top humans in the past, is very challenging even for the best humans."
He continues, "Although it is literally true that the computer's calculations are mistake-free every time, this has nothing to do with the quality of its play: it could be perfectly playing a lousy strategy."
Holte offers another twist on the notion that bots have no tells: "...you might be amused to know that one recent version of our bot did have a physical tell — it played slightly more quickly if it had a good hand, and this was picked up by the human expert on our team who tests our bots. We corrected the problem before the tournament, but it shows that physical tells in bots are not impossible."
Littman, of Rutgers University, was brought in by the tournament sponsors to serve as an unbiased observer and to declare the winner of each format. The tournament was deemed a success, with the 2007 AAAI program chair expressing interest in repeating the experiment next summer at the conference to be held July 22-26 in Vancouver. While no bracelets or huge prize pools were awarded this summer, the future seems bright for these young poker minds, and for their programmers as well.