
Poker Neural Net

After spending a few weeks reading up on artificial neural networks, I decided to pick a project that would cement my understanding of them. I was especially taken with DeepMind's single deep Q-learning algorithm that was able to learn to play almost any Atari game! Their idea mixes the best of the old with the new, which is really cool. The downside of old-school Q-learning is that it pretty much has to visit every state in a game, take all possible actions from that state, and memorize the action with the best result. For anything bigger than Pac-Man, this is a huge problem! What DeepMind did was take Q-learning's huge state table and approximate that table with a trained neural net. This has the added advantage that, given a previously unvisited state, the neural net (NN) can still make a great educated guess about the best action, whereas plain Q-learning would have no clue! There are a ton of other benefits that are well documented elsewhere, but since the Atari solutions work out-of-the-box, I decided to apply this super-generic algorithm to Heads-Up Limit Poker instead.
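
To make the distinction concrete, here's a minimal sketch of a tabular Q-update next to the network that replaces the table. The sizes and STATE_DIM are toy values I made up for illustration, not my actual code:

```python
import numpy as np
import tensorflow as tf

# --- Old-school tabular Q-learning: one entry per (state, action) pair. ---
N_STATES, N_ACTIONS = 1000, 3   # toy sizes; real poker has vastly more states
q_table = np.zeros((N_STATES, N_ACTIONS))

def q_update(s, a, reward, s_next, alpha=0.1, gamma=0.99):
    """Nudge the stored value toward the best observed outcome."""
    target = reward + gamma * q_table[s_next].max()
    q_table[s, a] += alpha * (target - q_table[s, a])

# --- Deep Q-learning: replace the table with a trained function approximator,
# so previously unvisited states still get a sensible estimate. ---
STATE_DIM = 20  # hypothetical size of an encoded poker state
q_net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(STATE_DIM,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(N_ACTIONS),  # one Q-value per action: fold/call/raise
])
q_net.compile(optimizer='adam', loss='mse')
```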


The setup

In 2017, the most cutting-edge NN library seems to be TensorFlow with Keras, which meant dusting off my old Python toolkit. Combined with a ton of open-source Python poker libs, it turned out to be the best tool for the job. Interestingly, NNs sort of turn into a declarative programming language: you give them a description of the problem (maximize the reward function for poker, which pays out the pot) and an input state (known cards and action history). The output of the NN in Q-learning is an action distribution such as (20% Fold, 40% Call, 40% Raise). As usual in declarative programming, the input and environment are all that's needed to magically get a roughly correct result. For NNs, a lot of thought needs to go into minimizing your representation of the state. For example, holding an Ace-Queen is the same as holding a Queen-Ace, so I make sure to sort the cards first, which greatly shrinks the input state. The more condensed your input state, the fewer 'knobs' your net will have, and learning will speed up dramatically.
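
As a rough illustration of what I mean by condensing the state (the card format and encoding here are hypothetical, not one of the poker libs' actual APIs):

```python
RANKS = '23456789TJQKA'

def encode_hole_cards(cards):
    """Canonicalize two hole cards ('Ah', 'Qd', ...) into a tiny vector.

    Sorting by rank first makes (Ace, Queen) encode identically to
    (Queen, Ace), halving the number of distinct inputs the net sees.
    """
    ranked = sorted(cards, key=lambda c: RANKS.index(c[0]), reverse=True)
    ranks = [RANKS.index(c[0]) / 12.0 for c in ranked]  # scale to [0, 1]
    suited = 1.0 if ranked[0][1] == ranked[1][1] else 0.0
    return ranks + [suited]

assert encode_hole_cards(['Qd', 'Ah']) == encode_hole_cards(['Ah', 'Qd'])
```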

Initial Results

With PyCharm, two poker libs, and TensorFlow, I created a hardcoded poker agent that pretty much just plays the odds, and trained my NN against it. Since my setup essentially assumes that every game is the first time the two players have met, playing the odds is a pretty good strategy. Indeed, my naive net (even with some optimizations like Prioritized Experience Replay) was only able to reach an average loss of 1 chip per game! This is mostly due to my laziness/cheapness; more training would have improved it, I'm sure. The NN went from losing an average of 3 chips a game to 1 chip with about 24 hours of training and 8 different NN architectures tried. Just like with declarative programming on NP-hard problems, you need to spend a lot of time tuning your declarative engine; in this case I ended up with 3 fully connected layers feeding into a binary-tree-shaped funnel of neural connections. The best insight I can give here is to ask, 'What does the optimal poker brain look like?' In my (far too brief!) testing, this worked out. I figure there need to be some fully connected layers so the net can learn high-level features like win-rate probabilities, then funnel those through some magical series of reducing functions that end up producing 3 possible actions, hence the shape! I tested that against straight binary trees and straight flat maps; an ensemble of the two worked out better than either pure approach.
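
In Keras, the funnel I'm describing might look something like this. The layer sizes are made up for illustration; I'm not publishing the real architecture:

```python
from tensorflow import keras

STATE_DIM = 20  # hypothetical encoded state size
N_ACTIONS = 3   # fold, call, raise

model = keras.Sequential([
    # Fully connected layers to learn high-level features
    # (win-rate-probability-like signals).
    keras.layers.Dense(256, activation='relu', input_shape=(STATE_DIM,)),
    keras.layers.Dense(256, activation='relu'),
    keras.layers.Dense(256, activation='relu'),
    # The binary-tree-shaped funnel: halve each layer down to the 3 actions.
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(32, activation='relu'),
    keras.layers.Dense(N_ACTIONS),  # Q-values for fold / call / raise
])
model.compile(optimizer='adam', loss='mse')
```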

Improvements

With my curiosity mostly satisfied, I realized there were many possible improvements, like using one NN per round. This would be handy since on the pre-flop the state is much tinier (only 169 strategically distinct hands!). Training a network specifically for pre-flops would take waaaayyyy less time and be way more accurate than my general naive net. Indeed, after more poking around, this is exactly what DeepStack, a state-of-the-art poker bot, does! It has networks for 3 rounds and mixes them with a game-theoretic approach, and together they approximate a Nash equilibrium strategy! This is just too cool, so I've started to write the game-theoretic piece on top of what I've written so far. I'll post my results when it's done, but don't bother asking for the source! 😛
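
The 169 number falls straight out of the combinatorics: 13 pocket pairs, 78 suited combos, and 78 offsuit combos:

```python
from itertools import combinations

RANKS = 'AKQJT98765432'

pairs   = [r + r       for r in RANKS]                      # 13 pocket pairs
suited  = [a + b + 's' for a, b in combinations(RANKS, 2)]  # 78 suited hands
offsuit = [a + b + 'o' for a, b in combinations(RANKS, 2)]  # 78 offsuit hands

print(len(pairs) + len(suited) + len(offsuit))  # 169
```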

NN Takeaway

Wow, I'm an avid tool collector, but I've never swung a hammer like this. NNs are UNIVERSAL FUNCTION APPROXIMATORS. That's huge. This is easily the most flexible piece of machine learning ever. The only requirements are:

  1. You have to be willing to accept 90-something percent accuracy
  2. You need lots and lots of data, or a way to generate data (like poker games 🙂 )
  3. You need to know what you're doing. Always tune your declarative engines!
  4. You need lots of compute; the more, the better.

Society?

If I were a betting man (I only build betting men), I'd say this tech won't be great for society or culture. There are only a handful of companies with the levels of data needed for sophisticated NNs, and they know it. Google likes to throw TensorFlow around while stating that they're trying to open up deep learning to everyone. In the end this makes good business sense: have the open-source community find the best methodology while Google remains one of the few with the data to use it effectively ;-P. NNs let you automate many tasks beyond human levels, and the companies that use them will have an unbeatable automation advantage. This is already happening, in case you hadn't noticed, with Amazon taking over dozens of unrelated markets because of their sheer efficiency and access to massive hardware, which NNs and machine learning generally require. Some of the people behind DeepStack's tech are trying to reapply their approach to contract negotiations by viewing them as a poker-like imperfect-information game. Imagine Amazon automating its interactions with the government to optimize against being broken up as a monopoly, the only likely way its acceleration would be curbed. Maybe worse, NNs are a great way to direct users toward whatever you'd like with ever greater accuracy. This means that those with well-trained NNs will by and large control media consumption even more than they do now.

On the other hand, we're going to need all the smarts we can get for things like weather prediction. At present, the private sector seems to be almost the only one putting this open-source tech to use! Let's hope those of us with this power aim the wizardry toward the mountain of problems quickly heading our way. I will, right after I'm tired of taking your money on PokerStars.

I'm not worried about the singularity any time soon, but I do fear the few who could come to own it. I'd prefer hitting pause on all machine learning research, but since that's utterly impossible, the next best thing is to make sure we're all armed with this knowledge!

Expert-Level Artificial Intelligence in Heads-Up No-Limit Poker

DeepStack bridges the gap between AI techniques for games of perfect information—like checkers, chess, and Go—and those for imperfect-information games like poker. It reasons while it plays, using 'intuition' honed through deep learning to reassess its strategy with each decision.

With a study completed in December 2016 and published in Science in March 2017, DeepStack became the first AI capable of beating professional poker players at heads-up no-limit Texas hold'em poker.

DeepStack computes a strategy based on the current state of the game for only the remainder of the hand, not maintaining one for the full game, which leads to lower overall exploitability.

DeepStack avoids reasoning about the full remaining game by substituting computation beyond a certain depth with a fast approximate estimate. Automatically trained with deep learning, DeepStack's 'intuition' gives a gut feeling of the value of holding any cards in any situation.

DeepStack considers a reduced number of actions, allowing it to play at conventional human speeds. The system re-solves games in under five seconds using a simple gaming laptop with an Nvidia GPU.

The first computer program to outplay human professionals at heads-up no-limit Hold'em poker

In a study completed December 2016 and involving 44,000 hands of poker, DeepStack defeated 11 professional poker players with only one outside the margin of statistical significance. Over all games played, DeepStack won 49 big blinds/100 (always folding would only lose 75 bb/100), over four standard deviations from zero, making it the first computer program to beat professional poker players in heads-up no-limit Texas hold'em poker.

Games are serious business

Don't let the name fool you: 'games' of imperfect information provide a general mathematical model that describes how decision-makers interact. AI research has a long history of using parlour games to study these models, but attention has focused primarily on perfect-information games like checkers, chess, or Go. Poker is the quintessential game of imperfect information, where you and your opponent each hold information the other cannot see (your private cards).

Until now, competitive AI approaches in imperfect information games have typically reasoned about the entire game, producing a complete strategy prior to play. However, to make this approach feasible in heads-up no-limit Texas hold'em—a game with vastly more unique situations than there are atoms in the universe—a simplified abstraction of the game is often needed.

A fundamentally different approach

DeepStack is the first theoretically sound application of heuristic search methods—which have been famously successful in games like checkers, chess, and Go—to imperfect information games.

At the heart of DeepStack is continual re-solving, a sound local strategy computation that only considers situations as they arise during play. This lets DeepStack avoid computing a complete strategy in advance, skirting the need for explicit abstraction.

During re-solving, DeepStack doesn't need to reason about the entire remainder of the game because it substitutes computation beyond a certain depth with a fast approximate estimate, DeepStack's 'intuition' – a gut feeling of the value of holding any possible private cards in any possible poker situation.
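
The depth-substitution idea can be caricatured with the sketch below. Note this is a toy perfect-information-style search with a hypothetical `state` API, not DeepStack's actual continual re-solving, which reasons over ranges of private cards:

```python
def lookahead_value(state, depth, value_net):
    """Toy depth-limited search: beyond `depth`, a learned estimate stands
    in for further exact computation. The `state` methods are hypothetical."""
    if state.is_terminal():
        return state.payoff()
    if depth == 0:
        return value_net.predict(state.encode())  # the trained 'intuition'
    return max(lookahead_value(state.apply(a), depth - 1, value_net)
               for a in state.legal_actions())
```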

Finally, DeepStack's intuition, much like human intuition, needs to be trained. We train it with deep learning using examples generated from random poker situations.

DeepStack is theoretically sound, produces strategies substantially more difficult to exploit than abstraction-based techniques and defeats professional poker players at heads-up no-limit poker with statistical significance.

Team Members

Michael Bowling, Dustin Morrill, Nolan Bard, Trevor Davis, Kevin Waugh, Michael Johanson, Viliam Lisý, Martin Schmid, Matej Moravčík, Neil Burch

Low-variance Evaluation

The performance of DeepStack and its opponents was evaluated using AIVAT, a provably unbiased low-variance technique based on carefully constructed control variates. Thanks to this technique, which gives an unbiased performance estimate with 85% reduction in standard deviation, we can show statistical significance in matches with as few as 3,000 games.
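
The basic variance-reduction idea behind control variates can be illustrated with simulated data (AIVAT itself is considerably more involved; the numbers below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3000

# Simulated per-hand winnings: a small skill edge buried in card luck.
luck = rng.normal(scale=1.0, size=n)  # observable chance component, known mean 0
x = 0.05 + luck + rng.normal(scale=0.2, size=n)

# Subtract the best linear fit of the known-mean-zero luck term;
# the estimate stays unbiased while most of the variance cancels.
beta = np.cov(x, luck, ddof=1)[0, 1] / np.var(luck, ddof=1)
x_cv = x - beta * luck

print(x.mean(), x.std(ddof=1) / np.sqrt(n))        # naive estimate, large SE
print(x_cv.mean(), x_cv.std(ddof=1) / np.sqrt(n))  # same mean, much smaller SE
```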

Abstraction-based Approaches

Despite using ideas from abstraction, DeepStack is fundamentally different from abstraction-based approaches, which compute and store a strategy prior to play. While DeepStack restricts the number of actions in its lookahead trees, it has no need for explicit abstraction as each re-solve starts from the actual public state, meaning DeepStack always perfectly understands the current situation.

Professional Matches

We evaluated DeepStack by playing it against a pool of professional poker players recruited by the International Federation of Poker. 44,852 games were played by 33 players from 17 countries. Eleven players completed the requested 3,000 games, with DeepStack beating all but one by a statistically significant margin. Over all games played, DeepStack outperformed the players by over four standard deviations from zero.


Heuristic Search

At a conceptual level, DeepStack's continual re-solving, 'intuitive' local search and sparse lookahead trees describe heuristic search, which is responsible for many AI successes in perfect information games. Until DeepStack, no theoretically sound application of heuristic search was known in imperfect information games.
