Commentary: What I learned playing Liar’s Poker against AI
The most celebrated early successes of artificial intelligence were computers beating human champions in games such as chess and Go. Today we are all playing games against AI. The price you are offered on an item at Amazon, the chance of your home office deduction being accepted by the IRS, whether you get called for jury duty, what medical treatments you get — these and many other things are outcomes of contests against AI.
A new paper, titled “Outbidding and Outbluffing Elite Humans: Mastering Liar’s Poker via Self-Play and Reinforcement Learning,” applies a cutting-edge AI technique to the game of Liar’s Poker, testing it against some of the best human players who are also successful financial traders. (Disclosure: I know the authors and am featured in the paper.)
Liar’s Poker is the high-stakes gambling game most closely associated with the wild trading rooms of the 1980s and early ’90s, immortalized by Michael Lewis’ best-selling book of the same name. It is often thought to be the game that most closely tests the skills required to make money trading in financial markets. Liar’s Poker may also be a good model for our everyday contests with AI.
The game is played using the serial numbers on dollar bills. Each player has a bill whose serial number contains eight digits from 0 to 9. Players make bids, for example “Seven 5s” — meaning there are at least seven 5s among the serial numbers on all players’ bills. Each bid must be higher than the previous bid — either a higher count (“Eight 3s,” say) or the same count with a higher digit (“Seven 9s”). Instead of bidding, a player can challenge the previous bid.
The round ends when a bid is challenged by all the other players. If the bid is correct, each challenger pays a stake to the bidder; if not, the bidder pays the same stake to each challenger. A typical stake might be $100 a hand, although much higher stakes were sometimes used, including the famous $10 million hand that was refused in the incident that gives Lewis’ book its title. There were many special rules and variants that came and went among different trading rooms.
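For readers who want the mechanics spelled out, the short Python sketch below encodes the bidding and settlement rules just described. It is only an illustration: the function names, the flat $100 stake and the sample serial numbers are assumptions of mine, not code or data from the paper.

```python
from collections import Counter

def bid_is_higher(new_bid, old_bid):
    # A bid is a (count, digit) pair. It beats the previous bid if it
    # claims a higher count, or the same count of a higher digit.
    new_count, new_digit = new_bid
    old_count, old_digit = old_bid
    return new_count > old_count or (new_count == old_count and new_digit > old_digit)

def settle_round(final_bid, hands, bidder, stake=100):
    # hands maps each player to the 8-digit serial number on their bill.
    # The final bid has been challenged by every other player; pay out the stakes.
    count, digit = final_bid
    total = sum(Counter(serial)[str(digit)] for serial in hands.values())
    challengers = [p for p in hands if p != bidder]
    if total >= count:
        # The bid is good: every challenger pays the bidder.
        payoffs = {p: -stake for p in challengers}
        payoffs[bidder] = stake * len(challengers)
    else:
        # The bid fails: the bidder pays every challenger.
        payoffs = {p: stake for p in challengers}
        payoffs[bidder] = -stake * len(challengers)
    return payoffs

# "Eight 3s" tops "Seven 5s" (higher count); "Seven 9s" also tops it (same count, higher digit).
print(bid_is_higher((8, 3), (7, 5)))   # True
print(bid_is_higher((7, 9), (7, 5)))   # True

# Three players; "A" bid "Seven 5s" and was challenged by "B" and "C".
hands = {"A": "55501234", "B": "98765432", "C": "11223355"}
print(settle_round((7, 5), hands, bidder="A"))   # only six 5s in total, so A pays both challengers
```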
The parallel to financial trading is clear: Each market participant has some private information, and traders are betting on the aggregation of all that information. Traders observe each other’s bids until someone makes a bid no other trader will top, at which point that trader executes the trade. They win if the aggregation of all traders’ knowledge justifies the price. They lose if it does not.
There are also parallels to everyday contests with AI. You know what you want on Amazon and what you’re willing to pay, while Amazon knows what’s available and what the wholesale cost is. Each side observes the other’s actions to guess what it knows. You win if you get the best item for the minimum price available. Amazon wins if it collects the maximum amount you were willing to pay for the item that is cheapest to supply.
The paper asked whether AI could beat the best human Liar’s Poker players. If it could, there’s little hope for the rest of us, especially because in most of our contests with AI, the computer knows the rules and we don’t. In fact, we often don’t even realize we’re playing a game. We’ll just have to hope that AI knows what’s best for us.
The AI, named Solly, played 120 matches in the multiplayer setting against humans. The results offer no statistically significant support for the hypothesis that Solly is worse than elite humans. It’s not clear that Solly is better — that would take more data — but it’s at least close enough to the best humans that it should dominate against amateur players. Also, AIs improve rapidly, while the elite human players are likely close to the best possible human performance.
I can offer one possible ray of sunshine. As one of the human subjects selected to play Solly, I found that it played quite differently from the top humans. For example, good human players focus on forcing hard decisions on others. Solly was more passive and often took near-meaningless temporizing actions. Solly was unbeatable with a strong hand, but struggled with moderate or weak hands. Elite humans were more balanced. Solly liked to nudge humans up to bids it could challenge, making it vulnerable to being challenged itself.
I hope — I can’t say I know — that its performance was buoyed by my unfamiliarity with its strategy, and that with more matches I might find ways to exploit its responses. Of course, Solly might say the same thing about me.
The other possible solution is for all of us to have our own AIs to beat the other guy’s AIs. The days of humans dealing directly with Amazon, government bureaucracies and medical insurance companies may be numbered. So either sharpen your Liar’s Poker skills and try to beat the AIs — like John Henry racing the steam drill — or start looking for killer AIs to fight for you.
____
This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Aaron Brown is a former head of financial market research at AQR Capital Management. He is also an active crypto investor, and has venture capital investments and advisory ties with crypto firms.
©2026 Bloomberg L.P. Visit bloomberg.com/opinion. Distributed by Tribune Content Agency, LLC.