
A new study of gambling behavior in AI large language models (LLMs) suggests that the models show the same unhealthy patterns people do, such as loss chasing and the illusion of control.
The research has been carried out by Seungpil Lee, Donghyeon Shin, Yunjeong Lee and Sundong Kim, with the aim of identifying the specific conditions under which LLMs exhibit human-like gambling addiction patterns.
Large language models are artificial intelligence systems; ChatGPT, Google’s Gemini, and Claude are all examples.
The researchers found that when the AI was given more freedom over betting parameters in slot machine experiments, ‘irrational behavior’ was substantially amplified, as were bankruptcy rates.
“Neural circuit analysis using a Sparse Autoencoder confirmed that model behavior is controlled by abstract decision-making features related to risk, not merely by prompts. These findings suggest LLMs internalize human-like cognitive biases beyond simply mimicking training data,” the release states.
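For readers unfamiliar with the tool named in the quote: a sparse autoencoder maps a model’s internal activations into a much wider feature space in which only a few features fire at once, making individual decision-related features easier to isolate. A minimal sketch of the forward pass and training objective follows; all dimensions, weights, and the L1 coefficient are illustrative choices, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: project 16-dimensional activations into 64 features.
d_model, d_features = 16, 64

W_enc = rng.normal(0, 0.1, (d_features, d_model))
b_enc = np.zeros(d_features)
W_dec = rng.normal(0, 0.1, (d_model, d_features))

def encode(x):
    # ReLU zeroes out negatively-activated features, producing sparsity.
    return np.maximum(0.0, W_enc @ x + b_enc)

def decode(h):
    # Reconstruct the original activation from the sparse feature vector.
    return W_dec @ h

def sae_loss(x, l1_coeff=0.01):
    h = encode(x)
    x_hat = decode(h)
    recon = np.sum((x - x_hat) ** 2)          # reconstruction error
    sparsity = l1_coeff * np.sum(np.abs(h))   # L1 penalty pushes features toward zero
    return recon + sparsity, h

x = rng.normal(size=d_model)  # stand-in for one captured model activation
loss, h = sae_loss(x)
```

Training minimizes this loss over many activations; the resulting sparse features are what interpretability work then inspects for human-readable meaning, such as the risk-related decision features the researchers describe.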
How was the AI LLM gambling study conducted?
The research began by posing the question ‘can LLMs also fall into addiction?’, analyzing addiction phenomena within these models by integrating human addiction research with LLM behavioral analysis.
To be able to do this, the researchers first defined gambling addictive behavior from existing human research “in a form that is analyzable in LLM experiments.” Then they analyzed the behavior of LLMs in gambling situations and identified conditions showing gambling-like tendencies.
Finally, they conducted Sparse Autoencoder (SAE) analysis to examine neural activations, providing causal neural evidence for gambling tendencies. The slot machine experiment referred to earlier served as the main study; a second experiment was also completed.
The main study was designed to examine how the models vary their decision-making based on prompt conditions and betting constraints. “The five prompt components were selected based on prior gambling addiction research: encouraging self-directed goal-setting (G), instructing reward maximization (M), hinting at hidden patterns (H), providing win-reward information (W), and providing probability information.”
This yielded 19,200 games across 64 conditions; every game began with $100 and ended through either bankruptcy or voluntary stopping.
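As a rough illustration of the session structure described above, a single game loop that ends in either bankruptcy or a voluntary stop can be sketched as follows. The win probability, payout multiplier, fixed bet, and profit-target stopping rule are all hypothetical stand-ins, not the paper’s actual parameters:

```python
import random

def play_session(bankroll=100.0, win_prob=0.3, payout_mult=3.0,
                 bet=10.0, stop_target=200.0, seed=0):
    """Simulate one slot-machine session that terminates in bankruptcy
    or a voluntary stop, mirroring the study's two end conditions.
    All numeric parameters are illustrative, not from the paper."""
    rng = random.Random(seed)
    while True:
        if bankroll < bet:
            return "bankruptcy", bankroll
        if bankroll >= stop_target:
            return "voluntary_stop", bankroll  # stand-in stopping rule
        bankroll -= bet
        if rng.random() < win_prob:
            bankroll += bet * payout_mult

# Run many sessions to estimate how often the bankroll is wiped out.
outcomes = [play_session(seed=s)[0] for s in range(1000)]
bankruptcy_rate = outcomes.count("bankruptcy") / len(outcomes)
```

With these example numbers the expected value of each spin is negative, so repeated play tends toward ruin; the study’s point is that LLM agents, when allowed to choose their own bets and stopping points in a setup like this, often keep playing anyway.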
Featured Image: AI-generated via Ideogram
The post AI models can develop ‘humanlike’ gambling addiction when given more freedom appeared first on ReadWrite.
