AI can become Addicted to Gambling too

Tero

Legend
Loyaler
Joined
Dec 31, 2019
Total posts
3,192
Awards
2
FI
Poker Chips
3,542
  • #1
Degenerate AI AI CONTENT

If you thought an AI could make sharp +EV decisions all the time, think again.

Researchers at the Gwangju Institute of Science and Technology in South Korea ran experiments with advanced language models (GPT, Gemini, and Claude).
Each model was given 100 dollars to play as it pleased inside a simulation.

Did these AIs become calculating sharks who cleaned up the table? Surprisingly, no.

Instead they showed very human-like behavior, so much so that they started displaying compulsive patterns similar to those of gambling addicts.

You can find the study here.
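For anyone wondering what "playing as it pleases inside a simulation" might look like in practice, here is a rough, hypothetical sketch of a negative-EV slot-game loop. The win probability, payout, bet size, and round count are all my own invented numbers, not figures from the study:

```python
import random

WIN_PROB = 0.3   # assumed: 30% chance to triple the bet
PAYOUT = 3.0     # EV per $1 bet = 0.3 * 3.0 - 1.0 = -0.10, i.e. -EV

def session(bankroll=100.0, bet=10.0, rounds=50, seed=1):
    """Flat-bet a negative-EV slot game until broke or out of rounds."""
    rng = random.Random(seed)
    for _ in range(rounds):
        if bankroll < bet:
            break  # busted, can no longer cover the bet
        bankroll -= bet
        if rng.random() < WIN_PROB:
            bankroll += bet * PAYOUT
    return bankroll

print(session())  # one simulated session's final bankroll
```

The interesting part of the study is not this loop but what the models did inside it: with a -EV game, the rational move is to stop betting, and the reported finding is that they often did not.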
 
  • Wow
Reactions: SpanRmonka, Tadi and Tammy
Tammy

I'm Here to Help
Administrator
Joined
May 18, 2005
Total posts
63,619
Awards
13
US
Poker Chips
2,446
  • #2
This is wild. I have caught AI pushing something that was not factually correct, only for it to sheepishly give a "mea culpa" or "my bad" reply. Still, this is a bit surprising to see!
 
  • Like
Reactions: bullishwwd, Tero and Tadi
Tero

Legend
Loyaler
Joined
Dec 31, 2019
Total posts
3,192
Awards
2
FI
Poker Chips
3,542
  • #3
Newzooozooo

Legend
Loyaler
Joined
Apr 22, 2018
Total posts
3,319
Awards
2
UA
Poker Chips
529
  • #4
Since artificial intelligence is created by humans, it may well have human characteristics, both good and bad. I think this is quite likely.
 
TeUnit

Legend
Loyaler
Joined
Jan 20, 2009
Total posts
6,044
Awards
21
Poker Chips
481
  • #5
I never would have guessed. The heads up poker machines in Vegas don't tend to exhibit degen behavior.
 
Sunz of Beaches

Sunz Tzu
Platinum Level
Joined
Oct 26, 2019
Total posts
5,914
Awards
2
Poker Chips
2,526
  • #6
Wow, looks like no one and nothing is safe from gambling. Not even artificial intelligence, which is pretty lol to be honest...
 
Marcwantstowin

Member of the T.S.T
Moderator
Joined
Feb 23, 2013
Total posts
26,608
Awards
17
GB
Poker Chips
936
  • #7
Tammy said:
This is wild. I have caught AI trying to push something that was not factually correct, and then sheepishly gives a "mea culpa" or "my bad" reply. Still, this is a bit surprising to see!

Newzooozooo said:
Since artificial intelligence is created by humans, it may well have human characteristics, both good and bad. I think this is quite likely.

Well, it may be strange for us to think that these AI systems can act like this, but I appreciate your point that they are developed by humans and may have adopted some human behaviours.

However, I like to believe that my play in the Casino is the ultimate play, and so some might call it unusual, non-profitable, or even downright reckless. I know one thing, I do have a good time, and yes, sometimes, I win. So perhaps they are copying my style of play.

(y)(y)(y)
 
  • Like
  • Love
Reactions: userX and bullishwwd
lcid86

Legend
Loyaler
Joined
Feb 28, 2009
Total posts
3,347
Awards
12
US
Poker Chips
1,011
  • #8
Like people, AI can overthink a situation? Cool article.
 
Tero

Legend
Loyaler
Joined
Dec 31, 2019
Total posts
3,192
Awards
2
FI
Poker Chips
3,542
  • #9
lcid86 said:
Like people, AI can overthink a situation,?
Maybe. In my view, outside gambling, AI is like a toddler that needs close observation. It can do stupid things, like kids do sometimes.
Unless it's brought up to be a real shark, it might just bite its own tail off.
 
Mart1194

Legend
Loyaler
Joined
Nov 4, 2022
Total posts
2,581
Awards
3
BR
Poker Chips
1,041
  • #10
Curious. I imagined that the AIs would adapt according to the instruction prompt and decide the best move each round. Although the issue here is compulsivity, not solving the game itself.
Well, in any case, the use of AI for various tasks is trending and progressive. We can't deny it.
 
rhoudini

Visionary
Platinum Level
Joined
Feb 28, 2023
Total posts
624
Awards
3
BR
Poker Chips
1,042
  • #11
For me it is quite funny to imagine large language models as entities that reason on their own, when that's not the case. Large language models, as their name already describes, are models built on text. They transform huge amounts of text into vectors (a mathematical representation, like coordinates on a map, but much more complex), and they use those vectors to form sentences based on probabilities learned from their training material.

What this all means: while those AI models are good at general text processing, they are not necessarily good at other, specialized stuff. It is very similar to the experiments where people had an LLM play chess: it does not do well. It shows some idea initially, but it does not take long before it starts doing crazy things.

For me, this article simply underlines its own point: it tries to emphasize "the importance of AI safety design in financial applications" (which seems obvious). The problem is just a design problem. For specialized financial applications, the model needs to be trained on data and concepts from that domain, not be a general model that was never designed for the task. AI is not magical.
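A toy illustration of that "probabilities over text" idea. The scores below are entirely invented for a made-up prompt; a real model computes them with billions of parameters, not a lookup table, but the final step is the same softmax-and-sample mechanism:

```python
import math
import random

# Hypothetical raw scores (logits) for the next token after some prompt.
# These numbers are invented purely to illustrate the mechanism.
logits = {"it": 2.0, "everything": 1.2, "nothing": 0.3, "carefully": -0.5}

def softmax(scores):
    """Turn raw scores into a probability distribution over tokens."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)

# The model doesn't "decide" anything: it draws the next token
# in proportion to these probabilities, one token at a time.
token = random.choices(list(probs), weights=list(probs.values()))[0]
```

Which is exactly why a general model can look confident while drifting into degenerate play: it is continuing text plausibly, not evaluating expected value.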
 