MAB algorithms

I've recently been introduced to an interesting class of algorithms known as multi-armed bandits. They tackle a well-known problem in reinforcement learning and decision-making, and are particularly useful when we need to balance exploration and exploitation efficiently.

Imagine walking into a casino filled with slot machines, each with a different payout rate. The goal is to maximize your earnings by pulling the arms of these slot machines, but you don't know which ones yield the highest rewards. This is the essence of the multi-armed bandit problem: How can you make the best decisions with limited knowledge while learning about the environment?
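To make the setup concrete, here is a minimal sketch of such an environment, where each arm pays out 1 with a fixed but unknown probability. The names (`BernoulliBandit`, `pull`) are illustrative and not necessarily those used in this repository.

```python
import random

class BernoulliBandit:
    """A row of slot machines ('arms'), each paying out 1 with a fixed,
    hidden probability and 0 otherwise."""

    def __init__(self, payout_probs):
        self.payout_probs = payout_probs  # true per-arm reward probabilities (unknown to the player)

    @property
    def n_arms(self):
        return len(self.payout_probs)

    def pull(self, arm):
        """Pull one arm and receive a stochastic 0/1 reward."""
        return 1 if random.random() < self.payout_probs[arm] else 0


# Example: three machines with different (hidden) payout rates.
bandit = BernoulliBandit([0.10, 0.25, 0.40])
reward = bandit.pull(2)
```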

Multi-armed bandit algorithms provide a way to approach this dilemma. They offer a trade-off between exploration (trying out new actions to gain information) and exploitation (leveraging the current knowledge to maximize rewards). Some of the most popular multi-armed bandit algorithms include ε-greedy, UCB1 (Upper Confidence Bound), and Thompson sampling, each with its unique strengths and weaknesses.
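As an illustration of that trade-off, here is a minimal ε-greedy sketch; the class and method names are my own and may differ from the repository's implementation.

```python
import random

class EpsilonGreedy:
    """ε-greedy agent: with probability ε explore a random arm,
    otherwise exploit the arm with the highest estimated mean reward."""

    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # number of pulls per arm
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select_arm(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))  # explore: random arm
        # exploit: arm with the highest estimated value
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        # incremental mean: new_mean = old_mean + (reward - old_mean) / n
        self.values[arm] += (reward - self.values[arm]) / n
```

Plugged into the bandit environment sketched above, a typical interaction loop is: `arm = agent.select_arm()`, `reward = bandit.pull(arm)`, `agent.update(arm, reward)`, repeated for as many rounds as the budget allows.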

These algorithms have practical applications in various domains, such as online advertising, recommendation systems, and A/B testing. They allow us to allocate resources efficiently, minimize regret, and make effective decisions under uncertainty. In this repository, each algorithm is implemented in the context of online advertising.
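In the advertising setting, arms correspond to ads and rewards to clicks. As a rough sketch of how Thompson sampling might look for click-through rates (again, the names `select_ad` and `update` are illustrative, not necessarily the repository's API):

```python
import random

class ThompsonSamplingBernoulli:
    """Thompson sampling for binary rewards (e.g. ad clicks): keep a
    Beta(successes + 1, failures + 1) posterior per ad and show the ad
    whose sampled click-through rate is highest."""

    def __init__(self, n_ads):
        self.successes = [0] * n_ads  # impressions that led to a click
        self.failures = [0] * n_ads   # impressions without a click

    def select_ad(self):
        # Draw one sample from each ad's posterior and pick the best draw.
        samples = [random.betavariate(s + 1, f + 1)
                   for s, f in zip(self.successes, self.failures)]
        return max(range(len(samples)), key=lambda i: samples[i])

    def update(self, ad, clicked):
        if clicked:
            self.successes[ad] += 1
        else:
            self.failures[ad] += 1
```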
