mab
Here are 24 public repositories matching this topic.
A Julia package providing multi-armed bandit experiments
Updated Jul 19, 2018 - Julia
Online Deep Learning: Learning Deep Neural Networks on the Fly / Non-linear Contextual Bandit Algorithm (ONN_THS)
Updated Dec 11, 2019 - Python
Experiment results using MAB algorithms on the Yahoo! Front Page Today Module user click log dataset
Updated Jan 2, 2020 - Jupyter Notebook
Implements well-known MAB algorithms (EpsilonGreedy, UCB, BetaThompson, LinUCB, LinThompson) and evaluates their performance.
Updated Mar 20, 2020 - Jupyter Notebook
Multi-armed bandit solutions on AWS to deliver COVID-19 test kits efficiently and effectively
Updated Mar 25, 2020 - Jupyter Notebook
TypeScript implementation of a multi-armed bandit
Updated May 17, 2020 - TypeScript
Source code for Assignment 2 of COMP90051 (Semester 2, 2020)
Updated Oct 21, 2020 - Jupyter Notebook
Multi-Player Bandits Revisited [L. Besson & É. Kaufmann]
Updated Jan 21, 2021 - Python
VLAN MAC-address authentication manager
Updated Apr 5, 2021 - Python
My Little Reinforcement Learning
Updated Jul 13, 2021 - Python
👤 Multi-Armed Bandit Algorithms Library (MAB) 👮
Updated Sep 6, 2022 - Python
Reinforcement learning techniques applied to pricing problems in e-commerce. Final project for the "Online Learning Applications" course (2021-2022)
Updated Oct 30, 2022 - Jupyter Notebook
Implementation of the multi-armed bandit (MAB) algorithms UCB and Epsilon-Greedy. MAB is a class of reinforcement learning problems in which an agent learns to choose actions from a set of arms, each with an unknown reward distribution. UCB and Epsilon-Greedy are popular algorithms for solving MAB problems.
Updated Mar 26, 2023 - Python
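The entry above describes the MAB setting and names Epsilon-Greedy as one standard solver. As a minimal illustration (the Bernoulli arm means, epsilon, and horizon below are arbitrary choices, not taken from any listed repository), epsilon-greedy explores a random arm with probability epsilon and otherwise exploits the arm with the best empirical mean:

```python
import random

def epsilon_greedy(arm_means, epsilon=0.1, horizon=1000, seed=0):
    """Run epsilon-greedy on Bernoulli arms with the given true means.

    Returns (empirical mean reward per arm, pull count per arm)."""
    rng = random.Random(seed)
    n_arms = len(arm_means)
    counts = [0] * n_arms          # pulls per arm
    values = [0.0] * n_arms        # running average reward per arm
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        # Bernoulli reward drawn from the chosen arm's true mean
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return values, counts

values, counts = epsilon_greedy([0.2, 0.5, 0.8])
```

With enough pulls, the empirical means converge toward the true arm means and the best arm (here the third, mean 0.8) accumulates most of the pulls; UCB replaces the random exploration with a confidence-bonus term added to each arm's empirical mean.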
Python application to set up and run streaming (contextual) bandit experiments.
Updated Mar 31, 2023 - Python
A Python library for all popular multi-armed bandit algorithms.
Updated Apr 28, 2023 - Jupyter Notebook
Exploration-vs-exploitation problem stated as A/B testing with maximum profit per unit time.
Updated Oct 4, 2023 - Mathematica
🐯 Replica of "Auction-based combinatorial multi-armed bandit mechanisms with strategic arms"
Updated Dec 17, 2023 - Python