
The Multi-Armed Bandit Problem

The multi-armed bandit (Multi-Armed Bandit, MAB) problem is a classic instance of the exploration-and-exploitation trade-off, and understanding it helps with the reinforcement learning material that follows.

Suppose there are K slot machines (or a single slot machine with K levers), each associated with its own reward probability distribution. Without knowing these distributions in advance, we want to pull the levers T times and collect the largest possible cumulative reward.

The multi-armed bandit problem can be expressed as a tuple <A, R>, where A is the set of actions (pulling one of the K levers) and R(r|a) is the probability distribution of the reward r obtained after taking action a.

The goal of the multi-armed bandit problem is to maximize the cumulative reward over T time steps, i.e. \max\sum_{t=1}^{T}{r_{t}}.

For each arm, define its expected reward Q(a) = E_{r\sim R(r|a)}[r] and the optimal expected reward Q^{*}=\max_{a\in A}Q(a). The regret of an arm is the gap between its expected reward and the optimal expected reward, R(a)=Q^{*} - Q(a). The cumulative regret is the total regret accumulated after T pulls: if the arms pulled during the T time steps are \left\{ a_{1},a_{2},\dots,a_{T} \right\}, the cumulative regret is \sigma_{R} = \sum_{t=1}^{T}{R(a_{t})}. Maximizing the cumulative reward over T time steps is therefore equivalent to minimizing the cumulative regret.
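
To see the equivalence, expand the cumulative regret: \sigma_{R} = \sum_{t=1}^{T}\left(Q^{*} - Q(a_{t})\right) = T\,Q^{*} - \sum_{t=1}^{T}Q(a_{t}). Since T\,Q^{*} does not depend on the chosen arms, minimizing \sigma_{R} is the same as maximizing the total expected reward \sum_{t=1}^{T}Q(a_{t}).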

To know which arm yields the highest reward, we need to estimate each arm's expected reward. The estimation method: pull each arm repeatedly, N times, and use the average reward of each arm as the estimate of its expected reward. In addition, the incremental-mean method avoids summing all rewards at once and then dividing by the count, which saves memory. The derivation is as follows: Q_{k}=\frac{1}{k}\sum_{i=1}^{k}{r_{i}}=\frac{1}{k}\left(r_{k}+\sum_{i=1}^{k-1}{r_{i}}\right)=\frac{1}{k}\left(r_{k}+(k-1)Q_{k-1}\right)=\frac{1}{k}\left(r_{k}+kQ_{k-1}-Q_{k-1}\right)=Q_{k-1}+\frac{1}{k}\left(r_{k}-Q_{k-1}\right)
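
As a quick sanity check (a minimal sketch; the random reward sequence is arbitrary and only for illustration), the incremental update can be compared against the batch average:

import numpy as np
from numpy import random as rd

rewards = rd.random(1000)              # an arbitrary reward sequence
q = 0.0
for k, r in enumerate(rewards, start=1):
    q += (r - q) / k                   # Q_k = Q_{k-1} + (r_k - Q_{k-1}) / k
print(np.allclose(q, rewards.mean()))  # True: incremental mean equals batch mean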

Below are several methods for solving the MAB problem: the ϵ-greedy algorithm, the upper confidence bound (UCB) algorithm, and Thompson sampling.

The idea of ϵ-greedy: at each step, choose an arm uniformly at random with probability \varepsilon, and choose the arm with the largest current estimate of the expected reward with probability 1-\varepsilon. \varepsilon is gradually shrunk as the iterations proceed, for example \varepsilon = 1/t, where t is the current iteration step (this is the schedule used in the code below).

import numpy as np
from numpy import random as rd


class Arms:
    def __init__(self, arm_nums):
        self.arm_nums = arm_nums
        # true (hidden) reward probability of each arm
        self.arm_probs = rd.random(arm_nums)
        self.best_index = np.argmax(self.arm_probs)
        self.best_prob = self.arm_probs[self.best_index]
        self.actions = []
        self.steps = np.zeros(arm_nums)
        self.estimates = np.ones(arm_nums)  # optimistic initial estimates
        self.regret = 0
        self.regrets = []
        return

    def reward(self, a):
        # simulate a Bernoulli reward for arm a
        if self.arm_probs[a] < rd.random():
            return 0
        else:
            return 1

    def update_estimates(self, a, reward=None):
        if reward is None:
            reward = self.reward(a)
        self.actions.append(a)
        self.steps[a] += 1
        # update the estimate of arm a by the incremental average
        self.estimates[a] += 1 / self.steps[a] * (reward - self.estimates[a])
        # accumulate the total regret
        self.regret += self.best_prob - self.arm_probs[a]
        self.regrets.append(self.regret)
        return
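
# The run() methods below call a plot_regret helper that the original post never
# defines; the following is a minimal sketch using matplotlib, with the signature
# assumed from the calls below.
import matplotlib.pyplot as plt


def plot_regret(regrets, name):
    # plot cumulative regret against the time step
    plt.plot(range(len(regrets)), regrets)
    plt.xlabel('time step')
    plt.ylabel('cumulative regret')
    plt.title(name)
    plt.show()
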
class EpsilonGreedy(Arms):
    def __init__(self, arm_nums, iter_nums):
        super(EpsilonGreedy, self).__init__(arm_nums)
        self.iter_nums = iter_nums
        self.eps = 0  # step counter; epsilon decays as 1 / step
        return

    def policy(self):
        self.eps += 1
        if rd.random() < 1 / self.eps:
            # explore: pick a random arm
            return rd.randint(0, self.arm_nums)
        else:
            # exploit: pick the arm with the largest estimated reward
            return np.argmax(self.estimates)

    def run(self):
        for i in range(self.iter_nums):
            a = self.policy()
            # update the estimated expected reward of the chosen arm
            self.update_estimates(a)
        print('accumulate regret:', self.regret)
        plot_regret(self.regrets, 'epsilon_greedy')
        return

The idea of UCB: the upper confidence bound (UCB) algorithm is a classic strategy based on uncertainty. It builds on a well-known mathematical result, Hoeffding's inequality, and solves the MAB problem by selecting the arm that maximizes an upper confidence bound on the estimated expected reward rather than the estimate itself. The derivation is as follows.

Hoeffding's inequality: for n i.i.d. samples X_{1},\dots,X_{n} taking values in [0, 1], with empirical mean \bar{x},

P\left\{ E[X] \geq \bar{x}+u \right\} \leq e^{-2nu^{2}}

Applying it to arm a, let \hat{Q}_{t}(a) denote the empirical estimate of Q(a) after N_{t}(a) pulls and \hat{U}_{t}(a) the uncertainty term; then

P\left\{ Q(a) < \hat{Q}_{t}(a) + \hat{U}_{t}(a) \right\} \geq 1 - e^{-2N_{t}(a)\hat{U}_{t}(a)^{2}}

Setting the failure probability e^{-2N_{t}(a)\hat{U}_{t}(a)^{2}} = \frac{1}{T} gives \hat{U}_{t}(a)=\sqrt{\frac{\log T}{2N_{t}(a)}}, and UCB selects a=\arg\max_{a\in A}\left(\hat{Q}_{t}(a) + c \cdot \hat{U}_{t}(a)\right)

The coefficient c controls the weight of the uncertainty term, and N_{t}(a) is the number of times action a has been taken up to time t. In the implementation below, \log T is replaced by \log t (the current total number of steps), and the denominator uses N_{t}(a)+1 to avoid division by zero.

class UCB(Arms):
    def __init__(self, arm_nums, iter_nums, coef):
        super(UCB, self).__init__(arm_nums)
        self.total_steps = 0
        self.iter_nums = iter_nums
        self.coef = coef  # coefficient c weighting the uncertainty term
        return

    def policy(self):
        self.total_steps += 1
        # upper confidence bound for each arm
        upper_bd = self.estimates + self.coef * np.sqrt(
            np.log(self.total_steps) / (2 * (self.steps + 1)))
        return np.argmax(upper_bd)

    def run(self):
        for i in range(self.iter_nums):
            a = self.policy()
            self.update_estimates(a)
        print('accumulate regret:', self.regret)
        plot_regret(self.regrets, 'ucb')
        return

The idea of Thompson sampling: model each arm's reward as a Bernoulli variable whose success probability has a Beta prior. If an arm has been pulled k times with m rewards of 1 and n rewards of 0, its success probability follows a Beta distribution with parameters (m + 1, n + 1). At every step, draw one sample from each arm's Beta distribution and pull the arm with the largest sample.

class ThompsonSampling(Arms):
    def __init__(self, arm_nums, iter_nums):
        super(ThompsonSampling, self).__init__(arm_nums)
        self.iter_nums = iter_nums
        self.beta_a = np.ones(arm_nums)  # counts of reward 1 (plus Beta prior of 1)
        self.beta_b = np.ones(arm_nums)  # counts of reward 0 (plus Beta prior of 1)
        return

    def policy(self):
        # sample a success probability for each arm from its Beta posterior
        samples = rd.beta(self.beta_a, self.beta_b)
        a = np.argmax(samples)
        r = self.reward(a)
        # update the Beta posterior of the chosen arm
        self.beta_a[a] += r
        self.beta_b[a] += (1 - r)
        return a, r

    def run(self):
        for i in range(self.iter_nums):
            a, r = self.policy()
            self.update_estimates(a, reward=r)
        print('accumulate regret:', self.regret)
        plot_regret(self.regrets, 'thompson_sampling')
        return
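
A short usage sketch tying the three solvers together (the arm count, iteration count, UCB coefficient, and random seed below are illustrative choices, not values from the original post; each solver instance draws its own random arm probabilities):

if __name__ == '__main__':
    rd.seed(0)  # assumed seed, only for reproducibility of this sketch
    EpsilonGreedy(arm_nums=10, iter_nums=5000).run()
    UCB(arm_nums=10, iter_nums=5000, coef=1.0).run()
    ThompsonSampling(arm_nums=10, iter_nums=5000).run()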