Optimism in Reinforcement Learning and Kullback-Leibler Divergence
We consider model-based reinforcement learning in finite Markov Decision Processes (MDPs), focusing on so-called optimistic strategies. In MDPs, optimism can be implemented by carrying out extended value iterations under a constraint of consiste…
Authors: Sarah Filippi (LTCI), Olivier Cappe (LTCI), Aurelien Garivier (LTCI)
In reinforcement learning, an agent interacts with an unknown environment, aiming to maximize its long-term payoff [15]. This interaction is modelled by a Markov Decision Process (MDP) and it is assumed that the agent does not know the parameters of the process and needs to learn directly from observations. The agent thus faces a fundamental trade-off between gathering experimental data about the consequences of the actions (exploration) and acting consistently with past experience in order to maximize the rewards (exploitation).
We consider in this article an MDP with finite state and action spaces, for which we propose a model-based reinforcement learning algorithm, i.e., an algorithm that maintains running estimates of the model parameters (transition probabilities and expected rewards) [6,10,14,16]. A well-known approach to balancing exploration and exploitation, followed for example by the R-MAX algorithm [4], is the so-called optimism in the face of uncertainty principle. First proposed in the multi-armed bandit context by [11], it has since been extended to several frameworks: instead of acting optimally according to the estimated model, the agent follows the optimal policy for a surrogate model, called the optimistic model, which is close enough to the former but leads to a higher long-term reward. The performance of such an algorithm can be analyzed in terms of regret, which compares the rewards collected by the algorithm with the rewards obtained when following an optimal policy. The asymptotic regret analysis of [11] in the multi-armed bandit context has been extended to MDPs by [5], proving that an optimistic algorithm can achieve logarithmic regret. The subsequent works [2,9,3] introduced algorithms that guarantee non-asymptotic logarithmic regret in a large class of MDPs. In these latter works, the optimistic model is computed using the L1 (or total variation) norm as a measure of proximity between the estimated and optimistic transition probabilities.
In addition to logarithmic regret bounds, the UCRL2 algorithm of [9] is also attractive due to the simplicity of each L1 extended value iteration step. In this case, optimism simply results in adding a bonus to the most promising transition (i.e., the transition that leads to the state with current highest value) while removing the corresponding probability mass from less promising transitions. This process is both elementary and easily interpretable, which is desirable in some applications.
However, the L1 extended value iteration suffers from pitfalls which may compromise the practical performance of the algorithm. First, the optimistic model is not continuous with respect to the estimated parameters: small changes in the estimates may result in very different optimistic models. More importantly, the L1 optimistic model can become incompatible with the observations, by assigning a probability of zero to a transition that has actually been observed. Moreover, in MDPs with reduced connectivity, L1 optimism results in a persistent bonus for all transitions heading towards the most valuable state, even when significant evidence has been accumulated that these transitions are impossible.
In this paper, we propose an improved optimistic algorithm, called KL-UCRL, that avoids these pitfalls altogether. The key is the use of the Kullback-Leibler (KL) pseudo-distance instead of the L1 metric, as in [5]. Indeed, the smoothness of the KL metric largely alleviates the first issue. The second issue is completely avoided thanks to the strong relationship between the geometry of the probability simplex induced by the KL pseudo-metric and the theory of large deviations. For the third issue, we show that the KL-optimistic model results from a trade-off between the relative value of the most promising state and the statistical evidence accumulated so far regarding its reachability.
We provide an efficient procedure, based on one-dimensional line searches, to solve the linear maximization problem under KL constraints. As a consequence, the numerical complexity of the KL-UCRL algorithm is comparable to that of UCRL2. Building on the analysis of [9,3,1], we also obtain logarithmic regret bounds for the KL-UCRL algorithm. The proof of this result is based on novel concentration inequalities for the KL-divergence, which have interesting properties when compared with those traditionally used for the L1 norm. Although the obtained regret bounds are comparable to earlier results in terms of rate and of dependence on the number of states and actions, we observe significant performance improvements in practice. This observation is illustrated using benchmark examples (the RiverSwim and SixArms environments of [14]) and through a thorough discussion of the geometric properties of KL neighborhoods.
The paper is organized as follows. The model and a brief survey of the value iteration algorithm in undiscounted MDPs are presented in Section 2. Sections 3 and 4 are devoted, respectively, to the description and to the analysis of the KL-UCRL algorithm. Section 5 contains numerical experiments, and Section 6 concludes the paper by discussing the advantages of using KL rather than L1 confidence neighborhoods.
Consider a Markov decision process (MDP) M = (X, A, P, r) with finite state space X and action space A. Let X_t ∈ X and A_t ∈ A denote, respectively, the state of the system and the action chosen by the agent at time t. The probability of jumping from state X_t to state X_{t+1} is denoted by P(X_{t+1}; X_t, A_t). Besides, the agent receives at time t a random reward R_t ∈ [0, 1] with mean r(X_t, A_t). The aim of the agent is to choose the sequence of actions so as to maximize the cumulated reward. Its choices are summarized in a stationary policy π : X → A.
In this paper, we consider communicating MDPs, i.e., MDPs such that, for any pair of states x, x′, there exists a policy under which x′ can be reached from x with positive probability. For those MDPs, it is known that the average reward ρ^π(M) obtained by following a stationary policy π, defined as
is state-independent [13]. Let π*(M) : X → A and ρ*(M) denote respectively the optimal policy and the optimal average reward: ρ*(M) = sup_π ρ^π(M) = ρ^{π*(M)}(M). The notations ρ*(M) and π*(M) are meant to highlight the fact that both the optimal average reward and the optimal policy depend on the model M. The optimal average reward satisfies the so-called Bellman optimality equation: for all x ∈ X,
where the |X |-dimensional vector h * (M) is called a bias vector. Note that it is only defined up to an additive constant. For a fixed MDP M, the optimal policy π * (M) can be derived by solving the optimality equation and by defining, for all x ∈ X ,
In practice, the optimal average reward and the optimal policy may be computed, for instance, using the value iteration algorithm [13].
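For illustration, the value iteration procedure just mentioned can be sketched as follows. This is a minimal average-reward (relative) value iteration with a span-seminorm stopping rule; the array shapes, names, and tolerances are our own choices, not from the paper:

```python
import numpy as np

def relative_value_iteration(P, r, tol=1e-8, max_iter=10_000):
    """Average-reward value iteration for a communicating MDP.

    P: shape (A, X, X), P[a, x, y] = P(y; x, a)
    r: shape (X, A), mean rewards r(x, a)
    Returns (rho, h, policy): optimal average reward, a bias vector
    (defined up to an additive constant), and a greedy optimal policy.
    """
    h = np.zeros(r.shape[0])
    for _ in range(max_iter):
        # Q[x, a] = r(x, a) + sum_y P(y; x, a) h(y)
        Q = r + np.einsum('axy,y->xa', P, h)
        h_new = Q.max(axis=1)
        diff = h_new - h
        # stop when the span of the increments is small; the increments
        # then approximate the optimal average reward rho
        if diff.max() - diff.min() < tol:
            rho = 0.5 * (diff.max() + diff.min())
            return rho, h_new - h_new.min(), Q.argmax(axis=1)
        h = h_new - h_new.min()  # renormalize: the bias is relative
    raise RuntimeError("value iteration did not converge")
```

On a two-state MDP where one action stays put and the other switches states, with reward 1 in the second state, the procedure recovers the optimal gain ρ = 1 and the policy "switch, then stay".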
In this paper, we focus on the reinforcement learning problem in which the agent does not know the model M beforehand, i.e., the transition probabilities and the distribution of the rewards are unknown. More specifically, we consider model-based reinforcement learning algorithms, which estimate the model from observations and act accordingly. Denote by P̂_t(x′; x, a) the estimate at time t of the transition probability from state x to state x′ given action a, and by r̂_t(x, a) the empirical mean reward received in state x when action a has been chosen. We have:
where
is the number of visits, up to time t, to the state x followed by a visit to x ′ when the action a has been chosen, and similarly,
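A minimal bookkeeping sketch of these count-based estimates follows; the class and method names are ours, and the convention that unvisited pairs yield zero estimates is an implementation assumption:

```python
from collections import defaultdict

class EmpiricalModel:
    """Running estimates of transition probabilities and mean rewards."""

    def __init__(self, n_states):
        self.N = defaultdict(int)     # N_t(x, a): visits to (x, a)
        self.Nxx = defaultdict(int)   # N_t(x, a, x'): observed transitions
        self.R = defaultdict(float)   # cumulated reward received in (x, a)
        self.n_states = n_states

    def update(self, x, a, reward, x_next):
        self.N[(x, a)] += 1
        self.Nxx[(x, a, x_next)] += 1
        self.R[(x, a)] += reward

    def p_hat(self, x, a):
        """Estimated transition vector: counts divided by N_t(x, a)."""
        n = max(self.N[(x, a)], 1)
        return [self.Nxx[(x, a, y)] / n for y in range(self.n_states)]

    def r_hat(self, x, a):
        """Estimated mean reward: cumulated reward divided by N_t(x, a)."""
        return self.R[(x, a)] / max(self.N[(x, a)], 1)
```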
The optimal policy in the estimated model M̂_t = (X, A, P̂_t, r̂_t) may be misleading due to estimation errors: pure exploitation policies are commonly known to fail with positive probability. To avoid this problem, optimistic model-based approaches consider a set M_t of potential MDPs containing M̂_t, and choose in this set the MDP that leads to the largest average reward. In the following, the set M_t is defined as follows:
and d(P̂_t(·; x, a), P(·; x, a)) ≤ ε_P(x, a, t)}, where d measures the difference between the transition probabilities. The radii ε_R(x, a, t) and ε_P(x, a, t) of the neighborhoods around, respectively, the estimated reward r̂_t(x, a) and the estimated transition probabilities P̂_t(·; x, a), decrease with N_t(x, a).
In contrast to UCRL2, which uses the L1-distance for d, we propose to rely on the Kullback-Leibler divergence, as in the seminal article [5]; however, contrary to the approach of [5], no prior knowledge of the state structure of the MDP is needed. Recall that the Kullback-Leibler divergence is defined, for all n-dimensional probability vectors p and q, by KL(p, q) = Σ_{i=1}^n p_i log(p_i/q_i) (with the convention that 0 log 0 = 0). In the sequel, we will show that this choice dramatically alters the behavior of the algorithm and leads to significantly better performance, at the price of a limited increase in complexity; in Section 6, the advantages of using the KL divergence instead of the L1-norm are illustrated and discussed.
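For concreteness, this divergence, with the 0 log 0 = 0 convention and the value +∞ as soon as p puts mass where q does not, can be computed as in the following sketch:

```python
import math

def kl(p, q):
    """KL(p, q) = sum_i p_i log(p_i / q_i), with the 0 log 0 = 0
    convention; infinite when p puts mass on a component where q
    puts none."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi > 0.0:
            if qi <= 0.0:
                return math.inf
            total += pi * math.log(pi / qi)
    return total
```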
The KL-UCRL algorithm, described below, is a variant of the efficient model-based algorithm UCRL2, introduced by [1] and extended to more general MDPs by [3]. The key step of the algorithm, the search for the optimistic model (Step 6 of Algorithm 1), is detailed below as Algorithm 2.
Algorithm 1 (KL-UCRL):
1: Initialization: j = 0, t_0 = 0; ∀a ∈ A, ∀x ∈ X, n_0(x, a) = 0, N_0(x, a) = 0; initial policy π_0
2: for all t ≥ 1 do
3: When a new episode starts: j = j + 1, t_j = t
4: Reinitialize: ∀a ∈ A, ∀x ∈ X, n_j(x, a) = 0
5: Estimate P̂_t and r̂_t according to (1)
6: Find the optimistic model M_j ∈ M_t and the related policy π_j by solving Equation (2) using Algorithm 2
7: Receive reward R_t
8: Update the counts within the current episode
9: Update the global counts
(At each step t, the action A_t = π_j(X_t) is executed; the episode-termination rule is given in the text below.)
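The episode bookkeeping of the listing above can be sketched as follows; the max(·, 1) convention for pairs never visited before the episode is our assumption, in the spirit of UCRL2:

```python
def episode_finished(n_episode, N_global):
    """End the episode as soon as n_j(x, a) >= N_{t_j}(x, a) for some
    state-action pair; max(., 1) covers pairs never visited before."""
    return any(n >= max(N_global.get(sa, 0), 1)
               for sa, n in n_episode.items())

def end_episode(n_episode, N_global):
    """Fold the within-episode counts into the global counts and reset."""
    for sa, n in n_episode.items():
        N_global[sa] = N_global.get(sa, 0) + n
    n_episode.clear()
```

With this rule, the number of visits to each pair roughly doubles between episode starts, so the number of episodes grows only logarithmically with time.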
The KL-UCRL algorithm proceeds in episodes. Let t_j be the starting time of episode j; the length of the j-th episode depends on the number of visits N_{t_j}(x, a) to each state-action pair (x, a) before t_j, compared to the number of visits n_j(x, a) to the same pair during the j-th episode. More precisely, an episode ends as soon as n_j(x, a) ≥ N_{t_j}(x, a) for some state-action pair (x, a). The policy π_j, followed during the j-th episode, is an optimal policy for the optimistic MDP M_j = (X, A, P_j, r_j) ∈ M_{t_j}, which is computed by solving the extended optimality equations: for all x ∈ X,
where the maximum is taken over all P and r such that, for all x and a,
KL(P̂_{t_j}(·; x, a), P(·; x, a)) ≤ C_P / N_{t_j}(x, a)  and  |r̂_{t_j}(x, a) − r(x, a)| ≤ C_R / √N_{t_j}(x, a),
where C_P and C_R are constants which control the size of the confidence balls.
The transition matrix P_j and the mean reward r_j of the optimistic MDP M_j achieve the maximum in these equations. The extended value iteration algorithm may be used to approximately solve the fixed-point equation (2) [13, 1].
At each step of the extended value iteration algorithm, the maximization problem (2) has to be solved. For every state x and action a, the maximization of r(x, a) under the constraint |r̂_{t_j}(x, a) − r(x, a)| ≤ C_R/√N_{t_j}(x, a) is obviously solved by taking r(x, a) = r̂_{t_j}(x, a) + C_R/√N_{t_j}(x, a), so that the main difficulty lies in maximizing the dot product between the probability vector q = P(·; x, a) and the value vector V = h* over a KL-ball around the fixed probability vector p = P̂_{t_j}(·; x, a):
where V′ denotes the transpose of V and S_n the set of n-dimensional probability vectors. The radius ε = C_P/N_{t_j}(x, a) of the neighborhood controls the size of the confidence ball. This convex maximization problem is studied in Appendix A, leading to the efficient algorithm presented below. A detailed analysis of the Lagrangian of (3) shows that the solution of the maximization problem essentially relies on finding roots of the function f (which depends on the parameter V), defined as follows: for all ν > max_{i∉Z} V_i, with Z = {i : p_i = 0},
f(ν) = Σ_{i∉Z} p_i log(ν − V_i) + log(Σ_{i∉Z} p_i / (ν − V_i)).   (4)
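Assuming the definition f(ν) = Σ_{i: p_i>0} p_i log(ν − V_i) + log(Σ_{i: p_i>0} p_i/(ν − V_i)), with the sums running over the support of p (consistent with the computations of Appendix A), f can be evaluated as follows:

```python
import math

def f(nu, p, V):
    """f(nu) = sum_{i: p_i>0} p_i log(nu - V_i)
             + log(sum_{i: p_i>0} p_i / (nu - V_i)),
    defined for nu larger than the maximum of V over the support of p."""
    support = [i for i, pi in enumerate(p) if pi > 0.0]
    assert nu > max(V[i] for i in support), "nu outside the domain of f"
    s1 = sum(p[i] * math.log(nu - V[i]) for i in support)
    s2 = math.log(sum(p[i] / (nu - V[i]) for i in support))
    return s1 + s2
```

As stated in Proposition 2 below, f is positive, decreasing, and vanishes as ν → ∞.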
In the special case where the most promising state i_M has never been reached from the current state-action pair (i.e., p_{i_M} = 0), the algorithm trades off the relative value V_{i_M} of the most promising state against the statistical evidence accumulated so far regarding its reachability.
Input: a value function V, a probability vector p, a constant ε
Output: a probability vector q that maximizes (3)
1: Let Z = {i : p_i = 0} and i_M ∈ argmax_i V_i
2: Let I* = Z ∩ argmax_i V_i
3: if p_{i_M} = 0 and f(V_{i_M}) < ε then
4: Let ν = V_{i_M} and r = 1 − exp(f(ν) − ε)
5: For all i ∈ I*, assign values of q_i such that Σ_{i∈I*} q_i = r
6: For all i ∈ Z \ I*, let q_i = 0
7: else
8: For all i ∈ Z, let q_i = 0, and let r = 0
9: Find ν such that f(ν) = ε using Newton's method
10: end if
11: For all i ∉ Z, let q_i = (1 − r) q̃_i / Σ_{j∉Z} q̃_j, where q̃_i = p_i / (ν − V_i)
In practice, f being a convex, positive, and decreasing function (see Appendix B), Newton's method can be applied to find ν such that f(ν) = ε (in Step 9 of the algorithm), so that numerically solving (3) is a matter of a few iterations. Appendix B contains a discussion of the initialization of Newton's algorithm based on asymptotic arguments.
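Putting the steps of Algorithm 2 together, the whole maximization can be sketched as follows. For robustness this sketch solves f(ν) = ε by bisection rather than Newton's method, and the function and variable names are ours:

```python
import math

def max_linear_over_kl_ball(p, V, eps):
    """Maximize V'q over {q in the simplex : KL(p, q) <= eps},
    following the structure of Algorithm 2 (bisection instead of Newton)."""
    n = len(p)
    support = [i for i in range(n) if p[i] > 0.0]
    Z = [i for i in range(n) if p[i] == 0.0]
    v_max_support = max(V[i] for i in support)

    def f(nu):
        s1 = sum(p[i] * math.log(nu - V[i]) for i in support)
        s2 = math.log(sum(p[i] / (nu - V[i]) for i in support))
        return s1 + s2

    q = [0.0] * n
    i_star = [i for i in Z if V[i] == max(V)]
    if i_star and max(V) > v_max_support and f(max(V)) < eps:
        # bonus mass on the unobserved most promising state(s)
        nu = max(V)
        r = 1.0 - math.exp(f(nu) - eps)
        for i in i_star:
            q[i] = r / len(i_star)
    else:
        # solve f(nu) = eps on (v_max_support, infinity), where f decreases
        r = 0.0
        hi = v_max_support + 1.0
        while f(hi) > eps:
            hi = v_max_support + 2.0 * (hi - v_max_support)
        lo = v_max_support + 1e-12
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if f(mid) > eps:
                lo = mid
            else:
                hi = mid
        nu = 0.5 * (lo + hi)
    # spread the remaining mass over the support of p
    tilde = {i: p[i] / (nu - V[i]) for i in support}
    s = sum(tilde.values())
    for i in support:
        q[i] = (1.0 - r) * tilde[i] / s
    return q
```

By construction the returned vector saturates the constraint, KL(p, q) = ε, and improves the objective V′q over V′p.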
To analyze the performance of KL-UCRL, we compare the rewards accumulated by the algorithm to the rewards that would be obtained, on average, by an agent playing an optimal policy. The regret of the algorithm after T steps is defined as in [9]:
We adapt the regret bound analysis of the UCRL2 algorithm to the use of KL-neighborhoods, and obtain similar theorems. Let
where τ(x, x′) is the hitting time of x′ starting from state x. The constant D(M), which is finite for every communicating MDP M, appears in the regret bounds. Theorem 1 establishes an upper bound on the regret of the KL-UCRL algorithm for constants C_P and C_R that grow logarithmically with T and 1/δ.
Theorem 1 With probability at least 1 − δ, for all T > 5, the regret of KL-UCRL is bounded by
for a constant C ≤ 24 that does not depend on the model.
It is also possible to prove a logarithmic upper bound for the expected regret. This bound, presented in Theorem 2, depends on the model through the constant ∆(M) defined as ∆(M) = ρ*(M) − max_{π : ρ^π(M) < ρ*(M)} ρ^π(M), which quantifies the gap between the optimal and the best suboptimal policies.
Theorem 2 For T > 5, the expected regret of KL-UCRL is bounded by
where C ≤ 400 is a constant independent of the model, and C(M) is a constant which depends on the model (see [9]).
The proof of Theorem 1 is inspired by [9,3]. For lack of space, we only provide its main steps. First, the following proposition ensures that, with high probability, the true model M = (X, A, P, r) belongs to the set of models M_t at each time step.
Proposition 1 For every horizon T ≥ 1 and for δ > 0,
The proof relies on the two following concentration inequalities, due to [7,8]: for all x ∈ X, a ∈ A, and any C_P > 0 and C_R > 0, it holds that P(∃t ≤ T : KL(P̂_t(·; x, a), P(·; x, a)) > C_P/N_t(x, a))
Then, summing over all state-action pairs, Proposition 1 follows. Using Hoeffding's inequality, with high probability, the regret at time T can be written as the sum of the regrets in each of the m(T) episodes, plus an additional term C_e(T, δ) = √(T log(1/δ)/2):
Let P_k and π_k denote, respectively, the transition probability matrix of the optimistic model and the optimal policy in the k-th episode (1 ≤ k ≤ m(T)). It is easy to show (see [9] for details) that, with probability 1 − δ,
where h_k is a bias vector, and e_x(y) = 1 if x = y, e_x(y) = 0 otherwise. We now bound each of the three terms in the previous summation. Denote by n_k^{π_k} the row vector with entries n_k^{π_k}(x) = n_k(x, π_k(x)). Similarly, P_k^{π_k} (resp. P^{π_k}) is the transition matrix obtained when the policy π_k is followed under the optimistic model M_k (resp. the true model M). If the true model M ∈ M_{t_k}, we have, for all x ∈ X and all a ∈ A,
Using Pinsker's inequality, and the fact that ‖h_k‖_∞ ≤ D [9],
The third term, n_k^{π_k}(P^{π_k} − I)h_k, may be written as follows:
where e_x is the all-zeros vector with a 1 in the x-th component. For all t ∈ [t_k, t_{k+1} − 1], note that ξ_t = (P(·; X_t, A_t) − e_{X_{t+1}}) h_k is a martingale difference bounded by D. Applying the Azuma-Hoeffding inequality, we obtain that
with probability 1 − δ. In addition, Auer et al. [1] proved that
Combining all the terms completes the proof of Theorem 1. The proof of Theorem 2 follows from Theorem 1 using the same arguments as in the proof of Theorem 4 in [9].
To compare the behavior of the KL-UCRL and UCRL2 algorithms, we consider the benchmark environments RiverSwim and SixArms proposed by [14], as well as a collection of randomly generated sparse environments. The RiverSwim environment consists of six states. The agent starts from the left side of the row and, in each state, can either swim left or right. Swimming to the right (against the current of the river) is successful with probability 0.35; it leaves the agent in the same state with a high probability, equal to 0.6, and moves it to the left with probability 0.05 (see Figure 1). On the contrary, swimming to the left (with the current) is always successful. The agent receives a small reward when it reaches the leftmost state, and a much larger reward when reaching the rightmost state; the other states offer no reward. This MDP requires efficient exploration, since the agent, having no prior idea of the rewards, has to reach the right side to discover which is the most valuable state-action pair.
The SixArms environment consists of seven states, one of which is the initial state. From the initial state, the agent may choose one among six actions: action a ∈ {1, . . . , 6} leads to state x = a with probability p_a (see Figure 2) and leaves the agent in the initial state with probability 1 − p_a. From all the other states, some actions deterministically lead the agent back to the initial state while the others leave it in the current state. Staying in a state x ∈ {1, . . . , 6}, the agent receives a reward equal to R_x (see Figure 2); otherwise, no reward is received.
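For reference, the RiverSwim transition kernel described above can be generated as follows. This is a sketch: the transition probabilities follow the text, but the reward magnitudes 0.005 and 1.0 are our assumptions, since the text only specifies "small" and "much larger":

```python
import numpy as np

def river_swim(n_states=6, p_right=0.35, p_stay=0.6, p_left=0.05,
               r_left=0.005, r_right=1.0):
    """Build the RiverSwim MDP: P has shape (A, X, X), r has shape (X, A).
    Actions: 0 = swim left (always succeeds), 1 = swim right."""
    X = n_states
    P = np.zeros((2, X, X))
    r = np.zeros((X, 2))
    for x in range(X):
        # swimming left, with the current, always succeeds
        P[0, x, max(x - 1, 0)] = 1.0
        # swimming right, against the current
        if x < X - 1:
            P[1, x, x + 1] += p_right
            P[1, x, x] += p_stay
            P[1, x, max(x - 1, 0)] += p_left
        else:
            P[1, x, x] += p_right + p_stay
            P[1, x, x - 1] += p_left
    r[0, 0] = r_left        # small reward in the leftmost state
    r[X - 1, 1] = r_right   # large reward in the rightmost state
    return P, r
```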
We compare the performance of the KL-UCRL algorithm to that of UCRL2 using 20 Monte-Carlo replications. For both algorithms, the constants C_P and C_R are set so that the regret upper bounds of Theorem 1 above and of Theorem 2 in [9] hold with probability 0.95. In the SixArms environment, the rewards being deterministic, we slightly modify both algorithms so that the agent knows them beforehand. We observe in Figures 3 and 4 that the KL-UCRL algorithm achieves a smaller average regret than the UCRL2 algorithm in both benchmark environments. In both environments, it is crucial for the agent to learn that no action leads from some states to the most promising one: for example, in the RiverSwim environment, from any of the first four states to the sixth state.
In addition to those benchmark environments, a generator of sparse environments has been used to create 10-state, 5-action environments with random rewards in [0, 1]. In these random environments, each state is connected, on average, with five other states (with transition probabilities drawn from a Dirichlet distribution). We reproduced the same experiments as in the previous environments and display the average regret in Figure 5.
In this section, we expose the advantages of using a confidence ball based on the Kullback-Leibler divergence rather than an L1-ball, as proposed for instance in [9,16], in the computation of the optimistic policy. This discussion aims at explaining and interpreting the differences in performance observed in the simulations. In KL-UCRL, optimism reduces to maximizing the linear function V′q over a KL-ball (see (3)), whereas the other algorithms make use of an L1-ball (see (9)).
Consider an estimated transition probability vector p, and denote by q^{KL} (resp. q¹) the probability vector which maximizes Equation (3) (resp. Equation (9)).
It is easily seen that q^{KL} and q¹ lie, respectively, on the border of the convex set {q ∈ S_{|X|} : KL(p, q) ≤ ε} and at one of the vertices of the polytope {q ∈ S_{|X|} : ‖p − q‖_1 ≤ ε′}. A first noteworthy difference between those neighborhoods is that, due to the smoothness of the KL-neighborhood, q^{KL} is continuous with respect to the vector V, which is not the case for q¹.
To illustrate this, Figure 6 displays L1- and KL-balls around 3-dimensional probability vectors. The set of 3-dimensional probability vectors is represented by a triangle whose vertices are the vectors (1, 0, 0)′, (0, 1, 0)′ and (0, 0, 1)′; the probability vector p is represented by a white star, and the vectors q^{KL} and q¹ by white points. The arrow represents the direction of the projection of V on the simplex and indicates the gradient of the linear function to maximize. The maximizer q¹ can vary significantly for small changes of the value function, while q^{KL} varies continuously.
Denote i_m = argmin_j V_j and i_M = argmax_j V_j. As underlined by [9], q¹_{i_m} = max(p_{i_m} − ε′/2, 0) and q¹_{i_M} = min(p_{i_M} + ε′/2, 1). This has two consequences:
1. if p is such that 0 < p_{i_m} < ε′/2, then q¹_{i_m} = 0; the optimistic model may therefore assign a probability equal to zero to a transition that has actually been observed, which is hardly compatible with the optimism principle: an optimistic MDP should not forbid transitions that really exist, even if they lead to states with small values;
2. if p is such that p_{i_M} = 0, then q¹_{i_M} is never equal to 0; an optimistic algorithm that uses L1-balls will therefore always assign a positive probability to the transition towards i_M, even if this transition is impossible under the true MDP and much evidence has been accumulated against its existence. The exploration bonus of the optimistic procedure is thus wasted, whereas it could be used more efficiently to favor other transitions.
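Both consequences are easy to reproduce with the UCRL2-style L1 maximizer recalled above (a sketch; the tie-breaking by sorted value order is our choice):

```python
def l1_optimistic(p, V, eps1):
    """Maximize V'q over {q : ||p - q||_1 <= eps1}: put eps1/2 extra mass
    on the most valuable state, then remove the excess mass from the
    least valuable states."""
    n = len(p)
    order = sorted(range(n), key=lambda i: V[i])  # ascending value
    i_M = order[-1]
    q = list(p)
    q[i_M] = min(p[i_M] + eps1 / 2.0, 1.0)
    excess = sum(q) - 1.0
    for i in order[:-1]:                          # least valuable first
        removed = min(q[i], excess)
        q[i] -= removed
        excess -= removed
        if excess <= 0.0:
            break
    return q
```

With p_{i_m} below ε′/2 the observed worst transition is zeroed out (consequence 1), and with p_{i_M} = 0 the impossible best transition still receives mass (consequence 2).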
This explains a large part of the experimental advantage of KL-UCRL observed in the simulations. Indeed, q^{KL} always assigns strictly positive probability to observed transitions, and eventually renounces unobserved transitions even if the target states have a potentially large value. Algorithm 2 works as follows: for all i such that p_i > 0, q_i > 0; for all i such that p_i = 0, q_i = 0, except if p_{i_M} = 0 and f(V_{i_M}) < ε, in which case q_{i_M} = 1 − exp(f(V_{i_M}) − ε). This last case no longer occurs when ε becomes small enough, that is, when sufficiently many observations are available. We illustrate those two important differences in Figure 7, by representing the L1 and KL neighborhoods together with the maximizers q^{KL} and q¹, first when p_{i_m} is positive but very small, and second when p_{i_M} is equal to 0. Figure 8 also illustrates the latter case, by representing the evolution of the probability vector q that maximizes both (9) and (3) for an example with p = (0.3, 0.7, 0)′, V = (1, 2, 3)′ and ε decreasing from 1/2 to 1/500.
Figure 6: The L1-neighborhood {q ∈ S_3 : ‖p − q‖_1 ≤ 0.2} (left) and the KL-neighborhood {q ∈ S_3 : KL(p, q) ≤ 0.02} (right) around the probability vector p = (0.15, 0.2, 0.65)′ (white star). The white points are the maximizers of Equations (3) and (9) with V = (0, 0.05, 1)′ (top) and V = (0, −0.05, 1)′ (bottom).
This section explains how to solve the optimization problem of Equation (3). In [12], a similar problem arises in a different context, and a somewhat different solution is proposed for the case where the p_i are all positive. As a problem of maximizing a linear function under convex constraints, it suffices to consider the Lagrangian function
Figure 8: Evolution of the probability vector q that maximizes (3) (top) and (9) (bottom), with p = (0.3, 0.7, 0)′, V = (1, 2, 3)′ and ε decreasing from 1/2 to 1/500.
If q is a maximizer, there exist λ ∈ R and ν, µ_i ≥ 0 (i = 1, . . . , N) such that the following conditions are simultaneously satisfied:
Let Z = {i : p_i = 0}. Conditions (10) to (13) imply that λ ≠ 0 and ν ≠ 0. For i ∉ Z, Equation (10) implies that q_i = λ p_i / (ν − µ_i − V_i). Since λ ≠ 0, q_i > 0 and then, according to (13),
Let r = Σ_{i∈Z} q_i. Summing over i ∉ Z and using Equations (14) and (12), we have
Using (14) and (15), we can write Σ_{i∉Z} p_i log(p_i/q_i) = f(ν) − log(1 − r), where f is defined in (4). Then, q satisfies condition (11) if and only if f(ν) = ε + log(1 − r).
Consider now the case where i ∈ Z. Let I* = Z ∩ argmax_i V_i. Note that, for all i ∈ Z \ I*, q_i = 0: otherwise, µ_i would have to be zero, and then ν = V_i according to (10), which would make the denominator in (14) negative for some components. According to (13), for all i ∈ I*, either q_i = 0 or µ_i = 0. The second case implies that ν = V_i, and having r > 0 then requires that f(V_i) < ε, so that the equation f(ν) = ε + log(1 − r) can be satisfied with r > 0. Therefore,
• if f(V_i) < ε for i ∈ I*, then ν = V_i and the constant r can be computed by solving the equation f(ν) = ε + log(1 − r); the values of q_i for i ∈ I* may then be chosen in any way such that Σ_{i∈I*} q_i = r;
• if f(V_i) ≥ ε for all i ∈ I*, then r = 0, q_i = 0 for all i ∈ Z, and ν is the solution of the equation f(ν) = ε.
Once ν and r have been determined, the other components of q can be computed according to (14): for all i ∉ Z, q_i = (1 − r) q̃_i / Σ_{j∉Z} q̃_j, where q̃_i = p_i / (ν − V_i).
In this section, a few properties of the function f defined in Equation (4) are stated, as this function plays a key role in the maximization procedure of Section 3.2.
Proposition 2 f is a convex, decreasing mapping from ]max_{i∉Z} V_i ; ∞[ onto ]0 ; ∞[.
Proof Using Jensen's inequality, it is easily shown that f decreases from +∞ to 0. The second derivative of f with respect to ν is equal to
If Z denotes a positive random variable such that P(Z = 1/(ν − V_i)) = p_i, then
Using the Cauchy-Schwarz inequality, we have E(Z²)² = E(Z^{3/2} Z^{1/2})² ≤ E(Z³)E(Z).
In addition, E(Z²)² ≥ E(Z²)E(Z)² by Jensen's inequality. These two inequalities show that f″(ν) ≥ 0.
As mentioned in Section 3.2, Newton's method can be applied to solve the equation f(ν) = ε for a fixed value of ε. When ε is close to 0, the solution of this equation is quite large, and an appropriate initialization accelerates convergence. Using a second-order Taylor-series approximation of f, it can be seen that, for ν near ∞, f(ν) = σ_{p,V}/(2ν²) + o(1/ν²), where σ_{p,V} = Σ_i p_i V_i² − (Σ_i p_i V_i)². The Newton iterations can thus be initialized by taking ν_0 = √(σ_{p,V}/(2ε)).
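The suggested initialization can be computed directly, as in the following sketch (the function name is ours):

```python
import math

def newton_init(p, V, eps):
    """Initial point nu_0 = sqrt(sigma_{p,V} / (2 eps)) for Newton's
    method, from the asymptotic expansion f(nu) ~ sigma_{p,V} / (2 nu^2)."""
    m1 = sum(pi * vi for pi, vi in zip(p, V))
    m2 = sum(pi * vi * vi for pi, vi in zip(p, V))
    sigma = m2 - m1 * m1   # sigma_{p,V}: the variance of V under p
    return math.sqrt(sigma / (2.0 * eps))
```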