(presentation in Russian)
Policy evaluation is an important instrument for comparing different algorithms in Reinforcement Learning (RL). Yet even precise knowledge of the value function V^{\pi} corresponding to a policy \pi does not reliably indicate how far \pi is from the optimal policy. We present a novel model-free upper value iteration procedure (UVIP) that allows us to estimate the suboptimality gap V*(x) - V^{\pi}(x) from above and to construct confidence intervals for V*. Our approach relies on upper bounds on the solution of the Bellman optimality equation constructed via a martingale approach. We provide theoretical guarantees for UVIP under general assumptions and illustrate its performance on a number of benchmark RL problems. The talk is based on our recent work with Denis Belomestny, Ilya Levin, Eric Moulines, Alexey Naumov and Veronika Zorina.
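For intuition, the sketch below illustrates the sandwich V^{\pi} <= V* <= V^{up} on a small tabular MDP with a known model: a zero-mean, martingale-type correction built from V^{\pi} is subtracted inside the max, and by Jensen's inequality the fixed point of the resulting operator dominates V*. This is only a toy, model-based illustration of the upper-bounding idea, not the model-free UVIP analysed in the paper; the random MDP, the constants, and the Monte Carlo settings are assumptions made for this example.

```python
# Toy illustration (NOT the paper's UVIP): martingale-type upper bound on V*
# for a small tabular MDP with a known model. All sizes/constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 5, 3, 0.9

# Random MDP: P[s, a] is a distribution over next states, r[s, a] a deterministic reward.
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # shape (nS, nA, nS)
r = rng.uniform(0.0, 1.0, size=(nS, nA))

# A fixed (generally suboptimal) policy pi: uniform over actions.
pi = np.full((nS, nA), 1.0 / nA)

# Policy evaluation: V^pi solves (I - gamma * P_pi) V = r_pi.
P_pi = np.einsum("sa,san->sn", pi, P)
r_pi = np.einsum("sa,sa->s", pi, r)
V_pi = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)

# Exact value iteration for V* (ground truth, only to check the sandwich).
V_star = np.zeros(nS)
for _ in range(1000):
    V_star = np.max(r + gamma * (P @ V_star), axis=-1)

# Upper bound: with h := V^pi, iterate
#   T_up V(x) = E_xi[ max_a { r(x,a) + gamma*V(Y_a(xi)) - gamma*(h(Y_a(xi)) - (P h)(x,a)) } ],
# where all actions share the same noise xi. The subtracted term has zero mean, so by
# Jensen's inequality T_up V >= T V pointwise and the fixed point dominates V*.
Ph = P @ V_pi                         # (P h)(x, a), shape (nS, nA)
cdf = np.cumsum(P, axis=-1)           # inverse-CDF sampling with common noise
n_mc, n_iter = 4000, 100
V_up = V_pi.copy()
for _ in range(n_iter):
    xi = rng.uniform(size=(n_mc, 1, 1))                 # one uniform shared by all actions
    new_V = np.empty(nS)
    for s in range(nS):
        Y = (xi > cdf[s][None]).sum(axis=-1)            # next states, shape (n_mc, nA)
        Y = np.minimum(Y, nS - 1)                       # guard against round-off
        inner = r[s] + gamma * (V_up[Y] - V_pi[Y] + Ph[s])
        new_V[s] = inner.max(axis=1).mean()
    V_up = new_V

print("V_pi  :", np.round(V_pi, 3))
print("V_star:", np.round(V_star, 3))
print("V_up  :", np.round(V_up, 3))    # up to Monte Carlo error: V_pi <= V_star <= V_up
```

In the actual UVIP the transition model is unknown, and the expectations above are replaced by sample-based estimates, which is what makes the procedure model-free.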
Speaker: Sergey Samsonov, Lecturer at the Big Data and Information Retrieval School.
June 1, 2021
HDI Lab