Preprints

(2021) Léonard Blier, Yann Ollivier: Unbiased Methods for Multi-Goal Reinforcement Learning
In multi-goal reinforcement learning (RL) settings, the reward for each goal is sparse, and located in a small neighborhood of the goal. In large dimension, the probability of reaching a reward vanishes and the agent receives little learning signal. Methods such as Hindsight Experience Replay (HER) tackle this issue by also learning from realized but unplanned-for goals. But HER is known to introduce bias, and can converge to low-return policies by overestimating chancy outcomes. First, we vindicate HER by proving that it is actually unbiased in deterministic environments, such as many optimal control settings. Next, for stochastic environments in continuous spaces, we tackle sparse rewards by directly taking the infinitely sparse reward limit. We fully formalize the problem of multi-goal RL with infinitely sparse Dirac rewards at each goal. We introduce unbiased deep Q-learning and actor-critic algorithms that can handle such infinitely sparse rewards, and test them in toy environments.
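The hindsight relabeling idea that the abstract analyzes can be sketched in a few lines. This is an illustrative toy of HER's "final" strategy (replay each transition as if the episode's final achieved state had been the goal); the function name `her_relabel` and the transition-dict format are assumptions for the sketch, not the paper's implementation.

```python
def her_relabel(episode, reward_fn):
    """Hindsight relabeling ('final' strategy): replay every transition
    as if the episode's last achieved state had been the goal all along."""
    achieved_goal = episode[-1]["next_state"]  # realized but unplanned-for outcome
    return [
        {
            "state": t["state"],
            "action": t["action"],
            "next_state": t["next_state"],
            "goal": achieved_goal,  # substitute the achieved goal
            "reward": reward_fn(t["next_state"], achieved_goal),
        }
        for t in episode
    ]

# Sparse reward: 1 only when the goal is exactly reached.
sparse_reward = lambda s, g: 1.0 if s == g else 0.0

# A failed episode toward goal 9: every original reward is 0.
episode = [
    {"state": 0, "action": "right", "next_state": 1, "goal": 9, "reward": 0.0},
    {"state": 1, "action": "right", "next_state": 2, "goal": 9, "reward": 0.0},
]
relabeled = her_relabel(episode, sparse_reward)
# The last relabeled transition now carries a nonzero learning signal.
```

The bias the abstract refers to arises in stochastic environments: if state 2 was reached by chance, relabeling treats that chancy outcome as if the agent reliably achieves it.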

(2021) Léonard Blier, Corentin Tallec, Yann Ollivier: Learning Successor States and Goal-Dependent Values: A Mathematical Viewpoint
In reinforcement learning, temporal difference-based algorithms can be sample-inefficient: for instance, with sparse rewards, no learning occurs until a reward is observed. This can be remedied by learning richer objects, such as a model of the environment, or successor states. Successor states model the expected future state occupancy from any given state for a given policy and are related to goal-dependent value functions, which learn how to reach arbitrary states. We formally derive the temporal difference algorithm for successor state and goal-dependent value function learning, either for discrete or for continuous environments with function approximation. In particular, we provide finite-variance estimators even in continuous environments, where the reward for exactly reaching a goal state becomes infinitely sparse. Successor states satisfy more than just the Bellman equation: a backward Bellman operator and a Bellman-Newton (BN) operator encode path compositionality in the environment. The BN operator is akin to second-order gradient descent methods and provides the true update of the value function when acquiring more observations, with explicit tabular bounds. In the tabular case and with infinitesimal learning rates, mixing the usual and backward Bellman operators provably improves eigenvalues for asymptotic convergence, and the asymptotic convergence of the BN operator is provably better than TD, with a rate independent of the environment. However, the BN method is more complex and less robust to sampling noise. Finally, a forward-backward (FB) finite-rank parameterization of successor states enjoys reduced variance and improved samplability, provides a direct model of the value function, has fully understood fixed points corresponding to long-range dependencies, approximates the BN method, and provides two canonical representations of states as a byproduct.
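In the tabular case, the forward TD update for successor states mentioned above is short enough to sketch. This is a minimal illustration assuming discrete states and a fixed deterministic policy; the toy three-state chain and the names are my own, not from the paper.

```python
import numpy as np

def successor_td_update(M, s, s_next, gamma=0.9, lr=0.5):
    """One tabular TD update of the successor-state matrix M, where
    M[s, g] estimates the expected discounted future occupancy of g
    starting from s. TD target: one-hot(s) + gamma * M[s_next, :]."""
    target = np.eye(M.shape[0])[s] + gamma * M[s_next]
    M[s] += lr * (target - M[s])
    return M

n = 3
M = np.zeros((n, n))
# Deterministic chain 0 -> 1 -> 2 -> 2; repeated updates converge toward
# the true successor matrix (I - gamma * P)^{-1}.
for _ in range(1000):
    for s, s_next in [(0, 1), (1, 2), (2, 2)]:
        successor_td_update(M, s, s_next)
```

The fixed point is the closed-form successor matrix `(I - gamma * P)^{-1}`, which is what makes the analysis of the Bellman, backward Bellman, and Bellman-Newton operators tractable in the tabular setting.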

Publications

(2019) Corentin Tallec, Léonard Blier, Yann Ollivier: Making Deep Q-learning methods robust to time discretization, ICML 2019.
Despite remarkable successes, Deep Reinforcement Learning (DRL) is not robust to hyperparameterization, implementation details, or small environment changes (Henderson et al. 2017, Zhang et al. 2018). Overcoming such sensitivity is key to making DRL applicable to real-world problems. In this paper, we identify sensitivity to time discretization in near continuous-time environments as a critical factor; this covers, e.g., changing the number of frames per second, or the action frequency of the controller. Empirically, we find that Q-learning-based approaches such as Deep Q-learning (Mnih et al., 2015) and Deep Deterministic Policy Gradient (Lillicrap et al., 2015) collapse with small time steps. Formally, we prove that Q-learning does not exist in continuous time. We detail a principled way to build an off-policy RL algorithm that yields similar performance over a wide range of time discretizations, and confirm this robustness empirically.
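The collapse described above can be illustrated numerically. Near continuous time, Q(s, a) ≈ V(s) + dt · A(s, a), so the action-dependent part of the Q-function vanishes linearly with the time step dt. The numbers and function names below are illustrative only; this is the intuition, not the paper's algorithm.

```python
import numpy as np

def q_values(v, advantages, dt):
    """Near continuous time, Q(s, a) ~ V(s) + dt * A(s, a): the
    action-dependent part scales with dt, so as dt -> 0 all Q-values
    collapse onto V(s) and greedy action selection drowns in noise."""
    return v + dt * np.asarray(advantages)

def rescaled_advantage(q, v, dt):
    """Dividing out dt recovers an action ranking that is stable across
    time discretizations (the idea behind advantage-based remedies)."""
    return (q - v) / dt

v, adv = 1.0, np.array([0.5, -0.5])
for dt in (1.0, 0.1, 0.001):
    q = q_values(v, adv, dt)
    gap = q.max() - q.min()            # shrinks linearly with dt
    a = rescaled_advantage(q, v, dt)   # the same for every dt
```

At dt = 0.001 the two Q-values differ by only 0.001 while both sit near V(s) = 1.0, which is why any approximation error swamps the action choice; rescaling by dt restores a discretization-invariant quantity.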

(2019) Léonard Blier, Pierre Wolinski, Yann Ollivier: Learning with random learning rates, ECML 2019.
Hyperparameter tuning is a bothersome step in the training of deep learning models. One of the most sensitive hyperparameters is the learning rate of the gradient descent. We present the 'All Learning Rates At Once' (Alrao) optimization method for neural networks: each unit or feature in the network gets its own learning rate sampled from a random distribution spanning several orders of magnitude. This comes at practically no computational cost. Perhaps surprisingly, stochastic gradient descent (SGD) with Alrao performs close to SGD with an optimally tuned learning rate, for various architectures and problems. Alrao could save time when testing deep learning models: a range of models could be quickly assessed with Alrao, and the most promising models could then be trained more extensively. This text comes with a PyTorch implementation of the method, which can be plugged into an existing PyTorch model.
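The core sampling step of the scheme described above is simple to sketch: draw one learning rate per unit, log-uniformly over several orders of magnitude, and let each unit do SGD at its own rate. This is a NumPy toy conveying the idea, not the paper's PyTorch implementation; the ranges and names are illustrative.

```python
import numpy as np

def sample_alrao_lrs(n_units, lr_min=1e-5, lr_max=10.0, rng=None):
    """One learning rate per unit, sampled log-uniformly between
    lr_min and lr_max (spanning several orders of magnitude)."""
    if rng is None:
        rng = np.random.default_rng(0)
    return np.exp(rng.uniform(np.log(lr_min), np.log(lr_max), size=n_units))

def alrao_sgd_step(W, grad, unit_lrs):
    """SGD step where each output unit (row of W) uses its own rate."""
    return W - unit_lrs[:, None] * grad

lrs = sample_alrao_lrs(4)
W = np.zeros((4, 3))
grad = np.ones((4, 3))
W = alrao_sgd_step(W, grad, lrs)   # each row moved by its own learning rate
```

With rates spread over many orders of magnitude, some units learn at a near-optimal speed while others learn too slowly or diverge; the bet is that enough well-tuned units exist for the network as a whole to train.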

(2018) Léonard Blier, Yann Ollivier: The Description Length of Deep Learning Models, NeurIPS 2018. arXiv:1802.07044
Deep learning models often have more parameters than observations, and still perform well. This is sometimes described as a paradox. In this work, we show experimentally that despite their huge number of parameters, deep neural networks can compress the data losslessly even when taking the cost of encoding the parameters into account. Such a compression viewpoint originally motivated the use of variational methods in neural networks. However, we show that these variational methods provide surprisingly poor compression bounds, despite being explicitly built to minimize such bounds. This might explain the relatively poor practical performance of variational methods in deep learning. Better encoding methods, imported from the Minimum Description Length (MDL) toolbox, yield much better compression values on deep networks, corroborating the hypothesis that good compression on the training set correlates with good test performance.
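One of the encoding methods from the MDL toolbox, prequential (online) coding, can be illustrated on a toy estimator: encode each label with the model trained on all previous data, then update the model, so the total codelength is the accumulated log-loss. The toy Bernoulli model and the function name below are my own illustration, not the paper's deep-network experiments.

```python
import math

def prequential_codelength(labels, predict_proba, update):
    """Prequential (online) code: total bits = sum over t of
    -log2 p(y_t | y_<t), where the model only ever sees past data.
    Good compression here suggests good generalization."""
    bits = 0.0
    for y in labels:
        bits += -math.log2(predict_proba(y))
        update(y)
    return bits

# Toy model: Laplace-smoothed Bernoulli frequency estimator.
counts = {0: 1, 1: 1}

def predict(y):
    return counts[y] / (counts[0] + counts[1])

def update(y):
    counts[y] += 1

bits = prequential_codelength([1, 1, 1, 0, 1, 1], predict, update)
# fewer bits than the 6 of a uniform code over {0, 1}
```

The sequence is biased toward 1, and the online model exploits that regularity to encode it in under 6 bits; the same accounting, applied to networks trained incrementally, is what yields the nontrivial compression bounds in the paper.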
For a more detailed presentation of this work, with an introduction to Kolmogorov complexity and Solomonoff's induction theory, you can also read my Master Thesis: Universal Compression Bounds and Deep Learning.

Open source projects

World Models: Reimplementation of World Models (Ha and Schmidhuber 2018) in PyTorch, with additional experiments. Blog post, Github
Pyvarinf: Python library for Bayesian deep learning with PyTorch. Github
convnets-keras: Python library providing several trained convolutional neural networks for image classification and localisation, in Keras. Github
selective-inference: Software for post-model-selection statistical tests with selective inference. Github