I am starting a CIFRE PhD program with Yann Ollivier. My research interests include Artificial Intelligence, Deep Learning, Reinforcement Learning, Information Theory, Kolmogorov Complexity and Solomonoff's induction theory, Learning Theory, Bayesian Statistics, ...
(2018) Léonard Blier, Pierre Wolinski, Yann Ollivier: Learning with random learning rates, Preprint.
Hyperparameter tuning is a bothersome step in the training of deep learning models. One of the most sensitive hyperparameters is the learning rate of the gradient descent. We present the 'All Learning Rates At Once' (Alrao) optimization method for neural networks: each unit or feature in the network gets its own learning rate, sampled from a random distribution spanning several orders of magnitude. This comes at practically no computational cost. Perhaps surprisingly, stochastic gradient descent (SGD) with Alrao performs close to SGD with an optimally tuned learning rate, for various architectures and problems. Alrao could save time when testing deep learning models: a range of models could be quickly assessed with Alrao, and the most promising models could then be trained more extensively. This text comes with a PyTorch implementation of the method, which can be plugged into an existing PyTorch model.
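The core idea of the abstract above can be sketched in a few lines of PyTorch. This is a hypothetical, simplified illustration, not the released Alrao implementation: each output unit of a layer gets its own learning rate, sampled log-uniformly over several orders of magnitude, and the update scales each unit's gradient row accordingly. The layer sizes, bounds `lr_min` and `lr_max`, and the toy loss are all illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

def sample_unit_lrs(n_units, lr_min=1e-5, lr_max=10.0):
    # Log-uniform sampling: learning rates span several orders of magnitude.
    return torch.exp(torch.empty(n_units).uniform_(math.log(lr_min), math.log(lr_max)))

torch.manual_seed(0)
layer = nn.Linear(4, 3)
unit_lrs = sample_unit_lrs(layer.out_features)  # one learning rate per output unit

x = torch.randn(8, 4)
loss = layer(x).pow(2).mean()  # toy objective, for illustration only
loss.backward()

with torch.no_grad():
    # Each unit's incoming weights (one row of the weight matrix) and its
    # bias are updated with that unit's own learning rate.
    layer.weight -= unit_lrs[:, None] * layer.weight.grad
    layer.bias -= unit_lrs * layer.bias.grad
```

In the full method, a standard optimizer handles the updates and the output layer is treated specially (via a model average over copies with different rates); the sketch only shows the per-unit sampling that gives Alrao its name.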
(2018) Léonard Blier, Yann Ollivier: The Description Length of Deep Learning Models, NeurIPS 2018.
Deep learning models often have more parameters than observations, and still perform well. This is sometimes described as a paradox. In this work, we show experimentally that despite their huge number of parameters, deep neural networks can compress the data losslessly even when taking the cost of encoding the parameters into account. Such a compression viewpoint originally motivated the use of variational methods in neural networks. However, we show that these variational methods provide surprisingly poor compression bounds, despite being explicitly built to minimize such bounds. This might explain the relatively poor practical performance of variational methods in deep learning. Better encoding methods, imported from the Minimum Description Length (MDL) toolbox, yield much better compression values on deep networks, corroborating the hypothesis that good compression on the training set correlates with good test performance.
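One of the encoding methods from the MDL toolbox discussed above, prequential (online) coding, is easy to illustrate: the codelength of the labels is the cumulative log-loss, in bits, of a model that predicts each label before training on it. The sketch below is a minimal toy version under assumed data and a tiny linear model, not the paper's experimental setup.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(200, 5)
y = (X[:, 0] > 0).long()  # toy binary labels, for illustration only

model = nn.Linear(5, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

codelength_bits = 0.0
for t in range(len(X)):
    xt, yt = X[t : t + 1], y[t : t + 1]
    with torch.no_grad():
        # Pay -log2 p(y_t | x_t) under the model trained on the first t points...
        codelength_bits += loss_fn(model(xt), yt).item() / math.log(2)
    # ...then update the model on (x_t, y_t) and move on.
    opt.zero_grad()
    loss_fn(model(xt), yt).backward()
    opt.step()

# Baseline: a uniform code spends exactly 1 bit per binary label.
uniform_bits = float(len(X))
print(f"prequential codelength: {codelength_bits:.1f} bits (uniform: {uniform_bits:.0f} bits)")
```

If the model generalizes, the cumulative log-loss falls below the uniform baseline, which is exactly the sense in which a learning algorithm losslessly compresses the training labels.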
For a more detailed presentation of this work, with an introduction to Kolmogorov Complexity theory and Solomonoff's induction theory, you can also read my Master's thesis: Universal Compression Bounds and Deep Learning
Open source projects
World Models: Reimplementation of World Models (Ha and Schmidhuber, 2018) in PyTorch, with additional experiments. Blog post, Github
Pyvarinf: Python library for Bayesian Deep Learning with PyTorch. Github
convnets-keras: Python library providing several pretrained convolutional neural networks for image classification and localisation, in Keras. Github
selective-inference: Software for post-model-selection statistical tests with selective inference. Github