Conventional wisdom in machine learning theory favors simple models, but this view misses the bigger picture, especially for over-parameterized neural networks (NNs), where the number of parameters far exceeds the number of training data points. Our goal is to explore the mystery behind NNs from a theoretical perspective.
In this talk, I will discuss the role of over-parameterization in neural networks, aiming to understand theoretically why they can perform well. First, I will talk about the robustness of neural networks, as affected by architecture and initialization, from a function-space viewpoint. This aims to answer a fundamental question: does over-parameterization in NNs help or hurt robustness? Second, I will talk about why deep reinforcement learning works well for function approximation. Potential future directions and related topics, e.g., trustworthy ML, will also be briefly discussed.
Speaker: Dr Fanghui LIU
Date: 17 March 2023 (Friday)
Time: 3:30pm – 4:30pm
Dr Fanghui LIU is currently a Postdoctoral Fellow at École Polytechnique Fédérale de Lausanne (EPFL), and was previously a postdoctoral researcher at ESAT-STADIUS, KU Leuven. He obtained his PhD degree from the Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, in 2019. His research interests include machine learning, kernel methods, and learning theory. His work has led to publications in JMLR, TPAMI, and NeurIPS, among other venues, as well as tutorials at CVPR 2023 and ICASSP 2023.