Host: Associate Professor Huang Haiping (黄海平)
Abstract: Learning in deep neural networks (DNNs) is implemented by minimizing a highly non-convex loss function, typically with a stochastic gradient descent (SGD) method. This learning process can effectively find good wide minima without becoming trapped in poor local ones. In this talk, I will present a novel account of how such effective deep learning emerges from the interaction between SGD and the geometrical structure of the loss landscape.
Rather than being a normal diffusion process (i.e., Brownian motion), as is often assumed, SGD exhibits rich, complex dynamics when navigating the loss landscape: initially it shows anomalous superdiffusion, which attenuates gradually and crosses over to subdiffusion at long times, once a solution is reached. By adapting methods developed for studying energy landscapes in complex physical systems, I found that these superdiffusive learning dynamics arise from the interaction between SGD and the fractal-like structure of the loss landscape. The results thus reveal the effectiveness of deep learning from a novel perspective and have implications for designing efficient deep neural networks.
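The superdiffusion/subdiffusion distinction is usually read off from how the mean-squared displacement (MSD) of the weights scales with time, MSD(t) ~ t^alpha, with alpha > 1 indicating superdiffusion and alpha < 1 subdiffusion. The following is a minimal sketch, not taken from the talk, of how one might estimate that exponent for SGD on a toy model; the network, data, learning rate, and fitting window are all illustrative assumptions.

```python
# Sketch: estimate the MSD exponent alpha of SGD weight dynamics,
# MSD(t) ~ t^alpha. The toy model, random data, and hyperparameters
# below are assumptions for illustration only.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 20)                  # synthetic inputs
y = torch.randint(0, 2, (512,))           # synthetic binary labels

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

def flat_params(m):
    # Flatten all weights into one vector to track motion in weight space.
    return torch.cat([p.detach().flatten() for p in m.parameters()]).numpy()

w0 = flat_params(model)
msd = []
for step in range(2000):
    idx = torch.randint(0, 512, (32,))    # mini-batch sampling drives the noise
    loss = loss_fn(model(X[idx]), y[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
    msd.append(np.sum((flat_params(model) - w0) ** 2))

# Fit alpha on a log-log scale over an intermediate time window.
t = np.arange(1, len(msd) + 1)
alpha = np.polyfit(np.log(t[10:500]), np.log(np.array(msd[10:500])), 1)[0]
print(f"estimated diffusion exponent alpha ~ {alpha:.2f}")
```

Repeating the fit over early versus late windows of training is one simple way to see the crossover from superdiffusive to subdiffusive behaviour described above.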
Speaker bio: Chen Guozhang (陈国璋) received his Ph.D. from the School of Physics at the University of Sydney in 2020; his main research area is computational neuroscience.