Suriya Gunasekar, Toyota Technological Institute at Chicago
Date and Time: Mar 21, 2018 (12:30 PM)
Location: Orchard room (3280) at the Wisconsin Institute for Discovery Building
In this talk, we will explore the implicit bias of generic optimization methods and its connection to generalization in ill-posed optimization problems. We will specifically study optimizing underdetermined linear regression and separable linear classification problems using common optimization methods, including mirror descent, natural gradient descent, and steepest descent with respect to different potentials and norms. We ask whether the specific global minimum (among the many possible global minima) reached by a given optimization algorithm can be characterized in terms of the geometry of the updates determined by the potential or norm, independently of hyperparameter choices such as step size and momentum.
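As a concrete instance of the phenomenon the abstract describes (a minimal numerical sketch, not material from the talk itself): plain gradient descent on an underdetermined least-squares problem, initialized at zero, converges to the minimum Euclidean-norm solution among all the global minima — the update geometry, not the loss alone, selects the solution. The problem sizes and step-size choice below are illustrative assumptions.

```python
import numpy as np

# Underdetermined linear regression: fewer equations than unknowns,
# so the least-squares loss has infinitely many global minima.
rng = np.random.default_rng(0)
n, d = 20, 50
A = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Gradient descent on 0.5 * ||A w - y||^2, started at w = 0.
# Each update lies in the row space of A, so the component of w in the
# null space of A stays zero throughout training.
w = np.zeros(d)
lr = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size
for _ in range(20000):
    w -= lr * A.T @ (A @ w - y)

# The minimum l2-norm interpolating solution, via the pseudoinverse.
w_min_norm = np.linalg.pinv(A) @ y

print(np.linalg.norm(w - w_min_norm))  # distance shrinks toward 0
```

Changing the geometry of the updates — e.g. running mirror descent with a different potential — selects a different global minimum, which is exactly the characterization question the abstract poses.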