The weekly SILO Seminar Series is made possible through the generous support of the 3M Company and its Advanced Technology Group


Regularizer design for structured learning

Amin Jalali, Graduate Student, University of Washington

Date and Time: Jul 07, 2016 (4:00 PM)
Location: Orchard room (3280) at the Wisconsin Institute for Discovery Building


The design and analysis of tractable methods for estimating structured models from massive, high-dimensional datasets is a long-standing research topic in statistics, machine learning, and engineering. Regularization, the idea of simultaneously optimizing a data-fidelity term (the loss) and a structure-promoting term (the regularizer), is a widely used approach in machine learning and signal processing tasks. An appropriate regularizer helps exploit prior structural information about the underlying model. My work has focused on exploring new structures and devising convex machinery for exploiting them.
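As a concrete illustration of the loss-plus-regularizer template described above (not part of the talk itself), the sketch below solves the classic lasso problem, min 0.5‖Ax − b‖² + λ‖x‖₁, by proximal gradient descent; here the ℓ₁ norm is the structure-promoting regularizer encouraging sparsity. The step size and iteration count are illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (entrywise shrinkage toward zero).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, step, n_iter=500):
    # Proximal gradient (ISTA) for min 0.5*||Ax - b||^2 + lam*||x||_1:
    # gradient step on the smooth loss, prox step on the regularizer.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]              # sparse ground truth
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz constant of the gradient
x_hat = ista(A, b, lam=0.1, step=step)
```

With noiseless data and a small λ, the recovered `x_hat` is sparse and close to `x_true`, showing how the regularizer steers the estimate toward the assumed structure.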

In this talk, I will discuss a class of regularized loss minimization problems in which structure in both the loss and the regularizer can be exploited for computational purposes. I will briefly review current approaches to regularizer design, such as the atomic norm framework, before proposing a rich class of regularizers, variational Gram functions (VGFs), which encompasses many existing regularization functions as well as important new ones. We derive conditions for convexity, study their conjugate functions and semidefinite representations, and present efficient algorithms for solving structured convex optimization problems involving VGFs. We also establish a general kernel trick and a representer theorem for such regularized problems. Throughout the talk, I will point to open questions on variational Gram representations from design, statistical, and computational points of view.
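The abstract does not define VGFs; in the literature they take the variational form Ω(X) = max over M in a compact convex set 𝓜 of ⟨XX^T, M⟩, i.e. a support function evaluated at the Gram matrix of X. Under that assumption, the sketch below checks numerically that for the illustrative choice 𝓜 = {M ⪰ 0, tr(M) ≤ 1} the VGF reduces to the largest eigenvalue of XX^T, the squared spectral norm of X.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
G = X @ X.T                                # Gram matrix XX^T

# For the set M = {M PSD, tr(M) <= 1}, the maximum of <G, M> is
# attained at a rank-one M aligned with the top eigenvector of G,
# so the VGF value equals lambda_max(G) = sigma_max(X)^2.
vgf_value = np.linalg.eigvalsh(G)[-1]      # eigvalsh returns ascending order

# Sanity check: no randomly drawn feasible M beats the closed form.
for _ in range(1000):
    B = rng.standard_normal((5, 5))
    M = B @ B.T                            # PSD by construction
    M /= np.trace(M)                       # scale so tr(M) = 1 (feasible)
    assert np.trace(G @ M) <= vgf_value + 1e-9
```

Different choices of the set 𝓜 recover different existing regularizers, which is what makes the variational viewpoint a unifying design tool.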