Date and Time: Jan 31, 2018 (12:30 PM)
Location: Orchard room (3280) at the Wisconsin Institute for Discovery Building
Learning or estimating networks from multivariate time series data
is an important problem that arises in many applications, including
computational neuroscience and social network analysis. Prior
approaches either do not scale to multiple time series or rely on
very restrictive parametric assumptions. In this talk, I present two
approaches that provide learning guarantees for large-scale
multivariate time series. The first is a parametric regularized GLM
framework with nonlinear clipping and saturation effects that
accommodates a variety of low-dimensional structures. The second is
a nonparametric sparse additive model framework. Learning guarantees
are provided in both cases, and the theoretical results are supported
by both simulations and performance comparisons on several data
examples.
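The abstract does not give implementation details for either framework. As a purely illustrative sketch of the general idea behind regularized network estimation from multivariate time series (here a simple l1-penalized linear autoregression via scikit-learn's `Lasso`, with synthetic data, penalty strength, and edge threshold all chosen for illustration, not taken from the talk):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p, T = 5, 500

# True sparse network: series 0 drives series 1, series 2 drives series 3.
A = np.zeros((p, p))
A[1, 0] = 0.8
A[3, 2] = -0.6

# Simulate a first-order vector autoregression X[t] = A X[t-1] + noise.
X = np.zeros((T, p))
for t in range(1, T):
    X[t] = A @ X[t - 1] + rng.standard_normal(p)

# Estimate the network: regress each series on the lagged values of
# all series with an l1 penalty; large coefficients are estimated edges.
A_hat = np.zeros((p, p))
for i in range(p):
    model = Lasso(alpha=0.1).fit(X[:-1], X[1:, i])
    A_hat[i] = model.coef_

edges = {(i, j) for i in range(p) for j in range(p)
         if abs(A_hat[i, j]) > 0.2}
print(sorted(edges))
```

With enough samples and a sufficiently sparse true network, the nonzero pattern of the penalized coefficients recovers the two true edges; the talk's frameworks extend this idea to nonlinear GLM link functions with clipping/saturation and to nonparametric additive components.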