Ivan Dokmanic, University of Illinois
Date and Time: Apr 11, 2018 (12:30 PM)
Location: Orchard room (3280) at the Wisconsin Institute for Discovery Building
What do interstellar navigation and protein shape reconstruction have in common? They both require us to solve the quintessential distance geometry problem: find a set of points that realizes an observed set of distances. Though stars, spaceships, and proteins all tend to move, the textbook version of the distance geometry problem remains static. It also tacitly assumes we know which points belong to which distances. In this talk, I will explore what happens when we depart from the textbook setting, motivating the departures by my previous work on room shape reconstruction from acoustic echoes.
I will first show how the room-from-echoes problem leads to a variant of simultaneous localization and mapping (SLAM) which we call EchoSLAM. Different sensor configurations give rise to different formulations, and Euclidean distance matrices (EDMs) play a key role in solving them. But vanilla EDMs fail to model the dynamics once things start to move. Enter Kinetic EDMs—our upgrade to trusty old EDMs whose entries now become functions of time. I will show how to adapt the usual EDM algorithms based on semidefinite programming to benefit from the motion, a stunt that hinges on a new result in polynomial matrix factorization.
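To make the role of EDMs concrete, here is a minimal sketch (not the speaker's code) of the standard static pipeline the talk departs from: classical multidimensional scaling recovers a point set from a Euclidean distance matrix, up to a rigid transformation. The function name `points_from_edm` is illustrative.

```python
import numpy as np

def points_from_edm(D, dim):
    """Recover a point set from a matrix of squared distances via classical MDS."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    G = -0.5 * J @ D @ J                     # Gram matrix of centered points
    vals, vecs = np.linalg.eigh(G)
    idx = np.argsort(vals)[::-1][:dim]       # top-`dim` eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Example: three labeled points in the plane.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
D = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # squared EDM
Y = points_from_edm(D, dim=2)
D_rec = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
assert np.allclose(D, D_rec)   # the recovered points reproduce the distances
```

Note that this static recipe assumes a complete, labeled, time-invariant distance matrix; the Kinetic EDMs described above replace the fixed entries with functions of time.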
We will then see how the tendency of echoes to sound the same regardless of where they're coming from leads to the unlabeled distance geometry problem: we lose correspondence between distances and points. Reconstruction from such unlabeled distance data is related to network tomography, and it arises in localization from multipath, protein shape reconstruction, and nanostructure determination from powder diffraction. I will discuss connections with phase retrieval and introduce our approach based on empirical measure matching—the first robust formulation for the unlabeled distance problem.
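The loss of labels can be illustrated on a line, where the unlabeled problem reduces to the classic turnpike setting: we observe the multiset of pairwise distances but not which pair of points produced each one. A brute-force sketch (purely illustrative, not the measure-matching approach mentioned above; `turnpike_bruteforce` is a hypothetical name) shows that several point sets can match the same unlabeled data:

```python
import itertools

def turnpike_bruteforce(dists, n, grid):
    """Find all n-point subsets of `grid` whose unordered pairwise
    distances match the observed multiset `dists`."""
    target = sorted(dists)
    solutions = []
    for pts in itertools.combinations(grid, n):
        d = sorted(abs(a - b) for a, b in itertools.combinations(pts, 2))
        if d == target:
            solutions.append(pts)
    return solutions

# The points {0, 1, 4, 6} produce the unlabeled distances {1, 2, 3, 4, 5, 6}.
dists = [1, 2, 3, 4, 5, 6]
sols = turnpike_bruteforce(dists, n=4, grid=range(7))
assert (0, 1, 4, 6) in sols   # the generating set is among the matches
```

Even in this toy case the search is combinatorial, which is why a robust formulation matters once the distances are also noisy.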
Joint work with Miranda Kreković, Puoya Tabaghi, Shuai Huang, and Martin Vetterli.
Video: To be posted at a later date.