Speakers: Mariann Ollar and Eric Hall
Date and Time: Mar 12, 2014 (12:30 PM)
Location: Orchard room (3280) at the Wisconsin Institute for Discovery Building
That prices aggregate dispersed information is a longstanding claim underscoring the importance of markets. Yet formalizing exactly how a price may incorporate individually held information has been a challenging task. I present one particular approach to information aggregation through price.
The framework is a double auction mechanism modeled as an incomplete information game. Traders hold individual information about the relevant values; 'value' might capture many things, from product quality to differences in tastes or resale opportunities. In the double auction, traders submit bid functions specifying a quantity for every price. After bids are submitted, the market clearing price is centrally determined as the price at which the quantity offered for sale equals the quantity demanded.
Because the market clearing price is determined by the bid functions, it conveys an aggregate of all traders' information. These two facts, that bids are expressed as functions of price and that the price aggregates other traders' information, allow traders to condition on other traders' information at the moment they submit their bids. It is worth noting that learning is not explicitly modeled (there are no information channels or dynamic announcements); rather, it is a byproduct of optimal trade.
In the presentation, I will focus on the project 'Information Aggregation with Correlated Measurement Error' and also mention ongoing work on 'Privacy Preserving Mechanism Design', all joint with Marzena Rostek.
We describe a novel approach to online convex programming in dynamic settings. Many existing online learning methods are characterized via regret bounds that quantify the gap between the performance of the online algorithm and that of a comparator. In previous work, this comparator was either considered static over time or admitted sublinear tracking regret bounds only when it was piecewise constant or slowly varying. In contrast, the proposed method determines the best dynamical model from a parametric family to incorporate into the prediction process, and admits regret bounds that scale with the deviation of a comparator from that dynamical model. In other words, this approach can yield low regret relative to highly variable, dynamic comparators. This result is proved for loss functions corresponding to the negative log likelihood of an exponential family probability distribution, and several properties of the exponential family are exploited to ensure low regret. We take this formulation and show specifically how it applies to learning social network structure by learning the parameters of a Hawkes process, where, instead of observing the actual links between agents in the network, we observe only the agents' actions.
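The central idea above, a corrective gradient step on each round's loss followed by a predictive step through a parametric dynamical model, can be sketched as follows. This is a minimal illustration under assumptions of ours (a scalar parameter, squared loss, and a known linear drift model), not the authors' algorithm, and all names here are illustrative:

```python
def dynamic_online_learner(grads, dynamics, theta0, eta=0.1):
    """Online learning tracked against a dynamic comparator (sketch).

    Each round: take a gradient step on the current loss (corrective step),
    then advance the estimate through the dynamical model (predictive step).
    Regret is then low relative to comparators that approximately follow
    the same dynamics, even if they vary substantially over time.
    """
    theta = float(theta0)
    path = []
    for g in grads:
        theta = theta - eta * g(theta)  # corrective step on round-t loss
        theta = dynamics(theta)         # predictive step via the dynamics
        path.append(theta)
    return path

# Illustration: track a target drifting by 0.1 per round under squared loss.
xs = [0.1 * t for t in range(1, 51)]          # drifting targets
grads = [lambda th, x=x: th - x for x in xs]  # gradient of 0.5 * (th - x)**2
path = dynamic_online_learner(grads, lambda th: th + 0.1, theta0=0.0)
```

Because the drift model matches the target's motion exactly in this toy example, the tracking error shrinks geometrically; a static comparator would accumulate linear regret against the same drifting target.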