The weekly SILO Seminar Series is made possible through the generous support of the 3M Company and its Advanced Technology Group

with additional support from the Analytics Group of the Northwestern Mutual Life Insurance Company

__Rob Nowak__, *Professor, ECE Department*

**Date and Time:** Mar 23, 2011 (12:30 PM)

**Location:**
Orchard Room (3280) at the Wisconsin Institute for Discovery Building

Statistical testing is ubiquitous in engineering and science. The outcomes of tests, and the significance we attach to them, have profound effects on society; for example, they are the basis for many public health and other social-engineering initiatives. It is therefore crucial that we agree on, and understand, what it means for a result to be significant. My talk will discuss a few issues related to the statistical significance of hypothesis testing.

I'll begin on a lighter note with an issue raised in criticism of a recent research study on ESP (extrasensory perception); see the New York Times article on the controversy: http://www.nytimes.com/2011/01/11/science/11esp.html?_r=1 . At issue is what constitutes a statistically significant finding, and it may surprise you that even the experts don't all agree.

Then I'll discuss research done in collaboration with Rui Castro, Jarvis Haupt, and Matt Malloy on high-dimensional multiple testing problems. For example, consider testing to decide which of n > 1 genes are differentially expressed in a certain disease. Suppose each test takes the form H0: X ~ N(0,1) vs. H1: X ~ N(m,1), for m > 0, where N(m,1) denotes the Gaussian distribution with mean m and variance 1. When n is large, reliable decisions are possible only if the "signal amplitude" m exceeds sqrt(2 log n), simply because the magnitude of the largest of n independent N(0,1) noises is on the order of sqrt(2 log n). Non-sequential methods cannot overcome this curse of dimensionality. Sequential methods, however, can break it by focusing measurement and experimentation resources on certain components at the expense of others. I will discuss a simple sequential method, in the spirit of classical sequential probability ratio testing, that is reliable as long as the signal amplitude satisfies m > sqrt(4 log s), where s is the number of tests for which the truth is H1. In many applications s is much, much smaller than n, and so the gains are "significant".
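The two phenomena above can be seen in a short simulation. The sketch below is illustrative only: the sequential rule is a simplified "keep components whose fresh measurement is positive" pass, in the spirit of the sequential methods described in the talk but not the authors' exact procedure, and the particular values of n, s, and the number of passes are assumptions chosen for the demo.

```python
import math
import random

random.seed(0)

n, s = 16_384, 8                    # n tests; the first s indices are truly H1
m = math.sqrt(4 * math.log(s))      # ~2.88: well below sqrt(2 log n)
bound = math.sqrt(2 * math.log(n))  # ~4.41: the size of the largest of n noises

# The curse: the biggest of n pure-noise N(0,1) draws already reaches
# roughly sqrt(2 log n), so a single-shot (non-sequential) test cannot
# reliably detect signals weaker than that.
max_noise = max(random.gauss(0.0, 1.0) for _ in range(n))
print(f"max of {n} N(0,1) draws = {max_noise:.2f} vs sqrt(2 log n) = {bound:.2f}")

# Toy sequential thresholding (simplified sketch): on each pass, take one
# fresh measurement of every retained index and discard those that come out
# negative. Each pass halves the surviving nulls, while a signal of
# amplitude m ~ 2.88 survives a pass with probability Phi(m) ~ 0.998.
survivors = list(range(n))
for _ in range(12):
    survivors = [i for i in survivors
                 if (m if i < s else 0.0) + random.gauss(0.0, 1.0) > 0.0]

hits = sum(1 for i in survivors if i < s)
print(f"survivors: {len(survivors)} (true signals recovered: {hits} of {s})")
```

Note the measurement budget: n + n/2 + n/4 + ... is only about 2n total measurements, yet after a dozen passes almost all of the weak signals survive while nearly all of the ~16,000 nulls have been discarded, even though each signal sits far below the sqrt(2 log n) level that a non-sequential test would require.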