Statistical Programs
College of Agricultural
and Life Sciences
University of Idaho
Seminar Announcement
"Applied Statistics in Agriculture"
Evidence, Errors, and AIC

Presented By
Dr. Brian C. Dennis
Department of Fish and Wildlife Resources
and
Department of Statistics
University of Idaho

Tuesday, February 7
3:30 P. M.
Ag. Science 62

      The information-theoretic indices for statistical model selection have been popularized in the sciences as a framework for selecting the best model from among a set of candidate models to describe a given data set. The approach was pioneered by Professor Hirotugu Akaike in the 1970s, and his original model selection index, known as AIC, is widely used. The idea of putting all candidate models on a level playing field, so to speak, has been attractive enough to inspire polemic journal articles in various scientific fields advocating the abandonment of Neyman-Pearson-style statistical hypothesis testing. Certainly, the information-theoretic indices have given us far more coherent alternatives to such strange contraptions as stepwise regression.
      Yet much remains unknown about the information-theoretic indices. It is not yet clear how scientifically persuasive we should regard an argument based on model selection. The various indices are constructed as statistically consistent estimates of the relative distances (in the sense of Kullback-Leibler discrepancy) of a pair of models from the stochastic mechanism that generated the data. In that sense, the information-theoretic indices (A) are frequentist (and, most importantly, not some new inferential principle), and (B) have the philosophical status of point estimation, due to the lack of any estimates of error rates.
      I examine the information-theoretic indices using the statistical idea of evidence functions, as described by Richard Royall (1997) and extended by Subhash Lele (2004). An evidence function compares two statistical models with respect to some relative divergence-from-truth measure and estimates which model is closer. The key property is a frequentist error criterion: the probabilities of misleading evidence, whichever of the two models generated the data, should go to zero as the sample size becomes large. The original AIC, regarded as an evidence function, fails this criterion and in fact has error properties resembling those of statistical hypothesis testing. Some of the other information-theoretic indices pass the criterion and can therefore be regarded as evidence functions.
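The idea of AIC as an estimated relative Kullback-Leibler distance can be sketched with a small, self-contained example (not taken from the talk; the simulated data, the two candidate models, and all variable names below are hypothetical illustrations, using the standard formula AIC = 2k - 2 log L with k the number of fitted parameters):

```python
import math
import random

random.seed(1)
# Simulate data from an exponential distribution, standing in for the
# unknown "stochastic mechanism that generated the data".
data = [random.expovariate(1.0) for _ in range(200)]
n = len(data)
mean = sum(data) / n

# Candidate model 1: exponential with rate lambda (k = 1 parameter).
# MLE: lambda_hat = 1 / sample mean.
lam = 1.0 / mean
loglik_exp = n * math.log(lam) - lam * sum(data)
aic_exp = 2 * 1 - 2 * loglik_exp

# Candidate model 2: normal with mean mu, variance sigma^2 (k = 2 parameters).
# Maximized log-likelihood at the MLEs has the closed form below.
var = sum((x - mean) ** 2 for x in data) / n
loglik_norm = -0.5 * n * (math.log(2 * math.pi * var) + 1)
aic_norm = 2 * 2 - 2 * loglik_norm

# The AIC difference estimates which model is closer (in K-L discrepancy)
# to the generating mechanism: smaller AIC is estimated to be closer.
delta = aic_norm - aic_exp
print(f"AIC(exponential) = {aic_exp:.2f}")
print(f"AIC(normal)      = {aic_norm:.2f}")
print(f"Delta AIC        = {delta:.2f}  (positive favors the exponential model)")
```

Note that the comparison yields only a point estimate of relative distance: nothing in the AIC difference itself reports an error rate, which is the issue the evidence-function framework addresses.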
All interested faculty, staff, and graduate students are invited to attend.

