What you see may not be what you get: a brief, nontechnical introduction to overfitting in regression-type models

MA Babyak - Psychosomatic Medicine, 2004 - journals.lww.com
Abstract
Objective:
Statistical models, such as linear or logistic regression or survival analysis, are frequently used as a means to answer scientific questions in psychosomatic research. Many who use these techniques, however, apparently fail to appreciate fully the problem of overfitting, ie, capitalizing on the idiosyncrasies of the sample at hand. Overfitted models will fail to replicate in future samples, thus creating considerable uncertainty about the scientific merit of the finding. The present article is a nontechnical discussion of the concept of overfitting and is intended to be accessible to readers with varying levels of statistical expertise. The notion of overfitting is presented in terms of asking too much of the available data. Given a certain number of observations in a data set, there is an upper limit to the complexity of the model that can be derived with any acceptable degree of uncertainty. Complexity arises as a function of the number of degrees of freedom expended (the number of predictors, including complex terms such as interactions and nonlinear terms) against the same data set during any stage of the data analysis. Theoretical and empirical evidence, with a special focus on the results of computer simulation studies, is presented to demonstrate the practical consequences of overfitting with respect to scientific inference. Three common practices (automated variable selection, pretesting of candidate predictors, and dichotomization of continuous variables) are shown to pose a considerable risk for spurious findings in models. The dilemma between overfitting and exploring candidate confounders is also discussed. Alternative means of guarding against overfitting are discussed, including variable aggregation and the fixing of coefficients a priori. Techniques that account for and correct for complexity, including shrinkage and penalization, are also introduced.
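
The core idea, that in-sample fit can be bought with degrees of freedom even when no real signal exists, is easy to see in a small simulation. The sketch below is illustrative only (the sample size, seed, and predictor counts are arbitrary choices, not values from the article): an outcome and 40 candidate predictors are all pure noise, yet in-sample R² rises steadily as more degrees of freedom are expended, while fit in a fresh "future sample" hovers at or below zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                # observations in the sample at hand
X_train = rng.normal(size=(n, 40))    # 40 candidate predictors, all pure noise
X_test = rng.normal(size=(n, 40))     # a fresh "future sample"
y_train = rng.normal(size=n)          # outcome unrelated to every predictor
y_test = rng.normal(size=n)

def r_squared(y, y_hat):
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

for p in (2, 10, 20, 40):
    # expend p degrees of freedom (plus an intercept) on the same data set
    Xt = np.column_stack([np.ones(n), X_train[:, :p]])
    beta, *_ = np.linalg.lstsq(Xt, y_train, rcond=None)
    Xs = np.column_stack([np.ones(n), X_test[:, :p]])
    print(f"p={p:2d}  in-sample R^2={r_squared(y_train, Xt @ beta):.2f}  "
          f"out-of-sample R^2={r_squared(y_test, Xs @ beta):.2f}")
```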
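
Pretesting and automated variable selection share the same failure mode: predictors are screened against the outcome, and only the survivors are reported, so chance associations masquerade as findings. A minimal sketch of the pretesting version, again with arbitrary sizes and seed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 100, 50
X = rng.normal(size=(n, p))   # candidate predictors: all pure noise
y = rng.normal(size=n)        # outcome: also pure noise

# Pretest: keep any predictor whose univariate p-value beats .05
survivors = []
for j in range(p):
    r, pval = stats.pearsonr(X[:, j], y)
    if pval < 0.05:
        survivors.append((j, r, pval))

print(f"{len(survivors)} of {p} noise predictors survive the pretest")
for j, r, pval in survivors:
    # each survivor looks like a real finding when reported in isolation
    print(f"  predictor {j}: r = {r:+.2f}, p = {pval:.3f}")
```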
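
Dichotomizing a continuous predictor (for example, a median split) discards information even when a genuine relationship exists, weakening real effects and, when the cutpoint is chosen from the data, inviting spurious ones. A small sketch of the information loss, with an arbitrary effect size and seed:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)             # a genuine continuous relationship

x_split = (x > np.median(x)).astype(float)   # median split of the predictor
print(f"corr(y, continuous x)   = {np.corrcoef(x, y)[0, 1]:.2f}")
print(f"corr(y, median-split x) = {np.corrcoef(x_split, y)[0, 1]:.2f}")
```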
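
Shrinkage and penalization counteract overfitting by pulling coefficient estimates toward zero rather than letting them chase sample idiosyncrasies. The ridge sketch below is one common instance of the idea (the penalty value, sizes, and seed are arbitrary; the article discusses these techniques only in general terms): with the penalty on, the spurious coefficients on noise predictors shrink sharply while the real effects remain clearly nonzero.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 60, 30
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = 1.0                    # 3 real effects, 27 noise predictors
y = X @ beta_true + rng.normal(size=n)

for lam in (0.0, 10.0):
    # ridge estimate: (X'X + lam*I)^-1 X'y; lam = 0 is ordinary least squares
    beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    print(f"lambda = {lam:4.1f}: "
          f"mean |coef|, noise predictors = {np.abs(beta_hat[3:]).mean():.3f}, "
          f"mean coef, real predictors = {beta_hat[:3].mean():.3f}")
```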