Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update

Gitta H. Lubke, Ian Campbell, Dan McArtor, Patrick Miller, Justin Luningham, Stéphanie Martine van den Berg

Research output: Contribution to journal › Article › Academic › peer-review

28 Citations (Scopus)
200 Downloads (Pure)


Model comparisons in the behavioral sciences often aim to select the model that best describes the structure in the population. Model selection is usually based on fit indices such as Akaike's information criterion (AIC) or the Bayesian information criterion (BIC), and inference is then carried out under the selected best-fitting model. This practice does not account for the possibility that, due to sampling variability, a different model might be selected as the preferred model in a new sample from the same population. A previous study illustrated a bootstrap approach to gauge this model selection uncertainty using two empirical examples. The present study consists of a series of simulations assessing the utility of the proposed bootstrap approach in multigroup and mixture model comparisons. These simulations show that bootstrap selection rates can provide information over and above simply relying on the size of AIC and BIC differences in a given sample.
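The core idea of the abstract's bootstrap approach can be sketched in a few lines: resample the data with replacement, refit each candidate model on every resample, and record how often each model is preferred by the information criterion. The sketch below is a minimal illustration under assumed choices (a Gaussian linear regression with two nested candidate models and AIC as the criterion); it is not the paper's simulation design, which concerns multigroup and mixture models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulated data (an assumption): y depends on x1 only; x2 is noise.
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.5 * x1 + rng.normal(scale=1.0, size=n)

def aic(X, y):
    """AIC for a Gaussian linear model fit by least squares."""
    n_obs = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n_obs        # ML estimate of the error variance
    k = X.shape[1] + 1                    # regression coefficients + error variance
    loglik = -0.5 * n_obs * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * loglik

# Two candidate models: intercept + x1 versus intercept + x1 + x2.
X_small = np.column_stack([np.ones(n), x1])
X_large = np.column_stack([np.ones(n), x1, x2])

# Bootstrap selection rates: refit both models on each resample
# and record which one AIC prefers.
B = 500
wins_small = 0
for _ in range(B):
    idx = rng.integers(0, n, size=n)      # resample rows with replacement
    if aic(X_small[idx], y[idx]) < aic(X_large[idx], y[idx]):
        wins_small += 1

print(f"Smaller model selected in {wins_small / B:.0%} of {B} bootstrap samples")
```

A selection rate close to 100% indicates a stable preference, while a rate near 50% signals that a new sample from the same population could easily favor the other model, which is exactly the uncertainty the abstract argues AIC/BIC differences alone do not convey.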
Original language: English
Pages (from-to): 230-245
Journal: Structural Equation Modeling
Issue number: 2
Publication status: Published - 5 Dec 2017


  • METIS-319154
  • IR-102349


