Meta-analysis, Part II 

 Last time I wrote about the popularity of meta-analysis for synthesizing the results of multiple studies and cited education researcher Derek Briggs, who believes that the method is used too often and sometimes incorrectly. 


Recently, I informally re-examined the data from a published meta-analysis on reading instruction methods, running four different Bayesian models on the set of effect sizes reported in the paper. All of the hierarchical Bayesian models (which varied only in their priors and covariates) showed that the original meta-analysis, by assuming that each study's effect size was an estimate of a single overall true mean, had ignored a substantial amount of uncertainty. The preliminary results supported Briggs' position: none of the Bayesian analyses came close to showing the significant effect for the reading instruction method that the meta-analysis paper reported. I claim no reliable conclusion for my own analysis (I'm not even going to specify the original paper here), but re-examining the methods of meta-analyses seems worthwhile, if only for uncovering uncertainty, and perhaps for developing new techniques for synthesizing multiple studies.
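
To give a concrete sense of what I mean by a hierarchical (random-effects) model, here is a minimal sketch using PyMC. The effect sizes, standard errors, and priors below are invented for illustration; they are not the data or the exact models from my re-analysis or from the original paper.

```python
import numpy as np
import pymc as pm
import arviz as az

# Hypothetical effect sizes (standardized mean differences) and their
# standard errors from six studies -- NOT the data from the actual paper.
y = np.array([0.30, 0.12, 0.55, 0.02, 0.41, 0.18])
se = np.array([0.15, 0.20, 0.25, 0.10, 0.30, 0.18])

with pm.Model() as random_effects:
    mu = pm.Normal("mu", mu=0.0, sigma=1.0)       # overall mean effect
    tau = pm.HalfNormal("tau", sigma=0.5)         # between-study heterogeneity
    # Each study has its own true effect, drawn from the population of effects
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=len(y))
    # Each observed effect size is a noisy measurement of its study's true effect
    pm.Normal("obs", mu=theta, sigma=se, observed=y)
    idata = pm.sample(2000, tune=2000, target_accept=0.95, random_seed=1)

# Posterior summaries for the overall effect and the heterogeneity parameter
print(az.summary(idata, var_names=["mu", "tau"]))
```

The point of the extra level is that `tau` lets the true study-level effects vary, so the posterior for the overall mean `mu` reflects between-study heterogeneity rather than treating every study as a noisy measurement of the same single quantity, which is what a fixed-effect synthesis assumes.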

The implications are nontrivial: the evidence supporting the teaching methods required by the billion-dollar Reading First initiative, part of the Department of Education's No Child Left Behind Act, is a long collection of meta-analyses performed by the National Reading Panel.