Who makes a good peer reviewer? 

One of the interesting things about accruing experience in a field is that you find yourself called upon to be a peer reviewer more and more often (as I'm discovering). But because I've never been an editor, I've often wondered what this process looks like from the editor's perspective: how do you pick reviewers? And what kind of people tend to be the best reviewers?

A recent article in the (open-access) journal PLoS Medicine speaks to these questions. Even though the study is from medicine, I found the results interesting for what they might imply about other fields as well.


In a nutshell, the study looked at 306 reviewers from the journal Annals of Emergency Medicine. Each of their 2,856 reviews (of 1,484 separate manuscripts) had been rated by the journal's editors on a five-point scale (1 = worst, 5 = best). The study then tried to identify which characteristics of the reviewers predicted the quality of their reviews. The basic finding?

 
Multivariable analysis revealed that most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training). Being on an editorial board and doing formal grant (study section) review were each predictors for only one of our two comparisons. However, the predictive power of all variables was weak.
 
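For the statistically curious, here's a rough sketch of what this kind of multivariable analysis looks like in practice. The data and column names below are invented for illustration -- the study's actual models, variables, and dataset are in the PLoS Medicine article -- but the idea is the same: regress review quality on several reviewer characteristics at once, so that each coefficient reflects one predictor's effect while holding the others constant.

```python
# Hypothetical sketch of a multivariable analysis of review quality.
# All data and column names here are invented for illustration; they are
# not from the actual study.
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: one row per review, with the editors' 1-5 quality rating and
# a few reviewer characteristics like those the study examined.
reviews = pd.DataFrame({
    "quality":             [4, 2, 5, 3, 4, 2, 3, 5, 1, 4],
    "years_post_training": [3, 18, 6, 12, 8, 5, 30, 14, 25, 7],
    "university_hospital": [1, 0, 1, 1, 0, 0, 1, 1, 0, 1],
    "stats_training":      [1, 1, 0, 1, 0, 0, 1, 1, 0, 0],
})

# Regress the quality rating on all predictors simultaneously, so each
# coefficient estimates a predictor's effect with the others held
# constant -- the essence of "multivariable" analysis.
model = smf.ols(
    "quality ~ years_post_training + university_hospital + stats_training",
    data=reviews,
).fit()
print(model.summary())
```

In this kind of output, a small coefficient with a wide confidence interval on, say, stats_training would correspond to the paper's finding that formal training in statistics fails to predict review quality.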

The details of the study are helpful for interpreting these results. When I first read that younger was better, I wondered to what extent this might simply be because younger people have more time. After looking at the details, I think this interpretation, while possible, is doubtful: the youngest cohort was defined as those who had less than ten years of experience after finishing training, not those who were largely still in grad school. I'd guess that most of them were on the tenure track, or at least still at the beginning of their careers. That's when it's probably most important to do many, many things and be extremely busy, so I doubt those people have more time. Arguably, they might just be more motivated to do well precisely because they are still young and trying to make a name for themselves -- though I don't know how big a factor that would be, given the anonymity of the process: the only people you're impressing with a good review are the journal's editors.

All in all, I'm not actually that surprised that "goodness of review" isn't correlated with things such as academic rank, training in statistics, or being the principal investigator of a grant: not that those things don't matter, but my guess would be that nearly everyone who's a potential reviewer (for what is, I gather, a fairly prestigious journal) has sufficient intelligence and training to do a good review. If that's the case, then the best predictors of reviewing quality would come down to more ineffable traits like general conscientiousness and motivation to do a good review... This interpretation, if true, implies that a good way to generate better reviews is not just to choose big names, but to make sure reviewers are motivated to put time and effort into their reviews. Unfortunately, given that peer review is largely uncredited and gloryless, it's difficult to see how best to motivate them.

What do you all think about the idea of making these sorts of ratings public? If people could put them on their CVs, I bet there would suddenly be a lot more interest in writing good reviews... at least among people for whom the CV still mattered.