Risky Research and Bad Statistics 

A recent New York Times piece by Nicholas Wade makes the point that research is an extremely risky enterprise, with far more failures than successes.

"Nature yields her secrets with the greatest unwillingness, and in basic research most experiments contribute little to further progress, as judged by the rarity with which most scientific reports are cited by others."

In political science, it seems like most of this risk gets passed on directly to the researcher, with possibly detrimental effects on the way we do research. In theory, if a project is a dead end, we should probably just walk away. In practice, however, projects can become "too big to fail". My sense is that the need to squeeze something out of a research project leads to a lot of poor statistical practice -- specification searches for something that is "significant", overly optimistic claims about causal identification, and other shady dealings. On the other hand, attempts to mitigate this risk by avoiding the cost of large data collection projects typically mean that we keep running model after model on the same five datasets that everyone else is using.
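To see why specification searches are so corrosive, consider a quick back-of-the-envelope simulation. The sketch below (plain Python/NumPy; the parameters -- 100 observations, 10 candidate predictors, 2,000 simulated projects -- are illustrative choices, not anyone's actual study design) generates an outcome that is pure noise, then tries predictor after predictor until one comes out "significant":

```python
import numpy as np

rng = np.random.default_rng(42)

def one_specification_search(n_obs=100, n_specs=10):
    """One simulated project: the outcome is pure noise, but we try
    n_specs different (equally noise) predictors and stop at the first
    'significant' one -- a stylized specification search."""
    y = rng.standard_normal(n_obs)
    for _ in range(n_specs):
        x = rng.standard_normal(n_obs)
        r = np.corrcoef(x, y)[0, 1]
        # t-statistic for the bivariate OLS slope;
        # |t| > ~1.98 corresponds to p < .05 at df = 98
        t = r * np.sqrt((n_obs - 2) / (1 - r**2))
        if abs(t) > 1.98:
            return True
    return False

n_projects = 2000
false_positive_share = np.mean(
    [one_specification_search() for _ in range(n_projects)]
)
# With 10 independent tries at the 5% level, roughly
# 1 - 0.95**10, i.e. about 40% of pure-noise projects,
# still yield a publishable-looking "result"
print(round(false_positive_share, 2))
```

Even with nothing real in the data, about two projects in five come back "significant" -- which is exactly the temptation facing a researcher whose project has become too big to fail.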

 Is the inherent riskiness of research at the root of these problems?  How do you manage these risks?