Bayesian vs. frequentist in cogsci 

Bayesian vs. frequentist - it's an old debate. The Bayesian approach views probabilities as degrees of belief in a proposition, while the frequentist approach holds that a probability refers to a set of events, i.e., that it is derived from observed or hypothetical frequency distributions. Rather than retread the well-worn ground of comparing these two approaches in pure statistics, I'll consider instead how the debate changes when applied to cognitive science.
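
To make the contrast concrete, here's a minimal sketch in Python. The coin-flip data are made up, and I'm using the standard Beta-Bernoulli model to stand in for the Bayesian side -- a choice of illustration on my part, nothing more:

```python
# Contrasting the two interpretations on the same (made-up) coin-flip data.
# Frequentist: the probability of heads is the long-run relative frequency.
# Bayesian: the probability of heads is a degree of belief, updated from a prior.

flips = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]          # 1 = heads, 0 = tails
heads = sum(flips)
tails = len(flips) - heads

# Frequentist point estimate: just the observed relative frequency.
freq_estimate = heads / len(flips)

# Bayesian estimate: start from a Beta(a, b) prior over the coin's bias.
# Beta is conjugate to coin flips, so the posterior is Beta(a + heads, b + tails).
a, b = 1, 1                                      # uniform prior: no opinion yet
posterior_mean = (a + heads) / (a + b + heads + tails)

print(f"frequentist estimate:    {freq_estimate:.3f}")    # 0.700
print(f"Bayesian posterior mean: {posterior_mean:.3f}")   # 0.667
```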


One of the main arguments made against using Bayesian probability in statistics is that it's ill-grounded and subjective.  If probability is just "degree of belief", then even the answer to a question like "what is the probability of heads?" can change depending on who is asking and what their prior beliefs about coins are.  Suddenly there is no "objective standard", and that's nerve-wracking.  For this reason, most statistical tests in most disciplines rely on frequentist notions like confidence intervals rather than Bayesian notions like the relative probability of two hypotheses.  However, there are drawbacks to doing this, even outside cognitive science.  To begin with, many things we want to express statistical knowledge about don't make sense in terms of reference sets: for example, the probability that it will rain tomorrow, since tomorrow only happens once.  For another, some argue that the seeming objectivity of the frequentist approach is illusory, since we can never be sure that our sampling process hasn't biased or distorted the data.  At least with a Bayesian approach, we can represent that possibility explicitly and try to correct for it.
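
Here's what that subjectivity worry looks like in code -- a toy sketch, again using the Beta-Bernoulli setup and invented numbers: two observers see exactly the same flips but walk away with different probabilities.

```python
# Two observers, same (made-up) data: 7 heads and 3 tails.
heads, tails = 7, 3

# Observer A: weak uniform prior, Beta(1, 1) -- no strong opinion about coins.
# Observer B: strong prior that coins are fair, Beta(50, 50).
observers = [("A, uniform prior", 1, 1), ("B, fair-coin prior", 50, 50)]

for name, a, b in observers:
    posterior_mean = (a + heads) / (a + b + heads + tails)
    print(f"observer {name}: P(heads) = {posterior_mean:.3f}")

# observer A, uniform prior:   P(heads) = 0.667
# observer B, fair-coin prior: P(heads) = 0.518
# Same data, different priors, different degrees of belief -- exactly the
# "no objective standard" worry.
```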

But it's in trying to model the mind that we can really see the power of Bayesian probability.  Unlike for other social scientists, this sort of subjectivity isn't a problem for us: we cognitive scientists are interested in degree of belief.  In a sense, we study subjectivity.  In making models of human reasoning, then, an approach that incorporates subjectivity is a benefit, not a problem.

Furthermore, unlike many statistical models, the brain generally doesn't just want to capture the statistical properties of the world correctly.  Its main goal is generalization -- prediction, not just estimation, in other words -- and one of the things people excel at is generalizing from very little data.  Incorporating the Bayesian notion of prior beliefs, which constrain generalization in ways that go beyond the data themselves, lets us study this formally in a way that frequentist ideas of probability simply don't support.
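
A toy illustration of that last point (my own example, using the same made-up Beta-Bernoulli setup as above): after a single flip that comes up heads, a maximum-likelihood estimate generalizes wildly, predicting heads with certainty, while a learner with a fair-coin prior barely moves from 0.5.

```python
# Generalizing from one data point: a single flip that came up heads.
heads, tails = 1, 0

# Frequentist MLE: predicts the next flip is heads with probability 1.0.
mle_prediction = heads / (heads + tails)

# Bayesian with a fair-coin prior, Beta(10, 10): the prior constrains
# the prediction, so one observation barely moves it from 0.5.
a, b = 10, 10
posterior_predictive = (a + heads) / (a + b + heads + tails)   # 11/21 ~ 0.524

print(f"MLE prediction of heads:       {mle_prediction:.3f}")        # 1.000
print(f"posterior predictive of heads: {posterior_predictive:.3f}")  # 0.524
```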