Are people Bayesian? (and what does that mean?) Part II 

In my last post I talked about computational-level vs. algorithmic-level descriptions of human behavior, and I argued that most Bayesian models of reasoning are examples of the former -- and thus make no claims about whether, or to what extent, the brain physically implements them.

A common statement at this point is: "of course your models don't say anything about the brain -- they are so complicated, how could they?  Do people really do all that math?"  I share the intuition: the models do look complex, and I am certainly not aware of doing anything like them when I think.  But I don't think the possibility can be rejected out of hand.  In other words, while it's certainly possible that human brains do nothing like, say, MCMC [insert complicated computational technique here], it's not a priori obvious.  Why?


I have three reasons.  First, we really don't have any good conception of what the brain is capable of computationally: it has billions of neurons, each with thousands of connections, and (unlike modern computers) it is a massively parallel computing device.  State-of-the-art techniques like MCMC look complicated when written out as mathematical equations -- particularly to those who don't come from that background -- but that doesn't necessarily mean they are complicated for the brain.
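To make that point concrete, here is a minimal sketch of a Metropolis-Hastings sampler (one of the simplest MCMC algorithms) in Python.  The target distribution, proposal width, and sample count are arbitrary choices for illustration -- nothing here is a claim about what the brain actually computes.  The point is only that, procedurally, the technique reduces to a loop of "propose a nearby value, compare, accept or reject":

```python
import random, math

def metropolis_hastings(log_p, x0, n_samples, step=1.0):
    """Sample from an unnormalized distribution given by log_p via propose/accept."""
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + random.gauss(0, step)                    # propose a nearby value
        accept_prob = math.exp(min(0.0, log_p(proposal) - log_p(x)))
        if random.random() < accept_prob:                       # accept with that probability
            x = proposal
        samples.append(x)
    return samples

# Illustrative target: a standard normal, up to a constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=10000)
```

Written as equations, the acceptance rule involves ratios of probability densities; written as a procedure, it is a dozen lines of local, repetitive operations -- exactly the sort of thing a massively parallel device might plausibly do without any of it looking like "math" from the inside.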

Second, every model I've seen gets its results after running for at most a week, and usually for only a few minutes -- much less time than a human has to go about forming theories of the world.  If you are studying how long-term theories or models of the world form, it's not at all clear how to compare the time a computer takes to the time a human takes: not only are the time scales very different, so is the data they get (models generally get cleaner data, but far less of it), and so is the speed of processing (computers are arguably faster, but if a human can do in parallel what a computer does serially, that advantage might mean nothing).  The point is that comparing a computer after 5 minutes to a human over a lifetime might not be so silly after all.

Third, both the strength and the weakness of studying cognitive science is that we have clear intuitions about what cognition and thinking are.  It's a strength in that it helps us frame and judge hypotheses -- but it's a weakness in that it causes us to accept or reject ideas based on those intuitions when maybe we really shouldn't.  There's a big difference between conscious and unconscious reasoning, and most (if not all) of our intuitions are based on how we see ourselves reason consciously.  But just because we aren't aware of, say, doing Hebbian learning doesn't mean we aren't doing it.  It's striking to me that people who build Bayesian models of vision rarely have to deal with questions like "but people don't do that! it's so complicated!" -- in spite of the fact that it's the same brain.  I think this is probably because we have no conscious awareness of the process of vision, and so don't assume we know how it works.  But to the extent that higher cognition is unconscious, the same point applies.  It's just easy to forget.
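For comparison, here is a toy sketch of the Hebbian learning mentioned above -- an unconscious learning rule that is computationally trivial.  The inputs and learning rate are made up purely for illustration; the sketch just nudges a connection weight whenever pre- and post-synaptic activity coincide:

```python
# Toy Hebbian rule: "neurons that fire together wire together."
# Activities and learning rate below are made-up values for illustration.
def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.01):
    return weight + learning_rate * pre_activity * post_activity

w = 0.0
for pre, post in [(1.0, 1.0), (1.0, 0.0), (0.5, 1.0)]:
    w = hebbian_update(w, pre, post)
```

Nobody reports a subjective experience of multiplying activities and incrementing weights, and yet rules of roughly this character are entirely plausible descriptions of what neurons do.  The absence of conscious awareness tells us very little about what computation is taking place.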

Anyway, I'd be delighted to hear objections to any of these three reasons.  As I said in the last post, I'm still sorting out these issues for myself, so I'm not dogmatically arguing any of this.