On communication 

 Jim's entry about the use of the word "parameter" got me thinking about a related issue I wrestle with all the time: communicating the importance and value of computational models in psychology to traditional psychologists. 


 There is a certain subset of the cognitive science community that is interested in computational/statistical models of human reasoning, a tradition stemming from the 70s and 80s, first with Strong AI and then with the rise of connectionism.  Nowadays, I think more people are becoming interested in Bayesian models, though admittedly it's hard to tell how widespread this interest is because of sampling bias: since it's what my lab does, I don't have a clear sense of how many people don't know or care about this approach, since they are the very people I'm least apt to converse with. 

 Nevertheless, I think I can say with some confidence that a not inconsequential number of psychologists just don't see the value of computational models.  Though I think some of that is for good reasons (some of which I share), I'm ever more convinced that a lot of it is because we, the computational and quantitative people, do such a lousy job of explaining why these models are important, in terms that a non-computationally trained person can understand.  

 Part of it is word choice: as Jim says, we have absorbed jargon to the point that it is second-nature to us, and we don't even realize how jargony it might be ("parameters", "model", "Bayesian", "process", "generative", "frequentist", "likelihood" - and I've deliberately tried to put on this list some of the least-jargony terms we habitually use).  But I think it also relates to deeper levels of conceptualization -- we have trained ourselves to the point that when something is described mathematically, we can access the intuition fairly easily, and thus forget that the mathematical description doesn't have the same effect for other people.  I was recently at a talk geared toward traditional psychologists in which the speaker described what a model was doing in terms of coin flipping and mutation processes.  It was perfectly accurate and certainly less vague than the corresponding intuition, but I think he lost a few people right there: since they couldn't grasp the intuition rapidly enough, the model felt both arbitrary and too complicated to them.  I don't think it's a coincidence that arbitrariness and "too much" complexity are two of the most common criticisms leveled at computational modelers by non-modelers. 

 The point?  Though we shouldn't sacrifice accuracy in order to make vague, handwavy statements, it's key to accompany accurate statistical descriptions with the corresponding intuitions that they capture.  It's a skill that takes practice to develop (learning it is one of the reasons I blog, in fact), and it requires being constantly aware of which bits of specialized knowledge your listener might not share.  But it's absolutely vital if we want quantitative approaches to be taken seriously by more non-quantitative folks.