30 August 2010

Rigor and modeling in economics

In a postscript, Andrew Gelman laments a general trend he notices in economics:

My only real problem with it is that when discussing data analysis, [the authors] pretty much ignore the statistical literature and just look at econometrics. In the long run, that's fine--any relevant developments in statistics should eventually make their way over to the econometrics literature. But for now I think it's a drawback in that it encourages a focus on theory and testing rather than modeling and scientific understanding.

Gelman has an idea about why this might be the case:

The problem, I think, is that they (like many economists) think of statistical methods not as a tool for learning but as a tool for rigor. So they gravitate toward math-heavy methods based on testing, asymptotics, and abstract theories, rather than toward complex modeling. The result is a disconnect between statistical methods and applied goals.

There is likely a balance here between theoretical modeling and statistical modeling that Gelman misses. Economists are in the business of testing complex theoretical models, and a complex statistical model may draw attention away from that narrow goal.

Not that I necessarily endorse that viewpoint. It simply feels slightly unfair to economists to say that their spartan statistical modeling is a product of their obsession with technical rigor.
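
To make the testing-versus-modeling contrast concrete, here is a minimal Python sketch of my own (a toy example with simulated data, not anything from Gelman or the book he discusses): the same two-group comparison summarized first as a significance test and then, under a rough flat-prior normal approximation, as a distribution of plausible effect sizes.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated outcomes for a control and a treated group (toy numbers).
control = rng.normal(loc=0.0, scale=1.0, size=50)
treated = rng.normal(loc=0.4, scale=1.0, size=50)

# "Testing" mode: a single significance test that answers only
# "can we reject a zero effect?"
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# "Modeling" mode, sketched with a flat-prior normal approximation:
# summarize the whole range of plausible effect sizes, not a yes/no verdict.
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control))
posterior = stats.norm(loc=diff, scale=se)
low, high = posterior.ppf([0.025, 0.975])
print(f"approximate posterior mean {diff:.2f}, 95% interval [{low:.2f}, {high:.2f}]")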

Posted by Matt Blackwell at 9:43 AM

27 August 2010

Jackman on the Australian Elections

If you enjoy Australian politics, betting markets, and sharp statistical analysis, take a look at Simon Jackman's blog. He has been killing it lately.

Posted by Matt Blackwell at 10:39 AM

The Seven Deadly Sins of Contemporary Quantitative Analysis

You may think you have good reasons not to stop what you are doing and read Phil Schrodt's essay on the "Seven Deadly Sins of Contemporary Quantitative Political Analysis". But you do not. Not only does the piece make several astute points about the current practice of quantitative social science (in a highly enjoyable way, I might add), but it also reviews the developments in the philosophy of science that have led us here. The entirety is excellent, so picking out an excerpt is difficult, but here is his summary of our current philosophical messiness:

I will start by stepping back and taking a [decidedly] bird's eye (Thor's eye?) view of where we are in terms of the philosophy of science that lies beneath the quantitative analysis agenda, in the hope that knowing how we got here will help to point the way forward. In a nutshell, I think we are currently stuck with an incomplete philosophical framework inherited (along with a lot of useful ideas) from the logical positivists, combined with a philosophically incoherent approach adopted from frequentism. The way out is a combination of renewing interest in the logical positivist agenda, with suitable updating for 21st century understandings of stochastic approaches, and with a focus on the social sciences more generally. Much of this work has been done in the last decade or so in the qualitative and multi-methods community but not, curiously, in the quantitative community. The quantitative community does, however, provide quite unambiguously the Bayesian alternative to frequentism, which in turn solves most of the current contradictions in frequentism which we somehow--believing six impossible things before breakfast--persuade our students are not contradictions. But we need to systematically incorporate the Bayesian approach into our pedagogy. In short, we may be in a swamp at the moment, but the way out is relatively clear.

His section on "prediction versus explanation" is also quite insightful and deserves more attention. The upshot:

...the point is that distinguishing scientific explanation from mythical (or other non-scientific, such as Freudian) explanation is one of the central themes for the logical positivists. In the absence of prediction, it cannot be done.

Warning: if you truly love significance tests, you might feel a little heartbroken when reading this essay.

Posted by Matt Blackwell at 9:34 AM