
29 October 2010

Can matching solve endogeneity?

I get asked this question from time to time, but when it came up multiple times on Friday, I guessed that something had gone down.

What went down was Chris Blattman offering a rant (his description, not mine) about the "cardinal sin of matching" -- the belief that matching can single-handedly solve endogeneity problems. Most of the questions I got went something like "Chris says matching can't help with endogeneity. You say it can. What gives?"

First, let me say that I agree with most of Chris' rant and I think that his blog post should be required reading for anyone using matching right now. There are too many people out there who think that matching is a magical method that fixes endogeneity automatically. It's not, and reading Chris' discussion should be the first step in a 12-step process for those of us who have drunk too deeply of the matching Kool-Aid.

Now for the statistics:

Basically, matching can solve your endogeneity/selection/confounding problem if you can measure the variables that influence treatment assignment. That is a big "if" and measurement is the key here. Matching is generally a pretty smart way to condition on observables, but it doesn't buy you anything if you believe that there are unobserved variables that systematically influence treatment assignment. Thus, if you think your regression is biased because of unobservables, then matching by itself won't help you. What you really need to do is go out and measure the unobserved confounders and condition on them.
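To make this concrete, here is a minimal simulation sketch (mine, not from anyone's paper; the variable names, coefficients, and the crude quantile stratification are all illustrative stand-ins for real matching) showing both halves of the claim: conditioning on an observed confounder removes its bias, while an unmeasured confounder biases the naive and the matched estimate alike.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50_000

    x = rng.normal(size=n)  # observed confounder
    u = rng.normal(size=n)  # unobserved confounder
    # treatment assignment depends on both x and u
    d = (x + u + rng.normal(size=n) > 0).astype(int)
    y = 2.0 * d + x + u + rng.normal(size=n)  # true effect of d is 2.0

    # naive difference in means: biased by both x and u
    naive = y[d == 1].mean() - y[d == 0].mean()

    # crude "matching" on x alone via stratification into 20 quantile bins:
    # this removes the bias from x, but the bias from u survives
    edges = np.quantile(x, np.linspace(0, 1, 21)[1:-1])
    strata = np.digitize(x, edges)
    effects, weights = [], []
    for s in np.unique(strata):
        t = y[(strata == s) & (d == 1)]
        c = y[(strata == s) & (d == 0)]
        if len(t) and len(c):
            effects.append(t.mean() - c.mean())
            weights.append(len(t) + len(c))
    matched_on_x = np.average(effects, weights=weights)

    print(f"true: 2.00, naive: {naive:.2f}, matched on x only: {matched_on_x:.2f}")
    # matched_on_x improves on naive but stays above 2.0 because u is never
    # conditioned on; measuring u and stratifying on it too would recover
    # the truth

If you had measured u, adding it to the stratification would bring the estimate back to the truth; no amount of cleverness in the matching itself substitutes for that measurement.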

In the end, I think that people who like matching methods (and other conditioning methods) tend to believe that most confounders can be measured (perhaps with a lot of hard work) and that there aren't a lot of lurking unobservables. In contrast, people I talk to who are skeptical of matching almost always argue that there will always be problematic unobservables lurking no matter how hard you try to measure them. In general, these types of people prefer instrumental variables approaches (and tend to be economists rather than statisticians, interestingly enough).

Fair enough -- there may be lurking unobservables. Frankly, there's no way to get empirical traction on how many lurking unobservables are out there (definitionally), so I think it comes down to subjective beliefs about the nature of the world. But what always gets me is that the same people who tell me that lurking unobservables are everywhere tend to be fairly comfortable making the types of exclusion restrictions that make IV approaches work. The crazy thing is that, just like matching, IV rests on assumptions about unobservable causal pathways. The claim that an instrumental variable is valid is the claim that there are no unobserved (or observed) variables linking the instrument to the outcome except through the path of the instrumented variable. So it always puzzles me that the same people who think that lurking unobservables are everywhere in matching somehow think that all these lurking unobservables go away as soon as you call something an instrument and try to defend it as exogenous.
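A companion sketch (again mine, with hypothetical numbers) makes the symmetry visible: the ratio (Wald) estimator recovers the truth when the exclusion restriction holds, and a modest direct path from instrument to outcome -- exactly the kind of lurking unobservable pathway at issue -- quietly breaks it.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50_000

    z = rng.normal(size=n)  # candidate instrument
    u = rng.normal(size=n)  # unobserved confounder
    d = 0.8 * z + u + rng.normal(size=n)  # treatment, confounded by u

    for gamma in (0.0, 0.3):  # gamma is a direct z -> y path
        y = 2.0 * d + u + gamma * z + rng.normal(size=n)  # true effect is 2.0
        # with a single instrument, 2SLS reduces to the ratio (Wald) estimator
        iv = np.cov(z, y)[0, 1] / np.cov(z, d)[0, 1]
        print(f"direct z->y effect {gamma}: IV estimate = {iv:.2f}")
    # gamma = 0.0 gives roughly 2.0; gamma = 0.3 gives roughly 2.4, and
    # nothing in the data announces that the exclusion restriction failed

Nothing in the observed data distinguishes the two scenarios; the exclusion restriction, like ignorability, is defended by argument rather than tested.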

I'm pretty skeptical of most observational IV approaches -- unless you flipped the coins yourself or you can really tell me a plausible story about how nature flipped coins, I probably won't believe your instrument. So why am I falling into the reverse trap: believing that unobservables are more likely to undermine IV than conditioning approaches? Maybe I'm just wrong here and I need to become an even more extreme skeptic of most empirical research than I already am. But my sense is that the conditions for an IV to hold are more knife-edge than the ignorability assumptions. Perhaps that's wishful thinking.

But wishful thinking aside, matching can help solve endogeneity problems if you can measure the variables that influence selection (and if there happens to be sufficient overlap, yadda, yadda). All those people out there who make blanket statements like "matching can't solve endogeneity" are either assuming that there are always lurking confounders or they are just plain wrong.

Posted by Richard Nielsen at 10:30 AM

24 October 2010

Stories and statistics

Lately I've been thinking a lot (and writing a little) about ways to combine the qualitative and quantitative empirical traditions in political science, so I was quite interested to read a new post on the philosophy blog at the New York Times written by mathematician John Allen Paulos. He contrasts the logic of storytelling with the logic of statistics to draw out some interesting implications for how each mode of understanding colors the ways we think about the world.

In a sentence that could have come out of a "scope and methods" text, Paulos identifies the fundamental difference between literary and statistical traditions: "The focus of stories is on individual people rather than averages, on motives rather than movements, on point of view rather than the view from nowhere, context rather than raw data." I think this is an accurate description of how two empirical cultures in social science have developed, but I disagree that this divide is inherent.


This may be unorthodox, but I don't see statistics as inherently "quantitative" or focused on the "general" rather than the "particular". I see statistics as a relatively young field attempting to develop answers to the question "how should I go about formulating my beliefs about the world now that I've observed some part of it?" Eventually, statistics will need to offer advice on how to update our picture of the world after observing any type of information -- not just information that comes from randomized experiments, fits neatly in rectangular matrices, or involves enough "N" for some central limit theorem to hold.

Narrative research seems ideally suited to work with the types of information that traditional statistics has largely ignored. Why, then, should statistics take up the task? Narratives are rich with data, but researchers using narrative methods have little guidance on how to make inferences from these data. In the richest literary narratives this ambiguity enhances the text, allowing the reader to reach many conclusions about the meaning and implications of a work. In empirical social science, this ambiguity can become a liability. If statisticians spent more time developing ways of making appropriate inferences in these settings -- frankly, the most common settings that we face -- it might lessen this ambiguity by offering a clear set of rules for mapping complex narrative data to inferences.

My hunch is that the people who work with data that lend themselves to narrative research already have ideas about the best practices for making valid inferences from these data. Perhaps we should be more interested in learning to speak statisticians' language so that we can suggest these insights to them and they in turn can suggest refinements for us. This exchange would help statisticians develop a science of inference and help us develop knowledge of social phenomena.


Posted by Richard Nielsen at 8:10 PM

22 October 2010

Workflow Agonistes

The Setup is a site dedicated to interviewing nerdy folk about the software and hardware they use to do their jobs. The interviewees have mostly been web designers and software developers, which is interesting, yet removed from academia. Thus, I was glad to see them interview Kieran Healy, a sociologist at Duke. The whole thing is worth a read if you are interested (like me) in these sorts of things, but here is a bit of his advice:

Workflow Agonistes: I've written about this elsewhere, at greater length. Doing good social-scientific research involves bringing together a variety of different skills. There's a lot of writing and rewriting, with all that goes along with that. There is data to manage, clean, and analyze. There's code to be written and maintained. You're learning from and contributing to some field, so there's a whole apparatus of citation and referencing for that. And, ideally, what you're doing should be clear and reproducible both for your own sake, when you come back to it later, and the sake of collaborators, reviewers, and colleagues. How do you do all of that well? Available models prioritize different things. Many useful tricks and tools aren't taught formally at all. For me, the core tension is this. On the one hand, there are strong payoffs to having things organized simply, reliably, and effectively. Good software can help tremendously with this. On the other hand, though, it's obvious that there isn't just one best way (or one platform, toolchain, or whatever) to do it. Moreover, the people who do great work are often the ones who just shut up and play their guitar, so to speak. So it can be tricky to figure out when stopping to think about "the setup" is helpful, and when it's just an invitation to waste your increasingly precious time installing something that's likely to break something else in an effort to distract yourself. In practice I am only weakly able to manage this problem.

Also good advice:

I try to keep as much as possible in plain text.

On his site, Kieran has more guidance on choosing workflows for social science research. Side note: he has one of the best-looking academic websites I have seen.

Posted by Matt Blackwell at 9:53 PM