October 2009

Authors' Committee

Chair:

Matt Blackwell (Gov)

Members:

Martin Andersen (HealthPol)
Kevin Bartz (Stats)
Deirdre Bloome (Social Policy)
John Graves (HealthPol)
Rich Nielsen (Gov)
Maya Sen (Gov)
Gary King (Gov)

Weekly Research Workshop Sponsors

Alberto Abadie, Lee Fleming, Adam Glynn, Guido Imbens, Gary King, Arthur Spirling, Jamie Robins, Don Rubin, Chris Winship

Blogroll

SMR Blog
Brad DeLong
Cognitive Daily
Complexity & Social Networks
Developing Intelligence
EconLog
The Education Wonks
Empirical Legal Studies
Free Exchange
Freakonomics
Health Care Economist
Junk Charts
Language Log
Law & Econ Prof Blog
Machine Learning (Theory)
Marginal Revolution
Mixing Memory
Mystery Pollster
New Economist
Political Arithmetik
Political Science Methods
Pure Pedantry
Science & Law Blog
Simon Jackman
Social Science++
Statistical modeling, causal inference, and social science


30 October 2009

Happy Halloween

It made my day when this showed up in my inbox this morning. I'm glad to see someone knows what to do if/when the zombie outbreak occurs.

Posted by Richard Nielsen at 10:05 AM

29 October 2009

Matching Markets

Rich's post on instruments the other day reminded me of a conversation I have been having with a faculty member, although the connection may not be clear at first.

The setup is that there are many markets in which buyers and sellers are distinct types of actors. The market for spouses, for example, has until recently been such a market (although I make no claim as to which side of the market is buying and which is selling). This kind of market, in the form of college admissions, was analyzed by Gale and Shapley in a famous 1962 paper in which they proved that a stable solution to this type of matching problem always exists.
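For intuition, here is a minimal sketch of the deferred-acceptance procedure Gale and Shapley proposed, written in Python; the names and preference lists are invented purely for illustration.

```python
# A minimal sketch of Gale-Shapley deferred acceptance (one-to-one version).
# Applicant and program names and preferences are invented for illustration.

def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Return a stable matching as a dict {receiver: proposer}.

    proposer_prefs: {proposer: [receivers in order of preference]}
    receiver_prefs: {receiver: [proposers in order of preference]}
    """
    # Lower index = more preferred; receivers use this to compare offers.
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)                   # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}  # next receiver to propose to
    match = {}                                    # receiver -> proposer (tentative)

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p                  # receiver accepts a first offer
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])         # receiver trades up; old proposer freed
            match[r] = p
        else:
            free.append(p)                # offer rejected; propose again later
    return match

# Toy example: two sides of a market with opposed preferences.
students = {"ann": ["x", "y"], "bob": ["x", "y"]}
programs = {"x": ["bob", "ann"], "y": ["ann", "bob"]}
print(deferred_acceptance(students, programs))    # {'x': 'bob', 'y': 'ann'}
```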


Another example, which motivated my interest in the question, is the market for medical residents (see here). Shortly before graduation, medical students apply for positions with residency programs across the country by submitting rank-order lists to a central clearinghouse; residency programs enter into a similar process of ranking medical students. The clearinghouse then produces an assignment that is optimal in an economic sense.

Unfortunately, this setup does not permit the applied researcher (or poor grad student) much traction for identifying the effect of being assigned to a particular residency program.

One solution comes from work by Morten Sorensen on matching in venture capital. His idea is to model the decision process leading to investments by venture capitalists in early-stage companies and, at the same time, to model his outcome of interest (whether the company goes public), allowing for correlation between the error terms of the attractiveness/matching model and the outcome equation. Sorensen makes the point that this method uses the characteristics of other investors and investments in the market as instruments in order to address the fact that "better" investors may invest in "better" companies.
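To see why those correlated error terms matter, here is a toy simulation (mine, not Sorensen's model): an unobserved quality term drives both whether a company is funded and whether it later succeeds, so the naive funded-versus-unfunded comparison is badly biased.

```python
# Toy illustration (not Sorensen's model): an unobserved quality term u drives
# both whether a company receives venture funding and whether it later "succeeds",
# so naively comparing funded vs. unfunded companies overstates the funding effect.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.normal(size=n)                     # unobserved company quality
funded = (u + rng.normal(size=n)) > 0      # "better" companies more likely funded
true_effect = 0.1
success = true_effect * funded + 0.5 * u + rng.normal(size=n)

naive = success[funded].mean() - success[~funded].mean()
print(f"true effect: {true_effect:.2f}, naive difference in means: {naive:.2f}")
# The naive difference is several times larger than 0.1 because funded firms had higher u.
```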

While this method is attractive in principle, it is computationally difficult and does not convince everyone. The faculty member I was talking to agreed that the approach is appealing, but felt the results would be more credible with an instrumental variable that affects the probability of being assigned to a particular program. He also pointed out the value of these structural models: they provide estimates that may be valid over a broader range of values and can be used for policy experiments that a model identified by instrumental variables cannot support.

Posted by Martin Andersen at 4:50 PM

28 October 2009

Physics of politics

A physicist recently emailed me asking if I could help him access election data; he sent me one of his papers, which (to my astonishment) began "Most of the empirical electoral studies conducted by physicists . . .", followed by a string of citations. I had no idea physicists were studying elections! I suppose I should have known; from what my biologist friend tells me, physicists have been colonizing his field the way economists have done to much of social science. So I guess politics was next.

Reading a few articles in the "physics of politics" as a political scientist, one has the sense of observing an alternate universe. For example: a paper on the effect of election results on party membership in Germany that has no references to work outside of physics; features many exotic (to me at least) terms like Wegscheider potentials, the Sznajd model, and the Kronecker symbol; and takes a time-series approach to causation that I suspect would be unacceptable to most reviewers in political science and economics these days.

In general, it's clear that physicists doing work on political phenomena (or "sociophysics" more generally) are primarily interested in exploring the individual-level social interactions that might underpin the macro-order we observe in, e.g., regularities in turnout or vote share distributions. As such, political institutions (which are the major preoccupation of political scientists) necessarily disappear from the model and are typically not even mentioned, even when they would seem to be of first-order importance in explaining a particular phenomenon. (Another example of the alternate universe: a paper that argues that party vote shares in Indonesia follow a power law, but which does not describe or mention the electoral system.) These omissions seem foolish on first reading, but it's clear that they reflect a different choice of explanatory variable: physicists seek their explanations in micro-interactions, and we seek them primarily in political institutions. It's probably both of course, but models can only be so complex.

Despite my overall sense of disorientation in reading these papers, there were also somewhat surprising moments of familiarity. Physics heavily influenced economics in an earlier period of colonization, and much of what we read in economics and political science descended from those models. In reading these newer physics papers, there is therefore a sense of distant kinship, the knowledge of a common ancestor several generations back.

I wonder about the scope for collaboration between physicists and social scientists. Based on my admittedly very cursory reading of one area in which physicists have ventured, it's hard to know whether the potential gains from trade are sufficient to overcome the apparent difference in goals. For all I know there already is a lot of productive collaboration going on -- if you know of something interesting, share it in the comments!

Posted by Andy Eggers at 6:58 AM

26 October 2009

Tchetgen on "Doubly robust estimation in a semi-parametric odds ratio model"

This Wednesday, October 28th, the Applied Statistics workshop will welcome Eric Tchetgen Tchetgen, Assistant Professor of Epidemiology at Harvard School of Public Health, presenting his work titled "Doubly robust estimation in a semi-parametric odds ratio model." Eric has provided the following abstract for the paper:

We consider the doubly robust estimation of the parameters in a semi-parametric conditional odds ratio model characterizing the effect of an exposure in the presence of many confounders. We develop estimators that are consistent and asymptotically normal in a union model where either a prospective baseline density function or a retrospective baseline density function is correctly specified, but not necessarily both. The case of a binary outcome is of particular interest: there our approach yields a doubly robust locally efficient estimator in a semi-parametric logistic regression model. For general types of outcomes, we provide a strategy to obtain doubly robust estimators that are nearly locally efficient. We illustrate the method in a simulation study and an application in statistical genetics. Finally, we briefly discuss extensions of the proposed method to the semi-parametric estimation of a parameter indexing an interaction between two exposures on the logistic scale, as well as extensions to the setting of a time-varying exposure in the presence of time-varying confounding.
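Eric's paper works in a semi-parametric odds ratio model, but the flavor of double robustness is easiest to see in the more familiar augmented inverse-probability-weighted (AIPW) estimator of an average treatment effect. The sketch below is that textbook estimator on simulated data, not Eric's method; the logistic and linear working models and the variable names are my own assumptions.

```python
# A sketch of double robustness in a more familiar setting than the talk's
# odds-ratio model: the augmented IPW (AIPW) estimator of an average treatment
# effect is consistent if either the outcome model or the propensity model is
# correctly specified. Data and working models here are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_ate(X, treat, y):
    """Doubly robust ATE estimate with logistic propensity and linear outcome models."""
    ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    mu1 = LinearRegression().fit(X[treat == 1], y[treat == 1]).predict(X)
    mu0 = LinearRegression().fit(X[treat == 0], y[treat == 0]).predict(X)
    aug1 = mu1 + treat * (y - mu1) / ps
    aug0 = mu0 + (1 - treat) * (y - mu0) / (1 - ps)
    return np.mean(aug1 - aug0)

# Simulated check: the true effect is 2.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 2))
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = 2 * treat + X @ np.array([1.0, -1.0]) + rng.normal(size=5000)
print(aipw_ate(X, treat, y))   # roughly 2
```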

The Applied Statistics workshop meets each Wednesday in room K-354, CGIS-Knafel (1737 Cambridge St). We start at 12 noon with a light lunch, with presentations beginning around 12:15 and we usually wrap up around 1:30 pm. We hope you can make it.

Posted by Matt Blackwell at 11:10 AM

23 October 2009

Sources of Randomness

During a recent conversation with some colleagues regarding data sources, an interesting point was made that left me pondering. One member of our group stated that he would not trust a particular source of data to provide useful estimates of population means, but he would trust it to estimate regression coefficients. This puzzled me, because a regression coefficient is a (perhaps slightly fancy) version of a mean. Why, then, would a data source that cannot be trusted for a simple average be useful for a coefficient?

I think the answer lies in the assumed source of randomness. When we make inferences from our sample data to a wider universe of cases, there are two sources of randomness involved: probabilities introduced through the sampling design and probabilities introduced through an assumed stochastic model underlying our observed data. In the first case, we are interested in the existing finite population and our outcome of interest Y is regarded as fixed; randomness is introduced through the sample inclusion probabilities. In the second case, we are interested in a broader "superpopulation" that we posit is generated through some random process, and thus our outcome Y is regarded as a random variable. In much of social science, researchers are interested in this second source of randomness: hypotheses center on parameters associated with the probability distribution of Y, such as regression coefficients.

Identifying the sources of randomness underlying our data is important because they have implications for our analysis. Särndal, Swensson, and Wretman show that the variance of a parameter from an ordinary regression model estimated using sample data can be decomposed into two elements, one based on the sampling design and one based on the model. In the case of a census, the extra variance introduced by the design is zero, and thus the total variance of the estimated parameter is the variance of the "BLUE" estimator. Otherwise, accounting for the sampling design in the analysis should improve inference.
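As a toy illustration of the two sources of randomness (with made-up numbers, not from Särndal et al.), compare the variability of a sample mean when one finite population is held fixed with the variability of the population mean itself across draws from the assumed model:

```python
# Small simulation (numbers invented) of the two sources of randomness:
# randomness from the sampling design, holding one finite population fixed,
# versus randomness from the model assumed to generate the population itself.
import numpy as np

rng = np.random.default_rng(2)
N, n, reps = 1_000, 100, 5_000

# One realized finite population, generated by a simple model.
population = rng.normal(loc=5.0, scale=2.0, size=N)

# Design-based view: the population (and its mean) is fixed; only the sample varies.
design_means = np.array([rng.choice(population, size=n, replace=False).mean()
                         for _ in range(reps)])

# Model-based view: the finite population itself is a draw from a superpopulation.
model_means = np.array([rng.normal(5.0, 2.0, size=N).mean() for _ in range(reps)])

print("SD of sample mean around the fixed population mean:", design_means.std())
print("SD of the population mean around the model mean:   ", model_means.std())
# A census (n = N) makes the first SD exactly zero, but the second remains:
# even the full finite population is only one realization of the assumed model.
```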

Posted by Deirdre Bloome at 5:20 PM

21 October 2009

Multiple Instruments

I recently found a paper by Angus Deaton that attempts to (1) discount the usefulness of instrumental variables for making causal inferences in development economics and (2) discount the usefulness of field experiments. He has definitely stirred the pot a little and is now part of an interesting debate, although the discussion seems to be more focused on Deaton's controversial claims about experiments.

I think Deaton overlooks some of the benefits of experimental research, but his criticism of instrumental variables seems dead on, especially on the use of multiple instruments (see pages 12-13). Intuitively, we might think that having many instruments makes for better causal inference -- if one doesn't work out, then the others will pick up the slack. Following this logic, studies that use multiple instruments and "test" for exogeneity with overidentification tests have become popular in the instrumental variables literature. Essentially, these tests boil down to re-estimating the model with subsets of the instruments and showing that the estimated coefficients don't change dramatically. But agreement can mean one of two things: (a) all of the instruments, not just one, are exogenous, or (b) all of the instruments are endogenous in similar ways. Personally, I think the probability of finding even a single good instrument for a given problem is small, so when shown a research design with multiple instruments, I need some serious convincing that miraculously all of the instruments are valid.
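A toy simulation (not from Deaton's paper) makes the point concrete: when two instruments are contaminated by the same unobservable, 2SLS estimates from either instrument alone or from both together agree closely with one another while all missing the truth, so agreement across subsets is weak evidence of validity.

```python
# Toy simulation (not from Deaton's paper): two instruments that are endogenous
# in the same way give 2SLS estimates that agree with each other but not with
# the truth, so agreement across subsets of instruments is weak reassurance.
import numpy as np

def tsls(y, x, Z):
    """Two-stage least squares with a constant; returns the coefficient on x."""
    Z = np.column_stack([np.ones(len(y)), Z])
    xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]      # first-stage fitted values
    X = np.column_stack([np.ones(len(y)), xhat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]       # second-stage slope

rng = np.random.default_rng(3)
n, beta = 50_000, 1.0
u = rng.normal(size=n)                     # unobservable that taints everything
z1 = u + rng.normal(size=n)                # both "instruments" correlated with u
z2 = u + rng.normal(size=n)
x = z1 + z2 + rng.normal(size=n)
y = beta * x + 2 * u + rng.normal(size=n)  # u also enters the outcome directly

print("2SLS with z1 only:", tsls(y, x, z1))
print("2SLS with z2 only:", tsls(y, x, z2))
print("2SLS with both:   ", tsls(y, x, np.column_stack([z1, z2])))
# All three agree closely (around 1.67 here) and all are far from the true beta = 1.
```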

I am probably overly skeptical and I am very sympathetic to heroic attempts to solve difficult problems of causal inference to answer important questions. Still, it seems that having multiple instruments can become an embarrassment of riches. A good instrument is so hard to come by that having too many starts to lend evidence against an empirical argument rather than for it.

Posted by Richard Nielsen at 12:41 PM

20 October 2009

Elements of Statistical Learning (Online)

In case you had not already heard, Trevor Hastie, Robert Tibshirani, and Jerome Friedman have put a PDF copy of the second edition of their excellent text Elements of Statistical Learning on the book's website. I am sure many of you already own it, but a searchable version for the laptop is incredibly useful. The second edition has a lot of new content, including completely new chapters on Random Forests, Ensemble Learning, Undirected Graphical Models, and High-Dimensional Problems.

While a copy on your computer is very handy, a desk copy of this book is essential if you are interested in machine learning or data mining. The book is also a sight to behold. You can buy a copy at Amazon or Springer.

Posted by Matt Blackwell at 10:15 AM

19 October 2009

Eggers on "Electoral Rules, Opposition Scrutiny, and Policy Moderation in French Municipalities"

Please join us this Wednesday, October 21st, when we will have a change in the schedule. We are happy to have Andy Eggers (Department of Government) presenting a talk titled "Electoral Rules, Opposition Scrutiny, and Policy Moderation in French Municipalities: An Application of the Regression Discontinuity Design." Andy has provided the following abstract for his talk:

Regression discontinuity design (RDD) is a powerful and increasingly popular approach to causal inference that can be applied when treatment is assigned deterministically based on a continuous covariate. In this talk, I will present an application of RDD from French municipalities, where the system of electing the municipal council depends on whether the city's population is above or below 3500. First I show that cities above the population cutoff have fewer uncontested elections and more opposition representation on municipal councils, consistent with expectations. I then trace the effect of these political changes -- which amount to a heightening of the scrutiny imposed on the mayor -- on policy outcomes, providing evidence that more opposition scrutiny leads to more moderate policy.
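For readers new to RDD, here is a bare-bones sketch on simulated data (not Andy's): fit linear regressions on either side of the 3,500-population cutoff within a bandwidth and read the treatment effect off the jump at the cutoff. The bandwidth and data-generating process are invented for illustration.

```python
# A sketch of the regression discontinuity idea on simulated data (not Andy's):
# fit separate linear regressions on either side of the 3,500-population cutoff
# within a bandwidth, and take the jump at the cutoff as the treatment effect.
import numpy as np

rng = np.random.default_rng(4)
n, cutoff, bandwidth, true_jump = 5_000, 3500, 500, 2.0

pop = rng.uniform(1000, 6000, size=n)                # municipal population
treated = pop >= cutoff                              # the electoral rule applies
outcome = 0.001 * pop + true_jump * treated + rng.normal(size=n)

def side_fit(mask):
    """Linear fit of outcome on (pop - cutoff) for observations in `mask`,
    returning the predicted value exactly at the cutoff."""
    X = np.column_stack([np.ones(mask.sum()), pop[mask] - cutoff])
    coef = np.linalg.lstsq(X, outcome[mask], rcond=None)[0]
    return coef[0]                                    # intercept = value at cutoff

near = np.abs(pop - cutoff) <= bandwidth
rd_estimate = side_fit(near & treated) - side_fit(near & ~treated)
print("RD estimate of the jump at the cutoff:", rd_estimate)   # roughly 2
```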

The Applied Statistics workshop meets each Wednesday in room K-354, CGIS-Knafel (1737 Cambridge St). We start at 12 noon with a light lunch, with presentations beginning around 12:15 and we usually wrap up around 1:30 pm. We hope you can make it.

Posted by Matt Blackwell at 7:21 PM

14 October 2009

The Fundamental Regret of Causal Inference

Tim Kreider at the New York Times has a short piece on what he dubs "The Referendum" and how it plagues us:

The Referendum is a phenomenon typical of (but not limited to) midlife, whereby people, increasingly aware of the finiteness of their time in the world, the limitations placed on them by their choices so far, and the narrowing options remaining to them, start judging their peers' differing choices with reactions ranging from envy to contempt. ...Friends who seemed pretty much indistinguishable from you in your 20s make different choices about family or career, and after a decade or two these initial differences yield such radically divergent trajectories that when you get together again you can only regard each other's lives with bemused incomprehension.

Those familiar with causal inference will recognize this as stemming from the Fundamental Problem of Causal Inference: we cannot observe, for one individual, both their response to treatment and their response to control. The article is an elegant look at how we grow to worry about those mysterious missing potential outcomes--the paths we didn't choose--and how we use our friends' lives to impute them. Kreider goes on to make this point exactly, with a beautiful quote from a novel:

The problem is, we only get one chance at this, with no do-overs. Life is, in effect, a non-repeatable experiment with no control. In his novel about marriage, "Light Years," James Salter writes: "For whatever we do, even whatever we do not do prevents us from doing its opposite. Acts demolish their alternatives, that is the paradox." Watching our peers' lives is the closest we can come to a glimpse of the parallel universes in which we didn't ruin that relationship years ago, or got that job we applied for, or got on that plane after all. It's tempting to read other people's lives as cautionary fables or repudiations of our own.

Perhaps the only response is that, while so close to us in so many respects, friends may be poor matches for gauging these kinds of effects. In any case, "Acts demolish their alternatives, that is the paradox" is the best description of the problem of causal inference that I have seen.

Posted by Matt Blackwell at 4:19 PM

13 October 2009

An on "Bayesian Propensity Score Estimation"


We hope you can join us at the Applied Statistics workshop this Wednesday, October 14th at 12 noon, when we will be happy to have Weihua An, a graduate student in the Sociology Department here at Harvard. Weihua will be presenting "Bayesian Propensity Score Estimators: Simulations and Applications." He has provided the following abstract:


Despite their popularity, conventional propensity score estimators (PSEs) do not carry the estimation uncertainty in the propensity score into causal inference. This paper develops Bayesian propensity score estimators (BPSEs) that model the joint likelihood of both the outcome and the propensity score in one step, naturally incorporating that uncertainty into causal inference. Simulations show that PSEs treating estimated propensity scores as if they were known will overestimate the variation in treatment effects and result in overly conservative inference, whereas BPSEs will provide corrected variance estimation and valid inference. Compared to other direct adjustment methods (e.g., Abadie and Imbens 2009), BPSEs are guaranteed to provide positive variance estimates, are more reliable in small samples, and are more flexible in accommodating complex propensity score models. To illustrate the proposed methods, BPSEs are applied to evaluating a job training program.
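For contrast, here is a sketch of the conventional two-step approach the abstract pushes against: estimate the propensity score by logistic regression, then plug it into an inverse-probability-weighted estimate as if it were known. Everything here (data, models, numbers) is simulated for illustration and is not Weihua's estimator.

```python
# Sketch of a conventional (non-Bayesian) propensity score estimator of the kind
# the abstract contrasts with. Data are simulated; the true effect is 1.5.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 5_000
X = rng.normal(size=(n, 3))
treat = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1]))))
y = 1.5 * treat + X @ np.array([1.0, 1.0, 0.0]) + rng.normal(size=n)

# Step 1: estimate the propensity score once, by maximum likelihood.
ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]

# Step 2: plug the estimates in as if they were the true scores -- this is the
# step whose uncertainty conventional PSEs ignore and BPSEs propagate.
ipw_ate = (np.average(y[treat == 1], weights=1 / ps[treat == 1])
           - np.average(y[treat == 0], weights=1 / (1 - ps[treat == 0])))
print("IPW estimate of the treatment effect:", ipw_ate)   # roughly 1.5
```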

The workshop will be in room K354 of CGIS, 1737 Cambridge St. The workshop starts at noon and usually wraps up around 1:30. There will be a light lunch. We hope you can make it.

Posted by Matt Blackwell at 12:53 AM

9 October 2009

Tom Coburn can backward induce

We are a few days late to comment on the story of Senator Tom Coburn's amendment to the Commerce, Justice and Science Appropriations Bill to cut all National Science Foundation funding for the political science program and any of its missions. Choice quote (of which there are many): "...it is difficult, even for the most creative scientist, to link NSF's political science findings to the advancement of cures to cancer or any other disease." Snap.

This has received attention from the social science community and others. Even Paul Krugman, mentioned in Coburn's press release as an example of (wasteful? political?) NSF funding, has something to say about it. There's no need to rehash the arguments here, which ever-so-nicely point out that Senator Coburn doesn't really know what he's talking about nor do his arguments make a whole lot of sense.

Regardless of the arguments, I just wanted to put a graph up to put all of this in perspective. In the 111th Congress, Coburn has had very little success with his amendments:
[Figure: coburn.png -- outcomes of Coburn-sponsored amendments in the 111th Congress]
Seven of the rejections are instances when Coburn's amendment was tabled without discussion. Most of the rejections have been of proposed budget cuts or bans on funding certain projects. And that is just this year: out of all the roll call votes on Coburn-sponsored amendments in the Senate over his tenure, only 8 of 68 have actually passed.

I understand trying to tackle his critiques, as they track with an internal debate already in the discipline. But I think it may be a tad knee-jerk to start letter-writing campaigns to our Senators. Tom Coburn knows that putting out no-win amendments is a great way to take positions in the Senate without committing to anything. Minority amendments are a costless signal of the blandest kind--even a political scientist can see that.

Posted by Matt Blackwell at 12:21 PM

6 October 2009

Criminal tricks and sugary treats

Just in time for Halloween comes a study in the British Journal of Psychiatry by Moore, Carter, and van Goozen that uses data from the British Cohort Study to estimate the effect of daily candy intake on adult violent behavior.

They find that 10-year-olds who ate candy daily were much more likely to be convicted of a violent crime at age 34 than those who did not. They cite this as evidence that childhood diet has an effect on adult behavior. One of their hypothesized mechanisms is that using candy as a reward for children (e.g., for behavior modification) inhibits the child's ability to delay gratification. And there is evidence that children who have trouble delaying gratification tend to score lower on a host of measures, including the SAT (see also: the marshmallow studies).

The longitudinal data give them leverage. For instance, the authors are able to control for parenting style at age 5 along with other variables, such as various scales of behavior problems or mental abilities at age 5 (some of these were discarded in the final analysis because of their variable selection rules). These controls ease my main concern: that "problem children" might both elicit a certain type of parenting and indicate a propensity for violent adult behavior. The controls help to rule out this possibility (though I will say that I am not familiar with this literature, and they use fairly complicated scales to measure these concepts).

Strangely, at least to me, they do not seem to control for parental income or socio-economic class. I have a few ideas as to why this might matter. First, candy is relatively cheap compared to a good diet, so poorer families might be forced to choose the cheaper option when feeding their children. Second, financial pressures lead to time pressures, which could push parents to take shortcuts--feeding their children junk food because it is quick or using it to induce behavior because it is easy. Thus, parental income may matter greatly for candy intake, and it may also affect the propensity to commit violent crimes. I am not certain this is true, but it seems plausible and goes unmentioned in the paper. Even if the finding is not causal, however, it is still interesting.

Posted by Matt Blackwell at 1:48 PM

5 October 2009

Robins on "Optimal Treatment Regimes"

Please join us this Wednesday, October 7th at the Applied Statistics workshop when we will be happy to have Jamie Robins, the Mitchell L. and Robin LaFoley Dong Professor of Epidemiology here at Harvard, who will be presenting on "Estimation of Optimal Treatment Strategies from Observational Data with Dynamic Marginal Structural Models." Jamie has passed along a related paper with the following abstract:

We review recent developments in the estimation of an optimal treatment strategy or regime from longitudinal data collected in an observational study. We also propose novel methods for using the data obtained from an observational database in one health-care system to determine the optimal treatment regime for biologically similar subjects in a second health-care system when, for cultural, logistical, or financial reasons, the two health-care systems differ (and will continue to differ) in the frequency of, and reasons for, both laboratory tests and physician visits. Finally, we propose a novel method for estimating the optimal timing of expensive and/or painful diagnostic or prognostic tests. Diagnostic or prognostic tests are only useful in so far as they help a physician to determine the optimal dosing strategy, by providing information on both the current health state and the prognosis of a patient because, in contrast to drug therapies, these tests have no direct causal effect on disease progression. Our new method explicitly incorporates this no direct effect restriction.

A copy of the paper is also available.

The Applied Statistics workshop meets each Wednesday in room K-354, CGIS-Knafel (1737 Cambridge St). We start at 12 noon with a light lunch, with presentations beginning around 12:15 and we usually wrap up around 1:30 pm. We hope you can make it.

Posted by Matt Blackwell at 11:31 AM

Engaging Data Forum at MIT

There is a lot of discussion around IQSS these days about managing personal data -- both best practices for researchers and what policy should be at the university and government levels. I just heard about an event at MIT that engages many of these issues and looks very interesting; today is apparently the last day for early registration.

First International Forum on the Application and Management of Personal Electronic Information

October 12-13, 2009

Massachusetts Institute of Technology

http://senseable.mit.edu/engagingdata/

The Engaging Data: First International Forum on the Application and Management of Personal Electronic Information is the launching event of the Engaging Data Initiative, which will include a series of discussion panels and conferences at MIT. This initiative seeks to address the issues surrounding the application and management of personal electronic information by bringing together the main stakeholders from multiple disciplines, including social scientists, engineers, manufacturers, telecommunications service providers, Internet companies, credit companies and banks, privacy officers, lawyers, watchdogs, and government officials.

The goal of this forum is to explore the novel applications for electronic data and address the risks, concerns, and consumer opinions associated with the use of this data. In addition, it will include discussions on techniques and standards for both protecting and extracting value from this information from several points of view: what techniques and standards currently exist, and what are their strengths and limitations? What holistic approaches to protecting and extracting value from data would we take if we were given a blank slate?

Posted by Andy Eggers at 8:05 AM

1 October 2009

Repeal Power Laws

A group of students from the Machine Learning department at Carnegie Mellon took to the streets last week to protest at the G20 summit in Pittsburgh. I am afraid their issues were not taken seriously inside the summit. There is a first-hand account and a photo set on flickr. I can't decide if my favorite is "Repeal Power Laws" or "Safer Data Mining".

[Photos: datamining.jpg and signs.jpg, from the flickr set]

Posted by Matt Blackwell at 3:35 PM