29 September 2008
Please join us on Wednesday, October 1st, when Gary King, the David Florence Professor of Government, will present "Matching for Causal Inference Without Balance Checking". A draft of the paper is available here, and here is the abstract:
We address a major discrepancy in matching methods for causal inference in observational data. Since these data are typically plentiful, the goal of matching is to reduce bias and only secondarily to keep variance low. However, most matching methods seem designed for the opposite problem, guaranteeing sample size ex ante but limiting bias by controlling for covariates through reductions in the imbalance between treated and control groups only ex post and only sometimes. (The resulting practical difficulty may explain why many published applications do not check whether imbalance was reduced and so may not even be decreasing bias.) We introduce a new class of "Monotonic Imbalance Bounding" (MIB) matching methods that enables one to choose a fixed level of maximum imbalance, or to reduce maximum imbalance for one variable without changing it for the others. We then discuss a specific MIB method called "Coarsened Exact Matching" (CEM) which, unlike most existing approaches, also explicitly bounds through ex ante user choice both the degree of model dependence and the causal effect estimation error, eliminates the need for a separate procedure to restrict data to common support, meets the congruence principle, is approximately invariant to measurement error, works well with modern methods of imputation for missing data, is computationally efficient even with massive data sets, and is easy to understand and use. This method can improve causal inferences in a wide range of applications, and may be preferred for simplicity of use even when it is possible to design superior methods for particular problems. We also make available open source software which implements all our suggestions.
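For readers unfamiliar with CEM, the core idea is simple enough to sketch in a few lines. The following is a minimal illustration in Python, not the authors' open-source implementation; the quantile-based binning and the function name are my own simplifying assumptions:

```python
import numpy as np
from collections import defaultdict

def cem_weights(X, treated, n_bins=5):
    """Toy coarsened exact matching: bin each covariate, match exactly
    on the bin signatures, and reweight controls within strata."""
    X = np.asarray(X, dtype=float)
    treated = np.asarray(treated, dtype=bool)
    n, p = X.shape

    # 1. Coarsen: each unit's stratum is the tuple of its bin indices.
    bin_ids = []
    for j in range(p):
        cuts = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1))[1:-1]
        bin_ids.append(np.searchsorted(cuts, X[:, j]))
    strata = defaultdict(list)
    for i, key in enumerate(zip(*bin_ids)):
        strata[key].append(i)

    # 2. Common support comes for free: drop strata lacking either
    #    treated units or controls.
    matched = []
    for idx in strata.values():
        idx = np.array(idx)
        if treated[idx].any() and (~treated[idx]).any():
            matched.append(idx)

    # 3. Weights: matched treated units get 1; controls in stratum s get
    #    (treated_s / controls_s) * (total matched controls / treated),
    #    so the weighted controls mirror the treated across strata.
    w = np.zeros(n)
    m_t = sum(treated[idx].sum() for idx in matched)
    m_c = sum((~treated[idx]).sum() for idx in matched)
    for idx in matched:
        t, c = idx[treated[idx]], idx[~treated[idx]]
        w[t] = 1.0
        w[c] = (len(t) / len(c)) * (m_c / m_t)
    return w
```

Coarsening more finely (a larger n_bins) tightens the imbalance bound but drops more unmatched units, which is exactly the ex ante trade-off the abstract describes.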
The applied statistics workshop meets in room K-354, CGIS-Knafel (1737 Cambridge St) at 12 noon, with a light lunch served. The presentation will begin at 12:15 and the workshop usually ends around 1:30. All are welcome to attend.
Posted by Justin Grimmer at 8:52 PM
26 September 2008
For those of you who want to do some exercises or solve typical problems in probability theory and random processes, I strongly recommend the book by Geoffrey Grimmett and David Stirzaker, One Thousand Exercises in Probability. As the authors note in the preface, the book actually contains over three thousand problems, since many exercises include several parts. Personally, I find this book very useful, partly because all exercises come with solutions, which makes it much more readable than many of its counterparts, and partly because I've noticed that some faculty here tend to adopt its exercises for class assignments and exams. (Am I the first person here to notice this?) So I recommend this book to you; hopefully it will help you deepen your understanding of those daunting proofs in probability theory and random processes. With luck, you may even learn how to get used to them in von Neumann's sense:
In mathematics you don't understand things, you just get used to them.
Posted by Weihua An at 7:59 PM
25 September 2008
The NBER just posted a new working paper by Steven Levitt and John List, "Field Experiments in Economics: The Past, The Present, and The Future." I've only had a first glance, but the paper looks like an easy-to-read history of field experiments in economics and a (short) summary of their limitations. Levitt and List also suggest that partnerships with private institutions could be the future of this field. It seems like a natural conclusion: collaborating with the private sector should create more opportunities for good research, and the money and infrastructure will be attractive to researchers. And anyway, what other sector is left to be conquered? But maybe such partnerships are only useful for certain areas of research (Levitt and List suggest the setting could be a useful laboratory for the field of industrial organization). And firms, like any institution, must have an incentive to participate. This might be fine for learning about fundamental economic behavior, but will we see more declarations of interest on experiments related to policy?
Levitt, S. and List, J. (2008) "Field Experiments in Economics: The Past, The Present, and The Future." NBER Working Paper 14356, http://papers.nber.org/papers/w14356
Harvard users click here for PIN access.
This study presents an overview of modern field experiments and their usage in economics. Our discussion focuses on three distinct periods of field experimentation that have influenced the economics literature. The first might well be thought of as the dawn of "field" experimentation: the work of Neyman and Fisher, who laid the experimental foundation in the 1920s and 1930s by conceptualizing randomization as an instrument to achieve identification via experimentation with agricultural plots. The second, the large-scale social experiments conducted by government agencies in the mid-twentieth century, moved the exploration from plots of land to groups of individuals. More recently, the nature and range of field experiments has expanded, with a diverse set of controlled experiments being completed outside of the typical laboratory environment. With this growth, the number and types of questions that can be explored using field experiments has grown tremendously. After discussing these three distinct phases, we speculate on the future of field experimental methods, a future that we envision including a strong collaborative effort with outside parties, most importantly private entities.
Posted by Sebastian Bauhoff at 7:30 AM
24 September 2008
The authors of "Government Data and the Invisible Hand" provide some interesting advice about how the next president can make the government more transparent:
If the next Presidential administration really wants to embrace the potential of Internet-enabled government transparency, it should follow a counter-intuitive but ultimately compelling strategy: reduce the federal role in presenting important government information to citizens. Today, government bodies consider their own websites to be a higher priority than technical infrastructures that open up their data for others to use. We argue that this understanding is a mistake. It would be preferable for government to understand providing reusable data, rather than providing websites, as the core of its online publishing responsibility.
I've blogged here a couple of times about the role transparency-minded programmers and other private actors are playing in opening up access to government data sources. This paper draws the logical policy conclusion from what we've seen in the instances I blogged about: third parties often do a better job of bringing important government data to the people than the government does. (For example, compare govtrack.us/opencongress.org with http://thomas.loc.gov.) The upshot of the paper is that the government should make it easier for those third parties to make the government's websites look bad. By focusing on providing structured data, the government would save web developers some of the hassle involved in parsing and combining data from unwieldy government sources, and reduce the time between the release of a clunky government site and the release of a private site that repackages the underlying data and combines it with new sources in an interesting way.
Of course, to the extent that government data is made available in more convenient formats, our work as academic researchers gets easier too, and we can spend more time on analysis and less on data wrangling. In fact, for people doing social science stats, it's really the structured data and not the slick front-end that is important (although many of the private sites provide both).
I understand that this policy proposal is an idea that's been circulating for a while (anyone want to fill me in on the history?) and apparently both campaigns have been listening. It will be interesting to see whether these ideas lead to any change in the emphasis of government info policy.
Posted by Andy Eggers at 9:09 AM
23 September 2008
As an applied researcher, I've often come across missing data problems where my data are categorical. This can raise issues because most standard multiple imputation packages assume the multivariate normal (MVN) distribution, which is a poor fit for categorical and binary data.
The standard shortcut for overcoming this problem is to impute under the MVN assumption anyway, then round the continuous imputations to finish out the imputation. But as Recai Yucel, Yulei He, and Alan Zaslavsky point out in their May 2008 article in The American Statistician, naive rounding can bias estimates, particularly when the underlying data are asymmetric or multimodal.
So what should the applied researcher do when multiply imputing categorical data? The authors propose a calibration method whereby one duplicates the original data but sets the observed values for the variable of interest to missing in the duplicated copy. The original data and the duplicated data are then stacked, and imputation is carried out on the stacked dataset. By comparing the fraction of 1's among the originally observed (but now imputed) observations in the duplicated data (Y_obs(dup)) with the fraction of 1's in the original observed data (Y_obs), one can find the appropriate cutoff c and use it to round the imputed values for the genuinely missing cases to 0 or 1.
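To make the calibration step concrete, here is a minimal sketch in Python. It assumes the MVN imputation has already been run on the stacked data; the function name and variable names are mine, not the authors':

```python
import numpy as np

def calibration_cutoff(y_obs, y_dup_imputed):
    """Find the rounding cutoff c for one binary variable.

    y_obs:         binary values actually observed (Y_obs)
    y_dup_imputed: continuous MVN imputations for the duplicated copies
                   of those observations (Y_obs(dup)), deliberately set
                   to missing before imputing
    """
    y_obs = np.asarray(y_obs, dtype=float)
    y_dup_imputed = np.asarray(y_dup_imputed, dtype=float)
    p_obs = y_obs.mean()  # fraction of 1's among the observed values
    # Cut at the (1 - p_obs) quantile of the duplicated imputations, so
    # that rounding them at c reproduces the observed fraction of 1's.
    return np.quantile(y_dup_imputed, 1.0 - p_obs)

# Then round the imputations for the genuinely missing cases:
#   y_completed = (y_mis_imputed >= c).astype(int)
```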
This is a neat technique, not least because it's very easy to implement in practice. In any case, check out the entire paper for more details on the method.
Posted by John Graves at 9:01 PM
Please join us tomorrow (Wednesday, 9/24) when we welcome Ben Fry to the applied statistics workshop. Ben's research explores data visualization (more details can be found here), including details of his recently completed book "Visualizing Data" and samples from his previous work.
The workshop will meet at 12 noon in room K-354, CGIS-Knafel (1737 Cambridge St), with a light lunch served. The presentation will begin at 12:15 and usually ends around 1:30 pm. All are welcome.
Posted by Justin Grimmer at 10:39 AM
18 September 2008
From Jeff Segal via Gary King, we get the following call for papers for the Midwest Political Science Conference. An interesting bit of news here is that the conference is introducing a registration discount for people outside of the discipline.
Ask your favorite political scientist what the biggest political science conference is, and she'll tell you it's the American Political Science Association. Ask her what the best political science conference is, and she'll tell you it's the Midwest Political Science Association meeting, held every April in the beautiful Palmer House in Chicago.

The Midwest Political Science Association, like most academic associations, charges higher conference registration rates to nonmembers than to members. Hoping to continue to increase attendance at its annual meeting by people outside of political science and related fields, the Association will begin charging the lower (member) rate to registrants who 1) have academic appointments outside of political science or related fields (policy, public administration, and political economy) and 2) do not have a PhD in political science or the same related fields.
In addition, the Association grants, on request, a substantial number of conference registration waivers for first time participants who are outside the discipline.
The call for papers for the 2009 meeting, due October 10, is at http://www.mpsanet.org/~mpsa/index.html.
Hope to see you in Chicago.
Sincerely,
Jeffrey Segal, President
Midwest Political Science Association
Posted by Andy Eggers at 6:41 AM
15 September 2008
In a working paper entitled "Can We Test for Bias in Scientific Peer Review?", Andrew Oswald proposes a method of detecting whether journal editors (and the peer review process generally, I suppose) discriminate against certain kinds of authors. His approach, in a nutshell, is to look for discrepancies between the editor's comparison of two papers and how those papers were ultimately compared by the scholarly community (based on citations). In tests he runs on two high-ranking American economics journals, he doesn't find a bias by QJE editors against authors from England or Europe (or in favor of Harvard authors), but he does find that JPE editors appear to discriminate against their Chicago colleagues.
While publication politics is of course interesting to me and other academics, I bring up this paper not so much for the results as for the technique. Since the most important decision an editor makes is whether or not to publish an article, the obvious way of trying to determine whether editors are biased would be to look at that decision -- perhaps look at whether editors are more likely to reject articles by a certain type of author, controlling for article quality. One could imagine a controlled experiment of this type, but otherwise this is an unworkable design: there is no good general way to "control for quality," and at any rate the record of what was submitted where would be impossible to piece together. Oswald's design neatly addresses both of these problems. Instead of looking at the untraceable accept/reject decision, he looks at the decision to accept two articles and place them next to each other in an issue; not only does this convey information about the editor's judgment of the articles' relative quality, but it means that citations to those articles provide a plausible comparison of their quality, uncomplicated by differences in the relative impact of different journals.
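To see what a test built on this design might look like, here is a stylized sketch in Python: a within-issue permutation test on citation counts. This is my own toy version under an assumed data structure, not Oswald's actual estimator:

```python
import numpy as np

def citation_gap(issues):
    """Mean within-issue citation gap between articles by the author
    group of interest and their issue-mates. A positive gap suggests
    the editor held that group to a higher bar: their papers had to be
    better (as later judged by citations) to earn the same placement."""
    gaps = []
    for cites, flagged in issues:  # one (citations, group flags) pair per issue
        cites = np.asarray(cites, dtype=float)
        flagged = np.asarray(flagged, dtype=bool)
        if flagged.any() and (~flagged).any():
            gaps.append(cites[flagged].mean() - cites[~flagged].mean())
    return float(np.mean(gaps))

def permutation_pvalue(issues, n_perm=10_000, seed=0):
    """Permute the author flags within each issue to build the null
    distribution of the gap, exploiting the idea that issue-mates
    were judged comparable by the editor."""
    rng = np.random.default_rng(seed)
    observed = citation_gap(issues)
    null = [citation_gap([(c, rng.permutation(np.asarray(f)))
                          for c, f in issues])
            for _ in range(n_perm)]
    return float(np.mean(np.asarray(null) >= observed))
```

Permuting flags only within issues keeps the comparison conditional on the editor's placement decision, which is the heart of Oswald's design.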
Oswald's approach of course rests on the assumption that citations provide an unbiased measure of the quality of a paper (at least relative to other papers published in the same volume), which is probably not true: any bias we might expect among journal editors would likely be common among scholars as a whole and would thus be reflected in citations. Oswald's test therefore really compares the bias of editors against the bias of the scholarly community as a whole: if everyone is biased in the same way, the test would never be able to reject the null hypothesis of no bias.
This kind of issue aside, it seems like this general approach could be useful in other settings where we want to assess whether some selection process is biased. I haven't had much of a chance to think about it -- anyone have any suggestions where this kind of approach could be, or has been, applied to other topics?
Posted by Andy Eggers at 8:13 AM
11 September 2008
Here's a paper in which the authors devised a clever way to gather data and answer an interesting question:
Pigskins and Politics: Linking Expressive Behavior and Voting
David Laband, Ram Pandit, Anne Laband & John Sophocleus
Journal of Sports Economics, October 2008, Pages 553-560
Abstract:
In this article, the authors use data collected from nearly 4,000 single-family residences in Auburn, Alabama to investigate empirically whether nonpolitical expressiveness (displaying support for Auburn University's football team outside one's home) is related to the probability that at least one resident voted in the national/state/local elections held on November 7, 2006. Controlling for the assessed value of the property and the length of ownership, the authors find that the likelihood of voting by at least one person from a residence with an external display of support for Auburn University is nearly 2 times greater than from a residence without such a display. This suggests that focusing narrowly on voting as a reflection of political expressiveness may lead researchers to overstate the relative importance of expressiveness in the voting context and understate its more fundamental and encompassing importance in a variety of contexts, only one of which may be voting.
What I think is clever here is the way the project uses observable factors (stuff in your yard, whether you vote, how much your house costs) to shed light on a fairly interesting aspect of behavior (why do people vote?). Based on a working paper version, the authors simply drove around the city of Auburn, recording whether houses displayed political signs and Auburn paraphernalia (ranging from flying an AU flag to "placing an inflated figure of Aubie (AU's school mascot) in one's yard"). They later linked this up with voter rolls and data on home prices to get their correlations.
Of course, there are some problems with using football paraphernalia as a measure of "nonpolitical expressiveness." I don't know Auburn, but are Auburn fans more likely to be Republicans (controlling for the value of the house)? I am guessing they are. If more enthusiastic Auburn fans are also more enthusiastic Republicans (not just more expressive Republicans, but more ideologically committed ones), then these estimates would indicate too large a role for "expressiveness," particularly since the authors don't even record the party affiliation of the people living in the houses, let alone their strength of party identification. But their measure may be less confounded with political commitment itself than measures of expressiveness you could find in other communities. If you were to do this in Cambridge, I suppose you could use upkeep of the garden as a measure of expressiveness and community orientation (you wouldn't get far using Harvard football signs), but attention to the garden is of course correlated with wealth, which means there would be all the more difficulty in extracting the pure economic/political factors.
Anyway, I applaud the authors for devising a measure of expressiveness that works pretty well and is so easily observable.
Posted by Andy Eggers at 8:50 AM
10 September 2008
Welcome back for the 2008-2009 academic year. The applied statistics workshop has an exciting lineup of speakers this coming semester. The workshop kicks off this coming Wednesday, September 17th, with Andrew Gelman, Department of Statistics and Political Science, Columbia University. Andrew will be presenting results from his recently released book "Red State, Blue State, Rich State, Poor State". Here is an introduction to the book from the publisher:
With wit and prodigious number crunching, Andrew Gelman and his coauthors get to the bottom of why Democrats win elections in wealthy states while Republicans get the votes of richer voters, how the two parties have become ideologically polarized, and other issues. Gelman uses eye-opening, easy-to-read graphics to unravel the mystifying patterns of recent voting, and in doing so paints a vivid portrait of the regional differences that drive American politics. He demonstrates in the plainest possible terms how the real culture war is being waged among affluent Democrats and Republicans, not between the haves and have-nots; how religion matters for higher-income voters; how the rich-poor divide is greater in red not blue states--and much more.
With the excitement surrounding the current presidential race, this presentation promises to be informative to anyone interested in separating the facts from the myths about vote choice in America. For those interested, a blog about the book is available, and the book itself is available for purchase.
As a reminder, the applied statistics workshop meets every Wednesday in CGIS-Knafel, 1737 Cambridge St, room K-354 (previously N-354, before the Chad Johnson/Prince-esque name change that recently swept through the north building). We start at 12 noon with a light lunch, and the presentations usually begin around 12:15.
To give Andrew the maximum amount of time, we will skip the normal "business" meeting that usually starts the year. If anyone has any suggestions about how the workshop could improve, or would like to present at the workshop this year, please let me know (email is probably the quickest and most effective method: jgrimmer at fas dot harvard dot edu).
Posted by Justin Grimmer at 11:14 AM
2 September 2008
The British Medical Journal just published a great piece by Michael Law* and co-authors on the (in-)effectiveness of direct-to-consumer advertising (DTCA) for pharmaceuticals. The issue continues to be politically controversial and expensive for companies, and good studies are rare. Mike makes use of the linguistic divide in his home country of Canada to evaluate the effectiveness of the ads. Canadian TV stations are not allowed to broadcast pharma ads. French speakers have little choice but to go without, but English-speaking Canada gets to watch ads for pharmaceuticals on US TV stations. The results suggest that for the three drugs under study, the effects of DTCA may be very small and short-term.
An interesting fallout of this work is a wave of media attention to causal inference and identifying counterfactuals. For example, the WSJ writes:
[...] the new study will draw some attention because it is among the first to compare the behavior of people exposed to drug ads with people who weren't.
And the New Scientist says:
However, consumer advertising is usually accompanied by other marketing efforts directly to doctors, making it difficult to tease out the effect of the ads alone.
See here for a longer list of articles at Google News.
I think it's great that the study creates so much interest (meaning it's relevant in real life) and that the media gets interested in research design. I'm curious to see the wider repercussions on both issues.
Law, Michael, Majumdar, Sumit and Soumerai, Stephen (2008) "Effect of illicit direct to consumer advertising on use of etanercept, mometasone, and tegaserod in Canada: controlled longitudinal study" BMJ 2008;337:a1055
* Disclosure: Mike is a recent graduate of the PhD in Health Policy, and a classmate and friend of mine.
Posted by Sebastian Bauhoff at 9:32 PM