Pol Meth Conf III, & GOV 2000 

 Pol Meth Conf III 
Dan Hopkins, G4, Government (guest author) 

 Continuing the discussion of the recent Political Methodology Conference: throughout its first two days, a running joke held that this was really the "Second Annual Conference on Matching," and it was a fair joke, although the two matching papers were, well, matched by two ideal point papers.  So on to ideal points.  Michael Bailey's paper tackled an important problem: because major figures across the different institutions of the federal government face different policy decisions, it is hard to make statements about how their preferences relate.  Is the Supreme Court to the left of Congress?  How would today's Court rule on famous decisions from the past?  Bailey's paper sought to extend ideal points across institutions, using such things as public statements and the court briefs of the Solicitor General to compare the ideal points not just of justices but of members of all three branches of the federal government.  Bailey argued, for example, that if the first Bush administration filed a brief in support of a certain side in a court case, we could use that filing to place Bush in the same space as Chief Justice Rehnquist.  Bailey used the same sort of logic to extend ideal points back in time, drawing on statements about preferences (for instance, Clarence Thomas's statement that Roe was wrongly decided) to place figures from different time periods on the same scale.  Especially impressive was the data collection effort this project entailed, as the author tracked down public statements from a wide range of figures.

 One of the challenges of making these kinds of cross-institutional inferences, though, is that we must implicitly assume non-strategic behavior.  Needing to build a majority of five, justices on the Supreme Court face a task distinct from that of the President, or from that of the average member of the House.  These strategic contexts will undoubtedly affect politicians' decisions: Presidents have little incentive to make public statements that put them at odds with the majority of Americans, even if those statements reflect their preferences.  And if Presidents (or others in the system) are selective about the subjects of their commentary, we might wind up with a biased idea of where they actually stand.  Still, Bailey provided quite a neat paper, one that offers useful tools for tracking inter-institutional dynamics.  The substantive results were also very interesting, with the median ideal point of the Court almost always falling between those of the House and the Senate.

 The next ideal point paper came from Simon Jackman, Matthew Levendusky, and Jeremy Pope.  Here, the goal was to estimate the baseline propensity of a Congressional district to support Democratic or Republican candidates, although much of the Q&A was taken up by questions about whether this was best thought of as the "natural vote" or something else.  The authors emphasized that measurement and structural modeling go hand in hand, because inaccurate measurement may well bias structural estimates of quantities like the incumbency advantage.  They also pointed out that in this area we are content with rough proxies for district tendencies, even though in other areas we demand much more precision in our measurements.  Jackman, Levendusky, and Pope's model is a Bayesian hierarchical ideal point model that draws on information about both Congressional and Presidential results to make inferences about districts' underlying partisan preferences.
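To make the pooling intuition concrete, here is a minimal toy sketch, mine rather than anything from the authors' paper, of how presidential and congressional returns for one district might be combined via normal-normal shrinkage into a single estimate of its baseline Democratic propensity. The function name, the prior values, and the vote shares are all hypothetical.

```python
import numpy as np

def pooled_district_score(pres_shares, cong_shares, prior_mean=0.5, prior_var=0.01):
    """Toy precision-weighted estimate of a district's baseline Democratic
    propensity, pooling presidential and congressional vote shares.
    (Hypothetical illustration, not the Jackman/Levendusky/Pope model.)"""
    obs = np.concatenate([pres_shares, cong_shares])
    obs_var = obs.var(ddof=1) if len(obs) > 1 else 0.02  # crude noise estimate
    # Normal-normal shrinkage: the posterior mean is a precision-weighted
    # average of the prior mean and the mean of the observed vote shares.
    prior_prec = 1.0 / prior_var
    data_prec = len(obs) / max(obs_var, 1e-6)
    return (prior_prec * prior_mean + data_prec * obs.mean()) / (prior_prec + data_prec)

# A safely Democratic district: the estimate shrinks slightly toward 0.5
print(pooled_district_score(np.array([0.62, 0.65]), np.array([0.60, 0.66, 0.63])))
```

The real model is of course far richer (hierarchical structure across districts, covariates, and full posterior uncertainty), but the precision-weighting logic is the same basic ingredient.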

 For me, one provocative result from this paper was that the discrimination parameter (that is, the impact of the covariates on the estimated vote share) increased over the decades.  In other words, demographic characteristics are becoming increasingly effective predictors of districts' preferences.  I would love to see the authors try to get at exactly why that is.  One possibility, which Levendusky mentioned in his presentation, is redistricting: politicians get better at picking their constituents, districts become more homogeneous, and so district-level demographics become better predictors of aggregate vote choices.  To test this theory, one might re-estimate the model without the least populous states, because such states have less potential for gerrymandering (consider Wyoming: no gerrymandering there).  Another possibility is that the electorate is sorting itself into more politically homogeneous groups, something one might test in a preliminary way by running the model separately for high-mobility and low-mobility districts.  The Census provides data on how many people have lived in the same house their entire lives, data that could help with these questions.
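The mobility comparison could be sketched, in a very rough way, as asking whether demographics predict vote share better in one group of districts than the other. Here is a toy illustration on simulated data (the covariate, the noise levels, and the districts are all made up; the real test would re-fit the full ideal point model on each subset):

```python
import numpy as np

rng = np.random.default_rng(0)

def r_squared(X, y):
    """R^2 from an OLS fit of y on X (with an intercept)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Simulated districts: a demographic covariate predicts Democratic vote share
# tightly in "low-mobility" districts and noisily in "high-mobility" ones.
x_low = rng.normal(size=200)
y_low = 0.5 + 0.08 * x_low + rng.normal(scale=0.02, size=200)
x_high = rng.normal(size=200)
y_high = 0.5 + 0.08 * x_high + rng.normal(scale=0.08, size=200)

print(r_squared(x_low, y_low), r_squared(x_high, y_high))
```

If sorting is at work, we would expect the low-mobility fit to dominate, as it does by construction in this simulation.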

  
GOV 2000 
Kevin Quinn 

 This fall I am teaching GOV 2000, Quantitative Methods for Political Science I. The course is also offered for credit through Harvard's distance learning program as GOVT E-2000. GOV 2000 is the first course in the Department of Government's methodology sequence, and it is designed to introduce students to statistical modeling, with an emphasis on least squares linear regression. Although we will not ignore the theory underlying the linear model, much of the course will focus on practical issues that arise when working with regression models. Topics include data visualization, statistical inference for the linear model, assessing model adequacy, when a regression model can be interpreted as a causal model, dealing with leverage points and outliers, robust regression, and methods for capturing nonlinearities. We will also work with real social science datasets throughout the course. For more information, please visit the course website.
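As a small taste of two of these topics, here is a sketch (in Python, not necessarily the software used in the course) of a least squares fit together with a leverage diagnostic, on simulated data:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=30)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=30)   # true slope is 2.0

X = np.column_stack([np.ones(len(x)), x])            # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)         # least squares fit

# Leverage: the diagonal of the hat matrix H = X (X'X)^{-1} X'.
# High-leverage observations can pull the fitted line toward themselves.
H = X @ np.linalg.inv(X.T @ X) @ X.T
leverage = np.diag(H)

print("slope estimate:", beta[1])
print("max leverage:", leverage.max())
```

A useful check: the leverage values always sum to the number of estimated parameters (here, two).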