Authors' Committee

Chair:

Matt Blackwell (Gov)

Members:

Martin Andersen (HealthPol)
Kevin Bartz (Stats)
Deirdre Bloome (Social Policy)
John Graves (HealthPol)
Rich Nielsen (Gov)
Maya Sen (Gov)
Gary King (Gov)

Weekly Research Workshop Sponsors

Alberto Abadie, Lee Fleming, Adam Glynn, Guido Imbens, Gary King, Arthur Spirling, Jamie Robins, Don Rubin, Chris Winship

Blogroll

SMR Blog
Brad DeLong
Cognitive Daily
Complexity & Social Networks
Developing Intelligence
EconLog
The Education Wonks
Empirical Legal Studies
Free Exchange
Freakonomics
Health Care Economist
Junk Charts
Language Log
Law & Econ Prof Blog
Machine Learning (Theory)
Marginal Revolution
Mixing Memory
Mystery Pollster
New Economist
Political Arithmetik
Political Science Methods
Pure Pedantry
Science & Law Blog
Simon Jackman
Social Science++
Statistical modeling, causal inference, and social science


30 November 2006

Remembering the Baldus Study, Part II

Jim Greiner

In a previous post, I summarized the Baldus Study of the role of race in Georgia’s system of capital sentencing during the 1970s. To review, the Study concluded that the race of the victim, but not that of the defendant, was an important factor in determining whether a capital defendant received the death penalty. The Study was a pioneering effort to apply what were then cutting-edge statistical techniques (logistic regression) to questions of race discrimination, and it came within a single justice of rendering Georgia’s capital sentencing system constitutionally invalid.

As part of my dissertation research, which focuses on applying a potential outcomes understanding of causation to perceptions of immutable characteristics, I am reexamining the Baldus Study data. With the benefit of 25+ years of hindsight, I have reluctantly concluded that the Study’s findings are questionable (which is different from wrong). The Study authors collected no data on cases resulting in acquittals or convictions of crimes of severity less than voluntary manslaughter, or indeed on cases that were initially charged as murders but in which charges were reduced prior to trial. The sampling scheme, a complicated one involving some stratification on the outcome variable (imposition of the death penalty), renders calculation of standard errors difficult, and the method the Study authors used to address this problem depends on asymptotics despite (in some cases) a small number of units.

Which leads me to my big questions. Assume for the moment that I’m right that modern thinking renders the conclusions of the Baldus Study questionable. What if the Supreme Court had accepted the Study and struck down Georgia’s capital sentencing system? Would we now think such a decision was based on questionable science? Or should courts accept the best statistical evidence available at the time, even if later researchers believe it questionable, because there are also costs to inaction? (After all, the most I can “prove” is that Baldus et al. did not prove their case, not that their conclusions were wrong.) And what will happen to my own “conclusions” in 25 years?

Posted by James Greiner at 1:53 PM

29 November 2006

Remembering the Baldus Study, Part I

Jim Greiner

One of my current research interests is the application of a potential outcomes framework of causation to perceptions of what lawyers call “immutable characteristics” like race, gender, or national origin. In that vein, I’d like to pay tribute to one of the early greats in the area of quantitative analysis of race in the legal setting: the so-called “Baldus Study” of the role of race in imposition of the death penalty in Georgia. The Study authors, David Baldus, George C. Woodworth, and Charles A. Pulaski, Jr., gathered data on over 1000 Georgia homicides from 1973 to 1979. Although the Study attempted to tackle a variety of questions, the most publicized was whether recent reforms to Georgia’s sentencing process (enacted in response to the Supreme Court’s decision in Furman v. Georgia) had succeeded in removing the relevance of race in the state’s capital sentencing system. The Study’s primary conclusion on this point was that the race of the victim, but not the race of the defendant, played a significant role in deciding whether death was imposed.

The Study was highly publicized, and it led to its own Supreme Court case. In McCleskey v. Kemp, four justices thought that the conclusions of the Baldus Study were sufficient to render Georgia’s capital sentencing system unconstitutional. Five justices disagreed; they thought that the capital defendant in the case had to show that race had played a role in HIS trial, not that race generally played a role in the set of capital trials.

More on the Baldus Study in my next post.

Posted by James Greiner at 1:53 PM

Applied Statistics - Alan Zaslavsky

This week the Applied Statistics Workshop will present a talk by Alan Zaslavsky, Professor of Health Care Policy (Statistics) in the Department of Health Care Policy at Harvard Medical School. Dr. Zaslavsky's statistical research interests include surveys, census methodology, small area estimation, official statistics, missing data, hierarchical modeling, and Bayesian methodology. His research topics in health care policy center on measurement of the quality of care provided by health plans through consumer assessments and clinical and administrative data. Among his current major projects are (1) the Consumer Assessments of Healthcare Providers and Systems (CAHPS) survey implementation for the Medicare system, (2) methodology for surveys in psychiatric epidemiology, centered on validation of the CIDI-A (adolescent) survey in the National Comorbidity Study-Adolescent, and (3) studies on determinants of quality of care for cancer, including both the Statistical Coordinating Center and a research site for the NCI-funded CanCORS (Cancer Consortium for Outcomes Research and Surveillance) study. Other research interests include measurement of disparities in health care, and privacy and confidentiality for health care data.

He is a member of the Committee on National Statistics (CNSTAT) of the National Academy of Sciences and has served on CNSTAT panels on census methodology, small area estimation and race/ethnicity measurement, as well as the Committee on the National Quality Report on Health Care Delivery of the Institute of Medicine.

Dr. Zaslavsky received his A.B. degree at Harvard College, his M.S. at Northeastern University, and his Ph.D. at the Massachusetts Institute of Technology. He is a Fellow of the American Statistical Association.

Professor Zaslavsky will present a talk entitled "Modeling the covariance structure of random coefficients to characterize the quality variation in health plans." The presentation will be at noon on Wednesday, November 29th, in Room N354, CGIS North, 1737 Cambridge St. Lunch will be provided.

Posted by Eleanor Neff Powell at 7:59 AM

28 November 2006

Program on Survey Research - Mark Blumenthal

The Harvard Program on Survey Research is hosting a talk by Mark Blumenthal (aka the Mystery Pollster):

December 1, 2006
CGIS N-354
3:00 - 5:00 p.m. with reception to follow

On December 1, 2006, Mark Blumenthal will join us to discuss surveys and polls in the 2006 elections. Blumenthal is the founder of the influential blog and website MysteryPollster.com, and one of the developers of the more recent website Pollster.com. His analysis of political polling and survey methodology is widely read and admired. Blumenthal has more than 20 years' experience as a survey researcher, conducting and analyzing political polls and focus groups for Democratic candidates and market research surveys for major corporations. His experience includes work with pollsters Harrison Hickman, Paul Maslin, Kirk Brown, Celinda Lake, and Stan Greenberg, and the last 15 years with his partners David Petts and Anna Bennett in the firm Bennett, Petts and Blumenthal (BPB).

Location:
Harvard University
CGIS N-354
1737 Cambridge St.
Cambridge, MA 02138

Posted by Mike Kellermann at 12:19 PM

27 November 2006

Designing and Analyzing Randomized Experiments in Political Science

I just read a paper by Yusaku Horiuchi, Kosuke Imai, and Naoko Taniguchi (HIT) on "Designing and Analyzing Randomized Experiments." HIT draw upon the longstanding statistics literature on this topic and attempt to “pave the way for further development of more methodologically sophisticated experimental studies in political science.” While experiments are becoming more frequent in political science, HIT observe that a majority of recent studies do not randomize effectively and still ignore problems of noncompliance and/or nonresponse.

Specifically, they offer four general recommendations:

(I) Researchers should obtain information about background characteristics of experimental subjects that can be used to predict their noncompliance, nonresponse, and the outcome.

(II) Researchers should conduct efficient randomization of treatments by using, for example, randomized-block and matched-pair designs.

(III) Researchers must make every effort to record the precise treatment received by each experimental subject.

(IV) Finally, a valid statistical analysis of randomized experiments must properly account for noncompliance and nonresponse problems simultaneously.

Take a look. I agree with HIT that these issues are not new, yet too often ignored in political science (exceptions acknowledged). HIT illustrate their recommendations using a carefully crafted online experiment on Japanese elections. Statistically, they employ a Bayesian approach within the general statistical framework of randomized experiments with noncompliance and nonresponse (Angrist, Imbens, and Rubin 1996; Imbens and Rubin 1997; Frangakis and Rubin 1999, 2002). There is also interesting new work on modeling causal heterogeneity in this framework (a big topic in and of itself).
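For readers who haven't seen that framework, here is the simplest version of the estimand it targets (a sketch of the moment-based identification result from Angrist, Imbens, and Rubin 1996, not HIT's full Bayesian model, which also handles nonresponse). With randomized assignment Z, received treatment D, outcome Y, the exclusion restriction, and monotonicity, the complier average causal effect is identified by the ratio of two intention-to-treat effects:

    \mathrm{CACE} \;=\; E\bigl[Y_i(1) - Y_i(0) \mid D_i(1) > D_i(0)\bigr]
    \;=\; \frac{E[Y_i \mid Z_i = 1] - E[Y_i \mid Z_i = 0]}{E[D_i \mid Z_i = 1] - E[D_i \mid Z_i = 0]}
    \;=\; \frac{\mathrm{ITT}_Y}{\mathrm{ITT}_D}.

Ignoring noncompliance amounts to reporting only the numerator, which estimates a different (and usually attenuated) quantity.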

Posted by Jens Hainmueller at 12:19 PM

22 November 2006

Business Information and Social Science Statistics, Part II

I mentioned in this earlier blog entry an interview I did with DM Review. Here's the sequel.

Posted by Gary King at 2:25 PM

21 November 2006

Back to the Drawing Board?


Have you ever been to a social science talk and heard somebody saying things like "I guess I will have to go back to the drawing board…"? I always wondered what that really meant, until an engineering friend of mine suggested taking a look at this.

Maybe we can get one for the IQSS?

Posted by Jens Hainmueller at 11:34 AM

17 November 2006

Bayesian brains?

Amy Perfors

Andrew Gelman has a link to a study that just came out in Nature Neuroscience whose author, Alex Pouget at the University of Rochester, suggests that "the cortex appears wired at its foundation to run Bayesian computations as efficiently as can be possible." I haven't read the paper yet, so I don't have much in the way of intelligent commentary, but I'll try to take a look at it soon. In the meantime, here is a link to the press release so you can read something about it even if you don't have access to Nature Neuroscience. From the blurb, it sounds pretty neat, especially if you (like me) are at all interested in the psychological plausibility of Bayesian models as applied to human cognition.

Posted by Amy Perfors at 11:40 AM

The "Imperial Grip" of Instrumental Variables

The Economist is agog over the increasing prominence of instrumental variables in econometrics ("Winds of Change", November 4, 2006). While it is always nice to get some square inches in a publication with a circulation greater than a few thousand, I'm afraid that I tend to sympathize more with the "instrument police" than the "instrumentalists."

For a variable to be a valid instrument, it must (a) be correlated with the variable whose causal effect we are trying to estimate, and (b) affect the outcome only through that proposed causal variable, so that an exclusion restriction is satisfied. This is true for every estimation in which a proposed instrument is used; one must make a separate case for the validity of the exclusion restriction with respect to each analysis. Leaving aside what should be the second-order problem of actually carrying out an IV analysis, which may be a first-order problem in practice ("what do you mean it has no mean?"), our inability to verify the exclusion restriction in the case of naturally occurring instruments forces us to move from the substance of the problem we are trying to investigate to a duel of "just-so stories" for or against the restriction, a debate that typically cannot be resolved by looking at the empirical evidence.
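To fix ideas, here is a minimal two-stage least squares sketch in Python on simulated data (the data and variable names are made up for illustration, not taken from either paper discussed below). Condition (a) shows up in the first-stage fit; condition (b) is baked into how the data are simulated and is exactly the thing no amount of code can check for you:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # Simulated data: u is an unobserved confounder, z is (by construction) a valid instrument.
    u = rng.normal(size=n)
    z = rng.normal(size=n)                       # instrument
    d = 0.8 * z + u + rng.normal(size=n)         # endogenous "causal" variable
    y = 2.0 * d + u + rng.normal(size=n)         # outcome; the true effect of d is 2

    X = np.column_stack([np.ones(n), d])         # second-stage design matrix
    Z = np.column_stack([np.ones(n), z])         # instrument matrix

    # OLS is biased because d is correlated with the unobserved u.
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

    # 2SLS: project d onto the instrument, then regress y on the fitted values.
    d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
    beta_iv = np.linalg.lstsq(np.column_stack([np.ones(n), d_hat]), y, rcond=None)[0]

    print("OLS estimate: ", beta_ols[1])    # pushed away from 2 by the confounder
    print("2SLS estimate:", beta_iv[1])     # close to 2 because (a) and (b) hold here

If z also affected y directly, the 2SLS estimate would be just as precisely, and just as confidently, wrong; nothing in the output would warn you.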

Consider the two papers described in the Economist article. The first attempts to estimate the effect of colonialism on current economic outcomes. The authors propose wind speed and direction as an instrument for colonization, arguing (plausibly) that Europeans were more likely to colonize an island if they were more likely to encounter it while sailing. So far so good. Then they argue that, while colonization in the past has an effect on economic outcomes in the present, being situated in a location favorable for sailing in the past (i.e., before steam-powered ships) does not. Is this really plausible? The authors think so, I don't, and it isn't obvious that there is a way to resolve the matter. In the second example, the failure of ruling dynasties to produce an heir in Indian princely states is used as an instrument for the imposition of direct rule by the British. Here the exclusion restriction may be more plausible (or - shameless plug - maybe not, if it is the shift from a hereditary to a non-hereditary regime rather than colonialism per se that affects outcomes). One way or the other, is this really what we should be arguing about?

None of this is to say that instrumental variable models can never be useful. When we can be more confident that the exclusion restriction is satisfied (usually because we designed the instrument ourselves), then IV approaches make a lot of sense. Unfortunately (or fortunately), we can't go back and randomly assign island discoveries using something like a coin flip rather than the trade winds. Despite this, nothing seems to slow down the pursuit of more and more tortured instruments. The observation that "the instrumental variable now enjoys an almost imperial grip on the imagination of economists" carries more irony than was perhaps intended.

Posted by Mike Kellermann at 11:03 AM

16 November 2006

How to present math in talks

Since writing my last post (The cognitive style of better powerpoint), I noticed that two other bloggers wrote rather recently on the same topic. The first, from Dave Munger at Cognitive Daily, actually proposes a bit of an experiment to compare the efficacy of text vs. powerpoint - results to be posted Friday. The second, from Chad Orzel at Uncertain Principles, offers a list of "rules of thumb" for doing a good PowerPoint talk.

Given all this, you'd think I wouldn't have anything to add, right? Well, never underestimate my willingness to blather on and on about something. I actually think there's one thing neither they nor I discuss much, and that is presenting mathematical, technical, or statistical information. Both Orzel and I recommend, as much as possible, avoiding equations and math in your slides. And that's all well and good, but sometimes you just have to include some (especially if you're a math teacher and the talk in question is a lecture). For me, this issue crops up whenever I need to describe a computational model -- you need to give enough detail that it doesn't look like the results just come out of thin air, because if you don't, nobody will care about what you've done. And often "enough detail" means equations.

So, for whatever it's worth, here are my suggestions for how to present math in the most painless and effective way possible:

Abandon slideware. This isn't always feasible (for instance, if the conference doesn't have blackboards), nor even necessarily a good idea if the equation count is low enough and the "pretty picture" count is high enough, but I think slideware is sometimes overused, especially if you're a teacher. When you do the work on the blackboard, the students do it with you; when you do it on slideware, they watch. It is almost impossible to be engaged (or keep up) when rows of equations appear on slides; when the teacher works out the math on the spot, it is hard not to. (Okay, harder).

If you can't abandon slideware:

1. Include an intuitive explanation of what the equation means. (This is a good test to make sure you understand it yourself!). Obviously you should always do this verbally, but I find it very useful to write that part in text on the slide also. It's helpful for people to refer to as they try to match it with the equation and puzzle out how it works and what it means -- or, for the people who aren't very math-literate, to still get the gist of the talk without understanding the equation at all.

2. Decompose the equation into its parts. This is really, really useful. One effective way to do this is to present the entire thing at once, and then go through each term piece-by-piece, visually "minimizing" the others as you do so (either grey them out or make them smaller). As a trivial example, consider the equation z = x/y. You might first grey out y and talk about x. Then talk about y and grey out x: you might note, for instance, that y is the denominator, so as y gets larger the result gets smaller, and so on. My example is totally lame, but this sort of thing can be tremendously useful when you get equations that are more complicated. People obviously know what numerators and denominators are, but it's still valuable to explicitly point out in a talk how the behavior of your equation depends on its component parts -- people could probably figure it out given enough time, but they don't have that time, particularly when it's all presented in the context of loads of other new information. And if the equation is important enough to put up, it's important to make sure people understand all of its parts.

3. As Orzel mentioned, define your terms. When you go through the parts of the equation you should verbally do this anyway, but a little "cheat sheet" there on the slide is invaluable. I find it quite helpful sometimes to have a line next to the equation that translates the equation into pseudo-English by replacing the math with the terms. Using my silly example, that would be something like "understanding (z) = clarity of images (x) / number of equations (y)". This can't always be done without cluttering things too much, but when you can, it's great.

4. Show some graphs exploring the behavior of your equation. ("Notice that when you hold x steady, increasing y results in smaller z"). This may not be necessary if the equation is simple enough, but if it's simple enough maybe you shouldn't present it, and just mention it verbally or in English. If what you're presenting is an algorithm, try to display pictorially what it looks like to implement the algorithm. Also, step through it on a very simple dataset. People remember and understand pictures far better than equations most of the time.

5. When referring back to your equation later, speak English. By this I mean that if you have a variable y whose rough English meaning is "number of equations", whenever you talk about it later, refer to it as "number of equations", not y. Half of the people won't remember what y is after you move on, and you'll lose them. If you feel you must use the variable name, at least try to periodically give reminders about what it stands for.

6. Use LaTeX where possible. LaTeX produces equations that are clean and easy to read, unlike PowerPoint (even with lots of tweaking). You don't necessarily have to do the entire talk in LaTeX if you don't want to, but at least typeset the equations in LaTeX, screen capture them and save them as bitmaps, and paste them into PowerPoint. It is much, much easier to read.

Obviously, these points become more or less important depending on the mathematical sophistication of your audience, but I think it's far, far easier to make mathematical talks too difficult rather than too simple. This is because it's not a matter (or not mainly a matter) of sophistication -- some of the most egregious violators of these suggestions that I've seen have been at NIPS, a machine learning conference -- it's a matter of how much information your audience can process in a short amount of time. No matter how mathematically capable your listeners are, it takes a while (and a fair amount of concentration) to see the ramifications and implications of an equation or algorithm while simultaneously fitting it in with the rest of your talk, keeping track of your overall point, and thinking of how all of this fits in with their research. The easier you can make that process, the more successful the talk will be.

Any agreements, disagreements, or further suggestions, I'm all ears.

Posted by Amy Perfors at 11:24 AM

15 November 2006

Gender as a Personal Choice

Jim Greiner

Greetings from the job market for legal academics, which combines the worst aspects of the job markets of all other fields. Apologies for being slow to bring this up, but an article in last week’s New York Times (Tuesday, November 7, 2006, page A1, by Damien Cave) is worth a look. The subject is the recording of gender in New York City records. The City’s Board of Health is considering a proposal to allow persons born in the City to change the sex documented on their birth certificates upon providing certain documentation (e.g., affidavits from doctors and mental health professionals) asserting that the proposed gender change would be permanent. Previously, the City required more physical manifestations of a sex change before it would change its records.

Question: are we moving toward a world in which sex, like race, becomes a personal choice, at least as recorded in official records? Note that in the race context, the law can’t seem to make up its mind on this. The Census Bureau records self-reports only, and many modern social scientists consider race a social construct only, with no relevant biological component. But some existing statutes still define race in terms of biology (e.g., 18 U.S.C. § 1093(6)).

Second question: suppose we are moving toward such a world; what will it do to our efforts to enforce anti-discrimination laws?

Posted by James Greiner at 1:51 PM

14 November 2006

Meta-analysis, Part II

Last time I wrote about the popularity of meta-analysis for synthesizing the results of multiple studies and cited education researcher Derek Briggs, who believes that the method is used too often and sometimes incorrectly.

Recently, I informally re-examined the data from a published meta-analysis on reading instruction methods, running four different Bayesian models on the set of effect sizes given in the paper. All of the hierarchical Bayesian models (which varied only in the priors used and covariates included) showed that a significant amount of uncertainty was ignored by the original meta-analysis, which assumed that the effect size produced by each study was an estimate of one overall true mean. The preliminary results from my analysis supported Briggs' position, since they did not show the significant results that were evident in the meta-analysis paper; in other words, none of the Bayesian analyses came close to indicating a significant effect for the reading instruction method in question. I claim no reliable conclusion for my own analysis – I’m not even going to specify the original paper here – but re-examining the methods of meta-analyses seems worthwhile for the purpose of uncovering uncertainty, if not developing new techniques for synthesizing multiple studies.
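To make the contrast concrete, here is the standard setup in generic notation (a sketch, not the exact specification I ran). Writing y_i for study i's estimated effect size and \sigma_i^2 for its sampling variance, the fixed-effect model behind the original meta-analysis is

    y_i \sim N(\mu, \sigma_i^2),

while the hierarchical (random-effects) models add a between-study variance component,

    y_i \sim N(\theta_i, \sigma_i^2), \qquad \theta_i \sim N(\mu, \tau^2),

with priors on \mu and \tau in the Bayesian versions. Whenever \tau^2 > 0, setting it to zero understates the uncertainty about \mu, which is exactly the extra uncertainty the hierarchical fits revealed.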

The implications are nontrivial: the evidence supporting the teaching methods required by the billion-dollar Reading First initiative, part of the Department of Education’s No Child Left Behind Act, is a long collection of meta-analyses performed by the National Reading Panel.

Posted by Cassandra Wolos at 12:43 PM

13 November 2006

Applied Statistics - Joshua Angrist

This week the Applied Statistics Workshop will present a talk by Joshua Angrist, Professor of Economics at the Massachusetts Institute of Technology.

Professor Angrist received his Ph.D. in Economics at Princeton University, after which he joined the economics departments at Harvard University and Hebrew University before coming to MIT. He is a Fellow of the American Academy of Arts and Sciences and of the Econometric Society, and has served as Co-editor of the Journal of Labor Economics. His publications have appeared in Econometrica, The American Economic Review, The Economic Journal, and The Quarterly Journal of Economics, among others. His research interests include the effects of school inputs and organization on student achievement, the impact of education and social programs on the labor market, immigration, labor market regulation and institutions, and econometric methods for program and policy evaluation. Prof. Angrist also has a long-standing interest in public policy. In addition to his academic work, he has worked as a consultant to the U.S. Social Security Administration, the Manpower Demonstration Research Corporation, and the Israeli government after the Oslo peace negotiations in 1994.

Professor Angrist will present a talk entitled "Lead them to Water and Pay them to Drink: An Experiment in Services and Incentives for College Achievement." The presentation will be at noon on Wednesday, November 15th, in Room N354, CGIS North, 1737 Cambridge St. Lunch will be provided.

Posted by Eleanor Neff Powell at 1:08 PM

11 November 2006

Business Information and Social Science Statistics

I thought readers might be interested in an interview I did with DM Review, a widely read publication in the business world, focusing on what they call "business intelligence, analytics, and data warehousing," which is something close to what we would call social science statistical analysis -- including programs, open source software like R, statistical methods, informatics, etc. They are very interested in what we do, as more sophisticated methods can probably help them a great deal, and the business world certainly has some terrific data sets that would help with what we do. The interview also covers some of the ongoing research at the Institute for Quantitative Social Science.

If you're interested, see Open BI Forum Goes to Harvard ("BI" is jargon for "business intelligence").

Posted by Gary King at 4:18 PM

10 November 2006

Chernoff Faces

We haven't had much on graphics on this blog yet, partly because there are several specialized fora for this peculiar aspect of statistics: for instance, junkcharts, the R-gallery, information aesthetics, the Statistical Graphics and Data Visualization blog, the Data Mining blog, Edward Tufte's forum, Andrew Gelman's blog, and others. Yet I assume readers of this blog wouldn't mind a picture every once in a while, so here are some Chernoff faces for you. In the spirit of Mike's recent entry, they illustrate team statistics from the 2005 baseball season:

[Figure: faces.png - Chernoff faces of team statistics from the 2005 baseball season]

I recently came across Chernoff faces while looking for a neat way to display multivariate data to compare several cities along various dimensions in a single plot. Chernoff faces are a method introduced by Herman Chernoff (Professor Emeritus of Applied Math at MIT and of Statistics at Harvard) in 1971 that converts multivariate data to cartoon faces, the features of which are controlled by the variable values. So, for example, in the above graph each team's winning percentage is represented by face height, smile curve, and hair styling; hits are represented by face width, eye height, and nose height; etc. (for details and extensions see here).

The key idea is that humans are well trained to recognize faces and discern small changes without difficulty. Chernoff faces therefore allow for easy outlier detection and pattern recognition despite the multiple dimensions of the data. Since the features of the faces vary in perceived importance, the way in which variables are mapped to the features should be chosen carefully.
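If you want to play with the idea without a canned routine, here is a bare-bones sketch in Python/matplotlib (the teams and numbers are made up, and it maps only three variables to three features, far fewer than the full method or the figure above):

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.patches import Ellipse

    # Hypothetical team statistics, each scaled to [0, 1]:
    # winning percentage, hits, errors.
    teams = {
        "Team A": [0.90, 0.80, 0.20],
        "Team B": [0.40, 0.30, 0.70],
        "Team C": [0.65, 0.55, 0.45],
    }

    def chernoff_face(ax, x, label):
        """Map three variables to face height, face width, and mouth curvature."""
        height = 1.0 + x[0]              # variable 1 -> face height
        width = 0.8 + 0.8 * x[1]         # variable 2 -> face width
        curvature = 2.0 * (0.5 - x[2])   # variable 3 -> smile (+) or frown (-)

        ax.add_patch(Ellipse((0, 0), width, height, fill=False))                    # head
        ax.add_patch(Ellipse((-0.2 * width, 0.2 * height), 0.1, 0.1, fc="black"))   # eyes
        ax.add_patch(Ellipse((0.2 * width, 0.2 * height), 0.1, 0.1, fc="black"))
        mouth_x = np.linspace(-0.25 * width, 0.25 * width, 50)                      # mouth
        ax.plot(mouth_x, -0.25 * height + curvature * mouth_x ** 2, color="black")
        ax.set_xlim(-1.2, 1.2); ax.set_ylim(-1.2, 1.2)
        ax.set_aspect("equal"); ax.axis("off"); ax.set_title(label)

    fig, axes = plt.subplots(1, len(teams), figsize=(9, 3))
    for ax, (label, x) in zip(axes, teams.items()):
        chernoff_face(ax, x, label)
    plt.show()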

Mathematica and R have canned algorithms for Chernoff faces (see here and here). I haven't seen a Chernoff plot in a social science journal yet, but maybe I am reading the wrong journals. Does anyone know of articles that use this technique? Also, do you think that this is an effective way of displaying data that should be used more often? Obviously there are also problems with this type of display, but even if you don't like the key idea, you have to admit that they look much funnier than the boring bar graphs or line plots we see all the time.

Posted by Jens Hainmueller at 10:29 AM

9 November 2006

The cognitive style of better powerpoint

Amy Perfors

While at the BUCLD conference this last weekend, I found myself thinking about the cognitive effects of using PowerPoint presentations. If you haven't read Edward Tufte's Cognitive Style of PowerPoint, I highly recommend it. His thesis is that powerpoint is "costly to both content and audience," basically because of the cognitive style that standard default PPT presentations embody: chief among its features are a hierarchical path structure for organizing ideas, an emphasis on format over content, and low information resolution.

Many of these negative results -- though not all -- occur because of a "dumb" use of the default templates. What about good powerpoint, that is, powerpoint that isn't forced into the hierarchical path-structure of organization, that doesn't use hideous, low-detail graphs? [Of course, this definition includes other forms of slide presentation, like LaTeX; I'll use the word "slideware" to mean all of these]. What are the cognitive implications of using slideware, as opposed to other types of presentation (transparencies, blackboard, speech)?

Here are my musings, unsubstantiated by any actual research:

I'd bet that the reliance on slideware actually improves the worst talks: whatever its faults, it at least imposes organization of a sort. And it at least gives a hapless audience something to write down and later try to puzzle over, which is harder to do if the talk is a rambling monologue or involves scribbled, messy handwriting on a blackboard.

Perhaps more controversially, I also would guess that slideware improves the best talks - or, at least, that the best talks with slideware can be as good as the best talks using other media. The PowerPoint Gettysburg Address is a funny spoof, but seriously, can you imagine a two-hour long, $23-million-gross movie of someone speaking in front of a blackboard or making a speech? An Inconvenient Truth was a great example of a presentation that was enhanced immeasurably by the well-organized and well-displayed visual content (and, notably, it did not use any templates that I could tell!). In general, because people are such visual learners, it makes sense that a presentation that can incorporate that information in the "right" way will be improved by doing so.

However, I think that for mid-range quality presenters (which most people are) slideware is still problematic. Here are some things I've noticed:

1. Adding slides is so simple and tempting that it's easy to mismanage your time. I've seen too many presentations where the last 10 minutes are spent hastily running through slide after slide, so the audience loses all the content in the disorganized mess the talk has become.

2. Relatedly, slideware creates the tendency to present information faster than it can be absorbed. This is most obvious when the talk involves math -- which I might discuss in a post of its own -- but the problem occurs with graphs, charts, diagrams, or any other high-content slides (which are otherwise great to have). Some try to solve the problem by creating handouts, but the problem isn't just that the audience doesn't have time to copy down the content -- they don't have the time to process it. Talks without slideware, by forcing you to present content at about the pace of writing, give the audience more time to think about the details and implications of what you're saying. Besides, the act of copying it down itself can do wonders for one's understanding and retention.

3. Most critically, slideware makes it easier to give a talk without really understanding the content or having thought through all the implications. If you can talk about something on an ad hoc basis, without the crutch of having everything written out for you, then you really understand it. This isn't to say that giving a slideware presentation means you don't really understand your content; just that it's easier to get away with not knowing it.

4. Also, Tufte mentioned that slideware forces you to package your ideas into bullet-point size units. This is less of a problem if you don't slavishly follow templates, but even if you don't, you're limited by the size of the slide and font. So, yeah, what he said.

That all said, I think slideware is here to stay; plus, it has many advantages over other types of presentation. So my advice isn't to abandon slideware (except, perhaps, for math-intensive talks). Just keep these problems in mind when making your talks.

Posted by Amy Perfors at 11:53 AM

8 November 2006

Fixing Math Education by Making It Less Enjoyable?

Justin Grimmer

A recent Brookings Institution report on the mathematics scores of junior high and high school students from different nations uncovers some paradoxical correlations. Using standardized test scores, the report shows that nations with the highest scores also have the students with the lowest confidence in their math ability and the lowest levels of enjoyment from learning math. The pattern is evident in American students, who report high confidence and enjoyment but post only middle-of-the-pack scores on standardized tests.

Casting correlation/causation concerns aside, the Brookings report goes on to argue that the American mathematical education experience is perhaps too enjoyable for students. Rather than informing students about the important mathematical concepts that the foreign textbooks provide, American textbooks are characterized as trying too hard to create an enjoyable classroom experience.

The policy implication provided is to make mathematics less enjoyable in American classrooms by discarding colorful pictures and interesting story problems. At the very least, the report suggests that educators’ attention should be redirected from making math fun to making math education solely about mathematics.

Because of the study’s limited nature, any drastic policy recommendations should be avoided. After all, the report’s argument merely identifies two paradoxical relationships and then speculates about a causal mechanism that provides one potential explanation for the trend. No effort is made to rule out alternative causal mechanisms. For example, cultural differences, rather than differences in teaching methodologies, could explain the discrepancy between the scores and the confidence ratings. The study also rests on an ecological inference, inferring individual-level behavior from aggregated data. While not damning in itself, this does weaken the strength of the conclusions.

That being said, perhaps the problem with American mathematics education does not lie in the attempt to make students happy, but in the material that is presented. Rather than providing students with an in-depth understanding of concepts and introducing proof techniques, high school math assignments are often about memorization and a superficial knowledge of the techniques involved. Perhaps, if the focus were changed to make high school mathematics less like balancing a checkbook and more like Real Analysis, American math students would see an increase in their happiness in the classroom and also in their test scores.

Posted by Justin Grimmer at 11:51 AM

7 November 2006

Election Day

As everyone must know (unless you are lucky enough not to own a television), today is Election Day in the US. I always think of analyzing elections (and pre-election polling) as the quintessential statistical problem in political science, so I'm sure that many of us are eagerly waiting to get our hands on the results. Recent elections in the U.S. have been somewhat controversial, to say the least, which is probably bad for the country but unquestionably good for the discipline (see the Caltech/MIT Voting Technology Project for one example), and my guess is that this election will continue the trend. Law professor Rick Hasen of electionlawblog.org sets the threat level for post-election litigation at orange; anyone looking for an interesting applied statistics project would be well advised to check out his site in the coming weeks. In the meantime, the Mystery Pollster (Mark Blumenthal) has an interesting post on the exit polling strategy for today's election; apparently we shouldn't expect preliminary and incomplete results to be leaked until 5pm this year.

Posted by Mike Kellermann at 12:36 PM

3 November 2006

Negative Results

Felix Elwert

In September, The Institute of Medicine released its report on “The Future of Drug Safety,” featuring some goodies on the dissemination of research findings.

One of the recommendations echoes one of the favorite hallway complaints at IQSS: that journals are perennially hung up on publishing *** alpha less than 0.05 yay-yay statistically significant results.

Says the Washington Post:

“[According to the report] manufacturers should also be required to register all clinical trials they sponsor in a government-run database to allow patients and physicians to see the outcome of all studies, not just those published in medical journals, the report said. Studies that show positive results for a drug are more likely to be published by journals than negative ones.”

Welcome to the world of publication bias. (The report is yours for a highly significant $44.)

Posted by Felix Elwert at 11:59 AM

2 November 2006

Incumbency as a Source of Contamination in Mixed Electoral Systems

Jens Hainmueller

Since the early 1990s, more than 30 countries have adopted mixed electoral systems that combine single-member districts (SMD) in one tier with proportional representation (PR) in a second tier. Political scientists like these types of electoral systems because each voter gets to cast two votes, the first according to one set of institutional rules and the second according to another. Some have argued that this allows for causal inference because it offers a controlled comparison of voting patterns under different electoral rules. But does it really?

The more recent literature on so-called contamination effects undermines this claim. Several papers (Herron and Nishikawa 2001; Cox and Schoppa 2002; Ferrara, Herron, and Nishikawa 2005) have found evidence of interaction effects between the two tiers in mixed electoral systems. For example, small parties are able to attract more PR votes in those districts in which they run SMD candidates. The argument is that running an SMD candidate gives a human face to the party and thus enables it to attract additional PR votes.

In a recent paper, Holger Kern and I attempt to add to this debate by identifying incumbency as a source of contamination in mixed electoral systems. It is well known that incumbents who run in single-member district (SMD) races have a significant advantage over non-incumbents (Gelman and King 1990). It thus seems plausible to expect that this advantage carries over to the proportional representation (PR) tier, and that incumbents are able to attract additional PR votes for their party in the district. In our paper we identify such an effect using a regression-discontinuity design that exploits the local random assignment to incumbency in close district races (based on an earlier paper by Lee 2006). The RD design allows us to separate a subpopulation of district races in which treatment is assigned as good as randomly from the rest of the data, which is tainted by selection effects. We find that incumbency causes a gain of 1 to 1.5 percentage points in PR vote share. We also present simulations of Bundestag seat distributions, demonstrating that contamination effects caused by incumbency have been sufficiently large to trigger significant shifts in parliamentary majorities.
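For readers new to the design, here is a bare-bones local linear RD sketch in Python on simulated data (the numbers, bandwidth, and variable names are purely illustrative and are not taken from our paper): the running variable is the party's SMD vote margin, winning the district confers incumbency, and the outcome is the party's PR vote share in the next election.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000

    # Hypothetical data: margin is the party's SMD vote margin (percentage points);
    # winning the seat (margin > 0) confers incumbency in the next election.
    margin = rng.normal(0, 10, size=n)
    incumbent = (margin > 0).astype(float)
    # PR vote share varies smoothly with the margin plus a 1.2-point incumbency jump.
    pr_share = 30 + 0.3 * margin + 1.2 * incumbent + rng.normal(0, 3, size=n)

    # Local linear RD: keep races decided within a bandwidth of the threshold and
    # fit separate slopes on each side; the coefficient on incumbency is the jump at 0.
    h = 5.0
    keep = np.abs(margin) < h
    X = np.column_stack([
        np.ones(keep.sum()),
        incumbent[keep],
        margin[keep],
        incumbent[keep] * margin[keep],
    ])
    beta = np.linalg.lstsq(X, pr_share[keep], rcond=None)[0]
    print("Estimated incumbency effect at the threshold:", beta[1])  # about 1.2

The substantive work is, of course, in arguing that close races really do approximate random assignment and in choosing the bandwidth; the regression itself is the easy part.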

Needless to say, any feedback is highly appreciated.

Posted by Jens Hainmueller at 12:00 PM

1 November 2006

An Individual-Level Story and Ecological Inference

Jim Greiner

I blogged some last year (see here) on whether an individual-level story is necessary, or useful, to ecological inference. For a review of what ecological inference is, and what I mean by an individual-level story, see the end of this entry. Last year, I stated that such a story was helpful in explaining an ecological inference technique, even if it might not be strictly necessary for modeling. Gary disagreed that such a story was at all helpful, and we had a little debate on the subject, which you can access here. Lately, though, I’ve been thinking that an individual-level story really is necessary for good modeling, not just for communication of a model. In particular, it seems like an individual-level model is required to incorporate survey information into an ecological inference model. Survey data is, after all, data collected at the level of the individual, and with only an aggregate-level model, it’s hard to see how one could incorporate it. Any thoughts from anyone out there?

To review: ecological inference is the effort to predict the values of the internal cells of contingency tables (usually assumed to be exchangeable) when only the margins are observed. A classic example is in voting, where one observes how many (say) black, white, and Hispanic potential voters there are in each precinct, and one also observes how many votes were cast for Democratic and Republican candidates. What one wants to know is, say, how many blacks voted Democratic. By an individual-level story, I mean a model of voting behavior at the level of the individual voter and a mathematical theory of how to aggregate up to the precinct-level counts.
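For concreteness, in the simplest two-group version of the voting example, the aggregate data in each precinct i satisfy the usual accounting identity

    T_i \;=\; \beta_i^{b} X_i \;+\; \beta_i^{w} (1 - X_i),

where T_i is the observed Democratic share of votes cast, X_i the observed black share of voters, and \beta_i^{b}, \beta_i^{w} the unobserved Democratic support rates among black and white voters that one wants to recover. An individual-level story amounts to a model of each voter's choice whose aggregation induces a distribution for (\beta_i^{b}, \beta_i^{w}); survey respondents are draws from that same individual-level process, which is the sense in which an aggregate-only specification gives them no obvious place to enter.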

Posted by James Greiner at 12:00 PM