23 February 2010
Over the course of the year we have tried to record many of the Applied Statistics workshop talks, but only now have we finally posted one. It is from Cassandra Wolos Pattanayak's talk last week on propensity score matching at the CDC. You can find it here and on the seminar website.
Posted by Matt Blackwell at 3:43 PM
We hope you will join us this Wednesday, February 24th at the Applied Statistics workshop when we will be happy to have Matt Killingsworth (Department of Psychology). An abstract is below. A light lunch will be served. Thanks!
"Mind Wandering and Happiness"
Matt Killingsworth
Department of Psychology
February 24th, 2010, 12 noon
K354 CGIS Knafel (1737 Cambridge St)
You can preview the iPhone app to see how the data is collected.
ABSTRACT:
Although humans spend much of their time mind-wandering, i.e., thinking about something other than what one is actually doing, little is known about mind wandering's relation to human happiness. Using novel technology to achieve the world's largest experience sampling study of people's everyday lives, we found that participants spent nearly half of their waking hours mind-wandering and that it had large effects on happiness. Mind wandering was never observed to increase happiness and often reduced happiness considerably. Although some activities and situations modestly decreased the probability of mind wandering, they generally did not buffer against negative thoughts when a person's mind did stray from the present.
Posted by Matt Blackwell at 3:36 PM
15 February 2010
I recently read The History of Statistics: The Measurement of Uncertainty before 1900, Stephen Stigler's excellent recounting of the early development of statistical theory and methods. I highly recommend this book to readers of this blog. You are sure to enjoy Stigler's engaging prose, which recounts the struggles and triumphs behind techniques we now employ regularly.*
Stigler's insights are of particular interest for social scientists, as a central puzzle in his book is why social scientists waited so long to adopt statistical methods (lagging behind astronomers by almost a century). A major stumbling block for 19th century social investigators was the combination of relatively weak theories of social behaviors with "the plethora of potentially influential factors," great inherent heterogeneity, and limited data. Social scientists were reluctant to classify as error those deviations representing true but unmeasured or poorly understood patterns in human activities. In contrast, astronomers had strong theories regarding planetary motions and were able to compare their predictions with observed facts, thus gaining confidence in the use of statistics and probability to estimate quantities of interest and quantify associated uncertainty. Given continued struggles to sort through the myriad causes and consequences of social behaviors and organizations, modern social scientists will read with great interest about how influential 19th century thinkers understood their problems and attempted to solve them.
* When describing Adrien Marie Legendre's "invention" of least squares, Stigler notes that "the word minimum [makes] five italicized appearances [in Legendre's paper], an emphasis reflecting his apparent excitement" and the reader cannot help but share in this excitement -- what a great idea it was to minimize the sum of squared errors!
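For readers who want the criterion spelled out, here it is in modern matrix notation (the standard textbook restatement, not Legendre's own notation):

```latex
% Least squares: choose beta to minimize the sum of squared residuals.
\[
  \hat{\beta} \;=\; \arg\min_{\beta}\; \sum_{i=1}^{n} \bigl(y_i - x_i^{\top}\beta\bigr)^{2}
\]
% When the design matrix X has full column rank, the minimizer solves the normal equations:
\[
  X^{\top}X\,\hat{\beta} = X^{\top}y
  \qquad\Longrightarrow\qquad
  \hat{\beta} = \bigl(X^{\top}X\bigr)^{-1}X^{\top}y .
\]
```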
Posted by Deirdre Bloome at 10:31 AM
8 February 2010
Viri Rios has a great op-ed in the New York Times about mathematical social science and Mexican drug politics.
Posted by Richard Nielsen at 2:46 PM
2 February 2010
The other week, I read Jared Diamond's Guns, Germs, and Steel, which managed to get me a little worked up about a pet peeve of mine: the term "natural experiment." Just when I had calmed down, the Polmeth listserv alerted me to an entire issue of Political Analysis devoted to natural experiments. Arghhh...
Don't get me wrong -- in my own research I try to use observational data to make causal claims that are probably far more dubious than anything in the special issue of Political Analysis. I'm highly impressed by the research and I'm even more supportive of social scientists who are looking for "natural experiments" in political science. I just wish we could call them something else because I'm skeptical that they are really experimental.
The lead article of the PA special issue urges scholars "to use the language of experimental design in explicating their own research designs and in evaluating those of other scholars." I'm on board with using the language of experiments, but I've also seen more than a few recent papers framed as "natural experiments" that are really just observational studies with no particular claim to special status. The spread of experimental language into observational studies may have downsides as well as benefits.
Until recently, I basically assumed that when people said they had a natural experiment, what they really meant was that they had a credible instrument: a variable that breaks the link between treatment assignment and the potential outcomes for some or all of the units. However, the lead PA article places difference-in-differences, regression discontinuity, and matching methods under the tent of natural experiments. While I like (and use) these techniques and find them compelling, only some of them explicitly rely on an IV-type argument. Maybe I have more to learn.
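To make the instrument idea concrete, here is a minimal sketch of textbook two-stage least squares on simulated data (the data-generating process, variable names, and numbers are all hypothetical, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data-generating process: an unobserved confounder u drives both
# treatment take-up and the outcome, while the instrument z shifts treatment
# without entering the outcome equation directly (the exclusion restriction).
u = rng.normal(size=n)                      # unobserved confounder
z = rng.binomial(1, 0.5, size=n)            # instrument ("as-if random" assignment)
d = (0.8 * z + 0.5 * u + rng.normal(size=n) > 0.5).astype(float)  # treatment
y = 1.0 * d + 1.0 * u + rng.normal(size=n)  # outcome; true treatment effect = 1.0

# Naive OLS of y on d is biased because u is omitted.
X = np.column_stack([np.ones(n), d])
ols = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Two-stage least squares: the first stage predicts d from z,
# the second stage regresses y on the predicted treatment.
Z = np.column_stack([np.ones(n), z])
d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
X2 = np.column_stack([np.ones(n), d_hat])
iv = np.linalg.lstsq(X2, y, rcond=None)[0][1]

print(f"naive OLS estimate: {ols:.2f}   2SLS estimate: {iv:.2f}   truth: 1.00")
```

With an instrument that really is excluded from the outcome equation, the 2SLS estimate lands near the truth while the naive regression does not; the whole argument, of course, hinges on that exclusion restriction being credible.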
The problem with any randomization that isn't controlled by the researcher is that extreme skeptics like me can then try to spin complicated stories about how confounding could occur. This is what I found myself doing while reading Guns, Germs, and Steel. An extremely simplified version of Diamond's argument is that geography, not genetics, determines which human societies become dominant and which are conquered or destroyed. He devotes the entirety of chapter 2 to discussing the settlement of Polynesia by people who come from essentially the same genetic stock but experienced different geographies once they settled particular islands. The random variation in geography is interpreted as the cause of significant variation in the trajectories of the peoples of each island or group of islands.
This might be a natural experiment if Diamond could show that people were somehow randomly assigned to different islands. The problem is that different types of people might choose to live on different islands. Although it may be random which islands an exploratory party reaches, the explorers can choose to stay or move on for reasons that might be related to genetic variation. Similarly, explorers and colonists are probably not a random sample of the population, so the types of people who reach a far-off island might have different genetic traits than those who remain in already established population centers. You get the idea.
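To see how a settlement story like this can masquerade as a treatment effect, here is a toy simulation (entirely invented, not based on any actual data about Polynesia) in which geography does nothing at all, yet the naive island comparison looks like a sizable effect because a settler trait drives both the destination and the outcome:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical settlers: a latent trait (say, risk tolerance) affects both
# whether a party pushes on to a remote island and the society's later "success".
trait = rng.normal(size=n)
remote_island = rng.binomial(1, 1 / (1 + np.exp(-trait)))   # selection on the trait
success = 0.0 * remote_island + trait + rng.normal(size=n)  # geography has NO effect

# A naive comparison of islands attributes the trait gap to geography.
naive_effect = success[remote_island == 1].mean() - success[remote_island == 0].mean()
print(f"naive island 'effect': {naive_effect:.2f} (true effect: 0.00)")
```

The point is only that "the explorers went where the winds took them" is not the same thing as settlers being randomly assigned to islands.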
I should reiterate that these reservations are just my gut reactions rather than a well-thought-out assault on the use of natural experiments. I'm interested to read more: Jared Diamond and our very own James Robinson have a new book out on the subject that I'm excited to read. Thad Dunning has written on the topic, as have others.
Bottom line: I'm thrilled (and jealous) whenever social scientists find some plausibly exogenous variation to exploit for causal inference. I think it should happen more. I just worry that by attaching the "experimental" label to these studies, we endow them with undue credibility.
Posted by Richard Nielsen at 11:30 AM