28 December 2009
A recent paper in Nature documents power-law patterns (i.e. scale invariance) in the distribution of events within insurgencies: The number of casualties per insurgent event, and the number of insurgent events per day, apparently follow striking regularities across an array of insurgencies. Power laws everywhere!
What makes the paper especially notable around IQSS is that our own Arthur Spirling is cited in the first sentence:
The political scientist Spirling and others have correctly warned that finding common statistical distributions (for example, power laws) in sociological data is not the same as understanding their origin.
The citation is to Arthur's unpublished paper The Next Big Thing: Scale Invariance in Political Science, which provides a breezy overview of scale invariance as a concept and documents a few previously unremarked examples from political science.
Part of the point of Arthur's paper is that political science (and social science more broadly) has mostly ignored research in the natural sciences that, like the Nature article, examines emergent patterns in social phenomena. As he points out, it's not how we "do business." The hard scientists chasing power laws try to infer an underlying random process from the distribution of outcomes alone; we're more accustomed to starting from the joint density of outcomes and covariates.
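To make the "distribution-first" approach concrete, here is a minimal sketch (not the Nature paper's actual method, and with arbitrary parameter values): draw samples from a continuous power law by inverse-transform sampling, then recover the exponent with the standard maximum-likelihood (Hill-type) estimator used in the power-law literature.

```python
import math
import random

random.seed(0)

def sample_power_law(alpha, x_min, n):
    # Inverse-transform sampling from p(x) ∝ x^(-alpha) for x >= x_min
    return [x_min * (1 - random.random()) ** (-1 / (alpha - 1)) for _ in range(n)]

def fit_alpha(xs, x_min):
    # Maximum-likelihood (Hill) estimator of the power-law exponent
    n = len(xs)
    return 1 + n / sum(math.log(x / x_min) for x in xs)

data = sample_power_law(alpha=2.5, x_min=1.0, n=10_000)
print(f"estimated exponent: {fit_alpha(data, x_min=1.0):.2f}")  # close to 2.5
```

The estimator uses only the marginal distribution of event sizes — no covariates anywhere — which is exactly the stylistic contrast with the regression-based habits of social science.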
In a way, the fact that Arthur's paper was cited at all highlights the lack of interest in this style of work in social science. The authors of the Nature piece wanted to cite social science work on power laws, and they ended up with Arthur's piece, which is, for all its merits, several years old and unpublished.
I admit I've been a bit of a power-law curmudgeon, like other social scientists, but lately I've come to better understand the value of this approach. I don't expect that I'll be focusing on this kind of work myself, but, like Arthur, I believe it is a growth industry.
Posted by Andy Eggers at 9:20 AM
25 December 2009
Merry Christmas, everyone!
I was amused to read about the hoopla involving this online "study" in the BMJ entitled "Santa Claus: A Public Health Pariah." The tongue-in-cheek article, written by Australian epidemiologist Nathan J. Grills, contends that the corpulent Mr. Claus and his "rotund sedentary image" set a bad example for kids and adults alike.
The joke has apparently been lost on media organizations, bloggers, and radio personalities, who have suggested that the article is just another example of politically correct, killjoy academic research. Grills has subsequently received harassing emails, and he was called a "Scrooge" by a blogger for the Atlanta Journal-Constitution.
Yikes!
Grills has since affirmed that he is, in fact, a "Santa believer and lover" and that he has even "donned the red and white garb a number of times to bring cheer at school concerts in rural Victoria." "To clarify," he said, "I am not a Santa researcher - the article was written in my spare time for a bit of comic relief."
More about the media hoopla surrounding Grills's article is here and here.
Posted by Maya Sen at 10:54 AM
22 December 2009
This morning the New York Times alerted me to a Science piece written by two economists working on measuring happiness. Their basic finding is that objective measures of quality of life (nice climate, etc.) are pretty highly correlated with subjective, self-reported measures of how satisfied people are with their lives. They provide a ranking of US states by happiness level, accessible here, which shows Louisiana first and New York last, with Massachusetts falling to 43rd. Go figure -- I like living in MA.
I really want to see some cross-national comparisons but I doubt anyone will be moving on to that unless the World Bank picks up Bhutan's Gross National Happiness measure as one of their development indicators.
Happy holidays to all!
Posted by Richard Nielsen at 9:25 AM
19 December 2009
I've taken my fair share of standardized tests, so I sat up and took notice of a recent study published in Psychological Science about what happens to SAT scores when students take the test in crowded versus empty rooms. What the two researchers, Stephen Garcia (UMich) and Avishalom Tor (Haifa University), are out to study is the impact that increased competition can have on one's performance. The results of this line of research are pretty interesting.
Here's how they did it. First, Garcia & Tor took state-by-state mean SAT scores as their outcome of interest. (This unfortunately restricts their analysis to an n of 50, somewhat ironic in an article about the "N effect.") Second, to measure "competition," Garcia & Tor created a "density" variable by dividing the number of test-takers by the number of state testing venues. This, they contend, captures the likelihood that a student would be sitting in a jam-packed classroom (teeming with potential competitors) or in a relatively empty, competition-free environment. Third, Garcia & Tor controlled for a host of potential confounders, among them state funding for elementary and secondary education, per capita income, population density, the percent of students taking the SAT, the percent of test-takers reporting having a college-educated parent, and the percent of test-takers self-identifying as a racial minority.
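The basic design — regress state mean SAT scores on the density variable plus controls — can be sketched in a few lines. This uses synthetic data with made-up coefficients, not Garcia & Tor's actual dataset, and only one stand-in confounder.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic state-level data standing in for the Garcia & Tor setup:
# 50 states, "density" = test-takers per venue, and one confounder.
n_states = 50
density = rng.uniform(20, 200, n_states)   # test-takers per testing venue
income = rng.normal(50, 10, n_states)      # per capita income (thousands), a control

# Simulated outcome: mean SAT falls with density, rises with income
sat = 1000 - 0.5 * density + 2.0 * income + rng.normal(0, 10, n_states)

# OLS of mean SAT on density, controlling for the confounder
X = np.column_stack([np.ones(n_states), density, income])
beta, *_ = np.linalg.lstsq(X, sat, rcond=None)
print(f"density coefficient: {beta[1]:.2f}")  # recovers roughly -0.5
```

A negative density coefficient in the real data is what the authors read as evidence for the N effect; the sketch just makes the estimating equation explicit.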
The results suggest that the denser a test-taking environment is, the lower the state mean SAT scores are. This, the authors contend, is evidence in favor of the idea that as the number of potential competitors increases, one's motivation to compete dwindles.
I'm actually not 100% convinced that the two researchers have controlled for all possible confounding variables. Teacher quality, student-to-teacher ratios, strength of parental involvement, spending per student -- to name just a few -- could be factors that affect both the probability of treatment (density of the venue) and the potential outcome (SAT scores). In addition, the coarseness of the measurements (all at the state level) makes it impossible to include confounders and other interesting variables at the city or student level. For example, you would think that ambitious parents might encourage their children to take exams in a more comfortable (more rural) setting; this parental ambition would in turn translate into higher SAT scores. When you measure things at the state level, however, it is difficult to examine effects like this.
The Psychological Science paper is actually more of a compendium of studies undertaken by Garcia & Tor on the topic. Other studies are more persuasive. For example, the authors administered a timed online test to a group of Michigan undergraduates. One subset of the group was told that they were competing against ten other students and that the quickest and most accurate 20% would get a $5 prize. The other subset was told that they were competing against 100 students and that, similarly, the quickest and most accurate 20% would get a $5 prize. What happened? The group competing against ten students finished the online test faster than the other group (although note that there was no difference in accuracy between the two groups).
This whole N effect is interesting to think about -- are we more likely to sell ourselves short when it appears that we're facing stiff competition? In my experience, this seems true. (Although, under this rationale, my own best performance would have been on the GRE, where I sat alone at a computer while taking the test; suffice it to say, it wasn't a very strong showing.)
I'd also be interested in how this line of research contradicts or complements the social-network research being conducted by James Fowler, Nicholas Christakis, and the like. I'm guessing here, but those folks might counter that it's not competition that makes people perform more poorly -- rather, it's the camaraderie of being in a small group (a "we're-all-in-this-together" kind of attitude) that could positively influence people working in more intimate environments.
I imagine that others have more experience with this kind of research, and I would be interested in hearing thoughts on this.
11 December 2009
Among the new working papers at NBER is this interesting paper by Ofer Malamud, an education economist at Chicago's Harris School.
Malamud is interested in the relative benefits of specializing early or late in one's academic career: specializing early presumably allows you to accumulate more skill in your specialization, but it also probably results in a poorer match between the individual and the specialization. In the new working paper, he compares rates of career switching between graduates of English and Scottish universities, who have similar educational backgrounds and enter a fairly integrated labor market but are required to specialize at different points in their educational careers: students at English universities typically must choose a specialization before entering the school, while students at Scottish universities typically specialize after two years of general education.
Malamud does sensible things to address possible differences between the two groups of students and the labor markets they entered, and his placebo tests effectively validate the design. (For example, to confirm that the students attending English and Scottish universities don't have a basically different propensity to switch careers, he shows similar switching rates between students receiving graduate degrees in English vs Scottish universities.) Ultimately he finds that switching is lower among Scottish university graduates (about 6 percentage points lower, where the mean rate of switching is about 42%), which he takes as confirmation that students are better off in their chosen field when they have a longer time to pick the field (and thus achieve a better match), even if it means they have less time to develop specific skills in that field.
Among working papers I've seen recently, I thought this one was unusually good in applying reasonable identification to a genuinely interesting substantive question.
Posted by Andy Eggers at 7:30 AM
7 December 2009
I read with some interest a recent NYTimes article about how cities are increasingly making reams of municipal data public. The article notes a growing trend among US municipalities toward making public data available and easy to digest. Among the cities taking the lead are San Francisco (with its DataSF website), New York (Data Mine), and Washington, DC (D.C. Data Catalog). The federal government has long hosted its own data site, Data.gov.
This trend doesn't seem to be limited to just US governments.
Over in the UK, Gordon Brown's government is hard at work on a new data site, data.gov.uk, which it hopes to launch early in 2010. In fact, the prime minister just today delivered a speech in which he extolled the virtues of data availability:
Releasing data can and must unleash the innovation and entrepreneurship at which Britain excels - one of the most powerful forces of change we can harness. When, for example, figures on London's most dangerous roads for cyclists were published, an online map detailing where accidents happened was produced almost immediately to help cyclists avoid blackspots and reduce the numbers injured.
And after data on dentists went live, an iPhone application was created to show people where the nearest surgery was to their current location.
And from April next year Ordnance Survey will open up information about administrative boundaries, postcode areas and mid-scale mapping.
All of this will be available for free commercial re-use, enabling people for the first time to take the material and easily turn it into applications, like fix my street or the postcode paper.
For social scientists, having access to more data is never a bad thing. But, more importantly, perhaps having access to this otherwise mundane data will lessen our dependence on (notoriously unreliable) public opinion surveys. Instead of asking people how much they feel crime is affecting their particular neighborhood, we could measure it using the data provided by DataSF, data.gov.uk, and others. Instead of asking people how reliable or safe their local hospitals are, we'll be able to measure it using the same resources.
My point here is that it's often more useful for social scientists to see how people actually behave rather than to ask people how they say they will behave.
Of course, all this depends on having access to data in its rawest form. From a quick look at some of the websites, I saw a lot of data in processed form (for example, available only through iPhone apps or as summary statistics in PDF form). This kind of processing makes things more accessible to the casual data consumer, but vastly less useful for the social scientist ready and willing to do her own data analysis.
It will also be interesting to see how governments, private companies, and academic institutions work together (or fail to work together) to make data available. Will Google step in to provide a search engine for these databases? Will governments make their data available on something like IQSS's Dataverse? In general, what's the best way to make data available both to researchers and to the public?
It seems like an exciting time for data availability. If folks have other thoughts on this -- or leads or tips on other municipalities or governments increasingly making their data available -- I'd be keen to hear them.