Misunderestimating Sampling Error with CNN's Poll of Polls 

The good folks at CNN are hot on the trail of a swing to McCain in Ohio, a crucial battleground state. CNN's headline claims, "Ohio Poll of Polls: McCain Gains Some Ground in Tight Race". From the story, we learn that:

 "CNN's new Ohio poll of polls shows Barack Obama leading McCain by three points, 49 to 46 percent. Five percent of the state's voters were unsure about their presidential pick. 
The network's last Ohio poll of polls, released October 9, showed Obama leading McCain by four points, 50 to 46 percent. In the September 21 poll of polls, Obama led McCain by a single point, 47 to 46 percent." 

This is the smallest possible shift that a network would be willing to report: a one-percentage-point decrease in support for Obama and no change in support for McCain. The survey design and the poll of polls would have to be incredibly powerful to detect this subtle shift in the electorate's preferences.
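A back-of-the-envelope calculation shows just how powerful. The sketch below assumes four independent polls of roughly 1,000 respondents each, drawn as simple random samples; the story does not report the actual sample sizes, and real surveys have design effects that make the noise even larger.

```python
import math

# Assumed, not reported: four independent polls of ~1,000 respondents,
# simple random sampling, support near 50 percent.
p, n, polls, z = 0.5, 1000, 4, 1.96

# Standard error of one poll-of-polls average.
se_avg = math.sqrt(p * (1 - p) / n) / math.sqrt(polls)

# Standard error of the *change* between two independent waves
# of the poll of polls (variances add).
se_change = math.sqrt(2) * se_avg

# 95% margin of error on the wave-to-wave change, in percentage points.
print(round(100 * z * se_change, 1))  # -> 2.2
```

Under these assumptions, the change between two waves of the poll of polls carries a margin of error of about ±2.2 points, so a one-point move is comfortably inside the noise.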

With my interest piqued, I read further. As it turns out, CNN's analysis of the poll of polls is based on some claims that are suspect:

"The Ohio general election "poll of polls" consists of four surveys: Ohio Newspaper Poll/University of Cincinnati (October 4-8), ARG (October 4-7), CNN/Time/ORC (October 3-6) and ABC/Washington Post (October 3-5). The poll of polls does not have a sampling error."

 What?  No sampling error?   

If CNN thinks that averaging four polls removes all variability, then I have a bridge in Alaska up for sale (and I'll throw in some oceanfront property in Arizona, which also seems appropriate).

It is more likely that the author meant that the margin of error would be hard to calculate. This is not equivalent to the margin of error not existing at all. For example, it is hard to calculate when the Cubs are going to win another World Series. But I pray that this does not mean that the date is undefined (which seems infinitely worse than never).
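And the margin of error here isn't even that hard to approximate. If the component polls were independent simple random samples, averaging them shrinks the error by the square root of the number of polls; it never reaches zero. The sample size of 1,000 below is my assumption, since the story doesn't report the actual n's.

```python
import math

def moe_of_average(p=0.5, n_per_poll=1000, n_polls=4, z=1.96):
    """95% margin of error for an average of independent polls.

    Assumes simple random sampling and equal sample sizes -- both
    idealizations; design effects in real surveys inflate the error.
    """
    se_one_poll = math.sqrt(p * (1 - p) / n_per_poll)
    se_average = se_one_poll / math.sqrt(n_polls)
    return z * se_average

# A single 1,000-person poll: about +/- 3.1 points.
print(round(100 * moe_of_average(n_polls=1), 1))  # -> 3.1
# A four-poll average: about +/- 1.5 points -- smaller, but not zero.
print(round(100 * moe_of_average(n_polls=4), 1))  # -> 1.5
```

Averaging four polls cuts the margin of error roughly in half. That is a real gain in precision, but it is a long way from "does not have a sampling error."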

Of course, news networks want to justify covering politics as a horse race, and want to ignore the warnings that small changes in polls often reflect noise rather than real movement, even when you average over four surveys. But this seems like a particularly egregious abuse of polling numbers to make a race seem more fluid than reality (or reasonable statistics) seems to permit.