Diatribes of Jay

This blog has essays on public policy. It shuns ideology and applies facts, logic and math to social problems. It has a subject-matter index, a list of recent posts, and permalinks at the ends of posts. Comments are moderated and may take time to appear.

16 September 2016

Uncertainty and Error Bars in “News”


[For a recent post on why Hillary needs to increase her empathy rating, click here. For a brief comment on the quickest and easiest way to restore our democracy, click here.]

One of the hardest things for non-scientists to stomach is that everything we humans think we know is uncertain, at least to some degree. Understanding the fact of that uncertainty is crucial to understanding ourselves and our world. Quantifying the extent of that uncertainty is the beginning of wisdom.

Scientists and engineers know uncertainty. It’s part of what they do every day. In reading a paper in science or engineering, more often than not you will see numbers and graphs presented with so-called “error bars,” indicating their uncertainty.

For example, consider an “exoplanet” thought to be orbiting a distant star. Astronomers infer its existence from minute variations in the light coming from that star. So you might see the planet’s radius expressed as 3 ± 0.5 ER. That means we think its radius is about three times our Earth’s, plus or minus half an Earth radius. As far as we really know, its radius could be anywhere from 2.5 to 3.5 Earth radii.

The quantity ± 0.5, read “plus or minus oh point five,” is what we call the “error bars.” It expresses the upper and lower limits within which we expect the actual value to fall. Anything within that range is consistent with our current human knowledge, at least as expressed in that paper.

The same concepts apply to the “dismal science” of economics. For example, quarterly GDP growth projections often come with error bars, such as 0.5 ± 1%. That means the next quarter’s GDP could grow by as much as 1.5% or shrink by as much as 0.5%, perhaps marking the start of a weak recession. (An “official” recession requires two consecutive quarters of shrinking GDP.)
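To make the arithmetic concrete, here is a minimal Python sketch that turns a central value and its error bars into an explicit range, using the two examples above (the little helper function is mine, not standard notation):

```python
def interval(center, error):
    """Return the (low, high) range implied by center ± error."""
    return center - error, center + error

# Exoplanet radius: 3 ± 0.5 Earth radii
low, high = interval(3.0, 0.5)
print(f"Radius: {low} to {high} Earth radii")   # 2.5 to 3.5

# Quarterly GDP growth projection: 0.5 ± 1 percent
low, high = interval(0.5, 1.0)
print(f"GDP growth: {low}% to {high}%")         # -0.5% to 1.5%
# Note the low end is negative: the range includes a shrinking economy.
```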

Another key application of uncertainty is in public-opinion polling. Pollsters can’t possibly poll all voters, let alone everybody in the nation. Even if they tried, the poll would take so long to conduct that many people would change their opinions in the interim. So pollsters only contact small “samples” of the public, typically one or two thousand subjects. That’s out of a nation of 319 million, or at most about 0.0006%.

When you have a sample that small compared to the total polling “universe,” you have to work hard to be sure your sample is “representative” of the whole universe. If not, your poll might be meaningless.

It wouldn’t be too hard, for example, to find 1,000 or 2,000 people all of whom want to vote for Trump, Hillary, or either of the two fringe candidates. Then you’d have a 100% polling result that would be 100% meaningless.

The mathematics of statistics and probability help us avoid this sort of gross error. They help us find samples that we think are representative of the whole polling “universe” and so give useful results. The same math also helps us calculate the error bars for polling results, i.e., the amount of error we can expect simply because our polling sample is a small fraction of all the voters whose opinions, in theory, we seek.

The error bars in polling depend primarily on the size of the sample, that is, the number of voters polled. Perhaps surprisingly, once the whole “universe” (state voters in a state election, or national voters in a presidential election) is much bigger than the sample, its exact size hardly matters. We calculate the error bars based on the notion that our small sample is chosen randomly from the universe. In other words, we use probability theory to calculate what our most likely range of error would be if we did the same polling over and over again with different samples of the same small size, all chosen randomly from the same universe.
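To see what “over and over again” means in practice, here is a toy Python simulation. The true support of 52% and the sample size of 1,000 are made-up numbers for illustration only:

```python
import random

TRUE_SUPPORT = 0.52   # assumed "real" share of the electorate (illustrative)
SAMPLE_SIZE = 1000
TRIALS = 10_000

results = []
for _ in range(TRIALS):
    # One simulated poll: ask SAMPLE_SIZE randomly chosen voters.
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    results.append(hits / SAMPLE_SIZE)

# Find the range containing the middle 95% of all simulated polls.
results.sort()
low = results[int(0.025 * TRIALS)]
high = results[int(0.975 * TRIALS)]
print(f"95% of simulated polls fell between {low:.1%} and {high:.1%}")
# Typically about 48.9% to 55.1% -- the true 52%, plus or minus about 3 points.
```

The spread of those simulated polls is exactly what published error bars describe.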

Error bars in public-opinion polling are typically around ±3%. Why? Because most pollsters tend to use roughly the same sample size. Bigger samples would cause delay and raise costs compared to competitors’ polling. So everyone seems to have settled on sample sizes of, at most, a few thousand people.
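That figure falls straight out of the standard margin-of-error formula for a random sample, roughly 1/√n at a 95% confidence level. A quick sketch of the calculation, which also shows the diminishing returns of bigger samples:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a random sample of size n.

    Uses p = 0.5, the worst case. Note that the size of the polling
    "universe" never appears: once it dwarfs the sample, only n matters.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000, 10000):
    print(f"n = {n:>5}: ±{margin_of_error(n):.1%}")
# n =   500: ±4.4%
# n =  1000: ±3.1%
# n =  2000: ±2.2%
# n = 10000: ±1.0%
```

Quadrupling the sample merely halves the error bars, which is why pollsters stop at a few thousand respondents.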

When you review the results of presidential election polling, you need to keep all this in mind. When the newspapers say the race is “neck and neck,” that could mean anything from Hillary winning by 3% to Trump winning by 3%. And since the ±3% applies to each candidate’s share separately, a dead-even poll is actually consistent with either candidate leading by up to six points.

But even that’s far from all the uncertainty. Statistics is not a Ouija board. It doesn’t have magic powers. It predicts error bars correctly only when all errors in polling are random, arising solely from the small size of the polling sample compared to the polling universe.

But suppose the errors are not random, but systematic? What then?

Students of history will recall that great photo of Democrat Harry Truman, as president-elect, holding up a newspaper headlined “Dewey Defeats Truman.” That bad call from the wire services arose from systematic, not random, error. The pollsters had reached their conclusions by telephoning their subjects. But in 1948 telephones were far more common among the well-off, who leaned Republican, than among poorer Democrats. So the polling sample was biased systematically toward Republicans, not just off-base randomly.
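A toy variation of the earlier simulation shows how such systematic bias defeats error bars. The numbers below are pure assumption, chosen only to mimic the 1948 problem: suppose Republicans were twice as likely as Democrats to answer the phone:

```python
import random

TRUE_DEM_SUPPORT = 0.52                  # assumed true electorate (illustrative)
ANSWER_RATE = {"dem": 0.4, "rep": 0.8}   # assumed: Republicans answer twice as often
SAMPLE_SIZE = 1000

def biased_phone_poll():
    """Keep dialing random voters until SAMPLE_SIZE of them answer."""
    answered = []
    while len(answered) < SAMPLE_SIZE:
        voter = "dem" if random.random() < TRUE_DEM_SUPPORT else "rep"
        if random.random() < ANSWER_RATE[voter]:
            answered.append(voter)
    return answered.count("dem") / SAMPLE_SIZE

polls = [biased_phone_poll() for _ in range(100)]
print(f"True Democratic support:  {TRUE_DEM_SUPPORT:.0%}")
print(f"Average measured support: {sum(polls) / len(polls):.1%}")
# About 35% -- wildly outside any ±3% error bars, and no larger
# sample can fix it, because the error is systematic, not random.
```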

This year’s bizarre presidential election has several big sources of systematic error. Some are technological and some are social.

There are two sources of technological error. The first is the explosive rise of smart phones and the consequent abandonment of land lines, especially by youth. The second is the equally explosive rise of robo-marketing and the various strategies to avoid it. Smart phones and caller ID now allow consumers to screen calls and avoid taking those from unknown sources. And the annoyance and wasted time of robo-calling give them big incentives not to answer such calls.

Under these circumstances, busy workers in mid-career (and maybe raising families) are unlikely even to respond to telephone polls. Those most likely to pick up the phone are, in this order: the unemployed, the elderly, and youth out of jobs and school with time on their hands. Think these groups might favor Trump?

In this unprecedented and bizarre election, there are also unique social causes of systematic bias. All come from voters’ strong incentives either to conceal or to trumpet their tentative choices. And all arise from strong feelings about each “mainstream” candidate.

One I call the “Yes, Dear” factor. There must be many women eager to vote for the first serious female candidate for president but reluctant to let their husbands, cohabitants, friends or even children know. If they get a call at home while doing household chores in the presence of family or friends, they might say “undecided,” or even “Trump,” rather than provoke an argument. But we know for whom they’ll vote.

Similar factors apply in Trump’s favor. He is so controversial and so despised by many that some polled voters will undoubtedly say “undecided” or “Hillary” just to avoid an argument over voting for him.

The opposite may be true as well. Trump’s entire candidacy is little more than a shout of anger and angst on the part of the white lower middle class. The angry ones are quite likely to answer “Trump!” vehemently to any pollster. But having gotten that off their chests, who’s to say they won’t simmer down, take a closer look, and reconsider in the weeks before going to the polls?

No math or probability theory can take these social and psychological factors into account. No sample small enough to poll practically can probe them. This is why polls from reputable polling organizations vary so widely and so persistently. In this election, polls’ real error bars are probably ±6% or ±7%.

They may be as high as ±10%, making them useless except as a marketing tool for news media. As sage observers have said for months now, the only poll that matters is the one on November 8. Until that day is done, no one should be sure or complacent about this election.

In any event, we ought not delude ourselves that error bars are the last word in uncertainty. Not all important information is quantitative, and not all quantitative information lends itself to easy calculation of error bars.

It’s vital that news media report the real uncertainty of non-quantitative information, too. A stellar example of doing so is today’s New York Times article on North Korea and its recent big nuclear test. (Choe Sang-Hun, “Always on Alert for Nuclear Tests and Unprintable Rumors,” NYT Sept. 16, 2016, at A4.)

At first glance, the article provides little new information about that test or North Korea’s nuclear program. But that’s not the article’s purpose. Its main point is to explain why news from North Korea is so uncertain and why that fact is unlikely to change soon. In essence, North Korea bars most news media and restricts the few reporters inside it, so the North, the South, and politicians and reporters in both places spread unsubstantiated rumors to advance their own agendas. Thus it’s hard, if not impossible, for a reporter adhering to the New York Times’ standards of verification to get much news at all.

That short article on North Korea was one of the best non-quantitative treatments of uncertainty in news reporting that I have ever read. It should be required reading in every school of journalism, worth at least an entire class session of discussion. For it illustrates an essential truth: uncertainty is often the only story worth telling. To ignore it in North Korea or this election would be as bad a failure of journalism as falsely equating the “negatives” of Hillary Clinton and Donald Trump.

Endnote: For anyone interested in the subject of uncertainty, a recent book available for electronic download from Amazon is must reading. It’s The Signal and the Noise: Why So Many Predictions Fail but Some Don’t, by Nate Silver. A sometime New York Times contributor now doing mostly his own thing, Silver has written the definitive modern work for non-specialists on uncertainty and our feeble attempts to deal with it using statistics and probability theory. It’s a good read for anyone involved in public affairs, whether new to these subjects or an old but rusty hand like myself.

