Friday, September 09, 2005

Bad Science (reporting)

Bad Science is a column in the Guardian, written by Ben Goldacre. It isn't something I generally read; however, Clifford, a physicist writing at Cosmic Variance, does, and links us to a recent column about bad science reporting.

I have some thoughts, but as an aspiring scientist I'm all about standing on the shoulders of giants, so go read Clifford's thoughts first.

I agree with most of what Clifford, and the article, said. However, I think many of the problems with science journalism are exactly the same as the problems infecting other types of journalism. It's striking the extent to which a lot of the complaints about science reporting mirror complaints I see about regular news, most obviously political reporting. For example, this issue is pretty universal:

So how do the media work around their inability to deliver scientific evidence? They use authority figures, the very antithesis of what science is about, as if they were priests, or politicians, or parent figures. “Scientists today said … scientists revealed … scientists warned.” And if they want balance, you’ll get two scientists disagreeing, although with no explanation of why (an approach at its most dangerous with the myth that scientists were “divided” over the safety of MMR). One scientist will “reveal” something, and then another will “challenge” it. A bit like Jedi knights.

I've seen literally hundreds of complaints about this. Candidate A says something about candidate B (e.g. "my opponent's plan will nationalize health care"), candidate B denies it, and the media "impartially" cover the story by seeking quotes from partisans on both sides, without ever going through the trouble of actually reading candidate B's plan to see if "nationalization" is an accurate description. If reporters won't bother to figure out the truth on simple factual issues like this, it's no surprise that they won't probe deeply into scientific controversies either. He said/she said reporting is one of the media's greatest vices, and one that's actually encouraged within journalism. Even when it really would not be hard to get at the truth, reporters simply don't do it, whether out of concern for "objectivity" or laziness or something else.

I think another important part of the problem is that the media actually believes dumbed-down science is more interesting. It's not that they think the reader won't get it--it's that they think the reader enjoys reading about conflict and controversy, not boring old facts. Again, look at political reporting, which is full of "horse-race" coverage. How many times do we see something like "In an effort to appeal to union workers, John Kerry gave a speech about his health care plan in Ohio today," followed by quotes that boil down to "I like it!" or "I don't like it!"? All the frickin' time. How often do we read an analysis of how Kerry's health care plan would actually affect those union workers? Practically never. And we expect these people to discuss the content of a scientific controversy? Why should they, when it's so much easier to just pile a bunch of quotes together and call it a news story?

I don't mean to imply that this tendency is exclusive to political reporting. It's just most pronounced there.

There are other similarities. The "scare" story that Goldacre complains about is easy to find outside the science pages--seen any abduction stories lately? Or shark attack stories? Of course you have--and the complaints raised against them in the "regular" news are just as valid: the risk is hugely distorted in order to sell papers to an alarmed public, and scared people take unnecessary or counterproductive measures to reduce their perceived risk. Even the "breakthrough" category, which at first glance seems pretty specific to science reporting, has a lot in common with typical nonscience reporting. First, the indictment of "breakthrough" stories:

these stories sell the idea that science, and indeed the whole empirical world view, is only about tenuous, new, hotly-contested data. Articles about robustly-supported emerging themes and ideas would be more stimulating, of course, than most single experimental results, and these themes are, most people would agree, the real developments in science. But they emerge over months and several bits of evidence, not single rejiggable press releases.

What gets more press, a few murders or the fact that the crime rate is dropping? Minor disputes about trade, or the steady rise of the Chinese superpower? Grisly photos of traffic accidents, or trends in auto safety?

I don't want to get carried away here: science journalism really is much worse than other journalism. It has some unique problems, the most important being that reporters don't understand science. I don't really have much to say about that beyond what Clifford wrote. But we scientists should be aware that many aspects of the problem transcend science journalism.