The mainstream media has a deeply flawed style of reporting science stories. It takes a single scientific study, not a trend in the literature, and reports it as near-indisputable fact. This is a huge problem. To observe the phenomenon, just watch your local news. I don’t know whether drinking red wine will let me live forever or kill me tomorrow, because reporters jump on the latest single study from a peer-reviewed journal and present it as settled science. The public does not know enough about the pillars of the scientific method to evaluate these claims, so they rely on the reporting of their local information gate-keepers. The news throws up a tease before a commercial break, something like, “How eating chocolate may affect your health. Coming up in 30 seconds,” and then follows it with a report too short for even the study’s abstract to be read aloud. This is unbelievably irresponsible.
In light of the above rant, which I wrote and re-wrote on my iPhone at work all week, I found on the web today a terrific article in The Atlantic that backs me up. The Thursday piece is about a Pew Research poll showing that people have no doubts about science’s progress and usefulness, yet still disagree with some specific findings, including hot-button issues like global warming, genetically modified food, and the effectiveness and safety of vaccines. So let me quote a passage from the article that supports my argument:
For their part, scientists in the Pew survey faulted the media and the public itself for the existence of these gaps. The “public doesn’t know much about science” was reported as a major problem by 84 percent of scientists, and 79 percent considered “news reports don’t distinguish well-founded findings” a major problem. About half of scientists said oversimplification by the media and a public that expects solutions too quickly were major problems.
Fair enough. The translating of dense, precise scientific studies into digestible, clickable news stories is a tricky business. When a publication mistakenly says a single study “proves” something, or, heaven forbid, implies causation where there is merely correlation, those who know better are eager to jump in and point out the mistake. And it probably doesn’t help the publications’ reputations as legitimate sources of information. Of course, no matter how careful a writer is to say “associated with,” to transparently point out small sample sizes, to repeat the scientists’ claim that “more research is needed,” you’ll still get commenters crying “pseudoscience.”
So we must be vigilant. The often-misrepresented studies and experiments from peer-reviewed publications need to be reported as part of a larger conversation, one that includes the work of many researchers over a usually lengthy period of time, not just a 20-second news bite or Yahoo! article.