journalistic rigor in… the blog?

President Obama said in 2009:

I am concerned that if the direction of the news is all blogosphere, all opinions, with no serious fact-checking, no serious attempts to put stories in context, that what you will end up getting is people shouting at each other across the void but not a lot of mutual understanding.

Is this fair?


It’s true that there is a lot of chaff out there: some estimates put the number of blogs in the tens of millions, with the number of new posts per day at 500,000. Not all of these are active, of course — many blogs don’t survive past the first few months of their existence. Of those that do, most are of limited interest to anyone but the creator and his or her friends and relations. And of those few that actually do get a decent following, a good portion are escapist — humour, celebrity gossip, TV-show fansites, and the like. I have no quarrel with any of that: give me a good dose of Crappy Pictures (crappy baby!), Hyperbole and a Half (love that pathetic dog), Go Fug Yourself (“look into pants”), or The Bloggess (“knock, knock, motherfucker”), and most days I’m perfectly happy.**

But I think Obama has it completely wrong. Bloggers are upending journalism and research in ways that truly benefit the consumer of content on the internet. They are the new fact-checkers, in a journalistic environment where news desks are being cut, where consumers expect real-time news, and where a timely story is valued more highly than an edited one.

Consider the following.

News-analyst bloggers

Blogger Carol Wainio of Media Culpa discovered and exposed the alleged plagiarism of Globe and Mail star columnist Margaret Wente (among others). Her side-by-side comparisons of Wente’s “work” and the work of others are astounding. It hasn’t resulted in Wente losing her job (yet), but it did prompt the Globe to suspend her, issue an apology (of sorts), put its content behind a paywall, and block commenting on some of its articles. The CBC removed Wente from its Q media panel. So why wasn’t the Globe’s editorial desk taking care of this sort of fact-checking, the sort of fact-checking that someone with free time, some motivation, and an internet connection can do with ease? Wainio has stepped into a vacancy at the Globe all right, but she’s not being paid for it, and she’s certainly not being thanked for it. On the contrary, in her words, the Globe’s response to her work has been “frosty.” But as news consumers, Carol, I tell you: we are grateful.

And here is an example of a blogger unearthing even more important and embarrassing falsehoods: during the 2004 American presidential election, Charles Johnson and other bloggers exposed as forgeries the documents that purported to show irregularities in George W. Bush’s National Guard service.

Scientist-bloggers

Mark Liberman, Rosie Redfield, and Ben Goldacre analyse and critique scientific studies and scientific journalism, usually by actually looking at and evaluating the data and research methodology.

Liberman, a linguist at the University of Pennsylvania and co-helmsman of the popular Language Log blog, makes mincemeat of scientific “scholarship” on a near-weekly basis. His blog includes countless examples of this sort of investigation; a recent one is his skeptical assessment of claims that young people today are less empathetic than those in a bygone age.

Redfield, a microbiologist at the University of British Columbia and author of the RRResearch blog, is an open-science advocate who entered the public eye when she made a justified stink in the #arseniclife affair, which cast serious doubt on NASA’s claims to have discovered arsenic-based life forms on Earth.

And Goldacre, a medical doctor from the United Kingdom who has used his Bad Science blog to expose sloppy (at best) and near-criminal (at worst) research methods and reporting, has done an enormous amount to educate the public on how to assess what they read in scientific journals and the popular press. See what he has to say about Andrew Wakefield and the MMR-autism hoax, for example.

This is not “all opinions”

This is not “all opinions.” This is not “people shouting at each other across the void.” On the contrary, this is careful and considered work that holds traditional journalists and researchers accountable. It gives consumers some assurance that someone is taking a look. And, most importantly (I think), it shows us that regular people can take a look too: we learn from writers like Wainio, Liberman, and Goldacre how to read critically.

Time to reflect in a time of instant information?

Why have bloggers taken on this role? It might have something to do with the rise of Twitter and Facebook — platforms that let people broadcast an instant reaction to events around the world. This means that traditional journalists have to get content together in a real hurry too — the competition for a scoop, the race to get there first, must be brutal in this digital age. And, unfortunately, that speed comes at the expense of quality. (“Report a Typo” forms are de rigueur these days: news sites expect you to find mistakes. Think about that for a moment.) In contrast, bloggers can take their time. They can read the tweets, the status updates, the hastily put-together news articles, and reflect. Analyse. Fact-check.

In a 2010 piece for Wired, Clive Thompson states:

[Twitter and status updates have] already changed blogging. Ten years ago, my favorite bloggers wrote middle takes—a link with a couple of sentences of commentary—and they’d update a few times a day. Once Twitter arrived, they began blogging less often but with much longer, more-in-depth essays. Why?

“I save the little stuff for Twitter and blog only when I have something big to say,” as blogger Anil Dash put it. It turns out readers prefer this: One survey found that the most popular blog posts today are the longest ones, 1,600 words on average.

What do you think of this? What blogs do you read? How do you rate blogs against more traditional news outlets like The Globe and Mail or the New York Times? Have bloggers influenced the way you read? Add your thoughts to the comments below.

_____

** I only wish that The Bruni Digest were still active. Believe me, it’s worth reading the archives.

assessing quality in scientific research and reporting

Below is an abridged version of a paper I submitted on 15 Sept. 2012 as part of my LIBR 501 class at UBC.

Problem! “Almighty Echo Chamber for Lies and Falsehoods”

Dr. Richard Cox observes that “For many, the computer and the resultant Information Age heralds a time when every person, with a modicum of cost, effort, and education, can harness more information in practical ways than ever before.” Nevertheless, such a wealth of information can lead to problems. Science journalist Jim Giles argues that “as well as acting as an almighty echo chamber for lies and falsehoods, the internet has given a more powerful voice to those who wish to sow confusion and conspiracy” (44). And even when there is no intent to mislead, poor reasoning, flawed research methods, and sloppy journalism contribute to the confusion.

Examples abound. In debates on important subjects such as autism and vaccines, climate change, the efficacy of homeopathic medicine, or, in an example from Giles, whether Obama’s healthcare reform will bring in so‐called “death panels” (46), it is difficult, in this internet echo chamber, to know which side to believe. Faced with conflicting data, internet users can become frustrated and misinformed at best, and risk actual physical danger at worst. Particularly in the area of scientific journalism, misinformation can be a serious problem.

Information professionals must be aware of the colliding perspectives on the internet and the media’s inclination to propagate inaccuracies. At the same time, we should be aware of the grassroots efforts by bloggers and the like to address these issues, and of how various new software tools are making this work easier and more inclusive.

Solution? “The Internet: Peer Reviewed”

Some promising software tools are starting to appear, and their arrival is of great interest not only to information professionals but indeed to any consumer of content on the internet. “There’s a way to cut through the piles of nonsense on the internet,” says Giles in a recent article in New Scientist (44). In “Truth Goggles,” he reports on various products that are, or soon will be, available to help readers separate truth from falsehoods. Among these is Hypothes.is, open-source software that will allow users to annotate anything found online without fear that the content owner can revise or remove the comment (46). Hypothes.is founder Dan Whaley calls it “the Internet, peer reviewed” (qtd. in Giles 46).

A key piece of the software is its ranking system, whereby users can rate each other’s contributions and, therefore, their credibility (Giles 46). Just as you can assess the reputation of an eBay seller by checking how other customers have assessed her, read praise for or complaints about a pseudonymous Wikipedia editor by checking his talk page, and easily find the top-ranked stories on Reddit, you will be able to see how peers rate a given Hypothes.is contributor. This rating, in conjunction with other factors such as her productivity and how she has been rated by randomly-selected moderators, helps rank the quality of her contributions (Giles 46).
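To make the idea concrete, here is a minimal sketch of how such a reputation score might be computed. It is purely illustrative: the names, weights, and formula are my own assumptions for the sake of the example, not Hypothes.is’s actual algorithm as described by Giles.

from dataclasses import dataclass

@dataclass
class Contributor:
    peer_ratings: list        # ratings from other users, each on a 0-5 scale
    moderator_ratings: list   # spot-check ratings from randomly chosen moderators, 0-5
    annotations_made: int     # crude productivity measure

def reputation_score(c):
    """Blend the three signals Giles mentions into a single 0-1 credibility score.
    The 0.5 / 0.3 / 0.2 weights are invented for illustration only."""
    peer = sum(c.peer_ratings) / (5 * len(c.peer_ratings)) if c.peer_ratings else 0.0
    mods = sum(c.moderator_ratings) / (5 * len(c.moderator_ratings)) if c.moderator_ratings else 0.0
    activity = min(c.annotations_made, 100) / 100   # saturates so sheer volume cannot dominate
    return 0.5 * peer + 0.3 * mods + 0.2 * activity

# Example: a well-rated, moderately active (hypothetical) contributor
alice = Contributor(peer_ratings=[4, 5, 4], moderator_ratings=[5], annotations_made=40)
print(round(reputation_score(alice), 2))   # 0.81

The point of capping the productivity term is simply that a prolific contributor should not outrank a careful one on volume alone; peer and moderator ratings carry most of the weight.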

Bad Science Bad

A big reason why we need tools like Hypothes.is is that the media are not sufficiently interested in bringing us the truth. This is not just an issue for gossip columns, which most of us already read with a healthy dose of scepticism; even so‐called science journalists are failing us. Science news with a sensationalistic spin gets attention, regardless of whether the journalist has properly read and understood the research, and regardless of the quality of the research itself. Ben Goldacre, who maintains the blog Bad Science, wrote a piece for The Guardian on the subject of the media’s role in spreading misinformation about the supposed link between the MMR vaccine and autism in children. He examined the take‐up of researcher Andrew Wakefield’s controversial findings:

Wakefield was at the centre of a media storm about the MMR vaccine and is now being blamed by journalists as if he were the only one at fault. In reality, the media are equally guilty. Even if it had been immaculately well conducted—and it certainly wasn’t—Wakefield’s “case series report” of 12 children’s clinical anecdotes would never have justified the conclusion that MMR causes autism, despite what journalists claimed: it simply didn’t have big enough numbers to do so.

But the media repeatedly reported the concerns of this one man, generally without giving methodological details of the research, either because they found it too complicated, inexplicably, or because to do so would have undermined their story. (“The Media Are Equally Guilty” 17)

In another article, this one for Significance, Goldacre describes how a study that found no statistically significant increase in cocaine use in children was misinterpreted and misrepresented in the popular press to such an extent that by the time it got to The Times, the headline read, “Cocaine Floods the Playground” (“When the Facts Get in the Way of a Story” 84).

No wonder people are confused.

Open Science Good

Hypothes.is and similar tools arrive at a time when we are already seeing an increased interest in openness, bottom-up investigation, and the collective assessments of the populace, thanks in large part to the wide adoption of social media. In the same way that open-source software makes computer code available for anyone to review, use, and build upon, open-science advocates want to make scientific research data available for examination by anyone, for the sake of the public good: “The more data is made openly available in a useful manner, the greater the level of transparency and reproducibility and hence the more efficient the scientific process becomes, to the benefit of society” (Molloy). The Hypothes.is software certainly sounds like it will support this goal. As one openscience.com writer puts it:

In terms of Open Science, my guess is that we’ll start to see authors publishing their articles straight onto a homepage or library repository, allowing for their work to be peer reviewed [through Hypothes.is] almost instantaneously. Meanwhile, journals will likely operate in a post‐peer review niche, whereby they collect the most valuable articles and publish them in a context where they increase said article’s reputation value. (Winters)

Rosie Redfield and #arseniclife

We have already seen how an open peer‐reviewing model might work. One of the best‐known recent examples is in the work of the University of British Columbia’s Rosie Redfield. In 2010, when NASA‐funded scientists reported that they had found arsenic‐based life in California, Redfield, a microbiologist, took to her blog to express concerns about the data and the quality of the findings. Thanks to social media, her assessment spread rapidly among her scientific colleagues and in the press, casting doubt on NASA’s results (Zimmer).

What became known as the #arseniclife affair “is one of the first cases in which the scientific community openly vetted a high‐profile paper, and influenced how the public at large thought about it” (Zimmer). In 2011, Nature magazine named Redfield among the top 10 “people who mattered” for that year (Hayden).

Conclusion

Tools that allow internet users to more accurately assess information quality are a positive addition to our online lives. In the realm of scientific research and reporting, Hypothes.is and similar products will inspire scientists to be forthcoming about their data and research methods, encourage journalists to take more care when describing and disseminating scientific findings, and give information professionals and consumers in general more confidence that what we are reading has been through some sort of transparent review and assessment process. I look forward to seeing this in practice.

[PS: Sign up now for your Hypothes.is username!]

_____

Works Cited
Cox, R. “The Information Age and History: Looking Backward to See Us.” Ubiquity Sept. 2000. Web. 9 Sept. 2012. http://d-scholarship.pitt.edu/2698/1/r_cox_1.html

Giles, J. “Truth Goggles.” New Scientist 15 Sept. 2012: 44‐47. Print. http://www.newscientist.com/article/mg21528821.700-reality-checker-how-to-cut-nonsense-from-the-net.html

Goldacre, B. “The Media Are Equally Guilty.” Guardian 20 Jan. 2010: 4. Web. 24 Sept. 2012. http://www.guardian.co.uk/science/2010/jan/28/mmr-vaccine-ben-goldacre

‐‐‐. “When the Facts Get in the Way of a Story.” Significance 4.2 (7 June 2007): 84‐85. Web. 24 Sept. 2012. http://www.significancemagazine.org/details/magazine/868989/When-the-facts-get-in-the-way-of-a-story.html

Hayden, E. C. “365 Days: Nature’s 10. Ten People Who Mattered this Year. Rosie Redfield, Critical Enquirer.” Nature 480 (22 Dec. 2011): 437–445. Web. 29 Sept. 2012. http://www.nature.com/news/365-days-nature-s-10-1.9678

Molloy, J.C. “The Open Knowledge Foundation: Open Data Means Better Science.” PLOS Biology Dec. 2011. Web. 24 Sept. 2012. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001195

Winters, J. “Hypothes.is: The Future of the Internet and Peer Review.” 11 July 2012. Web. 1 Oct. 2012. http://openscience.com/category/open-access/peer-review/

Zimmer, C. “The Discovery of Arsenic‐Based Twitter: How #arseniclife Changed Science.” Slate 27 May 2011. Web. 29 Sept. 2012. http://www.slate.com/articles/health_and_science/science/2011/05/the_discovery_of_arsenicbased_twitter.html