Another Perspective on How “News” “Diffuses”: The Francisville 4 from Inside the Newsroom
Posted by chanders on July 13, 2009
Today, Jon Kleinberg, Jure Leskovec, and Lars Backstrom probably experienced every serious scholar’s fondest wish and worst nightmare — their path-breaking article, “Meme-tracking and the Dynamics of the News Cycle,” [pdf] was written up in the New York Times. The Times article was pretty good, as these things go, but I imagine that the authors are now in the process, as Scott Rosenberg put it, of watching their nuanced and complex scholarship become a meme itself … a “the news media leads the blogs by 2.5 hours in reporting news” meme.
The best thing to do is to read the report itself, though the Rosenberg post is a great summary with some cogent criticisms, and the New York Times article is, all in all, a good overview. Rather than rehashing the discussion so far, I want to talk a little bit about some of my own findings, which I think complicate the Cornell research.
This May, I presented my own research on news diffusion and the new news cycle at the International Communication Association (ICA) conference in Chicago. The research comes out of my dissertation fieldwork, and I’m quite proud of it. It’s also as different from the Leskovec et al. research as it is possible to be when you’re addressing the same subject matter. The paper is also, as luck would have it, in peer-review hell, which means that (as far as I’m concerned) while it’s publishable and public, the powers that be haven’t decided that yet.
But after reading the paper, the New York Times article, and some of the caveats raised about the paper, I wanted to weigh in with a summary of my own findings, which I feel stand toe to toe with the “Meme-Tracking” paper, even though you may not think so, because there were no computers involved.
So I want to talk a little bit about what I did and what I found, and then say a bit about quantitative and qualitative research.
What I Looked At
My case study was the first (to the best of my knowledge) academic study to analyze the diffusion of a single news story from the moment it was reported to the moment that it died, from within the newsroom itself in the context of the new media ecosystem. In other words, I followed the diffusion of the fairly small story of the Francisville Four, a few left-leaning Philadelphia homeowners who were illegally evicted from their home after posting “anti-surveillance” fliers in their neighborhood.
What did I find? Several things, all of which I think add complexity to the Cornell study.
1. The story of the Francisville 4 was “broken” by an activist-affiliated news website, the Philadelphia Independent Media Center, along with a progressive news and discussion board, Young Philly Politics. Breaking the story in this case, though, amounted to little more than re-posting a press release, which led to discussion and debate in the online mediasphere about what was an appropriate level of reporting in order for this story to count as “news.” (This online debate should also prompt us to think about what it means to “lead” the news cycle in the era of the web.)
2. The first piece of “serious” reporting on the story was done, not by one of the major dailies, but by the local alt-weekly, the City Paper … and they put it up on their blog. Media organizations like the City Paper are often overlooked in our discussions of the new news cycle. This also complicates the entire notion of what “counts” as “news media” and what counts as a blog (is the blog of an alt-weekly part of the MSM? Is it a blog? Is it something else? It’s tough to say).
3. The daily newspapers first weighed in on the story several days after it broke. They did, however, do the most original reporting. They also started covering the story, not because they had seen it online, but because the activists involved had mounted a deliberate and old-fashioned media campaign to publicize the arrests (sending emails and faxes, calling press conferences, etc). They also started covering it because folks at the City Paper sent out emails touting their scoop to all the daily papers.
4. Almost 2.5 hours after the daily newspapers published their news (and here’s where the Cornell study comes in), the “traditional” blogosphere weighed in on the arrests. How should we talk, however, about the days of media work that occurred before the “blogosphere” picked up the news? Does it count? And how does that early work relate to the 2.5-hours meme?
5. Related: what did the local bloggers do? In some ways, they played the role of “commentators.” But they also played a key part in the news cycle by both linking to, and adding a ton of context to the Francisville story. They didn’t all do original reporting per se, but they provided new information via heavy linking out to older news coverage and other blogs. They also “reframed” the story in a way that helped it travel more easily through the global blogosphere. Indeed, this new information helped it reach the top of the Technorati hierarchy via coverage in Boing Boing.
6. The story died, not because of events in the media, but because the activists and the press decided it was in their best interest to no longer pursue a confrontation.
There’s a lot more in the paper itself … but hopefully this gives you an idea of some of my findings. To me, they provide a much more complex picture of how the new news cycle is unfolding than the large-scale, quantitative Cornell study does.
Qualitative and Quantitative Research
… more complex and interesting findings, yes. But perhaps less generalizable. And this is how I want to conclude this blog post. One of the trends I think we’re witnessing with the publication of the Cornell paper is that, rather than relying on the old techniques of sampling media coverage, we now have the tools to analyze the whole damn media system: in this case, 1.6 million mainstream media sites and blogs and 90 million articles. The value of this research is undeniable, and I expect it to become ever more common, because we a) have the tools to do this kind of work and b) don’t need “permission” from anyone to analyze their content.
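For readers curious what this kind of computational meme-tracking looks like at its simplest, here is a toy sketch (emphatically not the Cornell team’s actual pipeline, which clusters millions of mutating phrase variants). Given hypothetical timestamped mentions of a single quoted phrase, labeled by source type, it finds each source type’s peak hour and computes the lag between them — the kind of measurement behind the “blogs trail the news media” claim. All data and names here are invented for illustration.

```python
from datetime import datetime, timedelta

# Toy data: hypothetical timestamped mentions of one quoted phrase,
# labeled "news" (mainstream media) or "blog". A real meme-tracker
# would first cluster phrase variants across ~90 million articles.
mentions = [
    ("2009-07-13 08:00", "news"),
    ("2009-07-13 09:00", "news"),
    ("2009-07-13 09:30", "news"),
    ("2009-07-13 10:30", "blog"),
    ("2009-07-13 11:00", "blog"),
    ("2009-07-13 11:30", "blog"),
    ("2009-07-13 12:00", "blog"),
]

def peak_hour(mentions, source):
    """Return the one-hour bucket with the most mentions for one source type."""
    counts = {}
    for ts, src in mentions:
        if src != source:
            continue
        hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").replace(minute=0)
        counts[hour] = counts.get(hour, 0) + 1
    return max(counts, key=counts.get)

# Lag between the blogosphere's peak and the mainstream press's peak.
lag = peak_hour(mentions, "blog") - peak_hour(mentions, "news")
print(lag)  # → 2:00:00
```

Even this trivial version makes Rosenberg’s criticisms concrete: the code has to be told, up front, which sources are “news” and which are “blogs,” and it can only see the phrase once it is online, so the days of offline media work described above never enter the data at all.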
It’s interesting to note that the New York Times article on the “meme-tracker” study argues that “social scientists and media analysts have long examined news cycles, though focusing mainly on case studies instead of working with large Web data sets.” This is both true and not true. Indeed, it’s quite possible that research like that carried out by the Cornell and Stanford researchers will become the academic norm, as I noted above. All the while, it’s harder and harder to get permission to go inside newsrooms, especially as the companies running them get ever more paranoid and bankrupt. As I noted earlier, I have not found a single article that analyzes the new dynamics of news diffusion from inside the newsroom, even though the web revolution is more than 15 years old.
This is a problem, because if we rely solely on the quantitative number crunching of huge data sets, we’re going to miss a lot. As the Times article notes, “The Cornell research, like so much of the data mining on the Web, does raise the issue of whether something is necessarily significant just because it can be measured by a computer — especially when mouse clicks are assumed to represent broad patterns of human behavior.” And some of Rosenberg’s very cogent criticisms (that memes here are a stand-in for news, that distinguishing between blogs and the MSM is increasingly difficult if not impossible, that the study misses the actual interplay between the media and the “blogosphere”) lend themselves better to ethnographic study than computer driven analysis.
Going back and reading my paper next to the Leskovec et al. paper, it’s hard to imagine two more different studies. Theirs is 9 pages of densely packed analysis, math, and graphs. Mine is 30 sprawling pages of, well, mostly personal observations and quotes. And while this might seem to lead us back to the bad-old-world of perennial conflict between quantitative and qualitative research, I have hope that it won’t. As long as we value both kinds of research equally, and understand what they are and are not able to do, we ought to be fine. We should keep in mind, however, that rather than living in a world where the ethnographic case study is common and quantitative crunching is rare, we might be approaching a world that is exactly the opposite.
In an ideal world, the kind of nuance that comes out of ethnographic analyses like the Francisville study will help serve as building blocks for richer, more complex quantitative work that can ask ever more interesting questions. Here’s hoping it does.