
What’s new in digital scholarship: June 2013

June 2013 roundup of research. Topics include drone journalism, surveillance, Twitter in conflict zones, crowdsourced information platforms, and remix culture.

As part of our ongoing collaboration with Nieman Journalism Lab, we’ve rounded up the latest in digital- and media-oriented scholarship — picking highlights from disciplines such as computer science, political science, journalism research and communications. (Note: this article was first published at Nieman Lab, and is now archived here in full.)

This month’s edition of What’s New In Digital Scholarship rounds up findings on a range of hot topics, including: drone journalism; surveillance and the public; Twitter in conflict zones; Big Data and its limits; crowdsourced information platforms; remix culture; and much more. We also suggest some further “beach reads” at the bottom. Enjoy the deep dive.

_______

“Reuters Institute Digital News Report 2013: Tracking the Future of News”
Report from the University of Oxford’s Reuters Institute for the Study of Journalism, edited by Nic Newman and David A. L. Levy.

This new report provides tremendous comparative perspective on how different countries and news ecosystems are developing in both symmetrical and divergent ways (see the Lab’s write-up of the national differences and similarities it highlights). But it also provides some interesting hard numbers on the U.S. media landscape; it surveys the news habits of a sample of more than 2,000 Americans.

Key U.S. data points include: the share of Americans who reported accessing news on a tablet in the past week rose from 11 percent in 2012 to 16 percent in 2013; 28 percent said they accessed news on a smartphone in the past week; 75 percent of Americans reported accessing news online in the past week, while 72 percent said they got news through television and 47 percent reported having read a print publication; TV (43 percent) and online (39 percent) were Americans’ preferred platforms for accessing news. Further, a yawning divide exists between the preferences of those ages 18 to 24 and those over 55: among the younger cohort, 64 percent say the Web is their main source of news, versus only 25 percent among the older group; as for TV, however, 54 percent of older Americans report it as their main source, versus only 20 percent of those 18 to 24. Finally, 12 percent of American respondents overall reported paying for digital news in 2013, compared to 9 percent in 2012.

 

“The Rise and Fall of a Citizen Reporter”
Study from Wellesley College, for the WebScience 2013 conference. By Panagiotis Metaxas and Eni Mustafaraj.

This study looks at a network of anonymous Twitter citizen reporters around Monterrey, Mexico, covering the drug wars. It provides new insights into conflict-zone journalism and information ecosystems in the age of digital media, as well as the limits of raw data. The researchers, both computer scientists, analyze a dataset focused on the hashtag #MTYfollow, consisting of “258,734 tweets written by 29,671 unique Twitter accounts, covering 286 days in the time interval November 2010-August 2011.” They drill down on the account @trackmty, run under the pseudonym Melissa Lotzer, the largest of the accounts involved.
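For readers curious what this kind of hashtag-level analysis looks like in practice, here is a minimal, hypothetical Python sketch (not the authors’ code; the file name and column names are assumptions) that tallies tweets per account in a #MTYfollow-style export. A first-pass aggregation like this is one way a dominant account such as @trackmty would surface:

```python
# Hypothetical sketch, not the study's actual pipeline.
# Assumes a CSV export of #MTYfollow tweets with columns "user" and "text".
import csv
from collections import Counter

counts = Counter()
with open("mtyfollow_tweets.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        counts[row["user"]] += 1  # one tally per tweet, keyed by account

# Print the ten most active accounts by tweet volume
for user, n in counts.most_common(10):
    print(f"{user}: {n} tweets")
```

As the researchers stress, a count like this only tells you who tweeted most; it says nothing about who those accounts are or whether they can be trusted.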

The scholars reconstruct a sequence in which a wild Twitter “game” breaks out — obviously, with life-and-death stakes — involving accusations about cartel informants (“hawks,” or “halcones”) and citizen watchdogs (“eagles,” or “aguilas”), with counter-accusations flying that certain citizen reporters were actually working for the Zetas drug cartel; indeed, @trackmty ends up being accused of working for the cartels. Online trolls attack her on Twitter and in blogs.

“The original Melissa @trackmty is slow to react,” the study notes, “and when she does, she tries to point to her past accomplishments, in particular the creation of [a group of other media accounts] and the interviews she has given to several reporters from the US and Spain. But the frequency of her tweeting decreases, along with the community’s retweets. Finally, at the end of June, she stops tweeting altogether.” It turns out that the real @trackmty had been exposed: “her real identity, her photograph, friends and home address.”

Little of this drama was obvious from the data. Ultimately, the researchers were able to interview the real @trackmty and members of the #MTYfollow community. The big lessons, they realize, are the “limits of Big Data analysis.” The data visualizations showing influence patterns and spikes in tweet frequency revealed all kinds of interesting dynamics. But they were insufficient to make inferences of value about the community affected: “In analyzing the tweets around a popular hashtag used by users who worry about their personal safety in a Mexican city we found that one must go back and forth between collecting and analyzing many times while formulating the proper research questions to ask. Further, one must have a method of establishing the ground truth, which is particularly tricky in a community of — mostly — anonymous users.”

 

“Undermining the Corrective Effects of Media-Based Political Fact Checking? The Role of Contextual Cues and Naïve Theory”
Study from Ohio State University, published in the Journal of Communication. By R. Kelly Garrett, Erik C. Nisbet, and Emily K. Lynch.

As the political fact-checking movement — the FactChecks and PolitiFacts, along with their various lesser-known cousins — has arisen, so too has a more hard-headed social science effort to get to the root causes of persistent lies and rumors, a situation made all the worse on the web. Of course, journalists hope truth can have a “corrective” effect, but the literature in this area suggests that blasting more facts at people often doesn’t work — hence, the “information deficit fallacy.” Thus, a cottage psych-media research industry has grown up, exploring “motivated reasoning,” “biased assimilation,” “confirmation bias,” “cultural cognition,” and other such concepts.

This study tries to advance understanding of how peripheral cues, such as accompanying graphics and biographical information, can affect how citizens receive and accept corrective information. In experiments, the researchers ask subjects to respond to claims about the proposed Islamic cultural center near Ground Zero and the disposition of its imam. It turns out that contextual information — what the imam has said, what he looks like, anything that challenges dominant cultural norms — often undermines the intended corrective effect of the fact-checking message.

The authors conclude that the “most straightforward method of maximizing the corrective effect of a fact-checking article is to avoid including information that activates stereotypes or generalizations…which make related cognitions more accessible and misperceptions more plausible.” The findings have a grim quality: “The unfortunate conclusion that we draw from this work is that contextual information so often included in fact-checking messages by professional news outlets in order to provide depth and avoid bias can undermine a message’s corrective effects. We suggest that this occurs when the factually accurate information (which has only peripheral bearing on the misperception) brings to mind” mental shortcuts that contain generalizations or stereotypes about people or things — so-called “naïve theories.”

 

“Crowdsourcing CCTV Surveillance on the Internet”
Paper from the University of Westminster, published in Information, Communication & Society. By Daniel Trottier.

A timely look at the implications of a society more deeply pervaded by surveillance technologies, this paper analyzes various web-based efforts in Britain that involve the identification of suspicious persons or activity. (The controversies around Reddit and the Boston Marathon bombing suspects come to mind here.) The researcher examines Facewatch, CrimeStoppers UK, Internet Eyes, and Shoreditch Digital Bridge, all of which attached commercial elements to crowdsourcing projects in which participants monitored feeds from surveillance cameras trained on public spaces. He points out that these “developments contribute to a normalization of participatory surveillance for entertainment, socialization, and commerce,” and that the “risks of compromised privacy, false accusations and social sorting are offloaded onto citizen-watchers and citizen-suspects.” Further, the study highlights the perils inherent in the “‘gamification’ of surveillance-based labour.”

 

“New Perspectives from the Sky: Unmanned Aerial Vehicles and Journalism”
Paper from the University of Texas at Arlington, published in Digital Journalism. By Mark Tremayne and Andrew Clark.

The use of unmanned aerial vehicles (UAVs, or “drones”) in journalism is an area of growing interest, and this exploration provides some context and research-based perspective. Drones in the service of the media have already been used for everything from snapping pictures of Paris Hilton and surveying tornado-damaged areas in Alabama to filming secret government facilities in Australia and protester clashes in Poland. In all, the researchers found “eight instances of drone technology being put to use for journalistic purposes from late 2010 through early 2012.”

This practice will inevitably raise questions about how far is too far. “It is not hard to imagine how the news media, using drones to gather information, could be subject to privacy lawsuits,” the authors write. “What the news media can do to potentially ward off the threat of lawsuits is to ensure that drones are used in an ethical manner consistent with appropriate news practices. News directors and editors and professional associations can establish codes of conduct for the use of such devices in much the same way they already do with the use of hidden cameras and other technology.”

 

“Connecting with the User-generated Web: How Group Identification Impacts Online Information Sharing and Evaluation”
Study from University of California, Santa Barbara, published in Information, Communication & Society. By Andrew J. Flanagin, Kristin Page Hocevar, and Siriphan Nancy Samahito.

Whether it’s Wikipedia, Yelp, TripAdvisor, or some other giant pool of user-generated “wisdom,” these platforms convene large, disaggregated audiences that form loose memberships around apparent common interests. But what makes certain communities bond and stick together, keeping online information environments fresh, passionate, and lively (and possibly accurate)?

The researchers perform experiments with undergraduates to see whether appending small bits of personal information (a contributor’s university, major, or gender) to informational posts changes viewers’ perceptions. Perhaps predictably, the results show that “potential contributors had more positive attitudes (manifested in the form of increased motivation) about contribution to an online information pool when they experienced shared group identification with others.”

For editors and online community designers and organizers, the takeaway is that information pools “may actually form and sustain themselves best as communities comprising similar people with similar views.” Not exactly an antidote to “filter bubble” fears, but it’s worth knowing if you’re an admin for an online army.

 

“Selective Exposure, Tolerance and Satirical News”
Study from University of Texas at Austin and University of Wyoming, published in the International Journal of Public Opinion Research. By Natalie J. Stroud and Ashley Muddiman.

While not the first study to focus on the rise of satirical news — after all, a 2005 study in Political Communication on “The Daily Show with Jon Stewart” now has 230 subsequent academic citations, according to Google Scholar — this new study looks at satirical news viewed specifically in a web context.

It suggests the dark side of snark, at least in terms of promoting open-mindedness and deliberative democracy. The conclusion is blunt: “The evidence from this study suggests that satirical news does not encourage democratic virtues like exposure to diverse perspectives and tolerance. On the contrary, the results show that, if anything, comedic news makes people more likely to engage in partisan selective exposure. Further, those viewing comedic news became less, not more, tolerant of those with political views unlike their own.” Knowing Colbert and Stewart, the study’s authors can expect an invitation soon to atone for this study.

 

“The Hidden Demography of New Media Ethics”
Study from Rutgers and USC, published in Information, Communication & Society. By Mark Latonero and Aram Sinnreich.

The study leverages 2006 and 2010 survey data, both domestic and international, to take an analytical look at how notions of intellectual property and ethical Web culture are evolving, particularly as they relate to ideas such as remixing, mashups, and the repurposing of content. The researchers find a complex tapestry of behavioral norms, some of them correlated with age, gender, race, or nationality. New technologies are “giving rise to new configurable cultural practices that fall into the expanding gray area between traditional patterns of production and consumption. The data suggest that these practices have the potential to grow in prevalence in the United States across every age group, and have the potential to become common throughout the dozens of industrialized nations sampled in this study.”

Further, rules of the road have formed organically, as technology has outstripped legal strictures: “Most significantly, despite (or because of) the inadequacy of present-day copyright laws to address issues of ownership, attribution, and cultural validity in regard to emerging digital practices, everyday people are developing their own ethical frameworks to distinguish between legitimate and illegitimate uses of reappropriated work in their cultural environments.”

Beach reads:

Here are some further academic paper honorable mentions this month — all from the culture and society desk:

 
