Given the focus of this monthly review, there’s no avoiding at least a brief, final mention of the now-infamous and endlessly dissected “Facebook manipulates your emotions” study — formally known as “Experimental Evidence of Massive-scale Emotional Contagion through Social Networks,” published in the Proceedings of the National Academy of Sciences (PNAS). The critics of that study and its methods — modifying a Facebook algorithm to see how different users responded to different kinds of emotional messages — have largely won the battle in the court of public opinion, embarrassing the researchers involved and Facebook itself.
But it’s worth noting that, among the community of researchers devoting their life’s work to Internet-related inquiry, it has prompted some soul-searching and reflection about the way forward. What now are the boundaries for leading-edge Internet research? See commentary from Jason Hong at Carnegie Mellon and from Michael Bernstein at Stanford. Both wonder whether ethical constraints designed for laboratory experiments should be strictly imposed on Internet experimentation, and they ponder the nature of research in an era when commercial A/B testing is ubiquitous — including in the game of love.
On a related note, a new paper from the Berkman Center for Internet & Society and a host of other research institutions, “Integrating Approaches to Privacy across the Research Lifecycle,” identifies urgent problems relating to “sensitive data describing the health, socioeconomic, or behavioral characteristics of human subjects.”
Here’s a sampling of new studies and papers from the academic journal world over the past couple of months (note: this article was first published at Nieman Journalism Lab):
_______
“Online and Uncivil? Patterns and Determinants of Incivility in Newspaper Website Comments”
University of Utah and University of Arizona, published in the Journal of Communication. By Kevin Coe, Kate Kenski and Stephen A. Rains.
This paper contributes to the emerging literature on what some scholars have called the “nasty effect” of online user-generated comments — an area that until recently has been much neglected by social scientists. Coe, Kenski and Rains set out to examine the relative frequency of uncivil comments, whether there are certain contexts in which they are more prevalent, and how they affect the quality of debate. They analyze data from the Arizona Daily Star in late 2011, examining more than 6,400 comments attached to 706 articles.
More than one in five comments (22%) contained incivility of some kind, and as a whole “55.5% of the article discussions contained at least some incivility”; further, “The most prevalent form of incivility was name-calling, which took place in 14.0% of all comments.” Those who commented only once over the period were more likely to demonstrate incivility than those commenting most frequently. Looking at associations with article content, the researchers found that “serious, ‘hard news’ topics appear to garner greater incivility. For example, articles about the economy, politics, law and order, taxes and foreign affairs all received roughly one uncivil comment for every four comments posted.” One-third of articles containing a quotation from President Obama had an uncivil comment attached. However, when incivility was present, it was also more likely that someone in the discussion thread would cite evidence for her argument, suggesting that incivility can push debate in constructive ways, too.
“[C]ontrary to popular perceptions,” Coe, Kenski and Rains state, “those individuals who commented most frequently were not the ones proportionally most inclined to make uncivil remarks. Our data suggest that stereotypes of frequent posters dominating news sites with barrages of incivility are, if not unfounded, at least overstated.”
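As a purely illustrative aside, the arithmetic behind figures such as “22% of comments” or “one uncivil comment for every four comments posted” amounts to tallying hand-coded records by topic. The short Python sketch below shows that calculation; the sample records and field names are hypothetical stand-ins, not the authors’ data or code.

```python
from collections import defaultdict

# Hypothetical hand-coded records: (article_topic, comment_is_uncivil).
# In the actual study, trained coders labeled each of roughly 6,400 comments;
# these few rows exist only to show how the proportions would be computed.
coded_comments = [
    ("economy", True), ("economy", False), ("economy", False), ("economy", False),
    ("sports", False), ("sports", False), ("sports", True),
    ("politics", True), ("politics", False), ("politics", False), ("politics", False),
]

totals = defaultdict(int)
uncivil = defaultdict(int)
for topic, is_uncivil in coded_comments:
    totals[topic] += 1
    uncivil[topic] += int(is_uncivil)

overall_rate = sum(uncivil.values()) / len(coded_comments)
print(f"Overall incivility rate: {overall_rate:.1%}")
for topic in totals:
    print(f"{topic}: {uncivil[topic] / totals[topic]:.1%} of comments uncivil")
```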
“Fact Checking the Campaign: How Political Reporters Use Twitter to Set the Record Straight (Or Not)”
University of Texas at Austin, published in the International Journal of Press/Politics. By Mark Coddington, Logan Molyneux and Regina G. Lawrence.
The study analyzes how some 400 reporters used Twitter to assess campaign claims during the 2012 U.S. election cycle. Coddington, Molyneux and Lawrence narrow the sample down to about 1,900 randomly selected tweets from those journalists, with 1,700 relating directly to campaign claims. For fans of the emerging fact-checking approach, the results are disappointing: “Among the tweets that referenced claims made by the presidential candidates, at least some of which were eligible for fact checking, almost two-thirds (60%) reflected traditional practices of ‘professional’ objectivity: stenography — simply passing along a claim made by a politician — and ‘he said, she said’ repetition of a politician’s claims and his opponent’s counterclaim. A small but not insignificant portion (15%) reflected the ‘scientific’ approach to objectivity that underlies the emergent fact-checking genre, by referencing evidence for or against the claim and, in a few cases, rendering an explicit judgment about the validity of the claim — though such tweets were more likely to come from commentators than from news reporters.”
However, a quarter of the tweets showed neither the traditional “he said, she said” approach nor evidence-citing scientific fact-checking; instead, these “either passed judgment on a claim without providing evidence for that judgment or pushed back against the politician’s claim with the journalist’s own counterclaim, again without reference to external evidence.”
Coddington, Molyneux and Lawrence conclude that the “findings suggest that the campaign was hardly ‘dictated by fact checkers,’ as the Romney campaign famously suggested, because most political reporters on Twitter relied mostly on traditional ‘stenography’ and ‘he said, she said’ forms of coverage and commentary — even during presidential debates that were identified as the most-tweeted and the most fact checked in history.”
“Can We ‘Snowfall’ This? Digital Longform and the Race for the Tablet Market”
University of Iowa, published in Digital Journalism. By David Dowling and Travis Vogan.
In a thoughtful and deep examination of a new genre being born, Dowling and Vogan look at three case studies in innovative story treatment — the New York Times’ “Snow Fall,” ESPN’s “Out in the Great Alone” and Sports Illustrated’s “Lost Soul” — to see how each outlet leveraged new opportunities in digital long-form storytelling.
The researchers note that, as with New Journalism in the 1960s, we are seeing a new form that breaks significantly with journalism’s past. The visual attributes, multimedia features and layout of each — as well as branding strategy and overall outcomes for the media companies involved — are reviewed in detail. These long-form pieces “function as opportunities for these prominent media organizations to build a branded sense of renown in an increasingly competitive market,” Dowling and Vogan write, noting that they are as much story-as-advertising as story-as-story. Indeed, such dramatically appealing and elaborately produced stories “encourage reader-driven circulation via social media, a process that expands the products’ reach and allows consumers to cultivate their own identities by associating with such artifacts.” Moreover, “Digital long-form…represents a major shift away from brief breaking news toward a business model built on a carefully crafted multimedia product sensitive to users’ appreciation of multimedia narrative aesthetics.”
“What Is a Flag For? Social Media Reporting Tools and the Vocabulary of Complaint”
Microsoft Research and Cornell University, published in New Media & Society. By Kate Crawford and Tarleton Gillespie.
This paper critiques the common mechanism for reporting (or flagging) offensive content on social networking sites, calling the flag feature “a complex interplay between users and platforms, humans and algorithms, and the social norms and regulatory structures of social media.” Crawford and Gillespie worry that the available options for flagging are too limited, thus inhibiting the robust and fair governance of social platforms of all kinds — “Facebook, Twitter, Vine, Flickr, YouTube, Instagram and Foursquare, as well as in the comments sections on most blogs and news sites.” They note that the “vocabulary” that users can employ to express concerns varies according to the site — some have only “thin” features, while others allow users to designate mature content, abusive content, self-harm/suicide, copyright infringement and so on.
YouTube earns praise for allowing users to provide granular feedback about offending sequences within uploaded videos; Facebook is singled out for having the best “process transparency.” Crawford and Gillespie explore the possibility of more sites creating “backstage” records of why things are deleted, i.e., Wikipedia-style discussion threads: “[V]isible traces of how and why a decision was made could help avoid the appearance that one perspective has simply won, in a contest of what are in fact inherently conflicting worldviews. A flag-and-delete policy obscures or eradicates any evidence that the conflict ever existed.” Ultimately, the authors have concerns that our global discourse is being diminished: “The combination of proprietary algorithms assessing relevance, opaque processes of human adjudication and the lack of any visible public discussion leaves critical decisions about difficult content in the hands of a few unknown figures at social media companies.”
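As a purely illustrative aside, the contrast the authors draw between “thin” flags and a richer vocabulary of complaint, together with their suggestion of visible “backstage” records, can be pictured as a small data model. The Python sketch below is hypothetical: its reason categories and field names are invented for illustration and do not correspond to any platform’s actual reporting API.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class FlagReason(Enum):
    # A "thin" flag offers little beyond OFFENSIVE; a richer vocabulary
    # lets users say why they object.
    OFFENSIVE = "offensive"
    MATURE_CONTENT = "mature content"
    ABUSE_OR_HARASSMENT = "abuse or harassment"
    SELF_HARM = "self-harm or suicide"
    COPYRIGHT = "copyright infringement"

@dataclass
class FlagReport:
    content_id: str
    reporter_id: str
    reason: FlagReason
    note: str = ""  # optional free-text context from the reporter

@dataclass
class ModerationRecord:
    """A 'backstage' trace, loosely analogous to a Wikipedia talk page:
    the decision and its rationale stay visible after the fact."""
    content_id: str
    reports: List[FlagReport] = field(default_factory=list)
    decision: str = "pending"   # e.g. "removed" or "kept"
    rationale: str = ""         # why the decision was made

record = ModerationRecord(content_id="video-123")
record.reports.append(FlagReport("video-123", "user-42", FlagReason.ABUSE_OR_HARASSMENT,
                                 note="targets a named individual"))
record.decision, record.rationale = "removed", "violates harassment policy"
print(record.decision, "-", record.rationale)
```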
Note: This summer, New Media & Society has published a series of studies about Facebook-related issues and themes.
“New Media, New Civics?”
MIT, published in Policy & Internet. By Ethan Zuckerman.
Adapted from Zuckerman’s lecture last year at the Oxford Internet Institute, this 16-page essay is well worth reading as a new perspective on where democratic participation is going in the digital age. It attempts to get past the now familiar debate about online activism versus traditional activism (the argument over “slacktivism”) and to look more broadly at what Zuckerman calls “participatory civics.” By this, he means “forms of civic engagement that use digital media as a core component and embrace a post-’informed citizen’ model of civic participation.”
This includes the trend of direct participation through online campaigns — crowdfunding and the like — and embodies an increased desire on the part of the public, especially young people, to get personally closer to the causes about which they are passionate. Zuckerman sets out a useful analytical matrix/framework for looking at activism and participation — “thick” versus “thin,” “instrumental” versus “voice,” and the whole spectrum in between. “[I]f we believe in the importance of deliberation,” he concludes, “not just about individual issues but about what issues merit deliberation, we need original thinking about how millions of points of individual and group interest resolve into an intelligible picture.”
This issue of Policy & Internet also contains a variety of responses to and critiques of Zuckerman’s ideas; these include essays by Jennifer Earl, Henry Farrell, Zeynep Tufekci and Deen Freelon, among others.
Further recommendations: For a take-no-prisoners approach to securing the media future, see Robert W. McChesney’s “Be Realistic, Demand the Impossible: Three Radically Democratic Internet Policies,” published in Critical Studies in Media Communication. Seth C. Lewis and Oscar Westlund propose deeper inquiry into how technology itself plays a role in media production and decision-making in their new Digital Journalism article “Actors, Actants, Audiences and Activities in Cross-Media News Work.” Further, Amy Schmitz Weiss’s article in Digital Journalism looks at the increasing importance of place and geography for journalism and our understanding of it in the 21st century, while Daniel Kreiss in The Sociological Quarterly explores the nature of politics and party networks in the new millennium and discusses the “virtues of participation without power.”