As part of our ongoing collaboration with Nieman Journalism Lab, we’ve rounded up the latest in digital- and media-oriented scholarship — picking highlights from disciplines such as computer science, political science, journalism research and communications.
In terms of empirical research that can inform media practice, several important articles published this month are worth reviewing. Among the works discussed: a Berkman Center for Internet & Society report on the technical limitations of how we measure online activity, a thoughtful warning about the many forms of web-generated data we take for granted; an NPR Digital Services analysis of its own Local Stories Project’s social media data, making a case that “serious” stories are as shareable as “fun” ones; a deep dive into the cognitive science behind news engagement on social media from Sonya Song, Knight-Mozilla Fellow at The Boston Globe; a Foundation Center report highlighting the rise of philanthropic money in media generally and sharp increases in investments in mobile and applications/tools; and a Pew/Knight report examining the demographics of Twitter news consumers (the survey data it was based on continues to yield new Pew analyses, including on news use across all social platforms).
The academic journal world also produced some noteworthy articles this month, including:
The study analyzes 2,731 tweets from journalists (26 men, 25 women) at 51 different newspapers during 2011. The problems in this area are persistent and well-documented, and Artwick reviews the prior literature on gender imbalances in news stories. In her sample, sources are named in about 19 percent of tweets (507 sources quoted overall). Just 11 percent of those quoted were women, thus “women’s voices were relatively silent in the quotes on these reporters’ Twitter streams.” Further, at larger papers, “less than 8 percent of female reporters’ quotes featured women, and male reporters quoted no women at all.” Female reporters at larger news outlets quoted fewer women than their counterparts at smaller newspapers.
Through the use of “@” mentions, however, reporters were “engaging with a more diverse community”: Nearly four of every 10 “@” mentions were women. Artwick concludes: “Reshaping the old rules and hegemonic structures that dominate story content and push-through onto Twitter may be needed to make way for the diversity of voices that can better serve democracy.”
Part of a growing cohort of academics pioneering the subfield of online politics, Karpf provides a short, useful summary of the state of research in this area. For journalists, the works cited page alone is a valuable who’s who — fill up that contact list for the 2014 and 2016 campaigns — but the narrative also underscores some basic truths: The web has not changed many forms of participatory inequality; polarizing candidates frequently win the small-donations race; the “culture of testing” and analytics are changing how campaigns allocate resources; and liberals and conservatives typically use campaign technology differently.
One striking insight: “We are potentially moving from swing states to swing individuals, employing savvy marketing professionals to attract these persuadables and mobilize these supporters with little semblance of the slow, messy deliberative practices enshrined in our democratic theories.” But definitive answers remain elusive on many other fronts. “There is still, frankly, a lot that we do not know,” Karpf writes. For more insights in this area, see Kathleen Hall Jamieson’s response, “Messaging, Micro-Targeting and New Media Technologies.”
Xu looks at how different aspects of Digg (the old, pre-relaunch Digg) influenced perceptions of credibility related to media content. The study explores the consequences of the “bandwagon effect” — whereby attention to content frequently clustered around certain items — through an experiment with 146 undergraduates. It’s a small sample, but it underscores some important ideas. Variables included the number of diggs, source credibility, and recency of content. The results are perhaps predictable: People not only tend to go with the crowd, but they tend to think the crowd must be wise in its judgment.
“Social recommendation, in the form of the number of diggs, was found to have major influences on a variety of outcomes, such as attention and click likelihood toward the feed, evaluation of news credibility and newsworthiness, as well as news sharing behavioral intention,” Xu writes. But the big theoretical takeaway relates to how news organizations need to rethink their approach: “The determining role of social recommendation might present a big challenge for news organizations relying heavily on traditional editorial selection. Whether the news was published by a highly credible source might no longer matter in individuals’ selective exposure to news. Individuals may rely more on social means of information searching and filtering rather than resorting to experts for suggestions.”
This think piece analyzes Facebook’s attempts to create a “more social experience of the Web” and, among other things, its use of like and share buttons that distribute engagement outside the platform and across the web. Gerlitz and Helmond explore “how the launch of social buttons has reintroduced the role of users in organising web content and the fabric of the web — and how the infrastructure of the Open Graph is turning user affects and engagement into both data and objects of exchange.” Their discussion looks at how subtle technical shifts are changing whole paradigms and conceptions of digital life and commerce.
But the paper is not without some tough critiques, particularly given Facebook’s refusal to include a “Dislike” social button option or other ways of registering negative sentiment and data: “[T]he Like economy is facilitating a web of positive sentiment in which users are constantly prompted to like, enjoy, recommend and buy as opposed to discuss or critique — making all forms of engagement more comparable but also more sellable to webmasters, brands and advertisers. While Social Plugins allow materialising and measure positive affect, critique and discontent with external web content remain largely intensive and non-measurable.”
However, as the scholars note, this is all growing more complicated: “The absence of negative affects has until the autumn of 2012 marked the limits of Facebook’s understanding of sociality. The introduction of new activity apps, however, has complicated the affective space of Facebook, allowing for differentiated and even negative activities in relation to web objects, such as to hate, disagree and criticise — while the action ‘dislike’ remains blocked.” Overall, the paper looks at precisely how Facebook’s general filtering of the web is being constructed and the implicit values embedded in the decisions of its developers.
This paper blends in-the-field ethnographic work with bleeding-edge academic theory. Anderson and Kreiss take as their case studies two experiences: One involves the making of voter maps for internal use within the Obama 2008 campaign; the other involves Philadelphia-based newsrooms and their difficult experiences with the quirks of content management systems. In the past, social scientists could look at major societal shifts through much more obvious and observable macro technological advances — the rise of assembly lines, cars, highways, suburbs, computers. Now, many of the important trends are micro. Thus, these researchers explore how Actor-Network Theory (ANT) might help us get a better sense of what’s really going on — how technologies are shaped by people, and shape what people do. This theory treats technology itself as an actor that helps shape knowledge and build communities with common understandings. It requires studying human-technology interaction at a granular level.
At any rate, in both cases studied, certain technological forms — the voter map and the news content system — come to embody assumptions about what is important and how information should be understood: Which people should be targeted, and which ignored? What stories should be told, and who should control that agenda? These questions are often decided in subtle ways by the technical organization of information. Anderson and Kreiss conclude that “to understand power and reform social institutions, and even uproot them, requires attention not just to theories of participation, deliberation, and the public sphere, but the socio-technical engineering of democratic publics and the cultural presuppositions that guide it.”
The researchers look at hundreds of instances of digital activism over two decades, categorizing practices and outcomes. Their evidence dispels some myths: “Frequent news stories about cyberterrorists, cybercrime, and hackers make digital activism seem like a pretty dark art, whereas close comparative analysis of campaign strategies, successes, and failures reveals that persuasion features more highly than violence.” Yes, Facebook and Twitter are common tools, but in different regions of the world other social and communications technologies are also deployed frequently. E-petitions are most common in North America, while microblogging is most common in South America and Asia. Edwards, Howard, and Joyce add some nuance to debates in this area: “No single digital tool in this study had a clear relationship with campaign success. This is consistent with received wisdom. Experienced activists will tell you that using Facebook or Twitter or an e-petition will not guarantee success. Now there is data to demonstrate that using these tools does not even make success more likely, when that is the only factor being analyzed.” Finally, the evidence “challenges cyber-pessimist hypotheses about repressive governments becoming more savvy about digital activism, and thus better able to defeat digital campaigns. This study suggests that there is not a clear change in the rate of campaign success or failure between 2010 and 2012.”
This is a useful paper that journalists looking for the latest angle on holiday shopping might explore. The study’s data were generated by a field experiment and survey of 200 shoppers, focusing on their experience with advice from networks while browsing for items. Shoppers took pictures and used text messages, Facebook, and Mechanical Turk to get feedback about clothing choices. Participants not only found friends’ advice useful but also valued feedback from anonymous crowdsourcing.
The findings “indicate that seeking input from remote people while shopping is a relatively commonplace occurrence, but that most people currently rely on simple voice or text-based interactions to accomplish this,” the researchers write. “Our experiment demonstrated that users found value in using richer media (photos) as well as using emerging social platforms (social networking sites and crowd labor markets) to meet these needs, and that such platforms’ performance characteristics (particularly Mechanical Turk) were generally suitable for such interactions. Based on these findings, we suspect that consumers would find value in a smartphone app designed specifically to support seeking remote shopping advice.” Morris, Inkpen and Venolia ultimately offer some thoughts about what the next generation of shopping advice systems might look like.
(Although not mentioned, it also suggests a possible avenue for media looking to provide more direct, targeted value to consumers, and generate revenue, in an evolving social media ecosystem.)
A super-deep meditation on how language translation operates within Internet-enabled marketplaces at sites such as ProZ.com (based in Syracuse, N.Y.), the study contemplates how the rise of algorithms and globalization are affecting our ideas about work and culture. French philosophy is invoked as Kushner takes a sweeping theoretical look at the future of labor: “The freelancers who become ProZ.com users can instrumentalize the interface even as it instrumentalizes them: by internalizing the logics of entrepreneurialism and learning to operate the ProZ.com interface, they ‘introduce economy’ (Foucault, 1991: 92) into their operations, increasing productivity to the benefit of the industry and extracting some degree of compensation. Those who produce quality translations, receive high ratings, develop relationships with outsourcers and come to attract new clients will easily justify next year’s US$129 membership fee.” In all this, we get a glimpse of the global labor future — and see how the human mind and the computer will be increasingly intertwined in the performance of tasks.