As part of our ongoing collaboration with Nieman Journalism Lab, we’ve rounded up the latest in digital- and media-oriented scholarship — picking highlights from disciplines such as computer science, political science, journalism research and communications. (Note: this article was first published at Nieman Lab, and is now archived here in full.)
This month’s edition of What’s New In Digital Scholarship rounds up the findings of eight reports and studies that touch on many of the major themes scholars are exploring: how the media business can survive financially and maintain editorial integrity; how standards are shifting with respect to the use of non-professional sources for news; and how newsrooms are still feeling their way toward best practices in an online world that has different cultural expectations. And there’s some fresh data about how Americans are engaging with political news on social media. Also featured here are studies that relate to some technical issues — Internet surveillance and the mobile “revolution” in the developing world — that are of general concern to global media.
The authors take a nuanced look at the history of indirect government supports that have allowed the press to flourish in past eras and imagine new ways that the state can help a media industry now going through disruptive change. “Given the state’s continued role in realizing and fostering the public sphere,” the authors write, “it is time to move beyond the debate of whether the state should subsidize the press to consider how we can better design supportive policies appropriate for the digital age.” They argue that the state can and should play a role in supporting journalism, while at the same time preserving editorial independence and journalism’s “watchdog” role.
The scholars propose: making more information and data available for the press, in effect providing a “subsidy” by furnishing more material to report and add value to; redoubling support for public broadcasting; helping more nonprofit news organizations such as ProPublica come into being through tax and regulatory policies; and funding more internships and training experiences for young journalists. Interestingly, they also float the idea of rebooting normal copyright procedures in order to help the press: “One idea is to provide financial compensation to journalists and news outlets that allow others in the public sphere to access, use, and remix it as they wish. For example, if the intent of copyright is ‘to promote the Progress of Science and useful Arts,’ we should reverse its specific mechanism of granting creators exclusive rights to control the use, dissemination, and derivations of their work and provide fiscal incentives instead for journalism that is produced for the public domain.”
How did it come to be proper “etiquette” to provide an outbound link to an external source when referencing other online media? And why do we now basically accept this as a best practice? This study looks at how such norms developed; it is based on 21 in-depth interviews in 2011 with political bloggers affiliated with traditional news organizations and non-traditional outlets. The practice, it seems, is rooted not only in notions of “courtesy” but in ideas that links build and strengthen communal ties and establish credibility, according to the study’s sources. Still, within news organizations, content management systems have sometimes made the practice difficult, and journalists and bloggers are not always sure there are institutional guidelines and best practices for linking within their own media outlets, it turns out.
Of course, there is the issue of ensuring maximum time-on-site, but philosophies and values are now changing: “Several journalists described a cultural resistance to linking in newsrooms in past years built around a desire to keep readers within the organization’s website. But all of them also said that deep-seated resistance to linking had begun to fall away, largely because of two factors: the infusion of the Web’s cultural values…and a concerted effort by particular editors to institutionalize linking by incorporating it into the workflow of writing for the Web.” At root, it’s a story of how digital norms have changed newsrooms: “In the case of linking, professional journalism has shown a real willingness to adopt and absorb Web-based cultural values, using links as tools for transparency and networked connection.”
The paper reviews various legal precedents and laws such as the 1996 Communications Decency Act (CDA) and extends their implications to the microblogging platform Twitter. The author concludes firmly that “it could not be more clear that the ‘naked retweet’ — that is, pushing the ‘Retweet’ button to circulate somebody else’s tweet to one’s own followers … would not trigger republisher liability for defamation.” Previous court rulings on aspects of the CDA suggest the law “protects social media users when they share defamatory information with others.” However, there is also the issue of modifying retweets or providing additional commentary on the original tweet. Here there is some legal gray area. It may be the case that “retweets with added content would be protected as long as the republisher does not add new content that is independently defamatory,” but there is a 2008 Ninth Circuit decision, in the Roommates.com case, that could open the door to a libel claim in certain situations. Twitter users should be aware that a “hat tip” (h/t) technique, when “preceded by the Twitter user’s own thoughts, comments, or assertions, is less likely to be granted immunity under” the CDA. Much of this comes down to whether a court would consider a given Twitter user a true “content provider” or merely a “user,” though that distinction has yet to be fully developed in legal theory and case law.
Based on a survey of more than 2,000 U.S. adults conducted in mid-2012, Pew finds that engagement with social networking sites has more than doubled over the past four years among those who are already online (33 percent in 2008, 69 percent in 2012). Many more people say they posted links to political news on social sites: 17 percent of all adults in 2012, compared to just 3 percent in 2008. And among online adults in 2012, 28 percent reported posting political stories or articles on social networking sites, compared to 11 percent among that population in 2008. The data generally show an income gap, with higher-income persons reporting higher levels of engagement with and on social media. Partisan affiliation did not strongly predict levels of online political engagement in most cases, though liberals were slightly more likely to report social networking site usage and engagement with issues because of social media chatter. The report has a range of useful data for anyone interested in how Americans engage in politics.
Surveying the practices of various European technology security firms as well as information from a variety of research papers and interest groups, the author takes a broad look at the practices of observation and analysis of content data passing through the tubes and nodes of the Internet (called Deep Packet Inspection, or DPI). Of course, this is being carried out in an increasingly security-conscious era in which private firms are empowered to perform some state-security functions, the paper notes. There is a certain potential creepiness that is spelled out by the author — the idea that there may be DPI “function creep,” as more and more entities want to conduct this type of surveillance. Issues of net neutrality, political repression, overly intrusive advertising, and file-sharing are discussed. The technological possibilities are only increasing, the author asserts: “The security-industrial complex on the one hand wants to make a business out of developing military and surveillance technologies and on the other hand advances the large-scale application of surveillance technologies and the belief in managing crime, terrorism and crises by technological means. DPI Internet surveillance is part of this political economic complex that combines profit interests, a culture of fear and security concerns, and surveillance technologies.” The author advocates that, in order to combat these Big Brother-oriented dangers, a “paradigm shift is needed from the conservative ideology of crime and terror and the fetishism of crime fighting by technology towards a realist view of crime that focuses on causes that are grounded in society and the lived realities of humans and power structures…”
The author conducted interviews with 25 journalists from outlets around the globe who are based in Britain. She compares her data to that of the last substantial study of London-based foreign correspondents, which was conducted in 1978-81. Many correspondents now are younger and operate without an office; many are “one-man bands” — they operate without news organization peers in-country; and most do not report having stable contracts with a single news outlet. As you would expect, technology has changed this game to some extent: “All correspondents mention that the development of communication technologies makes their work easier, even if it is problematic to sift through the information tide to check its accuracy.” Although the article doesn’t dispute the fact that there are challenges and likely a thinning of the ranks, the role of the foreign correspondent remains a comparatively creative one, as reporters still must find distinctive story angles that connect with their home country audiences and can’t do mere lazy “churnalism”: “While all journalists need information to feed into their reports, the interviews suggest that there is possibly a high degree of reinterpretation of the material journalists get from their sources. This happens to a greater extent than in domestic journalism and is related to the very nature of the foreign correspondent’s assignment.”
Distinctive perhaps for its coining the term “metasourcing” — the new role of confirmation that mainstream media often play in the social media sphere — the paper examines the interplay of elite and non-elite sources during the Libyan conflict, using Gaddafi’s death as a case study. The researchers sample Danish media coverage following these breaking events. The authors describe what is by now a familiar set of dynamics: “Firstly, information comes from a variety of non-institutionalized sources (amateurs/participants/eyewitnesses), who more or less become the reporters of the event, while the institutionalized media, to some extent, are relegated to disseminating this multitude of visual fragments and bits of information rather than synthesize it into a coherent narrative. Secondly, speed appears at times to come before verification, and as a response to the constant flow of incoming unconfirmed information, various and even contradictory versions of an event are reported.” The study then attempts to provide a new vocabulary for all of this: “Elite sources and self-referential media positioned in a new role as metasources use their authority, expertise and experience to comment on the validity of the non-conventional sources, and put them into political and social perspective. While amateur sources bring authenticity, immediacy and proximity to war reporting by documenting events as they unfold, metasources are used as sources-on-other-sources.”
This research highlights the developing world’s patterns of Internet access and examines the tradeoffs they entail. The report notes that “most research on mobile Internet access and usage to date has lacked comparative analyses of any type in which the characteristics or usage patterns of mobile platforms are assessed relative to PC-based platforms.” The scholars comprehensively survey relevant studies to provide a critical framework for evaluating the mobile “revolution” globally. They note that mobile devices are simply not able to store or process as much data as PCs, and this has a variety of consequences: “Mobile-ready Web sites often represent streamlined or watered down versions of the standard Web site. Thus, mobile users often find themselves with access to less information and less functionality than PC-based users when forced to rely on mobile-tailored Web sites.” Further, because mobile devices are typically a much less open platform for Internet access — they often create a “walled garden” environment of apps and design a more constrained experience — the “opportunities, therefore, for mobile users to tap into the full economic potential of the Internet are much more limited. Consider, for instance, the dramatic entrepreneurial opportunities that have been facilitated by PC-based Internet access to develop and launch new online applications, platforms, and services that simply cannot be approximated if a user is limited to access via a mobile device.”
The report’s authors say they are hoping to “inject into the policy conversation a more thorough understanding of how effective such efforts can really be in terms of providing mobile users with the same kind of opportunities to access, produce, and disseminate information as PC users; and to raise a note of caution about the implications of abandoning efforts to promote PC diffusion in light of the potential for mobile leapfrogging. It is important to recognize the potentially significant compromises and shortcomings that come from a policy approach to the digital divide that emphasizes mobile access and largely abandons any emphasis on PC-based access, particularly in light of the fundamental requirement for technology leapfrogging discussed at the outset — that the leapfrogging technology be clearly superior to available alternatives.”