What’s new in digital and social media research, May 2014: Crowdsourcing, analytics, Twitter patterns, product ratings
From the perils of analytics-obsessed journalism to the promise of micro crowdsourcing, new research papers have furnished a wealth of insights to ponder over the past month.
Below is a sample of new thinking from various corners of the research world.
(Note: This article was first published at Nieman Journalism Lab.)
“Twitch Crowdsourcing: Crowd Contributions in Short Bursts of Time”
UC Santa Cruz and Stanford University HCI Group, for the 2014 ACM CHI Conference on Human Factors in Computing Systems. By Rajan Vaish, Keith Wyngarden, Jingshu Chen, Brandon Cheung, and Michael S. Bernstein.
The researchers present a crowdsourcing platform for Android devices — Twitch — that allows users to make meaningful contributions to things like reporting on local activity (e.g., is the local cafe busy?), ranking stock photos, and helping to structure bits of text with which computers have trouble. Each time a smartphone is unlocked, the user is given a 1- to 3-second task. As the paper’s authors explain, many crowdsourcing projects are undermined by the fact that contributing takes too much time and imposes too much “cognitive load.” Making the required tasks maximally efficient is therefore crucial. Observing the results from 82 users who adopted the Twitch platform — some 11,000 completed microtasks — the researchers conclude that tapping into brief, unused moments of users’ time can be an effective strategy. Responses proved reasonably accurate, and the task completion rate was 37.4 percent. People also got better at the tasks: after the first 10 unlocks, users’ median completion time fell from 2.7 to 1.6 seconds.
“Twitch crowd sourcing can extend beyond the three applications outlined in this paper: opportunities include citizen science, accessibility, education, and scientific discovery,” the paper concludes. “Users might participate in micro-tutoring sessions by marking an English-language sentence as correct or incorrect to help language learners from the developing world and K-12 education.” Further, the authors assert, “We suggest that small contributions made in short bursts during spare seconds can be aggregated to accomplish significant, customized tasks without placing an undue burden on volunteers. We envision that this approach could bring experts into the crowdsourcing fold, overcoming their historical tendency to stay out because of large time commitments. If we succeed in involving a broader set of participants and topic experts, they could unlock many new opportunities for research and practice in crowdsourcing.”
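The aggregation idea the authors describe, pooling many one-to-three-second answers into one reliable result, can be sketched as a simple majority vote. This is an illustrative sketch only, not the paper’s implementation; the `aggregate_microtasks` helper and the example answers are hypothetical.

```python
from collections import Counter

def aggregate_microtasks(votes):
    """Combine many short-burst answers to one question by majority vote.

    votes: a list of answers (e.g., "busy" / "quiet") collected from
    different users at unlock time.  Returns the consensus answer and
    the fraction of votes agreeing with it.
    """
    if not votes:
        return None, 0.0
    answer, n = Counter(votes).most_common(1)[0]
    return answer, n / len(votes)

# e.g., five users report on whether the local cafe is busy:
answer, agreement = aggregate_microtasks(["busy", "busy", "quiet", "busy", "busy"])
# answer == "busy", agreement == 0.8
```

Majority voting is one of the simplest ways such tiny, noisy contributions can add up to a trustworthy signal; real deployments typically also weight contributors by past accuracy.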
“The Ethics of Web Analytics: Implications of Using Audience Metrics in News Construction”
Nanyang Technological University, Singapore, and Missouri School of Journalism, published in Digital Journalism. By Edson C. Tandoc Jr. and Ryan J. Thomas.
A skeptical thinkpiece on the rise of audience segmentation and customization, this essay laments how market forces may disrupt journalistic and ultimately democratic values. Tandoc and Thomas claim that relying on a purely analytics-driven approach runs the “risk of a media ecosystem that panders to, rather than enlightens and challenges, its audience, and thus poses a barrier to the formation of community around shared ideals and collective subscription to the success of democracy.”
They question whether we should celebrate more wholesale audience engagement in driving the news agenda. “[D]espite the somewhat sunny optimism of many journalism scholars,” they write, “we contend that this narrative portends a drift away from journalism ethics, toward an audience-centered free-for-all governed by market logic. If we re-center journalism and its democratic obligations at the heart of the debate, the optimism about the reversal of top-down power structures can instead be read as a deep pessimism about the promise of journalism itself and of journalists’ capability to execute their role-related responsibilities.” The techno-centric view of journalism’s future neglects the crucial role journalism plays in deliberately bringing people together around a shared sense of vital issues, the authors suggest. In the era of web analytics, the news media’s “role should be about understanding what the audience wants and…balanc[ing] this against what the audience needs.”
Also see Tandoc’s study on a similar theme in New Media & Society, “Journalism is twerking? How web analytics is changing the process of gatekeeping,” which draws on interviews and observational data (and twerking).
“From Organizational Crisis to Multi-platform Salvation? Creative Destruction and the Recomposition of News Media”
University of Glasgow, published in Journalism. By Philip Schlesinger and Gillian Doyle.
The study looks at the transition of the Financial Times and the Telegraph from print to digital. The authors conducted a series of in-depth interviews with senior executives and managing editors. The FT has made myriad aggressive changes to adapt to the web, and its staff is now split roughly 50/50 between print- and web-focused roles. Staff hours are allocated differently around the news cycle, though problems remain: “One senior manager conceded that whereas the number of stories published every hour at FT.com throughout a 24-hour period typically increases very markedly in the early evening when the print edition of the newspaper is approaching its production deadline, the known peak periods in usage of FT.com occur elsewhere in the day.” Web analytics are playing a much bigger role, leading to other frictions: “Significant tensions have begun to emerge between enhancing the discoverability of the FT’s content in order to build the relationship with the customer and subtly shifting editors’ news judgements about what matters — all in the cause of growing the revenue stream deriving from increased purchases of digital content.” At the Telegraph, many of the same changes — and tensions — were present, Schlesinger and Doyle found. “The Telegraph is still addressing,” they write, “the matter of how best to achieve a balance between the drive to update stories and the need to offer added value to reinforce reader ‘engagement’, seen as essential to growing the subscriber base.”
The researchers conclude that “as the present transformation places ever-greater emphasis on data-driven models within newsrooms, inevitably new questions will be posed concerning the nature of journalistic practice and its legitimations.”
“Focus on the Tech: Internet Centrism in Global Protest Coverage”
American University, published in Digital Journalism. By Deen Freelon, Sarah Merritt, and Taylor Jaymes.
The study feeds into a running debate about whether news media overemphasize the role and effectiveness of technology in human events, giving it outsized attention because of its perceived novelty, e.g., the endless media chatter over “Facebook revolutions.” The researchers examine news coverage patterns around the Arab Spring protests in Tunisia and Egypt in early 2011 and the Occupy movement protests of September-November 2011, focusing on the content of the top 10 newspaper sites, judged by circulation, and the top 10 English-language tech blogs. They then determined how many of the 795 representative articles displayed Internet “centrism,” whereby “stories had to discuss the role of either a specific online tool or the internet in general in a protest context.” Example: discussions of “social media revolutions.” The data show that Internet centrism was present in 505 of the 795 articles; when articles indicated a position on technology’s social role, most held that the Internet was helpful to the protestors.
Freelon, Merritt, and Jaymes conclude that Internet centrism within newspapers was “more general and positive for the Arab Spring, and more focused on specific uses of online media for Occupy.” By contrast, “tech blogs emerged as more Internet centrist than newspapers within Occupy articles largely because the movement was often used in the former as a catchy hook upon which to hang broader points about the role of technology in society.” In any case, this all suggests that Internet centrism is a “recurring feature of protest coverage.”
“Social Dynamics of Twitter Usage in London, Paris and New York City”
University College London and Brunel University, published in First Monday. By Muhammad Adnan, Paul A. Longley, and Shariq M. Khan.
The researchers looked at geotagged tweets from a period in late 2012, analyzing the patterns of communication across tens of thousands of users in each city. In each place, Twitter usage peaked during midday (roughly 10 a.m. to 1 or 2 p.m.) and again between 7 and 11 p.m. Across all three cities, tweeting was also at its peak midweek — Wednesday and Thursday. Using software to predict ethnicity based on user names, authors Adnan, Longley, and Khan identified 69 distinct ethnic groups in London, 67 in New York, and 69 in Paris. “Brooklyn,” they note, “has [a] high concentration of Scottish, Italian, Jewish, Chinese users and [the] Bronx hosts many Irish, Italian, and Portuguese. The tweets of English and Spanish users are concentrated all around the city.” About 45 percent of users were identified as male and 30 percent as female (some names are unisex and thus ambiguous), suggesting a “dominance of male users on Twitter.” They conclude that the “majority of Twitter users in London and New York City are male and have Anglo–Saxon roots. In Paris, the majority consists of male French users.”
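The daily rhythm reported here, midday and evening peaks, amounts to bucketing tweet timestamps by hour of day and ranking the buckets. A minimal sketch, assuming timestamps are available as Python `datetime` objects; the `peak_hours` helper and the sample data are hypothetical, not the authors’ pipeline.

```python
from collections import Counter
from datetime import datetime

def peak_hours(timestamps, top_n=2):
    """Bucket timestamps by hour of day and return the busiest hours."""
    by_hour = Counter(t.hour for t in timestamps)
    return [hour for hour, _ in by_hour.most_common(top_n)]

# Synthetic timestamps clustered around midday and evening:
tweets = [datetime(2012, 11, 7, h) for h in (11, 12, 12, 12, 20, 21, 21, 3)]
peak_hours(tweets)  # → [12, 21]
```

The same grouping by weekday rather than hour would surface the Wednesday-Thursday peak the authors describe.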
“Rising Tides or Rising Stars? Dynamics of Shared Attention on Twitter during Media Events”
University of Pittsburgh, Northeastern University and Cornell University, published in PLoS One. By Yu-Ru Lin, Brian Keegan, Drew Margolin and David Lazer.
Nieman Lab’s Joshua Benton has spotlighted this important study and its conclusions, and it’s worth saying just a bit more. The study, which looks at Twitter use during major media events in the 2012 presidential election season, concludes that concentrated periods of public attention often favor users with large existing audiences. As you might expect, general network activity increases as do retweeting and sharing behavior. But there tends to be less peer-to-peer communication among average users, and much more sharing and rebroadcasting around elite voices: “References to users or tweets through retweets or replies became significantly more centralized during media events without correspondingly large changes in the average behavior of users,” the authors write. “Crucially, the beneficiaries of this newfound attention were not distributed throughout users with different numbers of followers, but concentrated among users with the largest pre-existing audiences.”
This cuts against speculation that mass events might be precisely the moments of greater digital democracy, when lesser-known users could reach larger audiences and see their messages, through hashtags, bounce across network clusters. Other research has suggested as much. In any case, the findings indicate that deriving “natural laws” for social platforms will be difficult, as Twitter behaves differently at the network level depending on external events. Still, the authors acknowledge, the study’s data are limited, pertaining to just eight domestic events across a small window of time in 2012.
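The centralization the authors measure can be illustrated with a standard concentration statistic such as the Gini coefficient computed over per-user mention or retweet counts. This is a generic sketch of the idea, not the paper’s own metric; the example counts are invented.

```python
def gini(values):
    """Gini coefficient of a distribution of counts: 0 means attention is
    shared equally, values near 1 mean it is concentrated on a few accounts."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula from the sorted cumulative distribution.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

# Mentions spread evenly vs. concentrated on one elite account:
gini([5, 5, 5, 5])   # → 0.0
gini([0, 0, 0, 20])  # → 0.75
```

Computed before and during a media event, a rise in this statistic would correspond to the increased centralization of replies and retweets the authors report.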
“What If We Ask a Different Question? Social Inferences Create Product Ratings Faster”
Georgia Institute of Technology, for the 2014 ACM CHI Conference on Human Factors in Computing Systems. By Eric Gilbert.
Gilbert investigates whether social reviews of products and services across the web might be improved by asking users for a different type of feedback, namely a “social inference.” Instead of asking site visitors how they would personally rate something, they might be asked how they think other people would rate the item in question. It turns out that question wording matters: asking people what they believe others will say meaningfully reduces statistical noise in social ratings. Using Mechanical Turk, Gilbert recruited more than 500 people for an online experiment reviewing movies. Participants were asked both “How do you rate this movie?” and “How do you think other people would rate this movie?”, assigning star ratings in response to each. The goal was to show how question wording affects “variance” — the degree to which outliers and idiosyncratic responses fail to contribute usefully to a representative mean rating. Gilbert notes that the findings, which showed the benefits of the social-inference approach over the personal question, have “widespread applicability. Sites all over the Internet ask users to rate all manner of things: products, experiences, posts, comments, news stories, etc.” This is especially true for the myriad sites that see relatively few ratings, requiring higher degrees of accuracy for the ratings to be collectively useful.
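The variance comparison at the heart of the experiment can be illustrated with Python’s `statistics` module. The two rating lists below are made up for illustration, not Gilbert’s data; they simply show the kind of contrast the study tests, i.e., whether the social-inference framing yields a tighter spread around the mean.

```python
from statistics import mean, variance

# Hypothetical 1-5 star ratings for the same movie under each framing.
personal = [1, 5, 2, 5, 4, 1, 5, 3]  # "How do you rate this movie?"
social = [4, 4, 3, 4, 4, 3, 4, 4]    # "How would others rate this movie?"

# Personal taste scatters widely; guesses about the crowd cluster.
print(mean(personal), variance(personal))
print(mean(social), variance(social))
```

A lower variance means each individual rating is a better estimate of the eventual mean, which is why the effect matters most on sites with only a handful of ratings per item.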