Concerns about the decline in personal privacy have long troubled citizens, scholars and politicians. The issue was most famously raised in “The Right to Privacy,” published in the Harvard Law Review in 1890 by jurists Samuel D. Warren and Louis Brandeis, the future Supreme Court justice.
While Web 2.0 has empowered users to chat freely with friends, speak directly to customer-service reps, check store stock online and more, these transactions typically leave a digital trail that can compromise a user’s privacy and security. Online users now routinely access the names of friends and family members, work histories, relationship statuses, credit card information and bank statements via the Internet. Even one’s reading materials — once considered a bulwark of intellectual freedom — are now effectively public, as articles are circulated and books are recommended on sites such as Facebook and Amazon.
Julia Angwin of the Wall Street Journal has written the “What They Know” series since 2010, documenting “new cutting-edge uses of tracking technology” and analyzing “what the rise of ubiquitous surveillance means for consumers and society.” Much of Angwin’s work focuses on digital surveillance and related concerns, including the dangers of web tracking, corporate resistance to privacy regulations and ways individuals can protect themselves from digital “prying eyes.” Angwin spoke about her work at the Shorenstein Center in February 2013.
As technologies advance — and as the mobile world rapidly emerges as a central arena — there are worries that laws and policies are not keeping up. In February 2013, the U.S. Federal Trade Commission issued a report, “Mobile Privacy Disclosures: Building Trust Through Transparency,” that raised serious questions about data collection and mobile apps. Regulators note that, because so much commerce is moving to mobile, increased oversight is necessary in this space. For more on the future of federal policy, see the 2013 paper “The Next Generation Communications Privacy Act,” by Orin S. Kerr of George Washington University.
Of course, young people are a group of particular concern. A 2013 report from Harvard’s Berkman Center for Internet & Society and the Pew Internet & American Life Project examines how teens seek online privacy information. In a September 2013 survey, Pew found that 86% of respondents said they had “taken steps online to remove or mask their digital footprints—ranging from clearing cookies to encrypting their email, from avoiding using their name to using virtual networks that mask their internet protocol (IP) address.” Fully one-fifth of respondents said a social media or email account of theirs had been compromised.
According to another survey, more Americans are expressing concern about protecting their privacy online, yet they continue to share more personal data than ever. Meanwhile, data mining and other aggressive information-capturing techniques have become commonplace for businesses large and small. Facebook not only uses the personal information its users share to deploy targeted advertising but also sells that information to external vendors. The company also introduced Facebook Graph Search, which lets users draw on site data to conduct complex searches of people, places, interests and more. A 2013 study in the Proceedings of the National Academy of Sciences showed that surprisingly accurate inferences about an individual’s gender, sexuality and ethnic origin can be made from Facebook data.
The damage from stolen information can range from lost job prospects to serious financial trouble if Social Security and credit card numbers are taken. Businesses and governments can be forced to fund costly upgrades to their security infrastructure to fend off cybercriminals. Research has shown that security breaches depress user engagement online, which is bad news for businesses and governments alike.
Of course, revelations about the U.S. National Security Agency (NSA) and its practice of data mining Americans’ phone records and examining some citizen data from major Internet companies — including Google, Microsoft, Facebook, Skype, Apple and Yahoo — complicate this picture even further.
For a sweeping overview, see “The Public and the Private at the United States Border with Cyberspace,” by John Palfrey, then of Harvard Law School. The article, published in the Mississippi Law Journal, both explains the technical details of surveillance and raises broad theoretical questions about the changing nature of privacy. It serves as an accessible but comprehensive primer.
What are the ways that online privacy can be breached, and what are some strategies individual users and companies use to protect online privacy? In the struggle to maintain online privacy, who has the greater responsibility, users or companies? The following are recent academic research studies and reports that address issues relating to digital privacy:
Park, Yong Jin. Communication Research, April 2013, Vol. 40, No. 2, 215-236. doi: 10.1177/0093650211418338
Abstract: “In this paper, we address the important issue of privacy in pervasive communities by experimentally evaluating the accuracy of an adversary-owned set of wireless sniffing stations in reconstructing the communities of mobile users. During a four-month trial, 80 participants carried mobile devices and were eavesdropped on by an adversarial wireless mesh network on a university campus…. Our results provide empirical evidence about the two distinct levels of community information leakage to external observers, who may be able to infer with high accuracy the different social groups and generic communities of people in pervasive networks, while being much less accurate in determining the affiliation of any particular individual to a community.”
Abstract: “The scope of this literature review is to map out what is currently understood about the intersections of youth, reputation, and privacy online, focusing on youth attitudes and practices. We summarize both key empirical studies from quantitative and qualitative perspectives and the legal issues involved in regulating privacy and reputation. This project includes studies of children, teenagers, and younger college students. For the purposes of this document, we use “teenagers” or “adolescents” to refer to young people ages 13-19; children are considered to be 0-12 years old. However, due to a lack of large-scale empirical research on this topic, and the prevalence of empirical studies on college students, we selectively included studies that discussed age or included age as a variable. Due to language issues, the majority of this literature covers the United States, the United Kingdom, the European Union, and Canada.”
Abstract: “Due to the ability of cell phone providers to use cell phone towers to pinpoint users’ locations, federal E911 requirements, the increasing popularity of GPS capabilities in cellular phones, and the rise of cellular phones for Internet use, a plethora of new applications have been developed that share users’ real-time location information online… We find that although the majority of our respondents had heard of location-sharing technologies (72.4%), they do not yet understand the potential value of these applications, and they have concerns about sharing their location information online. Most importantly, participants are extremely concerned about controlling who has access to their location. Generally, respondents feel the risks of using location-sharing technologies outweigh the benefits. Respondents felt that the most likely harms would stem from revealing the location of their home to others or being stalked. People felt the strongest benefit were being able to find people in an emergency and being able to track their children. We then analyzed existing commercial location-sharing applications’ privacy controls (n = 89). We find that while location-sharing applications do not offer their users a diverse set of rules to control the disclosure of their location, they offer a modicum of privacy.”
“Parents, Teens and Online Privacy” Madden, Mary; Cortesi, Sandra; Gasser, Urs; Lenhart, Amanda; Duggan, Maeve. Pew Internet & American Life Project and the Berkman Center for Internet & Society at Harvard University, November 7, 2012.
Findings: “Eighty-one percent of parents of online teens say they are concerned about how much information advertisers can learn about their child’s online behavior, with some 46% being “very” concerned. Seventy-two percent of parents of online teens are concerned about how their child interacts online with people they do not know, with some 53% of parents being “very” concerned. Sixty-nine percent of parents of online teens are concerned about how their child’s online activity might affect their future academic or employment opportunities, with some 44% being “very” concerned about that. Sixty-nine percent of parents of online teens are concerned about how their child manages his or her reputation online, with some 49% being “very” concerned about that. Some of these expressions of concern are particularly acute for the parents of younger teens; 63% of parents of teens ages 12-13 say they are “very” concerned about their child’s interactions with people they do not know online and 57% say they are “very” concerned about how their child manages his or her reputation online.”
Overview: “Social network users are becoming more active in pruning and managing their accounts. Women and younger users tend to unfriend more than others. About two-thirds of Internet users use social networking sites (SNS) and all the major metrics for profile management are up, compared to 2009: 63% of them have deleted people from their ‘friends’ lists, up from 56% in 2009; 44% have deleted comments made by others on their profile; and 37% have removed their names from photos that were tagged to identify them. Some 67% of women who maintain a profile say they have deleted people from their network, compared with 58% of men. Likewise, young adults are more active unfrienders when compared with older users.”
Abstract: “This study investigated the association between trust in individuals, social institutions and online trust on the disclosure of personal identifiable information online. Using the Internet attributes approach that argues that some structural characteristics of the Internet such as lack of social cues and controllability are conducive to a disinhibitive behavior it was expected that face-to-face trust and online trust will not be associated…. In contrast with the Internet attribute approach, the effect of trust in individuals and institutions was indirectly associated with the disclosure of identifiable information online. Trust in individuals and institutions were found to be associated with online trust. However, online trust only was found to be associated with the disclosure of personal identifiable information. While trust online encourages the disclosure of identifiable information, perception of privacy risks predicted refraining from posting identifiable information online. The results show a complex picture of the association of offline and online characteristics on online behavior.”
Abstract: “For years employers have used social networking sites (SNS) such as Facebook, Twitter, MySpace, Google and LinkedIn to dig up incriminating evidence on prospective or current employees. Now credit reporting agencies (CRA) may conduct ‘social media background checks’ on employees as well. The Federal Trade Commission (FTC) has given companies, like Social Intelligence, the stamp of approval to rummage around the Internet for anything a potential job candidate has done or said online in the past seven years. Both CRAs and employers must comply with the Fair Credit Reporting Act (FCRA). This article addresses the legal ramifications of social media background checks and the difficulty in applying the FCRA to this new employment practice.”
Abstract: “The popular media has reported an increase in the use of social networking sites (SNSs) such as Facebook by hiring managers and human-resource professionals attempting to find more detailed information about job applicants. Within the peer-reviewed literature, cursory empirical evidence exists indicating that others’ judgments of characteristics or attributes of an individual based on information obtained from SNSs may be accurate. Although this predictor method provides a potentially promising source of applicant information on predictor constructs of interest, it is also fraught with potential limitations and legal challenges. The level of publicly available data obtainable by employers is highly unstandardized across applicants, as some applicants will choose not to use SNSs at all while those choosing to use SNSs customize the degree to which information they share is made public to those outside of their network. It is also unclear how decision makers are currently utilizing the available information. Potential discrimination may result through employer’s access to publicly available pictures, videos, biographical information, or other shared information that often allows easy identification of applicant membership to a protected class.”
Findings: “This article has reviewed several privacy risks related to personalization and discussed technologies and architectures that can help designers build privacy-preserving personalization systems. While no silver bullet exists … there are technologies and principles that can be used to eliminate, reduce and mitigate privacy risks. Furthermore, existing approaches are not mutually exclusive and should be considered as complementary in protecting users’ privacy in personalized systems. Pseudonymous profiles and aggregation can be used when personalization information need not be tied to an identifiable user profile. Client-side profiles are useful when personalization services can be performed locally. User controls should always be considered on top of other technical approaches as they will likely make the personalized system more usable and trustworthy. We envision advances in all of these areas and more systems that incorporate multiple techniques in their privacy protection mechanisms.”
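The pseudonymous-profile principle described in these findings can be sketched in a few lines. This is a hypothetical illustration, not the authors’ implementation; names such as `new_pseudonym` and `profile_store` are invented here. The idea is that personalization data is keyed by a random pseudonym generated and held on the client, so the server-side profile is never tied to an identifiable person.

```python
import secrets

def new_pseudonym():
    """Generate a random identifier with no link to the user's real identity."""
    return secrets.token_hex(16)  # 32 hex characters of cryptographic randomness

# Server side: interests are keyed by pseudonym only -- no name, email or IP.
profile_store = {}

# Client side: the pseudonym is generated and stored locally, so only the
# user's own device can connect the server-side profile to a real person.
pseudonym = new_pseudonym()
profile_store[pseudonym] = {"interests": ["cycling", "jazz"]}

print(len(pseudonym))  # 32
```

Client-side profiles, the complementary approach the authors mention, go one step further: the interest data itself stays on the user’s device, and only the personalization result is requested from the server.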
Abstract: “This study examines the effects of Real Name Verification Law in several aspects. By applying content analysis to abundant data of postings in a leading discussion forum that is subject to the law, the results suggest that Real Name Verification Law has a dampening effect on overall participation in the short-term, but the law did not affect the participation in the long term. Also, identification of postings had significant effects on reducing uninhibited behaviors, suggesting that Real Name Verification Law encouraged users’ behavioral changes in the positive direction to some extent. The impact is greater for Heavy User group than for Light and Middle User groups. Also, discussion participants with their real names showed more discreet behaviors regardless of the enforcement of the law. By analyzing the effect of this policy at the forefront of Internet trends of South Korea, this paper can shed light on some useful implications and information to policy makers of other countries that may consider certain type of Internet regulations in terms of privacy and anonymity.”
Abstract: “This research examines the responses of online customers to a publicized information security incident and develops a model of retreative behaviors triggered by such a security incident. The model is empirically tested using survey data from 192 users of a recently compromised website. The results of the data analyses suggest that an information security incident can cause a measurable negative impact on customer behaviors, although the impact seems to be largely limited to that particular website. The tested model of retreative behaviors indicates that perceived damage and availability of alternative shopping sources can significantly increase retreative behaviors of victimized customers, while perceived relative usefulness and ease-of-use of the website show limited effects in reducing such behaviors.”
Abstract: “Online Behavioral Advertising (OBA) is the practice of tailoring ads based on an individual’s activities online. Users have expressed privacy concerns regarding this practice, and both the advertising industry and third parties offer tools for users to control the OBA they receive. We provide the first systematic method for evaluating the effectiveness of these tools in limiting OBA. We first present a methodology for measuring behavioral targeting based on web history, which we support with a case study showing that some text ads are currently being tailored based on browsing history. We then present a methodology for evaluating the effectiveness of tools, regardless of how they are implemented, for limiting OBA. Using this methodology, we show differences in the effectiveness of six tools at limiting text-based behavioral ads by Google. These tools include opt-out webpages, browser Do Not Track (DNT) headers, and tools that block blacklisted domains. Although both opt-out cookies and blocking tools were effective at limiting OBA in our limited case study, the DNT headers that are being used by millions of Firefox users were not effective.”
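Of the tools this study evaluates, the Do Not Track header is mechanically the simplest: the browser adds a `DNT: 1` header to each request, and an ad server may, voluntarily, check for it. A minimal server-side check might look like the following sketch (the function name is illustrative, not from the study):

```python
def client_sent_dnt(request_headers):
    """Return True if the client sent the Do Not Track signal (DNT: 1).
    HTTP header names are case-insensitive, so normalize before comparing."""
    normalized = {k.lower(): v.strip() for k, v in request_headers.items()}
    return normalized.get("dnt") == "1"

print(client_sent_dnt({"User-Agent": "Firefox/18.0", "DNT": "1"}))  # True
print(client_sent_dnt({"User-Agent": "Firefox/18.0"}))              # False
```

Because honoring the signal is entirely at the tracker’s discretion, sending the header changes nothing unless the ad network chooses to act on it, which is consistent with the study’s finding that DNT headers were not effective at limiting behavioral ads.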
Abstract: “This work seeks to understand what ‘they’ (Web advertisers) actually do with the information available to them. We analyze the ads shown to users during controlled browsing as well as examine the inferred demographics and interests shown in Ad Preference Managers provided by advertisers. In an initial study of ad networks and a focused study of the Google ad network, we found many expected contextual, behavioral and location-based ads along with combinations of these types of ads. We also observed profile-based ads. Most behavioral ads were shown as categories in the Ad Preference Manager (APM) of the ad network, but we found unexpected cases where the interests were not visible in the APM. We also found unexpected behavior for the Google ad network in that non-contextual ads were shown related to induced sensitive topics regarding sexual orientation, health and financial matters. In a smaller study of Facebook, we did not find clear evidence that a user’s browsing behavior on non-Facebook sites influences the ads shown to the user on Facebook, but we did observe such influence when the Facebook Like button is used to express interest in content. We did observe Facebook ads appearing to target users for sensitive interests with some ads even asserting such sensitive information, which appears to be a violation of Facebook’s stated policy.”
Findings: “It was found that 63% of users agreed with a statement of concern for third parties monitoring activities, about half of the respondents agreed with a concern for knowledge about a user’s location and a little more than half agreed to concern about inference of demographic information. It was found that females are more concerned about these issues than males. In terms of possible actions, a majority of users report using an ad blocker tool and even more delete cookies at least some amount of time. Using an opt-out mechanism or removing browser history is done by less than 20% of users. Despite expressing more concern for information known by third parties, females are not significantly more likely to take actions that may limit what is leaked to these third parties. A contributor to this discrepancy is that females were much less likely to know their settings for many of the actions, indicating less familiarity with them.”
Summary: “Existing privacy research analyses transactions between individuals and organisations. The expanded model presented in this paper includes the other organisations that are parties to those transactions. The model also allows for a new aspect of PIP — that personal data have a life of their own. After its movement from first party individual to second party vendor/provider, data move to third party integrators who develop an individual history that incorporates significant public and private data. The model highlights interorganisational data sharing and enables discussion of shortcomings of current privacy practices. Emerging technologies demonstrate how new nano-sized technologies for location awareness and programmable remote action continue to evolve privacy issues…. The perspective is that personal privacy is important but it must counterbalance realities of escalating terrorism and a need for some personal privacy erosion in the interest of social good. However, maintaining a balance between individual control of personal information and protection of societal needs should be a public discussion informed by further privacy research.”