Why we need to standardize the privacy consent process and set an expiration date on data use
Updated: Mar 22, 2020
Services that monetize user information, such as Facebook, face constant public scrutiny for their perceived inability to control the distribution of the user data they collect. In an effort to restore public confidence in Internet services, legislation has previously been proposed to mandate that companies adopt safe data storage practices and provide users with the ability to opt out of data sharing. In this paper, I argue that distrust in a service’s ability to protect user data will only continue to increase so long as the data consent process is designed in such a way that it causes an information asymmetry between a user and the service. Furthermore, I argue that correcting for this asymmetry—through the introduction of a standardized privacy consent form—will also curtail an adversary’s ability to spread misinformation on an Internet platform. This consideration is important because it is paramount that we create a sustainable digital ecosystem where users feel confident that they are not the target of predatory online behavior.
Restoring public trust in Internet services
In Internet governance, we currently face two major challenges: 1) establishing standards for the protection of user information on Internet services and 2) combating the spread of misinformation. In most discussions of social media regulation, we treat these two challenges as entirely separate issues when, in fact, both reflect the same underlying problem: users are being betrayed by platforms that are designed to make them feel safe. The protection of user information on Internet services, especially in the case of social media platforms, is intricately linked with the misinformation campaigns that our adversaries conduct through the use of malicious design on a service’s platform. Malicious design is defined as the creation of a user interface that implicitly hides information or tricks a user through deliberate design choices, known as dark patterns. Both challenges need to be addressed in order to improve the public’s trust in the institutions that make up the Internet. For example, the CONSENT Act proposed by Senators Markey (D-Mass.) and Blumenthal (D-Conn.) seeks to comprehensively address the first challenge by mandating consent between a user and an Internet service [1]. However, legislation like this must go one step further and address the second challenge by standardizing the privacy consent process, which is often manipulated to favor the Internet service over the user.
It is important to consider how Internet services use malicious design because it deteriorates the trust that an individual has in the institutions that influence their daily lives, such as large tech firms, the government, and media outlets. Furthermore, it amplifies the ability of adversarial organizations to influence how a country chooses to govern its citizens. Consider the following information asymmetry that arises from the use of dark patterns on an Internet service’s platform and how it leaves users vulnerable to adversarial manipulation. Regulation in this space—through the introduction of a privacy “nutrition label” as initially proposed by Kelley et al.—is paramount to maintaining public trust in our ability to govern the Internet [2].
The shortcomings of legislation that merely mandates opt-in consent on Internet services
The goal of legislation in this space should be to improve a user’s confidence that the Internet services they use can protect their personal information by promoting data collection transparency and safe data storage practices. With this goal in mind, the following regulations have been proposed to comprehensively address the issue of informed data-sharing consent. An Internet service that monetizes user data would be required to:
Notify a user of all data collection and distribution practices, including the types of third-party organizations that receive the shared data, whenever the user signs up or the terms of service are updated.
Require opt-in consent before distributing data, and continue to allow users on the platform even if they do not consent to the sharing of their information.
Implement a policy to anonymize and protect user information that has been previously removed at the request of the user.
Develop suitable data security practices and inform users of any breach.
The above list sufficiently identifies every action a company must take to be in line with “best practices” for protecting consumers in a market based on data monetization. One glaring issue, however, is that what has been proposed does not take into account that many companies have already adopted most of these requirements, yet user distrust in these services continues to increase [3]. Because a majority of Internet services operate in Europe, many have decided, as a blanket policy, to make their services mostly GDPR-compliant in non-European states as well. The current status quo of data sharing in the US illustrates how these proposed regulations are easily circumvented by a company.
I propose the following regulation: 1) mandate a standardized consent form with an expiration date that each Internet service must use (akin to the food nutrition label) and 2) open a platform for users to report companies that they believe are being intentionally deceptive in the way they design their data-consent process. These recommendations are underpinned by the assumption that malicious design leads to an asymmetric information dynamic in the market. In Section 3, I will discuss how previously proposed legislation does not meet the goal of restoring user confidence when Internet services are at liberty to design their data consent form however they choose. Then, in Section 4, I will use Facebook as a generalizable example to argue that the platform’s use of malicious design creates a dynamic of asymmetric information between not only the user and Facebook, but also the user and an adversary intent on spreading misinformation. (The propagation of misinformation is just one example of a consequence of the information imbalance caused by the data-consent process; for another, the pervasive tracking of Internet behavior by insurance companies and loan sharks, refer to Senator Mark Warner’s 2018 white paper on social media regulation [4].) Finally, in Section 5, I will present the “nutrition label” as a solution and address several counterpoints arguing that design is not the real issue.
Dark patterns create imperfect consent
A key component of malicious design is that an Internet service only wants its users to see what it emphasizes on its platform. For example, in the case of advertisement through email, commercial services are required by law to include directions on how to unsubscribe from future emails with each message they send [5]. Inevitably, companies put the “Unsubscribe” link at the very bottom of each email, sandwiched between other links, usually in a light-gray color with the smallest font size possible. In his 2018 white paper on social media regulation, Senator Mark Warner highlights an example of this design type appearing in Facebook Messenger [4]. When the app requests permission to access your contacts, it does not directly ask you. Instead, it displays the message “Text Anyone in Your Phone” and gives you the option of clicking “Learn More” or “Okay.” Clicking “Okay” is equivalent to approving, and clicking “Learn More” will take you to another pop-up.
This example illustrates a couple of subversive design tactics. First, it is common for Internet services to mislead users with statements implying that they will not get the full user experience if they choose to opt out. Second, if a user is adamant about opting out, the service will add several extra steps before the user can fully opt out, presenting an opt-in button at every step. Another common method is the modification of text size, color, and button location on the consent form. Internet services design their consent forms in such a way that users either do not notice that they have the choice of opting out or abandon the process partway through. As a result, users do not possess full knowledge of how Internet services collect and use their data, even though their agreement indicates otherwise.
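The friction described above can be sketched as a simple model. Everything below—the `ConsentFlow` class, the screen names, and the click counts—is a hypothetical illustration of the pattern, not any platform’s actual implementation:

```python
# Hypothetical sketch of a dark-pattern consent flow: consenting takes a
# single click on the first screen, while declining is buried behind a
# chain of extra screens, each of which re-offers the opt-in button.
from dataclasses import dataclass, field

@dataclass
class ConsentFlow:
    # Screens a user must pass through before reaching the real opt-out
    # control (names are illustrative, loosely modeled on the Messenger
    # example above).
    extra_screens: list = field(default_factory=lambda: [
        "Learn More pop-up",
        "Settings page",
        "Confirmation dialog",
    ])

    def clicks_to_opt_in(self) -> int:
        # "Okay" on the framing screen grants consent immediately.
        return 1

    def clicks_to_opt_out(self) -> int:
        # One click to decline the framing screen, plus one per extra screen.
        return 1 + len(self.extra_screens)

flow = ConsentFlow()
print(flow.clicks_to_opt_in(), flow.clicks_to_opt_out())
```

The point of the sketch is the asymmetry: the cost of consenting is constant, while the cost of declining scales with however many screens the designer chooses to add.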
But even if users had complete access to information about how a digital service uses their data at the time of consent, they often lack the foresight to understand what that use actually entails. Several studies have concluded that this tactic of flooding users with information about how their data is used and having them click an opt-in button, known as notice and choice, does not adequately represent consent for this reason [6, 7, 8, 9]. Therefore, design legislation also needs to account for the issue of user foresight.
Dark patterns create an information asymmetry
Between a user and the company
Imperfect consent results in a situation where users understand neither the scope to which their data is being distributed nor how vital it actually is to their experience on the platform. Conversely, Internet services have (in theory) perfect knowledge of the situation because they control both the user data and its distribution. According to Pew Research, only 9% of social media users believe they have “a lot of control” over their information, yet 74% of users describe being in control of their information as “very important” [3]. So long as services implicitly withhold information whose control users view as paramount to their personal safety, user distrust in Internet services will continue to grow until regulation corrects for this asymmetry.
Mounting distrust in a service as a result of asymmetric information is not a new phenomenon in the US. For example, the FDA was established in 1906 after the publication of Upton Sinclair’s The Jungle, which described the conditions of the Chicago meatpacking industry [10]. Customers who wanted to purchase meat could not be certain that the quality of the product they were getting was worth its market price. By providing a standard for quality assurance, the FDA’s establishment actually grew the meatpacking industry as a result of renewed public confidence in the market [11].
Asymmetric information obfuscates adversarial action and amplifies its reach
A problem that is unique to Internet services, however, is how common it is for an adversary to perpetuate distrust of a service. On top of the natural instability caused by user distrust of a company’s ability to responsibly distribute and protect personal information, Internet services have to deal with the artificial instability caused by third parties. Internet services (and social media companies in particular) are an attractive target for adversaries because they have become a de facto critical infrastructure sector, but do not receive the same protection benefits as other sectors. It is particularly easy for an adversary to spread misinformation on a social media platform because they can tailor their attacks to the specific demographics where they know the impact will be greatest. The tactics they use are no different from the ones used by targeted advertisers to maximize sales. And just as a targeted advertisement’s success comes from its use of personal data, so, too, does the success of a misinformation campaign. Two major components of a modern information warfare campaign are the use of bots that constantly publish non-curated content on a platform and the spread of misinformation within social circles [12]. In both cases, the reach of misinformation on a platform is maximized by the ability to target users and leverage their personal information. This assumes that an adversary can access the personal information of the users they intend to target, which the past few years have shown to be almost an inevitability.
To Facebook’s credit, the company is working hard to prevent data leakage to unauthorized parties. But the recent Cambridge Analytica scandal and the annual Verizon Data Breach Investigations Report illustrate three reasons why Facebook is unable to prevent leakage entirely: 1) even if Facebook controls the distribution of its data, it is hard for the company to curate the full list of organizations that receive the data it collects, 2) user information is still susceptible to being stolen through tertiary services that make use of Facebook’s data collection service, and 3) it usually takes a company months to realize it has fallen victim to a data breach that was crafted over the course of just minutes [13]. Several studies have shown that the majority of social media services leak personal information about their users [14, 15]. We are now presented with a situation where an adversary knows more about a target’s user data than the susceptible individual does.
Solutions to malicious design: the “nutrition label” with an expiration date
Standardizing the design of the opt-in consent process is an efficient way of removing this asymmetry. Its use would result in both the user and the company possessing the exact same knowledge about what information is being collected and distributed. This idea was originally proposed by Kelley et al. in 2009 in the context of banking privacy notifications [2]. Kelley et al. propose a grid-style method of displaying information type versus where it will be distributed, with each grid square containing a marker indicating the importance of that information to the user experience. A standardized privacy consent form would limit an Internet service’s ability to deceive through design and improve a consumer’s perceived control over their information by presenting it in a much more digestible format, thus correcting for the information imbalance. Furthermore, setting an expiration date on the privacy consent form will correct for the issue of foresight by reminding users of how a digital service uses their data every time they have to renew the privacy agreement.
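As a concrete sketch, the proposed label can be thought of as a small, machine-readable grid plus an expiry check. The data types, recipient categories, importance markers, and one-year validity period below are all illustrative assumptions, not Kelley et al.’s actual categories or a proposed standard:

```python
# Illustrative sketch of a grid-style privacy "nutrition label" with an
# expiration date. Rows are data types; columns are recipient categories.
# Each cell marks how important that data flow is to the user experience:
# "core" (the service breaks without it), "optional", or "-" (not shared).
from datetime import date, timedelta

LABEL = {
    "contact info":      {"service itself": "core",     "advertisers": "-",        "data brokers": "-"},
    "location":          {"service itself": "optional", "advertisers": "optional", "data brokers": "-"},
    "browsing activity": {"service itself": "core",     "advertisers": "optional", "data brokers": "optional"},
}

ISSUED = date(2020, 3, 22)
VALID_FOR = timedelta(days=365)  # consent expires and must be renewed

def consent_expired(today: date) -> bool:
    """True once the label's validity window has passed."""
    return today > ISSUED + VALID_FOR

def shared_with(recipient: str) -> list:
    """Data types distributed to a given recipient category."""
    return [dtype for dtype, row in LABEL.items() if row.get(recipient, "-") != "-"]
```

Because every service would fill in the same grid, a user (or a browser extension acting on their behalf) could compare services cell by cell, and the expiry check forces the periodic renewal the expiration date is meant to provide.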
Counter #1: Platforms should have the liberty to design their user experience however they want. A counterpoint to regulating the design of a data consent form is that doing so restricts the creative freedom that providers need in order to maximize user experience on their platform. A company wants to streamline the process of admitting and keeping users on the platform, and forcing users to review a standardized consent form every time the terms of service are updated damages its ability to do so. One potential cascading effect of this regulation is that smaller start-ups may no longer be able to function in the market because their revenue streams will drastically decline, and thus a whole array of “free” digital services may no longer be available to the consumer. Such an argument, however, ignores the current trend of instability in the market and the fact that consumers are mostly okay with sharing their data as long as it is used in a non-predatory manner [16]. As illustrated by the FDA example, a correction for the asymmetry may actually lead to growth in the total market size.
It is also argued that it is hard to predict which information collected by a company may be of value and for how long, making such a form impossible to standardize because the situation is so dynamic. And if we choose to standardize the data that an Internet service can collect, we have also chosen to restrict its ability to innovate. Additionally, the scope of data being collected may be so cumbersome that a casual user of a service will largely ignore the consent form anyway. Users may also prioritize the protection of different information types, so any attempt to condense the form into a digestible product may be met with public dissent. But these are merely superficial design concerns that can be easily assuaged: user data can be broken down into core components whose presentation would be digestible to the consumer and still be of value to the company.
Counter #2: An expiration date will just create an unnecessary burden on users concerning the protection of their data. If users are forced to constantly review their data-sharing agreement with a particular company, they may become even more complacent about reviewing said agreement because they have to do the same thing for every digital service they use. This undue burden is a legitimate concern, which is why the design of the nutrition label should be as easy to understand as possible, so that users do not feel bogged down by privacy consent forms every time they sign up for a new service.
Counter #3: This will not fix the public trust issue entirely because data breaches still occur regularly. Two other major critiques of this regulation are: 1) standardizing consent will not matter if companies cannot protect the data they collect, and 2) the creation of a standard consent form may actually benefit the adversary because it will now be significantly easier for them to pick out high-value information whose existence previously remained obfuscated. A privacy nutrition label may be a necessary component of protecting user privacy, but it is not sufficient on its own, and it does not clear companies of culpability if information about their user database is stolen. A counterpoint to the first critique is that we are concerned with upholding public trust in the digital ecosystem, and creating transparency is only one piece of that puzzle. To address the second concern, every digital service should assume it is the target of an attack; relying on platform design to obfuscate the value of the data a company collects should never be a major component of a security policy. Finally, if a data breach were to occur, users would now be much more cognizant of the type of information an adversary may have on them, and thus more aware of and resistant to predatory behavior.
Conclusion: Mandating a privacy consent form is not enough to repair the trust that has been broken
Internet services are financially incentivized to maximize the amount of user data being shared, whereas users, in general, prefer to minimize their online footprint. So long as Internet services have the liberty of designing their platforms however they choose, they will continue to do so in a way that obfuscates actual consent to data sharing. This will only further erode an individual’s confidence that the online services they use will protect their information. Regulation in this space goes beyond mandating upfront consent; we need to regulate the method of consent itself.
As the portion of our lives that exists in the digital space continues to increase, it is paramount that we promote an environment of safety and security, much like we already do with real world government services. Restoring public confidence in our ability to protect user data is only one component of creating that environment, but it is a crucial step towards a sustainable digital ecosystem.
[1] E. J. Markey, “CONSENT Act,” 2018.
[2] P. Gage Kelley, J. Bresee, L. F. Cranor, and R. W. Reeder, “A ‘Nutrition Label’ for Privacy,” in SOUPS, 2009.
[3] L. Rainie, “How Americans feel about social media and privacy,” 2018.
[4] M. Warner, “Potential Policy Proposals for Regulation of Social Media and Technology Firms,” tech. rep., 2018.
[5] Federal Trade Commission, “CAN-SPAM Act: A Compliance Guide for Business,” 2009.
[6] L. Cranor, “Necessary but not sufficient: Standardized mechanisms for privacy notice and choice,” Journal of Telecommunications and High Technology . . . , pp. 273–307, 2012.
[7] F. H. Cate, “The limits of notice and choice,” IEEE Security and Privacy, vol. 8, pp. 59–62, 2010.
[8] K. Martin, “Transaction costs, privacy, and trust: The laudable goals and ultimate failure of notice and choice to respect privacy online,” First Monday, vol. 18, no. 12, 2013.
[9] R. H. Sloan and R. Warner, “Beyond Notice and Choice: Privacy, Norms, and Consent,” 2013.
[10] Food and Drug Administration (FDA), “FDA Basics: When and why was FDA formed?,” FDA Website, 2018.
[11] W. Nugent and W. Cronon, “Nature’s Metropolis: Chicago and the Great West,” The Western Historical Quarterly, vol. 23, no. 1, p. 75, 2006.
[12] “How is Fake News Spread? Bots, People like You, Trolls, and Microtargeting,” 2018.
[13] Verizon, “2018 Data Breach Investigations Report Executive Summary,” tech. rep., 2018.
[14] D. Irani, S. Webb, C. Pu, and K. Li, “Modeling Unintended Personal-Information Leakage from Multiple Online Social Networks,” in Security & Privacy in Social Networks, IEEE Computer Society, 2011.
[15] B. Krishnamurthy, K. Naryshkin, and C. E. Wills, “Privacy leakage vs. protection measures: the growing disconnect,” Web 2.0 Security and Privacy Workshop, pp. 1–10, 2011.
[16] L. Matsakis, “Online Ad Targeting Does Work—As Long As It’s Not Creepy,” Wired, 2018.