The Privacy Puzzle: Little or No Choice

Ashkan Soltani

The policies and common practices guiding online advertising put Internet users in a tough spot when deciding how to protect their privacy. The technical underpinnings of our digital interactions are so complex that the average Internet user doesn’t have the know-how to build their own tools to browse the web, much less to interact securely and privately online. Instead, consumers rely on “free” platforms built by software companies to communicate and browse the Internet. In exchange for free services, consumers often allow these companies to track their activities and target advertising. Meaningful regulatory structure protecting users from online tracking abuses is also lacking; in fact, we even lack a clear sense of what it would mean to take advantage of a user of a free service. Currently, users must choose between accepting the options provided by these platforms and trying to independently navigate a complicated web of privacy tools and techniques. This decision is complicated by the fact that some of the do-it-yourself privacy protection measures available to consumers might put them at risk of violating arcane laws. The current policy landscape governing online tracking is woefully out of date and sometimes protects companies at the expense of consumer choice. As a result of the inconsistencies in this environment, users face a difficult puzzle when they attempt to protect their privacy.

The business model financing technology companies determines the privacy choices users are given. In the case of most platforms (browsers, social networks, phones, etc.), the model is built on monetizing consumers’ data to deliver advertisements. As such, the defaults are typically set to encourage users to share information as broadly as possible, enabling better targeting and measurement. While most of these companies offer users a selection of privacy settings, those settings are also designed with the company’s bottom line in mind. This is a predictable outcome of the powerful incentive to maximize the value of user data by running complicated data mining algorithms over large datasets. Privacy settings can make users feel like they are in control, but the options they present are limited. This, combined with our knowledge that consumers rarely adjust default settings, means that a few companies have implicit control over the privacy settings of the majority of individual users.

There is no guarantee that companies will respect a user’s stated preference not to be tracked, and users often lack the tools to confirm whether their preferences are honored. Additionally, the companies responsible for much of this tracking are increasingly successful at circumventing blocking tools. While they may not undermine their own privacy settings, they are not particularly inclined to adhere to preferences expressed through other vendors’ software. My research has documented numerous cases of companies repeatedly circumventing privacy settings that users had configured in other companies’ products to protect themselves. It is important to note that this kind of circumvention can violate regulations, and the Federal Trade Commission (FTC) has successfully held companies accountable for it. So if a user (or researcher) notices a violation of this nature, there is an opportunity to rectify the situation. However, again, this depends on users being able to observe the infraction, which is far from guaranteed.
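One concrete instance of such a stated preference is the “Do Not Track” (DNT) signal, which browsers transmit as a single HTTP request header. The minimal sketch below, written in Python’s standard library against a placeholder URL, shows how little the mechanism guarantees: the preference goes out as one header, and nothing in the response confirms whether any tracker honored it.

```python
import urllib.request

# A user's "do not track" preference travels as a single request
# header: "DNT: 1". Nothing in the protocol obliges the site, or any
# third-party tracker embedded in it, to actually honor it.
url = "https://example.com/"  # placeholder, not a real ad server
request = urllib.request.Request(url, headers={"DNT": "1"})

with urllib.request.urlopen(request) as response:
    # No standard response header acknowledges the preference. That is
    # the verification gap described above: a user can state a choice
    # but has no way to observe whether it was respected.
    print(response.status, response.headers.get("Content-Type"))
```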

Particularly ambitious users can try to work around the sanctioned choices, but there are pitfalls here as well. There are tools and techniques for masking online movements that a consumer could cobble together to make their behavior harder to track. However, some of these could be interpreted as violating the law. In particular, the Computer Fraud and Abuse Act (CFAA) poses a problem for innovations in privacy protection. A CFAA violation hinges on whether an action allows a user to gain “access without authorization” or “exceed authorized access” to a computer. In some cases, commonplace behaviors like managing cookies, changing browser headers, using VPNs, and even protecting one’s mobile phone from being identified could be construed as attempts to exceed authorized access to content. For example, clearing cookies is a commonly prescribed method of protecting privacy (it limits advertisers’ ability to uniquely identify a given user); however, by periodically clearing cookies, or by using a browser’s private browsing mode, users can easily bypass publishers’ paywalls (e.g., the ten-articles-a-month limit at the New York Times). An unsophisticated judge could read this as exceeding authorized access, making it a potentially prosecutable violation of the CFAA. In other words, by attempting to protect his privacy from one company, a user might “exceed authorized access” elsewhere: clearing cookies to prevent Google from building a profile could also defeat the New York Times paywall.
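To make the paywall mechanics concrete, here is a minimal sketch of a cookie-based article meter. The cookie name, the ten-article limit, and the logic are illustrative assumptions, not any publisher’s actual implementation; the point is only that the entire reading history lives in a cookie held on the user’s machine.

```python
from http.cookies import SimpleCookie

FREE_ARTICLES_PER_MONTH = 10  # hypothetical metered-paywall limit


def check_meter(cookie_header: str) -> tuple[bool, str]:
    """Decide whether to serve an article, given the request's cookies.

    The count of articles read lives entirely in a client-held cookie,
    so a visitor who presents no cookie is indistinguishable from a
    first-time reader and the meter restarts at zero.
    """
    cookies = SimpleCookie(cookie_header)
    count = int(cookies["articles_read"].value) if "articles_read" in cookies else 0

    if count >= FREE_ARTICLES_PER_MONTH:
        return False, f"articles_read={count}"   # show the paywall
    return True, f"articles_read={count + 1}"    # serve article, bump meter


# A reader who keeps cookies eventually hits the limit...
print(check_meter("articles_read=10"))  # (False, 'articles_read=10')

# ...while one who just cleared cookies sends no cookie header at all,
# and the meter quietly starts over.
print(check_meter(""))                  # (True, 'articles_read=1')
```

The discomfort in the CFAA analysis is visible in the sketch itself: deleting the cookie to frustrate a tracker and deleting it to reset the meter are the very same act.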

This combination of circumstances severely constrains users’ options for limiting online tracking and protecting their privacy. Most users stick with the business-supporting defaults set by the company, and even those who deviate are still choosing among options designed to support a business model built on monetized tracking. The ambitious users who step outside these pre-approved choices must invest a great deal of time investigating privacy-protecting strategies and, even when they succeed in protecting themselves, may find themselves on the wrong side of the law. All of these factors make it hard to envision a reasonable and effective way for the average Internet user to protect his privacy.