The surveillance privilege

Oleksii Strashnyi
10 min read · Jul 23, 2021

Why some pay for being tracked, while others live under imposed scrutiny.

Photo by Agni B on Unsplash

The technological advances of the past decade (better sensors, greater portability, longer battery life, and more processing power) have allowed a panoply of tracking software and hardware offerings to emerge and become widely affordable. One feature of the tracking industry that I find as surprising as it is worrisome: while some people willingly pay to be tracked, the same level of tracking is often simultaneously imposed on others who would, ironically, pay to set themselves free.

Sleek and fashionable trackers like the Apple Watch, Fitbit, or Garmin products are all examples of devices that people pay a substantial price for. These devices often have the same features as the bulky and inhospitable trackers that law enforcement uses to monitor suspects, people on probation, those awaiting their immigration hearings, and so on. The people being tracked are often compelled to pay a fee for the tracking device itself (and some must go into debt to be able to do so).

In both cases, the wearable technology communicates sensitive data to an external server, where it is processed by the device provider. The real reason these devices exist is precisely to give their providers the ability to gather and analyze the vast amounts of data they collect. Shoshana Zuboff, in her famous work on “surveillance capitalism”, baptized such devices “means of extraction”: the data they provide makes it possible to study, predict, and ultimately control the population using them. Major providers do turn a profit on the sale of their gadgets, but the main income source of all the big tech giants is, inescapably, the secondary market for user targeting, behavioral control, and prediction.

When the device is imposed on the user, as in the case of violent offenders, the target knows that the ankle monitor’s purpose is to monitor and regulate their behavior. The Apple Watch consumer is much less aware that not only is the device collecting more data than the ankle monitor (up to 10,000 data points every 10 minutes), but the information itself is more revealing, as it spotlights both the user’s physical and emotional states: blood pressure levels, the song being played, ECG data, and so on. In some cases, the data that consumer wearables collect would even be unlawful to gather through the monitors police impose on offenders. In sum, fitness tracker users impose parole-like conditions on themselves, and often happily pay a pretty penny for it.

It is precisely because of that high price tag that consumers feel sovereign and immune from the tracking: they have a false sense of empowerment because they self-selected into the data collection. The high pricing allows companies to say “we make money from the device, not from the data” (one of the main reasons Google acquired the high-end, luxury-like startup Luxe after having invested in Fitbit, which makes much more affordable devices). Users who have willingly paid for the tracking device see the technology as a benefit they are getting: a data-driven feature they can show friends at BBQ parties, keeping score and improving their metrics over time. Words such as “smart”, “magic”, and “pal” feed the narrative that users are getting a bargain after paying the steep price for the tech.

In America, Trusted Traveler programs make an interesting trade-off apparent: you submit yourself to scrutiny (or, as I would call it, “disguised surveillance”), you pay for it, and in return you feel privileged because your travel experience is upgraded and polished with treats and favors.

On the other end of the travel-experience spectrum, there is the economic migrant or political refugee who travels to Western countries and is subjected to all sorts of passport controls, monitoring, biometric data collection, electronic surveillance devices, and so on. The surveillance and behavior-modification pressure mechanisms are made explicit, and so is the migrant traveler’s lower social position.

For many years, the technology exemplifying the latter monitoring style was the ankle monitor. But today, tech firms offer “e-incarceration” technology, a telling example of which is described by the maker of ShadoWatch in these terms: an “attractive tamperproof wristwatch” that “incorporates wi-fi, GPS, and network location technology to increase location accuracy” and “includes motion sensors, vibration alerts, messaging, heart rate, and blood pressure detection.” The “high-tech offender-management solutions company” (yes, someone came up with that language) Shadowtrack sells this smartwatch-like device to law enforcement and penitentiary administrations, who then impose fees on people on probation or parole: they have to buy the very device they will be forced to wear. Wearables are not the only gadgets in this surveillance toolkit. Smartphones are another gold mine for border controls across the world, which tap into their data-collection potential, as the phone is often the one piece of technology that migrants have with them at all times.

So, on one hand, monitoring is often imposed on “them”, explained by institutions as a way of using technology to keep “us” safe. On the other, we willingly submit ourselves to the same or a higher level of surveillance after being lured by giveaways and gifts. Here, the warnings of both Orwell and Huxley are concerning precisely because of their relevance:

  • In his novel 1984, Orwell described a Big Brother that is omniscient, that understands us better than we understand ourselves, and that reports to a ruling bureaucratic authority which decides whom to reward or punish.
  • In Brave New World, Huxley’s characters are subject to hidden social control: their wishes are granted before they are even consciously recognized, but the utopia comes at the expense of personal freedom and even free will, as the manipulation mechanism is controlled by a self-protecting governing body.

We are witnessing that those two dystopias can indeed co-exist: they are not only compatible but even complementary. The segregation of technology providers (like those behind the Apple Watch and the ShadoWatch) into separate markets allows each firm to specialize.

How did we let biometric surveillance exist in these two seemingly different shapes (or two identical shapes that are merely perceived differently), and what does that say about our view and analysis of technology that some adopt willingly while others have it imposed on them? Were the people in the two dystopian novels conscious of what they were building as they were building it?

Power, politics, and profits, along with how they are perceived, all have a role to play. Those who have none of the three are more sensitive to the surveillance issue, as they experience and perceive the problem in mostly negative terms. Those who do possess some combination of the three tend to see social power as “on their side” and either overlook the surveillance or embrace it as luxury (in part because of the price tag). When we believe we are part of the winning coalition, we tend to think of technology as working in our favor, and thus often happily pay to install it. And we do so even when we are not the ones reaping its true benefits, and when the gains act as a lure and a smokescreen for someone else’s profit.

Amazon’s Ring surveillance cameras and the Neighbors app are good examples: users purchase them because they expect to feel and be safe, but the data shows the exact opposite: not only are they no safer, they also feel less safe than non-users. Anxiety and worry drive the initial purchase of such tech, and if the products produce more of it, the attractiveness of Amazon’s offering only grows. Consumer marketing is full of other examples of the same cure/disease dichotomy: dating apps, social media, and video game makers have created a self-reinforcing dynamic that pushes users to return to their products even when the product itself becomes the source of their dissatisfaction, the very thing they are trying to free themselves from. The surveillance tech industry has similar dynamics that produce even more nefarious self-reinforcing circumstances: as the feeling of anxiety expands, so does the web of surveillance. Feelings of doubt, mistrust, and worry are normalized society-wide, which erodes our trust in one another and, in a dysfunctional, anxiety-fed feedback loop, leaves us craving even more surveillance directed at the supposedly threatening group.

The perceived presence of two (seemingly) separate classes, the “us” doing the surveillance and the “them” who are its unwilling targets, serves another interesting purpose: it gives pleasure to the superior class, the ones paying to be surveilled, whose pleasure derives from a sense of uniqueness and prestige. Just as luxury-driven needs are fueled by a sense of belonging to a restricted elite (the smaller the customer base, the better), the high price tag of high-end surveillance products amplifies the perceived benefits, as those come from a sense that “they” cannot afford it. And, on the other end of the spectrum, the distress of the subjects of imposed surveillance reassures the real customers of the technology: the law enforcement agencies. The real effect of surveillance tech on offenders is thus more about driving behavior change through a sense of fear (for as long as the surveillance is active) than about long-term rehabilitation.

An interesting example of a mixture of imposed and self-inflicted surveillance is the corporate wellness program offered to employees. In their paper, Ifeoma Ajunwa, Kate Crawford, and Jason Schultz argue that employees volunteer personal data, and more of it, when attracted by the monetary and non-monetary incentives that wellness programs offer, and would not do so in a truly voluntary context. The authors also note that the programs must be described as “voluntary” and “self-care” precisely because they would otherwise fall under regulations that prohibit such personal data collection in the United States. In the absence of a definition of “voluntary”, employers are also free to impose sanctions on anyone who fails to provide the supposedly voluntary information, essentially masquerading sticks as carrots. A definition of “voluntary” was only recently proposed, and the finally adopted legal language will probably be very broad. Corporate wellness programs are essentially imposed, even if initially presented as opt-in products, and the scale of their adoption normalizes and spreads their use, as that scale is read as a sign of employees’ appetite for them. In another article, Frank Pasquale and Gordon Hull argue that the programs’ conception of wellness is “partial and biased”: the improvements to wellness are overshadowed by the “power grab [that] they offer to their implementers”. The authors refer to a 2003 HR journal article that explicitly mentions “establishing […] corporate culture and means of social control” as a key driver for employee wellness programs.

Credit scores, akin to the social credit system imposed on Chinese nationals, serve the same purpose: social dynamics are reshaped when citizens know they are being surveilled, and the “New York Times test” they are constantly subjected to turns them obedient, in the same way that our awareness of someone monitoring our online activity makes us all docile and compliant online. Like employee wellness programs, the scoring systems not only punish but also reward: behave well, and your kids will go to a better school, you will be upgraded on your next flight, or your internet speed will be faster. Note that wherever the social credit system has been implemented, the feedback from Chinese citizens themselves is surprisingly positive: “I feel like, in the past six months, people’s behavior has gotten better and better. For example, when we drive, now we always stop in front of crosswalks. If you don’t stop, you will lose your points. At first, we just worried about losing points, but now we got used to it.” The irony that the system itself might be nudging people to be insincere in the opinions they give about it does not escape me. And that is precisely the point: sincerity and compliance merge, just as being constrained and feeling privileged become one. What may look different on the surface is in fact two sides of the same challenge (its reward and its punishment, if you will): a challenge to the democratic system, which rests on its electorate’s expression of free will.

The carrot given to the winning coalition sanctions the use of the stick on the oppressed. Electronic monitors diverge in shape, price, and origin, but their usage converges on the same focal point: modifying behavior by creating a sense of accountability. Smartphones are even being considered as a viable replacement for ankle monitors for people on parole or probation.

The Venn diagrams representing what we want, what we want to want, what we need, what is good for us, and what is good for society overlap only occasionally. Surveillance tech raises the stakes by dividing society even further into the “haves” and the “have-nots”, and it does so by using the perceived privileges of the few to diminish the protections of the many against scrutiny, manipulation, and the imposition of “voluntary” technologies.

What we need is a more thoughtful approach to how we discuss technology and its impact on our minds and behavior. Culture sits upstream of politics, democracy, and society, and it shapes everything it touches in subtle but meaningful ways. As technology has already changed our culture so much and will continue to do so, we need to recognize the need to analyze, monitor, and regulate it before we are collectively no longer able to.

Our personal choices are never isolated decisions that we make only for ourselves: they have ripple effects on everyone in society, perhaps in ways we are not aware of, especially when it comes to privacy. What really matters is the overall privacy environment in which everyone is immersed: we need to start asking what we are paying for, how the money is made, who the “we” are and who the “them” are, and whether we are indeed okay with how our society is structured.


Oleksii Strashnyi

Poly-curious optimist writing about tech, society, democracy and economics. Non-native English speaker.