All the calories, none of the great taste.
“We cannot have a society in which, if two people wish to communicate, the only way that can happen is if it’s financed by a third person who wishes to manipulate them.” — Jaron Lanier
We are social creatures, and as such we excel at large-scale collaboration and trust: individually, we are powerless and would never have survived evolution. Our collaboration skills, catalyzed by our ability to trust strangers — you don’t have to know the pilot to trust them to fly the plane — are our major competitive advantage, and have propelled us into air-conditioned offices while leaving other animals starving on shrinking ice floes.
Blindly trusting everyone is naive and leads to all sorts of abuse, so we learned to “trust but verify” and created entities, institutions, in which we placed our trust — the FAA and the accredited pilot school in the example above — that distill trust in our society and thus provide a public good. We have agreed to give those institutions some of our individual freedom and a share of our income, to collectively enjoy the societal infrastructure they provide. We have struck a balance between general interests and personal rights.
Today, as our societies evolve alongside an omnipresent technology with an ever-greater impact on our lives, social contracts must be revised. As in the fable of the boiling frog, because the change has been gradual we often failed to notice how tech has transformed our world, and thus fail to see how those changes now make us vulnerable: the data economy has set the stage for an Orwellian dystopia, and requires urgent reform.
Part 1: Hooked
Social behavior is rooted in our DNA, and we have all enjoyed the benefits of technology that connects us: email, social media, VoIP, messaging and dating apps (once made user-friendly) became widely adopted, often becoming our center of attention. Originally, nearly all of these were freely available and the internet was the Wild West of capitalism: an unregulated, unstructured, non-marketed and non-monetized digital world. Only after the first dot-com bubble burst did investors start to put pressure on tech executives: monetize your creations, or put a price tag on them (yes, at the time, a few companies pitched paid email and chat services to investors, which sounds almost heretical today). Under that pressure, in the late 90s, tech companies had to pivot from growth to margin — much like Uber, Airbnb and DoorDash today — and raise prices to keep up with ROI targets.
The shift from building a customer base to yielding profits is best exemplified by Google, where John Doerr and Michael Moritz (of Kleiner Perkins and Sequoia Capital) urged Sergey Brin and Larry Page to monetize their search engine, superior as it was to its competitors’. The founders were asked to come up with a business model or watch their unprofitable project be divested: Palo Alto was introduced to the rules of Wall Street. As expected, the two bright minds came up with a brilliant idea: use the data generated by user searches to predict users’ interests, and sell those predictions to anyone willing to pay for them. By doing so, Google avoided shifting away from growth — the search engine remained free for users — and became the investors’ darling in the process. Revenues increased 3000% in the period leading up to Google’s IPO.
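As a caricature of that idea, here is a minimal Python sketch, with entirely invented keyword lists, of how raw search queries can be turned into an interest prediction that an advertiser would pay for (this is an illustration, not Google’s actual system):

    from collections import Counter

    # Hypothetical interest categories; the keywords are made up.
    CATEGORIES = {
        "travel": {"flight", "hotel", "visa"},
        "fitness": {"gym", "protein", "running"},
    }

    def predict_interest(queries):
        """Return the category whose keywords show up most across the queries."""
        scores = Counter()
        for query in queries:
            for word in query.lower().split():
                for category, keywords in CATEGORIES.items():
                    if word in keywords:
                        scores[category] += 1
        return scores.most_common(1)[0][0] if scores else None

    searches = ["cheap flight to lisbon", "lisbon hotel deals", "gym near me"]
    print(predict_interest(searches))  # -> 'travel': the prediction an advertiser pays for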
The ad-based internet then gave birth to what Shoshana Zuboff, in her book “The Age of Surveillance Capitalism”, described as a “new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales”. It didn’t take long for Sheryl Sandberg to leave Google for Facebook and replicate the miracle. Most tech products developed since have followed the same model, and enjoyed the same financial success. We see companies like Zynga, owner of FarmVille (where one can trade virtual tomatoes), valued in the billions of dollars because of their data-processing capabilities, built on knowing exactly how one uses one’s phone to trade virtual tomatoes.
In exchange for the vast amounts of personal data we provide, companies can now offer more personalization and an overall improved service, which by itself isn’t so bad. The problem in our relationship with the tech giants is that our bargaining power is so limited, and the information asymmetry so strong, that we are left with no choice but to agree to their terms and conditions. A telling example is the experiment run by a software company that promised $1000 to anyone who claimed it at the end of its terms of service: the company received exactly one claim after four months and 6000 downloads. What legally counts as our true consent is more a sign of our collectively accepted impotence.
For the rest of the economy, seeing tech companies become the most valuable enterprises in the world, partly thanks to zero-marginal-cost products (someone downloading the Uber app costs the company nothing) combined with strong network effects, created an urge to replicate the freemium model across all industries. Nearly all other sectors, irrespective of their nature, now brand themselves as “tech-based”, “data-driven” or “AI-powered”, in the hope of being valued at 20x revenue. WeWork’s fiasco is just the most recent, and no less absurd, example of the frenzy. This reverberates into more monetization of data (as more economic entities now derive, or try to derive, revenue from it), so more money is invested in tech for tracking users, analytics tools grow more sophisticated, and so on. The most notable examples of industries affected by this “techification” are banking, insurance, travel, retail and healthcare. Even our hardware has mutated to follow the trend: no matter its original purpose, almost any device is, or will at some point be, connected to the internet, meaning by design that it must be associated with an identifier such as a MAC, IMEI or IP address (more on that in part 2).
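To make that concrete, here is a tiny Python sketch showing that even the machine you are reading this on readily exposes such an identifier:

    import uuid

    # uuid.getnode() returns the MAC address of one of this machine's network
    # interfaces as a 48-bit integer (or a random stand-in if none is readable).
    mac = uuid.getnode()
    print(f"{mac:012x}")  # e.g. 'a1b2c3d4e5f6': a persistent hardware identifier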
With all that in mind, it’s easy to see why user engagement became the key to generating more data, and it turns out that gamification, but also conspiracy theories, hate speech and clickbait, keep us engaged far more effectively than anything else out there. Tapping into our physiology and the structure of our brains, social media platforms have created the perfect environment for us to ride a constant dopamine rollercoaster, as they evaluate the quality of their products by attention retention rates. The software looks sleek, the overall experience is designed to feel mysteriously magical, and the use of the services is socially rewarding or punishing — that’s when we become hooked, and keep placing our bets on the social media roulette, desperately trying to hit the ultimate jackpot of any social creature: popularity.

Gambling and social media are similar, too, in their distribution of winners and losers: a tiny fraction of the players reap all the benefits of the game, and the betting houses do their best to put them on display, to make every loser believe that they too can make it to the top. In this system, users are naturally encouraged to generate as much behavioral surplus as possible, for the business model to monetize. Cal Newport observed that with the constant need some of us have to hit refresh on whatever social media we use, it has become almost impossible to live a truly intentional life and enjoy our experiences while they happen. On a personal note, it is saddening that our best and brightest no longer try to put a man on the Moon, too busy as they are trying to glue all of mankind’s attention to its pocket computers.
Competition between tech firms and the finite nature of human attention only amplify the need to design products that maximize the time users spend on them. Since we all have only 24 hours in a day, and that time is not expandable, the tech companies are locked in a prisoner’s-dilemma-like situation: every company competes to grab and keep users’ attention, and it is sub-optimal for any firm not to replicate the user retention techniques of its rivals. Once YouTube auto-plays its videos, Netflix has no choice but to auto-play the next episode, Facebook has to automatically start the videos in your feed, and so on, as none of them wants to see users shift their attention away and disrupt the data flow. While we watch users being converted into constantly engaged content generators — not final beneficiaries of the service — we somehow refuse to evaluate the monopoly power of the tech giants by the rent-seeking directed at their real customers: the advertisers. Facebook’s ad price list has so many zeros on it that even minor presidential candidates cannot afford to run major advertising campaigns on the platform: isn’t this the definition of the deadweight loss that antitrust law claims to combat?
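The dilemma fits in a few lines of Python. In this toy payoff matrix (the attention shares are made up for illustration), “autoplay” never pays less than “restrain” whatever the rival does, which is exactly what makes it a dominant strategy:

    # Maps (my_choice, rival_choice) to (my_share, rival_share) of user attention.
    payoffs = {
        ("autoplay", "autoplay"): (50, 50),   # both fight for attention
        ("autoplay", "restrain"): (70, 30),   # the defector grabs the surplus
        ("restrain", "autoplay"): (30, 70),
        ("restrain", "restrain"): (50, 50),   # same split, healthier users
    }

    # Whatever the rival does, "autoplay" is the best response,
    # so both platforms end up auto-playing, as in the prisoner's dilemma.
    for rival in ("autoplay", "restrain"):
        best = max(("autoplay", "restrain"), key=lambda me: payoffs[(me, rival)][0])
        print(f"If the rival plays {rival!r}, the best response is {best!r}")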
In our knowledge economy, data is the new oil and personal data the high-octane kerosene: we are constantly nudged to generate more of it, as it fuels many industries whose owners, governments included, derive valuable insights from it. In this new economy information truly is power, and by sharing our data with entities that did not exist a few decades ago we are reshaping the world’s geopolitical balance: Facebook’s servers and Amazon’s cloud infrastructure now play a non-negligible role in US foreign policy and defense strategy. TikTok and WeChat are part of China’s foreign and domestic soft power, Google’s spam algorithm decides which presidential candidates’ emails end up in your mailbox, and the above-mentioned companies can now predict with unsettling accuracy whom you will vote for: politics, too, is transformed by the shift to a data-driven economy. At the same time, as of today, there are no strong regulatory standards defining how privately-held data must be protected, hardly any laws prohibit data misuse, and there is no framework for designing algorithms that can have an enormous impact on our behavior and political choices. While many private aspects of our offline lives are protected by concepts such as private property, image rights, medical secrecy and legal professional privilege, none of this exists once the same aspects of our lives are digitized — and they increasingly are — and we rely solely on private actors to self-enforce the few outdated regulations that do exist: a lawless internet makes us all vulnerable.
Part 2: Shackled
In 2013 Edward Snowden exposed that most governments rely on the surveillance machine built in Silicon Valley to invade the privacy of their citizens, without necessarily achieving a safer society in the process. What looked like a crisis turned out to be a mere scandal: once the media cycle had passed, no consequential reforms were enacted. Perhaps only Germany, with fresh memories of the Stasi days, had a meaningful response. Our governments went on to explain that mass surveillance is required to keep us safe, and that reducing the extent of data collection would put us all at risk. Some even made the dreadfully Orwellian argument that those who have nothing to hide have nothing to fear.
Wiretapping and eavesdropping are as old as communication itself: Christopher Soghoian, a privacy researcher at the ACLU, goes as far as to say that our entire phone network was designed for surveillance, not communication. Your government’s ability to monitor your communications in order to protect you is not bad in and of itself: it is yet another trade-off between individual rights and collective benefits that we make in a modern society so as not to live by the rule of the jungle. It is the ease with which citizens can be surveilled today, and the ever-broadening scope of that surveillance, that are worrisome:
- The fact that we all use the same tools and devices to connect (there are no special social media accounts for terrorists) supposedly justifies making all our tools and devices hackable, so the police can easily peep into one when needed.
- Data mining techniques, computer processing power and, more recently, machine learning algorithms have made processing large amounts of data fast, timely and cheap.
The argument that we must let the authorities keep an eye on our online lives to stay safe is flawed in three ways. First, because we now know that our everyday devices can be tapped at any time, parallel networks and private encryption protocols are emerging that are harder for the authorities to monitor and crack: ISIS no longer plots its next attack in the comments section below its own Twitter posts. Second, by designing networks and devices to be hack-friendly, with backdoors in their code, we make it possible for anyone with the means and the desire to spy on us: maybe you don’t mind letting your government look into your life, but what about the intelligence services of a foreign power, or a profit-maximizing entity? And it is foolish to assume that the NSA’s own agents cannot be tempted by radical ideology or by the use of violence in pursuit of political goals. Finally, we learned the hard way last century how even a strong European democracy can quickly turn into the most evil of dictatorships: mass data collection and information processing capacities simply confer too much power on any regime, making it almost impossible for its own citizens to rightfully resist it. Any effort to redesign our communications with privacy in mind should therefore not only make it harder to tap into the flow of data in the physical infrastructure (the submarine internet cables and the server rooms in Antarctica), but be specifically built so that it is hard to switch it all back to surveillance mode.
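What “no backdoor” means is easy to illustrate. Here is a minimal Python sketch of a one-time pad, the textbook cipher that leaves a third party nothing to tap: without the key, every plaintext of the same length is equally likely.

    import secrets

    # Minimal sketch of a one-time pad: encryption with no backdoor at all.
    def xor_cipher(data: bytes, key: bytes) -> bytes:
        assert len(key) == len(data), "a one-time pad key must match the message length"
        return bytes(d ^ k for d, k in zip(data, key))

    message = b"meet at noon"
    key = secrets.token_bytes(len(message))  # truly random key, used only once
    ciphertext = xor_cipher(message, key)
    assert xor_cipher(ciphertext, key) == message  # XOR is its own inverse

    # A "backdoor" would be a second key that also decrypts; once it exists,
    # anyone who steals or subpoenas it can read every message.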
“Just think what Goebbels could have done with Facebook” — Haaretz
What is equally unsettling is the “New York Times test” to which we are all constantly subjected online. Initially the test was designed as a behavioral check for executives: if they would not like a decision’s outcome to be published in the paper’s next edition, they should think twice before making it. Today, knowing that our governments can easily track our online activity, we modify our behavior, just as we would in the physical world if someone were looking over our shoulder: we are much less likely to engage in any kind of political opposition or to speak truth to power. The possibility of constant online surveillance has turned us into more obedient and submissive citizens, which will be a problem when the next guy with a funny mustache tries to restore our country’s lost greatness.
Designing our connected world with privacy in mind will certainly make spotting the bad guys online more difficult. So do search warrants, probable cause and the presumption of innocence, and let’s hope no one will soon suggest getting rid of those too in the name of safety. Opinions on the matter may differ, but it makes sense to at least have a public debate and vote on the issue, as the mere ability to tap into the myriad of personal data we generate through an ever-increasing number of devices changes our relationship with the government — these are exactly the conversations we need to have, and so far we have had none. Overall, the balance between individual liberties and the safety provided by our institutions (the armed forces, the intelligence community and the like) has become skewed, as we are asked to give away an increasing amount of privacy for an alleged safety in return.
This skew emerged gradually: in times of crisis, when public opinion most feared a threat, we agreed to give away parts of our privacy and freedom so that the institutions could protect us against the assailing foe. Examples include the Second World War, left-wing radicalism in the ’60s, and religious terrorism in recent years — each time, laws were passed that made us give away personal freedom in the hope of restoring peace. Governmental power and its direct involvement in the economy both tend to increase in turbulent periods: nationalization of industries, universal healthcare and income tax were all born in economic downturns. Not only do crises make radically privacy-invasive ideas sound acceptable, but our elected leaders also enjoy buoyant approval-rating spikes that give them the momentum and legitimacy to enact reform. The same argument is likely to be used today, as we are more scared of the worldwide pandemic than of Big Brother watching us: some of us are keen to trust Apple with analyzing our movements to predict the risk of contamination, and Amazon with delivering sanitized masks and testing kits to everyone. Would you not like a push notification after taking the same bus as an infected person, or to share your Fitbit data to get tested by Google’s new AI?
Once enacted, a reform passed in a time of crisis is surprisingly hard to reverse, even when it seemed to come at the cost of only a temporary privacy limitation. History teaches us that once the response to a crisis is in place there is little, if any, post-crisis review of the measures: no one thought of repealing the Patriot Act after victory was declared over ISIS and US troops were pulled out of Syria. The reforms passed today to face the Covid-19 pandemic will probably stay with us for the next decade and are likely to be invasive to our privacy, especially as Asian countries (which often turn to extreme ways of tracking their citizens, for many reasons) emerge as the example of how to handle this health emergency. More broadly, governments across the globe will ramp up their role in the economy: the push for less government intervention usually rests on the idea that markets allocate resources more efficiently, but right now, more than efficiency, we need coordination and a focus of our resources and effort: big government is back in town.
Multiple studies show that Westerners are now more than ever scared of people who don’t look or eat like them, terrorism still ranks high on the list of our fears, and we have completely failed to eliminate domestic violence.
It looks like industrial-scale data collection is not really aimed at making the world a safer place, but is instead used for economic espionage and political oppression. Privacy and data collection should not be treated as issues of technology – privacy is about power, and right now, because we didn’t get to choose the digital world we live in – it was designed by a small circle of tech executives – we are giving too much of that power away.
As of now, I have yet to be convinced that the safety gains claimed over the past decade are proportionate to the privacy intrusion we have all passively accepted — we are getting all the calories, with none of the great taste. It is hard for some of us even to imagine how to tame a surveillance apparatus that in Asia forces people to obey and work, and in the West nudges people to calm down and buy.
There is good news on the corporate landscape, as people become more aware of the dystopia we have allowed to creep in on us. Privacy associations and journalists have raised the alarm, and more tech companies now claim privacy as a feature of their products. Several messaging apps, like Signal or Telegram, have made so many cosmetic improvements that privacy no longer has to come at the expense of user-friendliness. There is also a notable shift away from advertisement as the only source of income: successful companies no longer depend solely on data monetization — we even seem fine with paying Netflix or HBO 200 dollars a year to enjoy the kind of content we used to get for free — so why can’t Google invoice us based on the number of searches we’ve run in the past quarter, or Facebook become an ad-free, subscription-based platform?
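A back-of-the-envelope sketch (every number below is invented for illustration) shows how modest such a per-search invoice could be:

    # Hypothetical per-search billing; the price is made up.
    PRICE_PER_SEARCH = 0.002  # dollars, i.e. a fifth of a cent

    def quarterly_invoice(searches_per_day: float, days: int = 90) -> float:
        """What a user would owe for a quarter of searching."""
        return searches_per_day * days * PRICE_PER_SEARCH

    print(f"${quarterly_invoice(30):.2f}")  # 30 searches a day -> $5.40 a quarter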
What we desperately need is more oversight of, and reform around, data usage in our modern data-based economies, but several roadblocks persist. After watching the Senate hearings with Mr. Zuckerberg, my hopes for meaningful change are not high: most of the legislators (some twice as old as their constituents) still seem disconnected from the reality of the tech world we live in. Another obstacle, often brought up by opponents of privacy regulation, is the “privacy paradox”: yes, more people claim to care about their privacy online, yet at the same time they gladly give away their personal data to use tech products for free — for most of us, the carrot is just too damn tempting.
Should we engage in tech reform because people care about their privacy online, or let the invisible hand do its job? Should we get paid for the data we generate and make it a publicly traded commodity, so that we all benefit from generating it? Can we reshape the trade-off between us and our institutions, and be safe without having to be watched?
Perhaps the best thing to do right now is to start claiming back our privacy through our online choices. As we adjust our appearance to impossible ideals to prove our social worth, let’s set realistic ideals for our privacy.