
Shoshana Zuboff: We thought we were searching Google but in reality Google was searching us

First published in Disenz, 22 June 2022. Photo: Michael D. Wilson

For the pioneers of the internet, the World Wide Web was inherently good. Its fundamental benevolence could occasionally be misused by criminal networks or authoritarian regimes. Consulting firms like Cambridge Analytica exploited user data to manipulate voters and fuel right-wing populism. Private profiteers spread misinformation about COVID-19 and marketed their own “alternative” products. Social media platforms became breeding grounds for hate speech, which in many cases escalated into real-world violence. Electronic surveillance of citizens became routine even in democratic societies and intensified during the recent pandemic.

But according to the owners of major internet companies, these are merely isolated anomalies, solvable through better algorithms and more effective labeling of false information.

American scholar Shoshana Zuboff strongly disagrees. In her view, these are not isolated incidents but systemic consequences of a new form of capitalism, which she analyzes in her landmark book The Age of Surveillance Capitalism (2019).

In surveillance capitalism, the key commodity is our personal data: collected, processed, and traded by large tech and data companies without our knowledge or consent. From our social media posts, search histories, movements, clicks, photos, emotions, and even heartbeats, these companies build predictive models to shape future behavior. According to Zuboff, this aggressive data extraction causes even greater harm than the industrial capitalism of the past, which devastated lives, societies, and ecosystems through unchecked exploitation of human and natural resources.

Surveillance capitalism, she argues, threatens the very foundations of democracy. It replaces civic agency with authoritarian control, enabled by vast informational asymmetries that undermine our ability to make free choices.

As a social psychologist and sociologist, Zuboff has spent more than two decades studying the societal transformations driven by digital technologies. She is the author of several books on the future of work, but her most influential work is The Age of Surveillance Capitalism, in which she systematically examines the production logic of the data economy. She argues that big tech will not regulate itself and that legislation is needed to ban at least the most harmful practices of surveillance capitalism.

Zuboff also appeared in the documentary The Social Dilemma (2020), which explores the negative impact of social media on mental health and society at large.

Slovenian schools were closed for many months during the COVID-19 pandemic. A few weeks ago, pupils protested against the lockdown in several Slovenian cities and demanded that the government reopen schools. The government, however, declared their protests illegal, and some pupils were fined – as were their parents and supporters. The police were allegedly using facial recognition software to identify and fine the organisers of the protests. And they collected the images …

From their social media accounts, I guess?

Most likely – according to media reports.

This is happening all across the world. Private companies like Clearview AI have collected billions of our photos from the internet: from websites, blogs, social media platforms and other sources they consider “open”. They have built enormous databases of people’s faces, created commercial products, and offered their services to law enforcement and intelligence customers.

Is it not ironic? Those pupils have been immersed in the internet since they were born. They use social media to communicate and socialise, to mobilise and learn, to share their experiences and memories with friends and family members … They never suspected that such a private memory could get them punished one day in their future.

And they would have never agreed to such use of their photographs. But nobody has ever asked them. They have never given such consent. As soon as you put something on social media, it is counted as the private property of the corporation that owns that media platform. And you have no control over it.

We have journeyed deep into a territory where there are no laws to protect us. During the first twenty years of this internet experiment, these corporations faced very little law to constrain or impede them. They have been able to claim that all human-generated data is their private property. This does not mean only our photos and other content that we actually give them. They can also claim all the data they secretly take from us without our knowledge.

What kind of data?

One example is facial recognition. Yes, we gave them our photos. But we didn’t give them our fears, our anxieties or confusions, our sadness, our happiness … We didn’t give them those things.

That can also be seen from our photos?

They do not use photos only to recognise us. They use our photos and videos to analyse our micro-expressions, which are validated against finely tuned emotional states. They can use our photos to extract our emotions, and emotions are, of course, prime data for predicting behaviour. And predicting behaviour is the ultimate goal of their data collection. All the big players are using machine learning and AI on human-generated data to create behavioural predictions. And ultimately that’s what they’re selling.

This business model emerged almost by accident. After the dot-com bubble burst in 2000, internet companies were forced to figure out how to actually make money. Google was one such company. Its founders realised they could no longer provide only free internet search if they wanted to survive and grow.

That is why they started to sell ads?

Google settled on selling advertising. But they also promised that their advertising would be more targeted and thus relevant to their users. How? Because they realised they had also collected and stored enormous quantities of the data by-products that came with their search engine.

By-products?

Click patterns, location, spelling, and other pieces of detailed behavioral information that were produced with each Google search. Such information can help them better target advertisements to their users. But targeted advertising was only the first step for Google.

Google and other companies were seeking new sources of revenue to survive commercially beyond their initial goals. And people at Google realised they were sitting on a new kind of asset that I call our “behavioural surplus”: the totality of information about our every thought, word and deed, which could be traded for profit by predicting – but also influencing and modifying – our every need.

We thought that we were searching Google but now we understand that Google was actually searching us. Google has created the first market to trade in human futures: online targeted advertising based on predictions of which ads users would click. When the company went public, their revenues increased by more than 3,500 percent! This incredible number represents what I call the surveillance dividend.

Is the idea that investors will prefer the companies that present themselves as “smart”?

Yes, and companies that present their services as “personalised”, exactly. Many start-ups and established companies followed Google and shifted their business models toward surveillance capitalism and expected outsized revenues. Facebook has been entirely built on the promise that enormous money can be made from selling human futures. The surveillance dividend has also intrigued other sectors of the economy from retail and finance to entertainment, healthcare, and education.

One such example is the Ford Motor Company, the inventor of the 20th-century mass production economy. Hit by slumping car sales, they would like to reinvent themselves as a data company. How? By reimagining their vehicles as a “transportation operating system”. Modern cars collect enormous amounts of user data, and Ford executives expect to make a fortune monetising it.

So cars can become another gold mine of information by-products?

Indeed. Cars – as well as most other smart electronic devices – are equipped with numerous sensors, cameras, gyroscopes, microphones, microprocessors, antennas and computers. Sure, they need these components for many good reasons: for better road safety, for more convenience, and to improve your driving experience. Or simply: to make your new car a better product.

But the surveillance dividend is not about making better products. It is about data extraction. Surveillance capitalists want to enter your car and your home. They want to know what you say and do within their walls. They want your medical conditions and your physical movements around the city. They want to know what you eat, what you buy, and which TV shows you stream. They will record your voice, your location, all the streets and buildings in your path, and the behaviour of all the people in your city. They want your children’s play time and their schooling; your brain waves and your bloodstream. Nothing is exempt.

You think you put an app on your smartphone to track your diabetes symptoms, your fitness, your ovulation, or to help you find your way in the metro system. But in fact every app is a mule. For surveillance capitalism, every app is a mule for extracting and transporting data from your phone.

To the computer clouds?

To data centres and ultimately to the AI systems that produce predictions.

But how will surveillance capitalists use my brain waves, my bloodstream, and my television habits?

Surveillance capitalism’s economic imperatives were refined in the competition between tech companies to sell certainty. So when they are selling, for example, targeting information to the advertisers, the targeting information is in fact a behavioral prediction.

Over the last two decades, I have observed how those young companies have become surveillance empires that are powered by global architectures of behavioral monitoring, analysis, targeting and prediction that I have called surveillance capitalism. As a consequence, they have engineered a fundamentally anti-democratic epistemic coup that is marked by the unaccountable power that accrues to such knowledge.

Epistemic coup?

In an information civilization, societies are defined by questions of how knowledge is distributed. Who knows? Who decides who knows? Who decides who decides who knows? Surveillance capitalists now hold the answers to each question. They claim the authority to decide who knows by asserting ownership rights over our personal information. They defend that authority with the power to control critical information systems and infrastructures. But we never elected them to govern and this is the essence of the epistemic coup.

The epistemic coup proceeds in four stages. The first is the appropriation of epistemic rights. Surveillance capitalism originates in the discovery that companies can stake a claim to people’s lives as free raw material for the extraction of behavioral data, which they then declare their private property.

So we are not products as the old saying goes, but sources of raw material?

Indeed. And the extraction of data is engineered very intentionally to occur outside of our awareness, because we would never agree to it.

The second stage of the epistemic coup has brought a sharp rise in epistemic inequality. This is the difference between what I can know and what can be known about me. The third stage – the stage we are living through now – introduces epistemic chaos. It is caused by the profit-driven algorithmic amplification, dissemination and microtargeting of corrupt information. And much of this information – we can also call it misinformation, disinformation, or propaganda – is produced by coordinated schemes of disinformation such as troll farms.

Its effects are very much felt in the real world. Epistemic chaos splinters shared reality, poisons social discourse, paralyzes democratic politics and sometimes instigates violence and death.

As we have seen in Myanmar where Facebook posts incited violence against the Rohingya minority?

Or with the storming of the US Capitol after Donald Trump’s attempted political coup. And there are plenty of other similar examples, unfortunately.

And what is the fourth stage of the epistemic coup?

In the fourth stage, epistemic dominance is institutionalized. Democratic governance is replaced with computational governance by private surveillance capital. Each stage builds on the previous one and epistemic chaos prepares the ground for epistemic dominance by weakening democratic society.

In what way?

These past two years of pandemic misery and Trumpist autocracy have magnified the effects of the epistemic coup. It is now obvious that we may have democracy, or we may have surveillance society, but we cannot have both. A democratic surveillance society is an existential and political impossibility. We have granted surveillance capitalists a kind of surveillance exceptionalism, giving some young entrepreneurs an opportunity to accumulate infinite information and unaccountable power – without any democratic mandate or democratic oversight.

Surveillance capitalists claim that they can use their information monopolies, their computing power and their algorithms to predict and shape the future: future markets, future societies, and future elections. This is their most important sales pitch. But how realistic are such promises? Every era of human history has its own oracles – from the great Pythia of Delphi to Nostradamus and Cambridge Analytica.

Well, you have to understand technological development as a movie, not as a photo. So, you know, what surveillance capitalists could do in 2005 is very different from what they can do today and will be able to do tomorrow. These huge knowledge concentrations are increasing. We can only speculate about the speed and scale at which they are able to collect, store, and process the data. A leaked Facebook document from 2018 indicated that their AI backbone was ingesting trillions of data points every day and producing six million behavioral predictions every second. That gives us an idea of the scale. And that was four years ago.

And we also know that computing power keeps growing exponentially.

Indeed, the exponential function is crucial for understanding neural networks and all the other information technologies that have also been developing exponentially. It is so easy for people to take a shot at this and say, oh, well, you know, I am still getting the same targeted ads that I got four years ago for that same bathrobe that I already bought.

Which is true.

Which is true. But that really overlooks the fundamentals of the situation, which is the scale of data collection and the acceleration of machine learning and analytical capabilities as long as all of this continues to not be illegal. You just have to look at their predictive capability today and compare it to where they started in 2001.

And where they are now.

And where they are now and where they are going to be twenty years from now.

We know from the investigative reporting work that has been done that the Trump 2016 presidential campaign was able to use five thousand data sets to analyze black voters in the three swing states of that election: Michigan, Wisconsin and Ohio.

We also know they were able to use these data sets to come up with detailed political and psychological analyses of black voters and, through that, to create segmentations that would allow them to target the black voters in each of these states who were regarded as deterrable; able to be deterred from voting. And we know that just as we see now in my country the Republican Party is determined to limit voting rights because the only way that there is any hope for minority rule in this country is by excluding people from voting.

Why would they want to deter people from voting? Do they not use strong populist rhetoric to mobilise their voter base?

The Republicans are increasingly not only a minority, but a fringe party. Their fringe voice is getting louder because of the economics of disinformation. But it is also getting crazier, more on the fringe. So in the real world, the Republican Party represents an increasingly small number of people, a minority of people. And the leaders have been shockingly frank, including Mr. Trump himself, about saying: if people get to vote, there is no way we will ever have a Republican hegemony.

So right now, in statehouses across the United States where there are Republican majorities, they are passing draconian limits on voting, the likes of which we have not seen since Jim Crow.

The point is that in the urban areas of these swing states in 2016, we know that the Trump campaign used all of the data, the analytics, and the targeting mechanisms of surveillance capitalism to identify deterrable voters and to use the whole range of targeting tools to persuade them not to vote on Election Day.

Were they successful?

We know that a significant number of black citizens did not go to the polls because of these targeting mechanisms, because we can compare the voter turnout numbers in that group to four years before and we can see the significant difference. So, yes, they were successful.

If you still want to complain about the ad for that bathrobe that you have already purchased, fine. But look at the big picture here. This is happening, and it is happening now. Disinformation – what I call epistemic chaos – and the way it has taken over our societies is a direct effect of human-generated data at scale: the analytical abilities to turn it into fine-grained targeting that can zoom from the individual to the collective to drive polarization, divisiveness and so on and so forth. Because the more corrupt the data, the more engagement it draws, and the more extraction it enables. This increased extraction, with its economies of scale, means better predictions. All of this is happening right in front of our faces.

Epistemic chaos is therefore good for their business? Or, as you wrote in the New York Times Op-Ed: asking a surveillance extractor to reject content is like asking a coal-mining operation to discard containers of coal because it’s too dirty?

Indeed. Surveillance capitalists are largely producing epistemic chaos through their own mechanisms because it is good for business. Mark Zuckerberg has had many opportunities to reengineer the algorithms so they would stop driving the formation of ultra right-wing groups. They could have stopped amplifying far-right messaging and conspiratorial messaging. Their internal researchers have identified this dynamic and proposed many solutions. But because those solutions would inevitably hurt growth, they were abandoned. That, reportedly, was the reason Zuckerberg refused to implement any of these remedies.

Zuckerberg decided, as early as 2015 during the primaries, that he would do what he had to do to keep the cash flowing. And if democracy suffered, if US society was torn apart, if German and French societies were torn apart, if civilians in Myanmar became the subjects of a genocide because of their religious affiliation – so be it. The same has happened again with disinformation about COVID-19 and vaccines. It is all just business: monetisation and growth activities. And these companies have all refused to stand down.

It may be a naive question, but – how is this legal?

During a congressional hearing, a member of Congress asked Jack Dorsey, then CEO of Twitter: why did you allow such content to stay up? And Mr Dorsey said, as Mr Zuckerberg has said hundreds of times before him: because it did not violate our policy.

But come on! What kind of policy is this? This is the question that we have been asking of Facebook all along. Why aren’t you taking this down? This is an existential threat to every lawmaker but Facebook executives keep saying the same thing. It is not against our corporate policy. It is not against our corporate policy. But how pathetic is this policy? How inadequate is this policy? How can you sit here and hide behind this policy that we know is itself destructive of society and destructive of democracy?

But expecting that tech executives would voluntarily change their policies is like asking a giraffe to shorten her neck – as you wrote in one of your columns. Why would they want to lose their competitive advantage and the one most important source of profits?

Exactly. So let us stop asking them to do anything. And let us stop listening to what they are saying.

Nobody should be paying attention to what these executives are saying because the executive’s job is to gaslight the global public. Gaslighting is their business strategy. What you need to be paying attention to is what the members of Congress said. And what the members of Congress said was historic: We are going after the business model and we will hold you accountable.

We are way past self-regulation, we are way past ethics. We have to take action to constrain surveillance capitalism and its threat to democracy.

By regulation? Is this realistic?

We easily forget that there was a time when we looked at child labour as normal. Or at the cartels, the trusts, and the monopolies of the so-called Gilded Age. The leaders of the industries of the time said: this is inevitable. This is industrialization. This is how it works. And that, you know, it is the machines. You have to keep the machines running. Otherwise, it’s not economical. Their rhetorical game was to conflate the suffering of workers with the necessary requirements of the technology.

It is easy to see now that that was the gaslighting of their era. But the same thing has happened to us! They have conflated technology and their economic paradigm. They have worked very hard to persuade us that whatever they are doing is the only way. That the machines are just so overwhelming and huge, and, you know, inevitable.

But actually, there are just about four or five things that lawmakers could do right away to take the wind out of their sails – if they could get the votes and the courage. All these companies would then stop. But the reason surveillance capitalism is spreading across the normal economy is that it provides an easy, easy road to margins. You do not have to work for margins by making a product that somebody actually wants to buy or a service somebody is actually willing to pay for.

What do you mean by that?

All you do is just apply the software to whatever you do, the software collects the data, and then you become part of an ecosystem where those data are fungible. And so you do not have to do anything. You do not have to make a good product or service. It is very much like financialisation. Let us just invest in and strengthen financial power.

Is this the reason why you criticise surveillance capitalism within the framework of traditional capitalism? If we assume that the outcomes of free market capitalism are improved products and services, innovation and competition. But you are saying that we do not see any of that under surveillance capitalism.

Exactly. Surveillance capitalism is failing according to the criteria of productive capitalism. It does not work according to Schumpeterian criteria. Those boys love to say that information technologies are bringing creative destruction – tough luck, that is the price you pay for your progress. But for Schumpeter, creative destruction was just a kind of a footnote to a larger story. His ideas were later radicalised by Hayek, whose whole rationale for the radical freedom of market actors was that the market itself is an unknowable, ineffable entity. So if the market is too complex for any actor to know, then the only way to create efficiency and effectiveness is to give each actor maximum freedom.

Well, that description does not legitimate the freedom of these corporations, because to them the market is no longer ineffable. They have a degree of visibility into what is happening in every dimension of society, including economic dynamics, that disqualifies them from that freedom. They know too much to qualify for freedom, even by the standards of the more extreme neoliberal dogmatists like Hayek. So on top of everything else, it is just bad capitalism.

Even in the context of late capitalism?

Now, there are some people who believe that all capitalism is bad. And they may say that rather than arguing about surveillance capitalism I should have just been arguing against capitalism, generally.

But that really is not the point here. I mean, that is an interesting and important discussion about the future of capitalism. My point is a far more urgent one: whatever happens in the future of capitalism, right now there are variations of capitalism that can operate in ways that are relatively compatible with democracy and that even – as we have seen in parts of the 20th century – enable democracy to flourish. Democracy flourished in the West, not in the East, and that had to do with a form of capitalism that actually did meet Schumpeterian criteria. It actually did lift all boats; it actually did lift the standard of living to the middle class.

And this worked for quite a few decades until the Hayek extremists took over in 1975.

What happened?

The social contract was broken. Capitalism of the time enabled the social contract in many ways. I know that when I was a kid it enabled the social contract, because we were not operating in scarcity. We have prima facie evidence that market democracy does not need to be an oxymoron. It can work. It can be beneficial to society and it can allow democracy to flourish. But that market democracy puts equal weight on both terms.

That means that you do not have markets unfettered by law. You do not have markets that operate outside of clear regulatory paradigms and you do not have markets that operate without democratic governance. It is the absence of these things that makes surveillance capitalism so destructive and so dangerous. This is where we come back to your very first question.

About school kids putting their photos on Facebook?

Yes. How is it possible that we – in good faith – can put our photo on Facebook and find it exploited by police, by political campaigns, by Amazon’s and Microsoft’s facial recognition systems, which are used to imprison Uyghur Muslims in open-air concentration camps in Xinjiang? It all comes back to this sense of marching naked into the digital century, facing the unbridled growth of these companies through extractive and exploitative operations without any legal protection.

And it was not really meant to happen. I mean, as you say, it was never inevitable.

It was not and is not inevitable. We often hear that the destructive effects of epistemic chaos are the inevitable cost of our rights to freedom of speech. Wrong! Just as catastrophic levels of carbon dioxide in the earth’s atmosphere are the consequence of burning fossil fuels, epistemic chaos is a consequence of surveillance capitalism’s economic imperatives. Social media is not a public square but a private one. It is governed by machine operations and it is incapable of, and also uninterested in, distinguishing truth from lies or renewal from destruction.

In your first book In the Age of the Smart Machine you envision a digitized factory. Such a factory becomes a model, a software model. And whatever you do in this model also has real life consequences. Is it also possible that entire societies, cities, and countries could become models within this framework of surveillance capitalism? With enough data and processing power markets could be predicted and distorted, as could societies and individuals.

Well, that is already happening. I have already mentioned the orchestrated disinformation campaigns. They can be effectively used to move us into epistemic dominance, which is what I describe as the fourth stage: computational governance.

What you are describing is actually computational governance where society is reduced to information science. And you govern it the way you govern any machine system. We just need to take a close look at what Sidewalk Labs, a.k.a. Alphabet and Google were attempting to do in the city of Toronto.

What were their plans?

They write about the instrumented city. You do not need laws when you have algorithms. You set algorithmic parameters and everything is instrumented. You can monitor everything, and if something exceeds a parameter you just shut it down. When a car is speeding or parked in a no-parking zone, you can disable it remotely. You can also shut behavior down. For example, if some people are making too much noise and exceeding the decibel range for that location, the police are alerted to go to that site.

Sounds like Singapore.

Indeed, Singapore is on its way to computational governance. So is China.

Is China setting the course for the rest of the world?

Such a future is not inevitable. Most liberal democracies have ceded the ownership and operation of all things digital to private surveillance capital, which now vies with democracy over the fundamental rights and principles that will define our social order in this century. We now live in the formative years of information civilization, and our time is comparable to the early era of industrialization, when owners had all the power and their property rights were privileged above all other considerations.

In the United States, workers had no rights well into the first decades of the 20th century. Employers had all the authority and all the power to determine what happened in a workplace: your working conditions, your working hours, your pay, and whether you were fired or not. There were no laws to regulate child labour. Workers had no right to join a union, strike, or bargain collectively. There were no consumer rights, and no governmental institutions to oversee laws and policies intended to make the industrial century safe for democracy. It took decades of collective action, struggle, and creativity for our societies to establish workers’ rights, consumers’ rights, and the legislative frameworks.

Do you see anything like collective action in the digital realm?

Not yet, unfortunately. To a certain extent, there is another kind of big analogy here: global warming.

Global warming?

When people’s bodies were being broken in factories, when they were dying in unsafe working conditions and people had no power and had to work fourteen hours a day to barely make enough to survive … They immediately felt that they were suffering. The embodiment of that suffering was immediate and real.

But global warming – much like instrumentarian power – presents no immediate bodily threat of terror, murder, and violence: the things that we associate with totalitarianism. That is why there has been a more challenging and slower road to awareness and public mobilization in the digital realm.

The tragedy of the climate movement is that only now, as the threats become far more real with palpable changes in the weather – everybody is experiencing extreme weather and climate swings – is climate change getting real for a lot more people. In the United States there is finally, hopefully, an administration that says it is dramatically dedicated to this issue, which is an existential threat and intrinsically a global challenge.

So are the consequences of surveillance capitalism.

I would say we are not there yet with the tech companies but we are a lot further down the road than we were even a few years ago.

They are still trying to engineer their systems to bypass users’ awareness so that these processes are undetectable. That, of course, nullifies our decision rights: all of those rights are appropriated by the corporate entities, which in turn nullifies our right to combat, to contest, to resist. They are leaving us in ignorance, and our ignorance is a strategic objective of these corporations.

But we now know how the systems they build extract behavioral surplus from us. This process is no longer hidden, and we can mobilise, we can produce friction, and we can produce law.

But what kind of law could protect us from the negative consequences of surveillance capitalism? Some politicians, scholars, and commentators suggest that the biggest tech behemoths should be broken into smaller companies – just as big oil and the big telecoms were hit by antitrust in the 19th and 20th centuries. But you are not happy with such a solution.

Sure, antitrust measures can be used against market dominant companies. But when it comes to defeating the epistemic coup, the antitrust paradigm falls short.

Why?

Competition scholar and antitrust champion Tim Wu compares Facebook to Standard Oil. The leaders of the two companies – Mark Zuckerberg and John D. Rockefeller – are both ruthless capitalists who use anticompetitive practices and concentrations of economic power to buy or bury potential competitors. Antitrust can ban such a business model, and that is why the US Congress adopted the Sherman Antitrust Act in 1890.

But an exclusive focus on Big Tech’s Standard Oil-style monopoly power raises at least two problems. In 1911 a Supreme Court decision broke up Standard Oil into 34 fossil fuel companies. However, this decision did not end unfair concentrations of economic power in the oil industry: the smaller companies quickly began merging into new fossil fuel empires that exist today as ExxonMobil, Amoco and Chevron.

A second and far more significant problem with antitrust is that while it may be important for addressing anticompetitive practices, it does not address the harms of fossil fuel production and consumption. The court’s breakup decision addressed only Standard Oil’s anticompetitive practices; the judges ignored the fact that the extraction, refining, sale, and use of fossil fuels would ultimately destroy the planet.

And breaking up tech companies only for their anticompetitive practices would ignore all the other consequences of surveillance capitalism?

There may be good antitrust reasons to break up the big tech empires, but that would not protect us from the clear and present dangers of surveillance capitalism. We need to be protected from this massive-scale invasion, and each of us should decide if and how our experience is shared, with whom, and for what purpose.

I have not given Amazon’s facial recognition the right to know and commercially exploit my facial expressions for targeting and behavioral predictions. My feelings are not for sale, but they take them from me anyway. For them, these are just more data points collected that day, and I am only one of their customers.

I am suggesting that their property claim over our behaviour and emotions is itself illegitimate. I have to say this again: we may have democracy, or we may have a surveillance society, but we cannot have both. We need legal frameworks that interrupt and outlaw the massive-scale extraction of human experience and stop unimpeded data collection. We can also use law to disrupt the financial incentives that reward surveillance economics. Can it be done? Absolutely. Democratic societies have already outlawed markets that trade in human organs and babies, for example. And we still have the power to revoke the license to steal and to ban the operations of commercial surveillance.

And that is what they are most afraid of.

Most afraid.
