
AI, Elections and Democracy: How Big Tech hijacks our free will and prices our consciousness

On platform models taking the stage for truth and influence

Illustration by @kalakal_klk

With 64 countries holding elections, 2024 is the election year. Some four billion people live in countries going to the polls, in an information age that was supposed to be as democratising as the Gutenberg press – anyone with access to the internet has access to information.

Yet the concept of truth is crumbling at the feet of disinformation campaigns and rumour bombs built with generative AI – models that take the data they were trained on and use it to create new data: cloned audio, fake imagery, and synthetic content with no basis in real-world events. Popular image generators like Midjourney, Stability AI and OpenAI's DALL-E can be used to create deepfakes capable of driving narratives that don't correspond with reality in order to undermine the opposition.
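For a sense of how low the barrier to entry now is, here is a minimal sketch of producing a synthetic image through OpenAI's API. The prompt is deliberately benign and purely illustrative, and the snippet assumes an OPENAI_API_KEY is configured in the environment.

```python
# A minimal sketch of how easily synthetic imagery can be produced through
# OpenAI's image API. The prompt is deliberately benign and illustrative;
# the call assumes an OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt="A photorealistic crowd at a political rally on a rainy evening",
    n=1,
    size="1024x1024",
)

# The API returns a URL to the generated image: a scene that never happened.
print(response.data[0].url)
```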

AI bots make it efficient to target voters individually. Using "sentiment analysis", as philosopher Nick Bostrom describes, opinions across social media platforms can be categorised simultaneously to build a picture of what each user thinks of the government.
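As a rough illustration of the mechanism, the sketch below categorises a handful of hypothetical posts with an off-the-shelf sentiment classifier from the Hugging Face transformers library. The posts and the tallying logic are invented for illustration; no real campaign tooling is being described.

```python
# A minimal sketch of sentiment analysis at scale, using an off-the-shelf
# classifier from the Hugging Face `transformers` library. The posts and
# the aggregation are illustrative, not any campaign's real pipeline.
from collections import Counter
from transformers import pipeline

# Load a general-purpose sentiment classifier (downloads a default model).
classifier = pipeline("sentiment-analysis")

# Hypothetical social media posts mentioning the government.
posts = [
    "The new budget is a disaster for working families.",
    "Finally, a government that listens to young voters!",
    "Another broken promise on housing. Unsurprising.",
]

# Classify every post in one batch; each result has a 'label' and a 'score'.
results = classifier(posts)

# Tally the labels into a crude picture of sentiment towards the government.
tally = Counter(r["label"] for r in results)
print(tally)  # e.g. Counter({'NEGATIVE': 2, 'POSITIVE': 1})
```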

This opens the door to a kind of "mass manipulation", where each individual receives a customised persuasive message rather than one campaign message broadcast to everyone. It is what Sam Altman of OpenAI warned of ahead of the US elections: "one-on-one interactive misinformation".

This manipulation becomes more of a threat the less we are able to spot it. As AI develops to overcome cultural and language barriers, its potential to modify human behaviour grows. The algorithms are moving from the attention phase to the intimacy phase, argues Yuval Noah Harari: their aims extend beyond cultivating engagement into a more sophisticated zone of intimacy, where they begin to decipher human emotions and behaviour.

The private realm is supposed to be a place for introspection, where we reflect, grow and flourish. If AI systems use access to our internet footprint – our thoughts, our desires, our fears – to build connections with us, then platforms become exploitative, tuning people towards particular outcomes.

We risk losing the opportunity to develop the capacity for autonomy, to mature into our beliefs. This raises profound ethical questions about privacy and about manipulation that exploits human vulnerability. Such a process effectively excludes people from the democratic process: it contravenes any notion of free will and violates the right to self-determination.

What is being done about it?

In February 2024, 20 tech companies including Google, Meta, TikTok and OpenAI signed an accord to take "reasonable precautions" to combat AI-generated content that deliberately tricks voters.

The accord outlines methods to detect and label deceptive AI content, commits the signatories to sharing best practices, and promises "swift and proportionate responses" when such content starts to spread. TikTok and Meta even launched an in-app Election Centre with fact-checking experts and AI-labelling technology to mitigate the spread of online misinformation in real time ahead of the European Parliament elections in June and the UK elections in July.

Additionally, Meta launched a helpline in India to detect deepfakes, and OpenAI released a deepfake detector to a small group of researchers to identify content from its own image generator, DALL-E, aiming to kickstart research on real-world applications. The tech giants also sit on the committee of the Coalition for Content Provenance and Authenticity, or C2PA, which develops digital standards – such as a provenance label – to certify the source and history of media content.
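To make the provenance idea concrete, here is a heavily simplified sketch of binding a media file's hash to a signed manifest recording its source. The real C2PA standard uses certificate-based signatures and a defined manifest format; the HMAC key and helper functions below are hypothetical stand-ins, not the actual specification.

```python
# A simplified sketch of the idea behind provenance labels: bind a hash of
# the media file to a signed manifest recording its source. Real C2PA
# credentials use certificate-based signatures and a standardised manifest
# format; this HMAC stand-in is illustrative only.
import hashlib
import hmac
import json

SIGNING_KEY = b"hypothetical-issuer-key"  # stand-in for an issuer's private key

def make_manifest(media_bytes: bytes, source: str) -> dict:
    """Create a signed provenance manifest for a piece of media."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    claim = json.dumps({"sha256": digest, "source": source}, sort_keys=True)
    signature = hmac.new(SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check the manifest is untampered and matches the media."""
    expected = hmac.new(SIGNING_KEY, manifest["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # manifest was altered or signed by someone else
    claim = json.loads(manifest["claim"])
    return claim["sha256"] == hashlib.sha256(media_bytes).hexdigest()

image = b"...raw image bytes..."
manifest = make_manifest(image, source="Example News Agency")
print(verify_manifest(image, manifest))              # True
print(verify_manifest(image + b"edited", manifest))  # False: content changed
```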

Approaches to the problem 

We could take lessons from Taiwan, a country heavily targeted by disinformation campaigns ahead of its presidential election in January. According to disinformation experts, its successful strategy was a multifaceted "whole of society response", with the platforms working alongside the government, private citizens and civil society groups like the Taiwan FactCheck Center to identify disinformation and publish corrections and counter-narratives quickly.

This is a different approach from the piecemeal one the UK government took during the pandemic to counter COVID-19 disinformation (such as popular theories claiming the virus was a hoax, or that 5G masts were responsible for its spread). The Telegraph reported that the Counter Disinformation Unit, established in 2019, sought to stop the flow of disinformation altogether by asking social media platforms to remove specific posts – raising important questions about free speech and the monitoring of dissent.

Needless to say, there is no silver bullet for tackling the threat from generative AI.

But what is getting less attention during this year's election cycle is the increasingly polarising echo chambers that breed the spread of misinformation and exploit societal rifts.

Algorithms powered by predictive AI are tools in an engagement-based model designed solely to mine data, indifferent to what it is that engages you. This has already eroded the informational foundations that keep democracy alive and in motion, slipping under regulators' radar because its economic logic now mediates nearly all digital human engagement, across every domain of our lives.

The problem with Big Tech is the problem with capitalism

The ideology of radical market freedom, at its most extreme in the US and seen particularly within the hub of Silicon Valley, allowed companies like Google to experiment in the sphere of 'permissionless innovation' – to "move fast and break things" – enabled by the generous boundaries provided by the state, which permitted abundant risk-taking.

It led to the development of surveillance capitalism – a term coined by Harvard professor Shoshana Zuboff and set out in her 2019 book, describing a new frontier of economics in which corporations collect and commodify our personal data. The raw material extracted is private human experience, which is then translated into behavioural data.

This data is combined with artificial intelligence to create a black box out of which come predictions about our present and future behaviour. Predictions once used primarily for targeted advertising now pervade almost every economic domain – from health, education, finance, insurance and retail to news production.
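As a toy illustration of that pipeline, the sketch below trains a classifier to turn behavioural traces into a probability of a future purchase. Every feature name and number is made up for illustration; real systems operate on vastly richer signals at vastly larger scale.

```python
# A minimal sketch of a behavioural prediction pipeline: behavioural traces
# in, a probability of a future action out. Features and data are entirely
# hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one user: [night-time scrolling hours, ads clicked last week,
# average video watch time in seconds]. Made-up numbers.
X = np.array([
    [0.5,  1,  12.0],
    [3.2,  9,  95.0],
    [1.1,  2,  30.0],
    [4.0, 14, 120.0],
])
# Label: did the user buy the advertised product? (hypothetical outcomes)
y = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# The saleable "prediction product": a new user's probability of buying.
new_user = np.array([[2.8, 7, 80.0]])
print(model.predict_proba(new_user)[0, 1])  # e.g. a high chance of purchase
```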

This created a new marketplace that trades in behavioural futures: predictions of what our behaviour will be, derived by the surveillance capitalists and sold on to businesses.

In the face of competition in the market, they didn’t stop there. Why just sell predictions when you can sell certainty? 


Hijacking free will 

Monitoring evolved into modifying. Their practices intervene in our behaviour – tuning, nudging and herding us towards commercial outcomes in highly scientific ways designed to bypass our awareness – and therefore directly assault our autonomy. By 2013, Facebook had developed a mood manipulation tool that shapes users' real-world actions and feelings, letting marketers cue behaviour at moments of maximum vulnerability. In its "emotional contagion" experiment, Facebook filtered users' feeds and their exposure to friends' posts, finding it could make people feel more positive or negative at massive scale via social networks.
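A toy sketch of the underlying mechanism: filtering a feed by emotional valence changes the emotional reality each user sees. The posts, scores and filtering rule here are invented for illustration; Facebook's actual ranking system is proprietary and far more complex.

```python
# A toy sketch of feed filtering by emotional valence, the mechanism at the
# heart of the "emotional contagion" experiment. Posts, valence scores and
# the filtering rule are invented; the real ranking system is not public.
posts = [
    {"text": "Had the best day at the beach!",    "valence": +0.8},
    {"text": "Feeling really let down today.",    "valence": -0.7},
    {"text": "New job starts Monday, so excited", "valence": +0.9},
    {"text": "Everything is going wrong lately.", "valence": -0.6},
]

def filter_feed(posts, suppress):
    """Drop posts of one emotional polarity, skewing what the user sees."""
    if suppress == "negative":
        return [p for p in posts if p["valence"] >= 0]
    return [p for p in posts if p["valence"] < 0]

# Two users, two engineered emotional realities from the same friends.
for post in filter_feed(posts, suppress="negative"):
    print("User A sees:", post["text"])
for post in filter_feed(posts, suppress="positive"):
    print("User B sees:", post["text"])
```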

We think we are the product, but we are the raw material. The product is the ecosystem – the digital trading platform that companies like Google, Meta and, more recently, TikTok provide to sellers.

The deciphering of our behaviour isn't to make our lives more convenient by giving us better recommendations; it's so they can charge higher rent for commercial access in what Yanis Varoufakis calls their "digital fiefdom". Acting unknowingly as 'cloud serfs', we feed the algorithm with our unpaid labour – our attention. Seven hours of daily screen time is indicative of a full day's work, creating a billion-dollar industry in which we have no stake.

In fact, we are the ones who pay. With every terms and conditions we accept, we enter into a "pay with your data" contract, where using these platforms means surrendering any "reasonable expectation of privacy" – a standard-form contract you can't negotiate. Take it or leave it.

By entering this exchange, we paved the way for baby startups to grow into a global institutional order: ecosystems that own the data about their people, the data science, the cables, the computers and the clouds.

Corrupting social discourse

The integrity of the information that holds our attention has no bearing on revenues. If anything, the more corrupt the information, the better the bait for driving extraction. Journalists are pushed out of the picture, and news content now operates in an engagement-based zone – away from the public and professional standards of news institutions, on a path where misinformation flourishes because Big Tech prioritises revenue.

The engagement-based model not only corrupts social discourse but comes with seemingly ill-defined content moderation practices. Platforms have become our commons for communication, used to share information and advocacy, yet Meta recently tried to demote political content on its Instagram platform, coming under scrutiny for censoring and shadow-banning civic action. A notable example is Human Rights Watch's allegation that Meta's policy has "censored content in support of Palestine".

Content moderation facilitates the exchange of information and has significant real-world implications. In 2017, Facebook's algorithm was blamed for creating an echo chamber of hate speech against the Rohingya that facilitated a genocide in Myanmar. Amnesty reported Facebook's deliberate disregard not only for its own policies on hate speech, but for the known human rights risks it had been warned about for over ten years. We found out, with the gravest consequences, that user safety is not allowed to interfere with the bottom line.

But Facebook's data has a history of being used for political outcomes. We saw this back in 2015, when Cambridge Analytica harvested Facebook data to "micro-target" political messages around Brexit. The problem becomes even harder to manage with AI, whose infrastructure and training data come from the cloud owned by the surveillance capitalists.

Combined with the fact that political advertising performs better on social media than through traditional methods, it becomes increasingly concerning at election time that these platforms hold the stage for undue influence.

Overtaking states, overtaking democracy 

Beneath the banner of Enlightenment, Big Tech built a business model that is profoundly anti-democratic and illiberal.

Their monopolisation of the web gave them a unique dome of instrumentarian power, one that embeds itself through the architecture of digital instrumentation. Zuboff describes the digital networks as "Big Other" – the middleman in our access to information – a power that does not stake its claim through violence and fear but through impersonal systems trained to influence our actions remotely, for profit and for politics.

This infrastructure emerged outside the state, providing a one-way mirror into our behaviour, our thoughts, feelings, necessities and desires. They hide their practices under the commercial protection of 'trade secrets' and innovation advantages, leaving us uninformed and without protection. They brag about their ability to addict us, sending us into exile from our own behaviour. As Zuboff puts it: "Big Other's knowledge is about us, but it is not used for us."

Pricing human consciousness 

Capitalism claimed labour, then nature, and now claims our attention, our behaviour, our private human experience. 

When the methods of surveillance tip over from monitoring to actuation in ways you aren't aware of, they exploit your thoughts, feelings, sensations and environment – fundamentally changing your way of being with the world.

How can we define consciousness in a data-driven world that reduces human behaviour to the actions we take when our moods are manipulated, or in a vulnerable moment primed for nudging, or when addicted to the scroll – or simply to the decisions we make within the infrastructure handed to us, whether it's 140 characters, the positioning of a button, or short-form video content? In a data-driven world, our consciousness is objectified.

Our behaviour within this digital architecture is a manifestation of consciousness; it is the accessible value, an experience within an ecosystem, and it is being used to train the new world.

How do we begin to quantify the service our consciousness renders to the surveillance-capitalist ecosystem? Perhaps our closest estimate is the value of the influencer economy, projected to reach half a trillion dollars by 2027, or the market valuation of Google, Meta and TikTok, or how much rent Google demands from its serfs for access to its ecosystem. It seems irrefutable that human consciousness is the digital world's greatest asset. Free will should therefore be the highest authority of all, yet it is being driven out to make room for data-driven private power.

The slower we update our understanding of how the digital informs our way of being, the quicker we lose our authority, relinquish our power and live by the algorithm. And with that, practices like democratic elections will become obsolete. Democracy and its legitimacy are rooted in epistemic value; without nurturing the development of individual ethical and intellectual capability, away from capture by the tech elite at the top, democracy cannot survive.

Privacy is not private, it is a collective action problem

We are not just living through a tech revolution, but a tech-driven economic revolution. 

Surveillance and the encroachment on our private human experience are not the natural result of technological advancement, but of capitalist progression. This shapes questions of power and authority in our current social order.

Does this make the business model of social platforms an inherent challenge to fair elections? Information integrity needs to be built in – through requirements for transparency, data privacy and user control – to help mitigate misinformation and undue influence on users.

Opting out and hiding from the digital will not weaken the surveillance capitalists. We have to engage in defining ideals beyond their dangerous order, to counter the pursuit of scale and profit over safety and public accountability.

Engagement in public spaces can help develop new norms for how we use these technologies, professionally and interpersonally. This includes engaging with our representatives and communities to develop a political vision of a digital century that keeps democracy in motion.

I would urge you not to give in to inertia; then, perhaps, change will be possible in communities and countries that collaborate to create sophisticated shared norms and attach themselves passionately to the truth. Without this, the world can never be anything but a vast void, one that can be tuned with distraction and manipulation.

What can you do?

  • Engage with your own community to define ideals for what you require from the platforms where you spend time, and for how you behave within this digital infrastructure.
  • Pay closer attention to fact-checking the content you digest and share – Full Fact is a fact-checking organisation providing the latest election fact checks.
  • Read The Guardian’s guide on How to spot a deepfake.
  • Watch this space for regulations on transparency and data privacy: the EU's Digital Services Act (which applies to platforms) limits the scope of behavioural targeting, and the AI Act emphasises a human-centric approach to the development of AI technologies, setting transparency requirements for models like ChatGPT.
  • Read about how your health data is linked to Israeli occupation
  • A brilliant article to read on the emerging ideology of techno-authoritarianism.
Illustration by @kalakal_klk who says: “A glitchy eye preys on a man, feeding him content. Its invisible hands eventually drag him away into a ballot box, turning his voice irrelevant.”