Mal Fletcher
AI and Toxic Politics

Democracy is based on transparency, but AI systems are opaque. We don’t always know who is building them, or who is misusing them. 

Key takeaways

  1. Democracy is based on transparency, but AI systems are opaque.
  2. AI might push us toward authoritarianism, playing on our fears and manipulating opinion.
  3. Over-reliance on automation will increase the trust deficit between electors and the elected.
  4. Social disinhibition grows online when people hide behind a wall of anonymity.

“Politics is war without bloodshed, while war is politics with bloodshed.” So wrote Mao Tse-Tung, a master of both politics and bloodshed.

The attempted assassination of former US President Donald Trump shows us how easily domestic politics can give rise to something imitating war. It reminds us, too, that for all our advancements in other areas of life, our politics have perhaps regressed, becoming more toxic in recent years. 

In most democracies, politics has always involved a certain amount of rough-and-tumble, at least verbally. Seeking election to public office has never been an endeavour for the faint-hearted. Arguably, though, political discourse became more volatile and divisive following the advent of 24/7 news media, social platforms and bot-driven information loops.

“Socials”, as they’re sometimes called, gave rise to a phenomenon I call the “hot response culture”. Social networking promised so much. It offered a way of democratising the publication of opinions. Suddenly, everyone with a phone and a connection to the internet had their own personal media platform, a public outlet for airing their views. Better still, everyone could use their platform to become a brand.

Little wonder, then, that 5.04 billion people worldwide now consume and produce social media content. Very early on, we came to appreciate how social media made the world feel smaller and encouraged discussion and collaboration.

We also learned, though, that only a small percentage of the millions of messages posted daily are seen or read by anyone. A study by Gartner found that text messages have an open rate of almost 100 per cent and a response rate of 45 per cent. Instagram posts, by contrast, manage an average engagement rate of a miserable 1.53 per cent.

Saying something is easy, but the challenge is to have something worthwhile to say. The instant nature of social media often encourages people to launch missives that are ill-considered, illogical, or even defamatory. 

People often prefer “hot response” messages written in the heat of emotion, to thoughts presented in a well-researched, calm and measured way. Nowhere is that more evident than in message streams devoted to politics.

Imagine, though, how much worse our political discourse might become in the wake of the artificial intelligence revolution. Unlike social media, AI will soon feature in almost all forms of digital activity, in all sectors - from manufacturing, entertainment, education and medicine to the arts and religion.

Your Friendly AI Candidate!

Already, AI is playing a role in political processes, including elections. The recent British general election will go down in history not just for its landslide outcome but for the presence of the nation’s first AI-generated candidate.

“AI Steve” was an online avatar that promised to hold a chatbot conversation with every voter in the Brighton Pavilion constituency. It would then, it said, decide its policies one issue at a time, adopting each position only when it won majority support in polls of constituents.

Set up by a local businessman, AI Steve didn’t do too well on polling day, garnering just 179 votes. It showed us, though, that AI will soon impact our democratic processes in various ways. 

Before the election, Tony Blair issued some advice to the Labour Party. “You’ve got to focus on this technology revolution,” said the former Prime Minister. “It’s not an afterthought. It’s the single biggest thing that’s happening in the world today… This revolution is going to change everything about our society.”

He’s right. AI will soon become part of almost everything we do, for better or worse. It will certainly impact politics. 

Some of the forty countries I’ve worked in over the past four decades enjoy nothing like the levels of democracy developed nations take for granted. Democracy thrives on informed citizens and free and fair elections. AI can easily be used to threaten both: by manipulating public opinion through fake social media accounts that spread disinformation, for example, or through micro-targeted propaganda - identifying people susceptible to certain forms of suggestion and feeding them messages that play on their fears.

Without a doubt, AI is already used by unfriendly foreign powers to automate cyber-attacks during domestic elections. Even the recent British election saw claims of outside interference - albeit mainly from fringe elements. This will be an even bigger challenge as more countries move toward electronic voting (e-voting) in the next few years.

Democracy is based on transparency, but AI systems are opaque. We don’t always know who is building them, how they’re built, what they’re doing or who might be misusing them.  

AI used in unhelpful ways will sow distrust not just in public institutions, but in citizens’ attitudes to each other. Numerous studies demonstrate the negative impacts social media platforms have had on social cohesion. How much more fragmented might social groups become when AI is added to the digital mix?

AI-powered surveillance systems could be used to intimidate political dissidents, journalists, and opposition figures, creating a climate of fear and hindering free expression.

Misused, AI might also push us toward authoritarianism. Autocratic regimes rely on misleading the populace, playing on its fears and manipulating opinion.

We need to hold AI developers to account. We must ensure that the data they use to train algorithms is as unbiased and comprehensive as possible. Governments should impose fines or other penalties on developers whose AI fails to meet defined standards of transparency and accuracy.

AI To The Rescue?

Now, with almost any technology, what is intended to harm can also be used to promote good. Machine learning systems could help us uncover problems within political systems and promote greater openness and accountability among politicians. 

Through data analysis, AI can spot patterns in the language and content of political communications, revealing bot networks designed to spread disinformation. It can detect deepfake videos and audio by analysing subtle inconsistencies in facial expressions, voice patterns or lighting. It can sift through voting patterns, looking for anomalies that suggest someone has hacked or manipulated the system.

Of course, governments and public agencies that use AI must balance security with freedom. We will need strong measures to tackle AI-related disinformation, hacking and fraud but we must not stifle legitimate political discourse.

Protecting democracy starts with international cooperation to regulate the development of machine learning systems. Closer to home, it also requires that we, the voters, access educational resources to improve our data literacy. We need help to understand AI’s pitfalls. 

We also need robust measures that hold social media companies to account for material published on their platforms. This might also require that they combat online anonymity, which encourages social disinhibition, where people say things online that they wouldn’t dream of saying to a person’s face. 

Disinhibition is heightened when people hide behind a wall of anonymity. In time, social media systems might need to apply tougher AI-driven checks to content published by anonymous users, without going so far as to censor ideas outright.

Artificial intelligence can also help identify political micro-targeting tactics that spread disinformation. Meanwhile, AI-powered fact-checking tools can crawl the web, verifying claims against credible sources and presenting voters with clear, concise information they can assess for themselves.

While using helpful forms of AI in electoral processes and political campaigning, we must adopt a humans-in-the-loop approach. An over-reliance on automation will increase the trust deficit we see today between electors and the elected.

It will exacerbate the sense many voters have of being distanced from the processes of government. And it will boost the appeal of conspiracy theories among electors confronted with outcomes they don’t like.

In politics, as in life, advanced technology must serve as a collaborator with, not a usurper of, human agency.

Mal Fletcher (@MalFletcher) is the founder and chairman of 2030Plus. He is a respected keynote speaker, social commentator and social futurist, author and broadcaster based in London.
