Mal Fletcher
AI: 6 Election Questions

We urgently need to press politicians for answers to tough questions. We can’t rely on tech companies or AI to regulate AI.

Key takeaways

  1. Machines lie, express prejudice and make mistakes because we do!
  2. A new AI arms race would present a new form of mutually assured destruction.
  3. AI could skew the results of e-voting, even without human involvement.
  4. Building super-computers means mining mountains of minerals, potentially stripping our planet.

“It has become appallingly obvious,” runs a line often attributed to Albert Einstein, “that our technology has exceeded our humanity.”

Splitting the atom and mapping the human genome arguably led us closer to proving Einstein right. The same might now be said of artificial intelligence (AI).

Is it already too late to set limits for artificial intelligence? Have we already unleashed a tool that’s unconfinable; one that will eventually make us redundant? These are hugely important questions - especially in a year packed with elections. 

Much is happening in the world: this year alone, an estimated two billion people will go to the polls globally. Most people probably won’t expect politicians to focus much on advanced technologies in their manifestos and speeches. After all, it was just a couple of years ago that “artificial intelligence” was a term used only by technology engineers, futurists, science fiction writers and some movie-makers.

The appearance of generative AI in the form of OpenAI’s ChatGPT quickly changed all that. Within weeks, a string of similar models appeared, backed by the biggest tech companies.

Suddenly, AI was a buzzword. It spoke of something almost magical; a frontier technology that could save us time and effort and, in the process, make us all appear more skilled than we are. With just a few lines of instruction, via text or audio, it can generate everything from poetry and essays to pictures, video and audio. It is also changing our experience with search engines.

As happens with most buzzwords, though, the excited chatter about AI ebbed within a fairly short time. It dawned on many users that today’s generative AI is a very fast collator and analyser of existing human-created material, which it repackages in many genres. People began to realise that there are fundamental differences between AI and human intelligence, which in many ways is still superior.

Not least among the points of difference is AI’s relatively narrow focus. It performs very complex single-focus tasks in ways humans can only dream of. Computers have been beating chess champions for some time and more recently AI has bested us in the even more complicated game of Go. However, AI doesn’t do so well with simultaneous multiple-focus tasks, at least in its present form. So we’re yet to arrive at anything like an artificial general intelligence (AGI) that's capable of anything a human can do. And we’re probably a long way from an artificial “super” intelligence, which would outsmart even the cleverest human being.

That said, the rate of AI’s growth is surprisingly fast. So fast that it concerns some of the pioneers of machine intelligence. Geoffrey Hinton is often called the “godfather of AI”. He spent a decade at Google developing the machine-learning tools that drive AI by spotting patterns and anomalies in huge amounts of data.

In a 2023 interview with the New York Times, Hinton said, “[Originally] the idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.” 

“It is hard to see,” he added, “how you can prevent the bad actors from using it for bad things.”

Fully automated lethal weapons (killer robots), ubiquitous facial recognition systems, mass identity fraud and deepfake videos that are impossible to detect are just some of the areas of concern. Things considered fantasies a few months ago are now moving into the realm of the real.

British police recently called for existing public CCTV cameras to feed into AI systems that can identify thousands of citizens in almost the blink of an eye. This raises huge ethical questions. Machines would need to scan enormous numbers of innocent people before spotting one criminal. Even then, police could not be sure they'd identified the right person. Studies have shown that AI makes mistakes and can be racially biased; visual AI models sometimes misidentify faces, and do so more often with people of darker skin tones. It seems the often unconscious biases of AI’s programmers, or “trainers”, are fed into the machine.
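To see the scale of the problem, consider a rough back-of-the-envelope sketch. Every figure below is a hypothetical assumption chosen purely for illustration - none is drawn from a real police deployment - but together they show how quickly false matches swamp true ones when the people being sought are a tiny fraction of those scanned.

```python
# Base-rate sketch for live facial recognition (all figures are illustrative assumptions).

faces_scanned = 1_000_000     # assumed faces scanned by city-wide CCTV in a day
wanted_people = 100           # assumed genuinely wanted people among them
true_positive_rate = 0.99     # assumed: 99% of wanted people are correctly flagged
false_positive_rate = 0.01    # assumed: 1% of innocent passers-by are wrongly flagged

true_alerts = wanted_people * true_positive_rate                      # ~99
false_alerts = (faces_scanned - wanted_people) * false_positive_rate  # ~10,000

print(f"Alerts pointing at a wanted person:    {true_alerts:,.0f}")
print(f"Alerts pointing at an innocent person: {false_alerts:,.0f}")
print(f"Share of alerts that are false:        {false_alerts / (true_alerts + false_alerts):.0%}")
```

On these assumed numbers, roughly ninety-nine out of every hundred alerts would point at an innocent person - the arithmetic behind the claim that machines must scan enormous numbers of the innocent before spotting one criminal.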

Meanwhile, Britain’s National Cyber Security Centre (NCSC) has warned that sophisticated AI will increase the number of everyday people who turn to crime. For example, machine learning tools make crimes like phishing, ID fraud and cyber-stalking easily accessible to non-criminals. All of these are already on the rise.

Given growing concerns about errant uses of AI, you’d think election candidates would say something about plans to regulate the technology. But all we hear is crickets.

Yes, some political parties fold AI into specific policy packages. In the UK, the current Labour opposition wants to place AI at the forefront of reforms in the National Health Service. The Conservative government has already launched the world’s first AI Safety Institute, designed to test new AI models before their release. Doubtless, other political groups are looking at AI’s potential impact on education, environmental sciences, bioscience and so on.

Yet little is said about how AI research and development might need to be circumscribed by agreed ethical standards. Political leaders here and abroad seem to have left it to big tech companies to regulate AI.

This approach is a disaster waiting to happen. A couple of weeks ago, workers from two big AI companies called for their industry to be subjected to the same testing regulations as the aviation and nuclear power industries. They are deeply concerned, they said, about the way some AI is being developed without due regard for human safety, the environment and the common good.

These tech insiders, and I suspect many others, believe that AI regulation can’t be left to corporations whose primary goal is increasing market share and profits. The profit motive, like technology, is a great servant but a poor master. It has already driven some companies to release tech platforms before they've been properly tested for unethical, harmful or criminal behaviour.

Often, this leads to models being pulled from the market, sometimes just hours after launch. A while back, Microsoft made a big noise about its Tay bot, which was designed to learn how to use social media by talking to human users. Within hours of its launch, Tay was shut down because it had started using racist and abusive language. And does anyone remember Meta’s metaverse? It was launched with a truckload of hype, only for its virtual spaces to become a haunt for sexual predators.

We can’t rely on technology companies to regulate AI. We can’t leave it to AI itself, either.

A recent study showed that AI can and does tell lies. That shouldn’t surprise us when machine intelligence teaches itself by interacting with humans. Our flaws and mistakes are passed on to the machine. Machines lie, express prejudice and make mistakes because we do!

Whether they’re prepared for this or not, governments need to step up, take responsibility and introduce regulations to cover the challenges and opportunities presented by the new tech frontier. It’s up to the voters, though, to confront election candidates with questions that demand answers. 

Some people argue that setting up AI regulations would represent a case of too little, too late. People already use AI to commit fraud and other crimes, they say, and regulation won’t achieve much. But that’s faulty reasoning. The reason we make laws about anything is to limit or prevent behaviours that already exist; actions that could have an even worse impact if there were no rules and no one to enforce them. Nobody says, “We don't need laws against murder because there are so many murderers out there”. The fact that there is so much killing proves that we need strong action to prevent even more of it.

AI needs to be regulated and it needs to happen quickly. We urgently need to press politicians for answers to tough questions.


Protecting Electoral Processes

One of the most urgent questions demanding answers is this: how do politicos plan to address the risks posed by AI in the electoral process? How will they ensure that if AI is used, say, in the tallying of votes, it is deployed transparently - so that we don’t see huge numbers of fake votes, or AI reporting inaccurate results?

This will be hugely important over the next decade as more and more nations adopt e-voting.

In 2005, Estonia became the first nation to introduce remote electronic voting in all of its elections. I’ve had the privilege of working in this proud nation a few times and can testify that innovation is one of its top priorities. By moving into e-voting and allowing young expatriate professionals to vote, Estonia steered itself away from a resurgence of support for pro-communist parties. It also kept young adults living abroad interested in their homeland. It was a win-win, but you can see how AI could skew the results in an electronic election, with or without human involvement. 

Building Ethics Treaties

Question two for any candidates should be: what will you do to facilitate comprehensive debates on technology ethics? I’m not talking about talkfests between politicians and techies. AI is having an impact that is broad and deep; in time it will raise fundamental questions about what it means to be human.

The debates we need would involve respected figures from many spheres of society - engineers, physicists, biologists, artists, lawyers, medicos, environmental scientists, philosophers, ethicists and theologians. All of them should contribute to discussions about where AI’s power might need to be constrained, for the common good. Without strong ethical standards, laws won’t be robust enough to protect us as AI advances.

Avoiding Environmental Disaster

All political candidates should also be quizzed on the measures they or their parties will introduce to limit AI-related environmental damage. Maintaining vast data farms and producing new chips and batteries by the millions requires enormous amounts of electricity. Meanwhile, running super-computers and smartphones requires that we mine mountains of precious minerals, often in poorer countries. How can we do this without stripping our planet and leaving little for the future?

New Jobs, Anyone?

Politicians must address how they will mitigate the risks of AI-related job loss. This is no longer a purely hypothetical problem. While new technologies usually create new forms of work - there were no typesetters before the printing press arrived - AI’s growth might outpace our ability to retrain and take advantage of new opportunities. What programmes and financial packages will politicos introduce to help people retrain quickly, in an age when they will change careers several times over?

AI Education and Health

Every candidate should also be quizzed about their party’s plans for AI in education and healthcare. There are huge potential gains in both fields. In education, AI combined with virtual reality will allow more immersive learning environments. This would be a boon for language students, for example. And imagine learning history by seeing it unfold around you, through a combination of virtual reality and AI operating in real time.

In medicine, AI is already diagnosing certain forms of cancer more accurately than human doctors. The tech will also feature in increasing numbers of robotic and remote surgeries. Meanwhile, it will allow doctors to present more individualised treatment plans based on up-to-the-minute data drawn from wearables.

What budgets have politicians set aside to accommodate all of these benefits?

Limiting Lethal Weapons

What policies have been drawn up to cover the emergence of Lethal Autonomous Weapons Systems, which use AI to power robot fighters or fully autonomous killer drones? Some of today’s military drones are semi-autonomous. They can adjust their flight path or altitude according to local weather patterns, for example. What they can’t do is decide whom they should target. Those decisions are made by remote human controllers.

Advanced AI, however, will make it possible for machines to make “kill decisions” without any in-the-moment human agency. Numerous national governments are currently investigating or developing AI devices which may or may not require human oversight in the field. Weapons development is always driven to a degree by the need to keep up with the competition. A new arms race driven by unbridled or unpredictable AI might present a new form of mutually assured destruction.

So, will our political representatives promote international treaties limiting the development of AI for deadly weapons? 

These are just some of the major questions for which we should demand answers, especially in election years. We can do that by contacting local candidates, journalists or intermediary bodies that have contact with government - trade unions, for example, or business representative groups like Britain’s Institute of Directors.

We can also take to social media, identifying candidates’ user handles to press them on these issues - without abuse. Leaders of local community organisations such as churches and clubs can offer training on these questions, or gather interested members for discussions. Small business leaders can raise awareness with their teams, especially if they’re using AI in the workplace.

What’s most important is that we each commit to increasing our awareness of at least some of the issues surrounding AI. The artificial intelligence revolution might have a greater impact on our lives than the original internet revolution. 

AI will at best be a worthy collaborator in human endeavours - it should not be permitted to become a usurper.

Mal Fletcher (@MalFletcher) is the founder and chairman of 2030Plus. He is a respected keynote speaker, social commentator and social futurist, author and broadcaster based in London.
