Mal Fletcher
Tech Forecast: Human vs Machine

Key takeaways

  1. The technologies shaping our immediate future are not distant dreams, but burgeoning realities.
  2. AI biometrics could mean suspected criminals are tracked for signs of intent to commit crimes before they happen.
  3. We assume that technology will serve us well because our governments are inherently benevolent, but we must remain vigilant.
  4. Immersive journalism will use virtual reality, augmented reality and holographic projection to allow us to "experience" news stories.

"The future is already here — it's just not evenly distributed." So wrote William Gibson, the man widely credited with launching the cyberpunk genre within sci-fi writing.

As we stand in the lobby of 2025, Gibson's prescient words ring truer than ever. What was wildly imagined fiction yesterday makes headlines today. Even the now ubiquitous wi-fi was sci-fi not long ago.

The technologies shaping our immediate future are not distant dreams, but burgeoning realities. They're already taking root in laboratories and startups the world over. This report delves into some of the cutting-edge advancements poised to redefine our world very shortly, offering both a beacon of hope and a clarion call for vigilance.

In many ways, the future is a product of decisions made today. Outside of the profound impact of natural disasters and the terrible, random implications of war, our collective futures are decided not by the technologies we develop but by how we choose to use them.

The technologies explored in this report have the potential to solve many problems, especially at the pragmatic level. But they also carry risks that demand our attention and careful consideration.

For brevity's sake, some of the many tech developments that form part of our research at 2030Plus do not appear in this report. I have included those that we feel will have the most immediate impact on people and societies, particularly in the West.

AI Biometrics

Facial, voice and gait recognition, powered by AI, are already used or being developed by police and security agencies. These can help reduce crimes such as identity fraud, which the Cifas National Fraud Database reports costs UK taxpayers an estimated £1.8 billion each year, with more than 235,000 cases reported in 2023 alone.

In the US, a new case of identity theft occurs every two seconds. This crime affects more than 60 million Americans each year, at an estimated total cost of around $16.9 billion.

China is pioneering gait recognition tools that can identify individuals from up to 50 metres away, based solely on their walking pattern. In the Middle East, this type of system is being considered for boosting airport security.

Facial recognition relies on computer imaging programs that map certain data points on the face - for example, the shape of the cheekbones, the space between the eyes, or between the nose and mouth. Computer algorithms transform this information into a unique "faceprint," a set of biometric data comparable to a fingerprint.
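
In rough outline (a toy sketch, not any vendor's actual pipeline), a faceprint can be treated as a numeric vector, with identity decided by the distance between vectors. The landmark values and matching threshold below are invented for illustration; real systems use learned embeddings with hundreds of dimensions.

```python
import numpy as np

# Hypothetical "faceprints": vectors of normalised distances between facial
# landmarks (eye spacing, nose-to-mouth gap, cheekbone width, and so on).
enrolled_faceprint = np.array([0.42, 0.31, 0.56, 0.28, 0.63])
probe_faceprint = np.array([0.43, 0.30, 0.55, 0.29, 0.61])

def is_same_person(a: np.ndarray, b: np.ndarray, threshold: float = 0.05) -> bool:
    """Declare a match if the Euclidean distance between two faceprints
    falls below a tuned threshold."""
    return float(np.linalg.norm(a - b)) < threshold

print(is_same_person(enrolled_faceprint, probe_faceprint))  # True for these toy values
```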

Many of us interact with facial recognition on our smart devices, in place of passwords. But in 2025, we can expect to see some governments adopting much more sophisticated AI algorithms that can "read" emotions, based on micro-expressions. Some will explore their use for border control. The EU is already moving this way.

This year we will also hear about moves toward three-dimensional face recognition, which will collect even more detailed data and allow much more granular emotion detection.

Doctors will find this helpful, as they try to monitor a patient's general health and provide a targeted diagnosis. Psychologists will find it useful as an aid to understanding the depth of a client's psycho-emotional trauma.

3D face recognition and other forms of AI biometrics might also help police track criminal perpetrators. But they could also be used for predictive policing, where known or suspected criminals are tracked for signs of intent to commit crimes before they happen. This poses a significant challenge to human rights, particularly the presumption of innocence, which is a cornerstone of Western jurisprudence.

Biometric technologies are efficient but might not always be used in humane ways. For police to identify one offender, they might need to surveil thousands or tens of thousands of people. And that one supposed match might still prove to be wrong, especially given the tendency of AI models to be biased along racial lines.

Studies show that AI systems can find it difficult to distinguish between people with darker skin tones. Developers are working to reduce this bias, which is partly a product of how AI models are trained. But public agencies need to be careful about relying too heavily on them.
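
The base-rate arithmetic behind that caution is worth spelling out. Even a highly accurate matcher, screening a large crowd for a single target, will throw up false alarms. The figures below are illustrative assumptions, not measured error rates for any real system.

```python
# Illustrative base-rate arithmetic for one-to-many face matching.
crowd_size = 10_000       # people scanned
false_match_rate = 0.001  # assume a 0.1% chance a non-target is wrongly flagged
targets_present = 1       # the one genuine offender in the crowd

expected_false_alarms = (crowd_size - targets_present) * false_match_rate
print(f"Expected false alarms: {expected_false_alarms:.0f}")  # ~10

# Assuming the real target is also flagged, the odds that any single
# flagged person is actually the offender are surprisingly low:
precision = targets_present / (targets_present + expected_false_alarms)
print(f"Chance a flagged match is genuine: {precision:.0%}")  # ~9%
```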

AI biometrics will almost certainly be used to construct interlocking global databases, in the fight against organised crime and terrorism. In the UK, senior public figures, such as former British Prime Minister Tony Blair, have called for a global database of the vaccinated, ostensibly to help world leaders prepare for future pandemics.

The subtext is that global digital databanks will eventually be needed to solve other problems too, so we'd better start building them now. However, there are huge downsides to interconnected databases, with privacy topping the list. Whatever its subject matter, even a basic global database would include highly sensitive personal information. Such repositories would be natural targets for hackers and fraudsters.

Global databases also create space for human rights abuses, including restrictions on freedoms of expression, movement and worship. They could lead to major abuses of power domestically and the misuse of private data by foreign agencies.

We who live in liberal democracies often take our freedoms for granted. We assume that technology will serve us well because our governments are inherently benevolent. History has shown, however, that free people must defend their civil liberties and remain vigilant about incrementally intrusive bureaucracy.

AI biometric systems are prime candidates for technology creep. They might be introduced for a single limited purpose but gradually given much wider usage without public awareness or approval. In the end, AI biometrics could very easily deepen the trust deficit that already exists between governments and their citizens.

When police, for example, rely on this type of technology, they might easily spend more time interacting with digital tools than with the human beings they are meant to serve.

Studies show that, on an individual level, over-reliance on smart technologies can cause our interactions with other people to become more distant and even cynical. If this happens with police or security service personnel, the result might be very bad for them and the wider public.

AI biometrics could add to social fragmentation. AI facial recognition systems are now being tested in augmented reality (AR) wearables, such as glasses. These have inbuilt cameras, data connectivity and the capacity to search for information on anyone they photograph, projecting the resulting data and images onto the lenses. This will allow the wearer to find out all about you while simply looking at you in the street.

This is great news for anyone inclined toward nosiness and even more so for stalkers. But imagine what governments or police services might do with these tools - especially in an age where citizens are encouraged to report one another for even the slightest perceived insult, in the name of preventing hate crime.

In the end, without properly applied ethics, oversight and regulations, AI-powered biometrics potentially open the door to citizen ranking.

China is the world leader in this area with its social credit system. While not yet a nationwide institution, it is being rolled out gradually across China's regions, with national coverage as the goal. Basically, the government surveils communities and then offers benefits to citizens who are seen to behave in government-approved ways. It restricts opportunities for people who don't.

As a form of privacy invasion this might, in time, rate second only to the "reading" of thoughts via misused neural implants. We must not follow China's lead.

In short, AI-enhanced biometric systems could benefit us in some ways, but strict limits on their use will need to be stipulated in law and then policed. This is not an area where we can afford a laissez-faire attitude. Nor can we rely on self-regulation by big tech developers, something politicians sometimes seem too quick to do.

The future of AI biometrics should not be decided primarily by market forces. Already we're seeing AI models released onto the market without proper testing. Microsoft's Tay bot is just one of many examples of tools that have had to be recalled shortly after release because of their erratic behaviour.

While attempting to regulate artificial intelligence, governments need to sponsor wide-ranging debates on the ethics of AI. These need to go beyond talkfests between politicians and technology developers. We need debates featuring experts from the worlds of physics, engineering, ethics, politics, law, philosophy and even theology. All have important contributions to make regarding two of the most pressing questions of our time: What does it mean to be human? Where should human autonomy end and machine automation begin?

Unless we can reach broad cross-sectoral agreements on ethics, the laws we make won't be resilient enough to withstand pressure from narrowly focused vested interests.

The News Revolution

Something interesting and potentially revolutionary is happening in the world of news. For journalists, producers and editors, it is either exciting or terrifying depending on your sector of employment.

For consumers, at least at the moment, it presents many more ways to engage with the news cycle and the opportunity to hear diverse sides of one story - if we're bold enough to seek them out.

2025 will see a widening gap between mainstream or "legacy" media and what might (only just) still be labelled "new" media, when it comes to public trust and audience expansion.

The outcome of the recent US election shocked many political pundits in that country and beyond. Some were surprised that Donald Trump won so convincingly, despite apparent opposition from powerful elites - not least America's main corporate-owned media.

The election result highlighted the growing power of digital media, particularly independent platforms, as sources of news and editorial content. New media, perhaps especially social media, have begun to challenge legacy media across all genres, including entertainment.

This was borne out by the July 2024 edition of the Nielsen Gauge. It reported that streaming accounts for 41.4 per cent of the US media market. Meanwhile, cable TV is used by 26.7 per cent and the once mighty broadcast TV is consumed by just 20.3 per cent. Other forms of media account for 11.6 per cent.

Of the streaming market, YouTube is the largest service, accounting for 10.4 per cent of viewership. Netflix is used by 8.4 per cent of the overall market and Amazon Prime covers 3.4 per cent. It seems likely that YouTube attracts the largest number of viewers because it provides a widely accessible platform for professional and non-professional video content creators. It has also long allowed extended programmes, such as interviews.

YouTube also welcomes full news and editorial programmes, which some other streaming services have only slowly embraced. The possible exception to this has been Facebook, which in 2022 was used by 31 per cent of UK adults looking for news. That's the finding of the Reuters Institute Digital News Report.

In the closing days of the US presidential election, Donald Trump agreed to a three-hour interview with Joe Rogan, host of the world's most popular podcast. "The Joe Rogan Experience" boasts 17.5 million YouTube subscribers and 14.5 million Spotify followers.
This interview attracted 26 million views within its first 24 hours online - and that was on YouTube alone. It has since been downloaded more than 100 million times across all platforms.

Legacy media can't compete with that. CNN's US election night coverage was viewed by just 5.1 million people, a drop of almost half compared to the 2020 election.

In Britain, the story is not quite so stark, but the trend is heading in the same direction. Media regulator Ofcom reported that 71 per cent of adults aged 16 and over accessed news online - mainly via social media - in the year to September 2024, up from 64 per cent in 2018. Meanwhile, 70 per cent of UK adults accessed news from TV, a figure that has headed steadily downward in recent years. In 2024, for the first time, the nation's flagship BBC One news bulletin lost out to online news.

Social media platforms such as YouTube, Facebook, X and TikTok have not only made news more accessible; they have also democratised the creation and distribution of news. They allow individual consumers to become respected influencers, or "citizen journalists".

Social platforms also allow consumers to help police the accuracy of news content. For example, X encourages user-generated "Community Notes" to be added to posts that contain questionable claims.
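
One simplified way to picture how such a system decides which notes to display (loosely inspired by, and much cruder than, X's actual ranking algorithm; every name and number here is an assumption):

```python
# Toy "bridging" rule for user-generated fact-check notes: a note is
# published only if raters from opposing viewpoint clusters both tend
# to find it helpful, discouraging purely partisan pile-ons.
ratings = [
    ("note-1", "cluster-a", True), ("note-1", "cluster-b", True),
    ("note-2", "cluster-a", True), ("note-2", "cluster-b", False),
]

def should_publish(note_id, ratings, min_support=0.5):
    """Require majority 'helpful' ratings within every viewpoint cluster."""
    by_cluster = {}
    for nid, cluster, helpful in ratings:
        if nid == note_id:
            by_cluster.setdefault(cluster, []).append(helpful)
    return all(sum(votes) / len(votes) >= min_support
               for votes in by_cluster.values())

print(should_publish("note-1", ratings))  # True - helpful across clusters
print(should_publish("note-2", ratings))  # False - only one side approves
```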

Of course, the speed and volume of user-generated content create problems with fact-checking, which is a key component of true journalism. As Jonathan Swift wrote, "Falsehood flies, and the truth comes limping after it." It takes much longer to disprove fake news or deepfake videos than to produce and disseminate them. For this reason, people will still appreciate the work of trained journalists, provided that the profession seeks objective truth above internal corporate approval or acceptance among peers in the office.

The use of AI algorithms on social platforms also creates information loops, potentially limiting people's exposure to diverse viewpoints. This has already led to a breakdown of commonly held narratives and, to some degree, increased the power of cancel culture. According to a 2024 report by the president of Dartmouth College, an Ivy League school in the US, students have trouble engaging people with whom they disagree. Sian Beilock, also a cognitive scientist, says this is largely because of the influence of social media, which she blames for creating "echo chambers" that make it harder for students to interact in real life.

I would argue that it's not social media alone that is to blame for students' apparent intolerance of opposing views. It's the dangerous cocktail of social media and neo-Marxist critical theory, which accepts no departure from its creed and is promoted by some head-in-the-fog academics.

In 2025, the shift toward a more organic, fluid and fragmented delivery of news and editorial will continue. It will force established news providers to experiment with existing and emerging forms of new media. Some of these are described below. Eventually, a new status quo will emerge, built on a largely peaceful, if not comfortable, co-existence between new and established platforms and formats.

On a more positive note, 2025 will see news media combined with technology in exciting ways, beginning with augmented reality news. AR tools, such as glasses, will allow us to engage with written news stories in fresh ways. Eventually, AR will replace hypertext links with the ability to call up new information simply by focusing the eyes on a piece of text.
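
One way such gaze-based linking might work is a simple "dwell" rule: if the reader's gaze rests on a phrase for long enough, the glasses treat it as a click. The sketch below assumes a hypothetical eye-tracker feed; no real AR SDK is implied.

```python
# Hypothetical gaze-dwell trigger for AR reading glasses.
DWELL_SECONDS = 0.8  # how long the eyes must rest on a phrase to "click" it

def watch_for_dwell(gaze_samples, linked_regions):
    """gaze_samples: iterable of (timestamp, x, y) tuples from an eye tracker.
    linked_regions: dict of region name -> (x0, y0, x1, y1) screen box.
    Returns the first region the gaze dwells on long enough, else None."""
    dwell_start = {}
    for t, x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in linked_regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell_start.setdefault(name, t)
                if t - dwell_start[name] >= DWELL_SECONDS:
                    return name  # treat as a click: open the linked story
            else:
                dwell_start.pop(name, None)  # gaze left the region; reset
    return None

samples = [(0.0, 100, 50), (0.4, 102, 51), (0.9, 101, 49)]
regions = {"related-story": (90, 40, 120, 60)}
print(watch_for_dwell(samples, regions))  # "related-story"
```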

News will continue to become more personalised and preference-based. According to Pew Research, 65 per cent of news organisations are experimenting with new AI models that utilise much more granular data than was previously possible. Sadly, while this feels cosy, it will push us even further toward "content tribes", where we can avoid being unsettled, emotionally or intellectually, by new ideas.

In 2025 we'll also start to hear about what I call "social news". Using virtual reality (VR) social spaces, some people will imbibe news in conjunction with online friends and followers. Around 65 million people in Europe will be regular VR users within five years, reports eMarketer, so it's not surprising they will want to see news delivered in this way.

This year will also see more omni-channel presentations. News outlets will no longer want to be tied to a single medium. Traditional media will find new ways to create seamless experiences across print, digital, audio and video platforms.

We'll start to see a greater convergence of legacy and social media. News articles, in print, video or audio form, will allow space for social media reactions built directly into the platform.

In the year ahead, we'll hear about immersive journalism, which uses virtual reality, augmented reality and, in time, holographic projection, to allow readers to "experience" news stories. 

From 2025, immersive technologies will start to play a growing role in society as a whole. In medicine, for example, immersive tools combined with AI will allow patients to visualise how drugs, surgeries or other treatments will work in their bodies. 

Immersive tools will become even more helpful as we move beyond traditional haptics, to allow digital representations of not just the senses of sight, sound and touch, but also taste and smell. A few years ago, City University London began to research the use of mobile phones as taste and smell emulators.

In the realm of news reporting, a war zone report viewed through a VR headset might allow people to emotionally connect with victims. According to Pew Research, more than 60 per cent of news organisations already experiment with immersive technologies.

Of course, AI can only offer an approximate guess at the actual environment it portrays. Any use of VR or AI-generated coverage will need to carry clear warnings to that effect. There is a real danger of crossing the line into deepfake territory, where titillation becomes more important than information.

Also in the year ahead, we can expect to hear about more people with severe injuries who access the news via neural implants linked to computers.

Social Media: Bans and Innovations

2025 will be a big year for social media on more than just the news front.

Live TV or streaming shows will have real-time audience interaction. Some will integrate directly with major social media platforms. And through augmented reality, social media experiences will integrate seamlessly with our physical world. For example, localised news stories will project onto the lenses of signed-up AR wearers as they approach certain locations. AR users will also be able to leave virtual messages in real-world locations, perhaps informing others about what to see, or avoid, in the area.
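
A minimal sketch of the location trigger behind such localised stories, assuming simple geofencing (the coordinates, radius and headlines below are made up):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000  # Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical geofenced stories: (lat, lon, radius in metres, headline)
local_stories = [
    (51.5007, -0.1246, 250, "Westminster Bridge closed this weekend"),
    (51.5194, -0.1270, 250, "New exhibition opens at the British Museum"),
]

def stories_near(lat, lon):
    """Return headlines whose geofence the AR wearer has just entered."""
    return [headline for s_lat, s_lon, radius, headline in local_stories
            if haversine_m(lat, lon, s_lat, s_lon) <= radius]

print(stories_near(51.5005, -0.1250))  # wearer walking near Westminster
```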

Perhaps not surprisingly, 2025 will be a big year for generative AI, too. It will soon be able to produce deepfake videos, movies and TV shows in real time. Audiences will be able, on the spot, to request alternatives to a scene they've just watched, or an ending they didn't like. Before long, we'll construct entire movies, using only basic plotline prompts, in the same way we create AI images or short videos today. We may not all be doing this by the end of 2025, but it won't be far away.

Meanwhile, some governments, frustrated by big tech's failure to regulate its content, will explore ways to restrict social media access for children.

The Australian government recently introduced legislation to ban social media access for under-16s. Other Western governments will watch this project, gauging the feasibility of doing something similar.

It's high time that governments acted to protect the young on social media. Children are not in a position to make informed decisions about the content they consume or produce online. However, while we might applaud the intent of the Australian legislation, there are important questions to be asked about its implementation.

Introducing a total ban on socials for children is likely to boost the amount of data collected about them. How can online algorithms gauge a person's age without linking their name to other personal information such as a legal record of their birth date? Will algorithms also seek out contact details for parents, including perhaps addresses and phone numbers?

Will governments or nervous big tech companies insist on more than one form of verification? How much other data might be collected in that process? How secure will that data be against hacking and use in crimes like identity theft?

And how will age verification work in the age of biometrics? Big tech companies are already experimenting with AI-powered facial and gait recognition as replacements for passwords. Do we want children identified in those ways?

Another urgent area for discussion is how children will be protected in other areas of the internet. A teenager doesn't necessarily require social media access to find online pornography or to be exposed to online scams and abuse.

What's more, the cloud has no borders. In a democracy, there's only so much one government can do to block near-global content. And once governments start blocking people, will they stop at children?

To be effective, an Australia-like ban would need support from near-global safeguards covering online content. This raises the vexed question of censorship. Will safeguarding agreements stifle free speech or technology innovation?

I maintain that we must always seek to balance individual rights with social responsibilities. Even in the most robust debates, we need to exercise our freedom with respect for the essential humanity of others and their right to disagree without being harassed or maligned.

When it comes to safeguarding children, AI might well become part of the solution. It could help us build real-time filters to block harmful content. However, it might also offer children subtle workarounds that allow them, without detection, to access social content deemed to be harmful.
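
As a toy illustration of the filtering step itself (a real system would score text with a trained classifier, not a word list; all labels here are invented):

```python
# Toy real-time content filter - a stand-in for the kind of AI screening
# described above. Hypothetical content labels, not a real moderation API.
BLOCKED_TERMS = {"scam-link", "adult-content", "self-harm-challenge"}

def allow_for_minor(message: str) -> bool:
    """Return True if the message may be shown to a child's account."""
    words = message.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

feed = ["check out this scam-link now", "football results from last night"]
safe_feed = [m for m in feed if allow_for_minor(m)]
print(safe_feed)  # only the football story survives
```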

Governments will need to think carefully about all of this in 2025 and beyond. They should also, though, devise programmes and provide resources to help parents and teachers train children in data literacy. Our schools, universities and workplaces need programmes that teach people of all ages how to face the online experience with skills in critical and analytical thinking.

This training should start in primary schools. Data literacy - with at least a basic component of data science - should stand alone as a required subject. This will help young people understand how their data is used. It will enable them to see through scams, avoid ID fraud and respond well to bullying and false narratives.

READ PART 2 HERE

Mal Fletcher (@MalFletcher) is the founder and chairman of 2030Plus. He is a respected keynote speaker, social commentator and social futurist, author and broadcaster based in London.
