Tech Forecast 25: Human vs Machine

 

Report edited by Mal Fletcher 

Media Outline available here (Dropbox PDF)

 

"The future is already here — it's just not evenly distributed." So wrote William Gibson, the man widely credited with launching the cyberpunk genre within sci-fi writing.

As we stand in the lobby of 2025, Gibson's prescient words ring truer than ever. What was wildly imagined fiction yesterday makes headlines today. Even the now ubiquitous wi-fi was sci-fi not long ago.

The technologies shaping our immediate future are not distant dreams, but burgeoning realities. They're already taking root in laboratories and startups the world over. This report delves into some of the cutting-edge advancements poised to redefine our world very shortly, offering both a beacon of hope and a clarion call for vigilance.

In many ways, the future is a product of decisions made today. Outside of the profound impact of natural disasters and the terrible, random implications of war, our collective futures are decided not by the technologies we develop but by how we choose to use them.

The technologies explored in this report have the potential to solve many problems, especially at the pragmatic level. But they also carry risks that demand our attention and careful consideration.

Some of the many tech developments that form part of our research at 2030Plus do not appear in this report, for brevity’s sake. I have included many of those that we feel will have the most immediate impact on people and societies, particularly in the West.

 

 

AI Biometrics

 

Facial, voice and gait recognition, powered by AI, are already used or being developed by police and security agencies. These can help reduce crimes such as ID fraud. The Cifas National Fraud Database reports that identity fraud costs UK taxpayers an estimated £1.8 billion each year, with more than 235,000 cases reported in 2023 alone.

In the US, there's a new ID theft every two seconds. This crime affects more than 60 million Americans each year, at an estimated total cost of around $16.9 billion.

China is pioneering gait recognition tools that can identify individuals from up to 50 metres away, based solely on their walking pattern. In the Middle East, this type of system is being considered for boosting airport security.

Facial recognition relies on computer imaging programs that map certain data points on the face - for example, the shape of the cheekbones, the space between the eyes, or between the nose and mouth. Computer algorithms transform this information into a unique "faceprint," a set of biometric data comparable to a fingerprint.
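
To make the mapping idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the 128 numbers stand in for measurements a real system would extract from an image with a trained model, and the matching threshold is invented for illustration.

```python
import numpy as np

# A "faceprint" is a fixed-length vector of facial measurements
# (eye spacing, cheekbone geometry and so on). Here we fabricate two;
# a real system derives them from images with a trained model.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                      # faceprint on file
probe = enrolled + rng.normal(scale=0.1, size=128)   # new capture, same person

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.9  # invented; tuning it trades false matches against false rejections
score = cosine_similarity(enrolled, probe)
print(f"similarity={score:.3f} -> match={score >= THRESHOLD}")
```

Where that threshold is set matters enormously in practice: it is precisely the point at which false matches, and the biases discussed below, enter the system.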

Many of us interact with facial recognition on our smart devices, in place of passwords. But in 2025, we can expect to see some governments adopting much more sophisticated AI algorithms that can "read" emotions, based on micro-expressions. Some will explore their use for border control. The EU is already moving this way.

This year we will also hear about moves toward three-dimensional face recognition, which will be even more detailed in the data it collects. It will allow much more granular emotion detection.

Doctors will find this helpful, as they try to monitor a patient's general health and provide a targeted diagnosis. Psychologists will find it useful as an aid to understanding the depth of a client's psycho-emotional trauma.

3D face recognition and other forms of AI biometrics might also help police track criminal perpetrators. But they could also be used for predictive policing, where known or suspected criminals are tracked for signs of intent to commit crimes before they happen. This poses a significant challenge to human rights, particularly the presumption of innocence, which is a cornerstone of Western jurisprudence.

Biometric technologies are efficient but might not always be used in humane ways. For police to identify one offender they might need to surveil thousands or tens of thousands of people. And that one supposed match might still prove to be wrong, especially given the tendency of AI models to be biased along racial lines.

Studies show that AI systems can find it difficult to distinguish between people with darker skin tones. Developers are working to reduce this bias, which is partly a product of how AI models are trained. But public agencies need to be careful about relying too heavily on them.

AI biometrics will almost certainly be used to construct interlocking global databases in the fight against organised crime and terrorism. In the UK, senior public figures, such as former Prime Minister Tony Blair, have called for a global database of the vaccinated, ostensibly to help world leaders prepare for future pandemics.

The subtext is that global digital databanks will eventually be needed to solve other problems too, so we'd better start building them now. However, there are huge downsides to interconnected databases, with privacy topping the list. Whatever its subject matter, even a basic global database would include highly sensitive personal information. Such repositories would be natural targets for hackers and fraudsters.

Global databases also create space for human rights abuses, including restrictions on freedoms of expression, movement and worship. They could lead to major abuses of power domestically and the misuse of private data by foreign agencies.

We who live in liberal democracies often take our freedoms for granted. We assume that technology will serve us well because our governments are inherently benevolent. History has shown, however, that free people must defend their civil liberties and remain vigilant about incrementally intrusive bureaucracy.

AI biometric systems are prime candidates for technology creep. They might be introduced for a single limited purpose but gradually given much wider usage without public awareness or approval. In the end, AI biometrics could very easily deepen the trust deficit that already exists between governments and their citizens.

When police, for example, rely on this type of technology, they might easily spend more time interacting with digital tools than with the human beings they are meant to serve.

Studies show that, on an individual level, over-reliance on smart technologies can cause our interactions with other people to become more distant and even cynical. If this happens with police or security service personnel, the result might be very bad for them and the wider public.

AI biometrics could add to social fragmentation. AI facial recognition systems are now being tested in augmented reality (AR) wearables, such as glasses. These have inbuilt cameras, data connectivity and the capacity to search for information on anyone they photograph, projecting data and images onto the wearer's lenses. This will allow the wearer to find out all about you while simply looking at you in the street.

This is great news for anyone inclined toward nosiness and even more so for stalkers. But imagine what governments or police services might do with these tools - especially in an age where citizens are encouraged to report one another for even the slightest perceived insult, in the name of preventing hate crime.

In the end, without properly applied ethics, oversight and regulations, AI-powered biometrics potentially open the door to citizen ranking.

China is the world leader in this area with its social credit system. While not yet a nationwide institution, it is being gradually rolled out across China's regions with nationalisation as the goal. Basically, the government surveils communities and then offers benefits to citizens who are seen to behave in government-approved ways. It restricts opportunities to people who don't.

As a form of privacy invasion this might, in time, rate second only to the "reading" of thoughts via misused neural implants. We must not follow China's lead.

In short, AI-enhanced biometric systems could benefit us in some ways, but strict limits on their use will need to be stipulated in law and then policed. This is not an area where we can afford a laissez-faire attitude. Nor can we rely on self-regulation by big tech developers, something politicians sometimes seem too quick to do.

The future of AI biometrics should not be decided primarily by market forces. Already we're seeing AI models that have not been properly tested before they're released onto the market. Microsoft's Tay bot is just one of many examples of tools that have had to be recalled shortly after release because of their erratic behaviour.

While attempting to regulate artificial intelligence, governments need to sponsor wide-ranging debates on the ethics of AI. These need to go beyond talk fests between politicians and technology developers. We need debates featuring experts from the worlds of physics, engineering, ethics, politics, law, philosophy and even theology. All have important contributions to make regarding two of the most pressing questions of our time. What does it mean to be human? Where should human autonomy end and machine automation begin?

Unless we can reach broad cross-sectoral agreements on ethics, the laws we make won't be resilient enough to withstand pressure from narrowly focused vested interests.

 

 

The News Revolution

 

Something interesting and potentially revolutionary is happening in the world of news. For journalists, producers and editors, it is either exciting or terrifying depending on your sector of employment.

For consumers, at least at the moment, it presents many more ways to engage with the news cycle and the opportunity to hear diverse sides of one story - if we're bold enough to seek them out.

2025 will see a widening gap between mainstream or "legacy" media and what might (only just) still be labelled "new" media, when it comes to public trust and audience expansion.

The outcome of the recent US election shocked many political pundits in that country and beyond. Some were surprised that Donald Trump won so convincingly, despite apparent opposition from powerful elites - not least America's main corporate-owned media.

The election result highlighted the growing power of digital media, particularly independent platforms, as sources of news and editorial content. New media, perhaps especially social media, have begun to challenge legacy media across all genres, including entertainment.

This was borne out by the July 2024 edition of the Nielsen Gauge. It reported that streaming accounts for 41.4 per cent of the US media market. Meanwhile, cable TV is used by 26.7 per cent and the once mighty broadcast TV is consumed by just 20.3 per cent. Other forms of media account for 11.6 per cent.

Of the streaming market, YouTube is the largest service, accounting for 10.4 per cent of viewership. Netflix is used by 8.4 per cent of the overall market and Amazon Prime covers 3.4 per cent. It seems likely that YouTube attracts the largest number of viewers because it provides a widely accessible platform for professional and non-professional video content creators. It has also long allowed extended programmes, such as interviews.

YouTube also welcomes full news and editorial programmes, which some other streaming services have only slowly embraced. The possible exception to this has been Facebook, which in 2022 was used by 31 per cent of UK adults looking for news. That's the finding of the Reuters Institute Digital News Report.

In the closing days of the US presidential election, Donald Trump agreed to a three-hour interview with Joe Rogan, host of the world's most popular podcast. "The Joe Rogan Experience" boasts 17.5 million YouTube subscribers and 14.5 million Spotify followers.
The interview attracted 26 million views within its first 24 hours online - and that was on YouTube alone. It has now been downloaded more than 100 million times across all platforms.

Legacy media can't compete with that. CNN's US election night coverage was viewed by just 5.1 million people, a drop of almost half compared to the 2020 election.

In Britain, the story is not quite so stark, but the trend is heading in the same direction. Media regulator Ofcom reported that 71 per cent of adults aged 16 and over accessed news online - mainly from social media - in the year to September 2024. This was up from 64 per cent in 2018. Meanwhile, 70 per cent of UK adults accessed news from TV, but that figure has headed steadily downward in recent years. In 2024, for the first time, the nation's flagship BBC One news bulletin lost out to online news.

Not only have social media platforms like YouTube and Facebook, plus X and TikTok, made news more accessible, they have democratised the creation and distribution of news. They allow individual consumers to become respected influencers, or "citizen journalists".

Social platforms also allow consumers to help police the accuracy of news content. For example, X encourages "community notes", generated by users, to be added to messages that contain questionable claims.

Of course, the speed and volume of user-generated content create problems with fact-checking, which is a key component of true journalism. As Jonathan Swift wrote, "Falsehood flies, and the truth comes limping after it." It takes much longer to disprove fake news or deepfake videos than to produce and disseminate them. For this reason, people will still appreciate the work of trained journalists, provided that the profession seeks objective truth above internal corporate approval or acceptance with peers in the office.

Using AI algorithms on social platforms also creates information loops, potentially limiting people's exposure to diverse viewpoints. This has already led to a breakdown of commonly held narratives and, to some degree, increased the power of cancel culture. According to a 2024 report by the president of Dartmouth University, an Ivy League school in the US, students have trouble engaging people with whom they disagree. Sian Beilock, also a cognitive scientist, says this is largely because of the influence of social media, which she blames for creating "echo chambers" that make it harder for students to interact in real life.

I would argue that it's not social media alone that is to blame for students' apparent intolerance of opposing views. It's the dangerous cocktail of social media and neo-Marxist critical theory, which accepts no departure from its creed and is promoted by some head-in-the-fog academics.

In 2025, the shift toward a more organic, fluid and fragmented delivery of news and editorial will continue. It will force established news providers to experiment with existing and emerging forms of new media. Some of these are described below. Eventually, a new status quo will emerge, built on a largely peaceful, if not comfortable, co-existence between new and established platforms and formats.

On a more positive note, 2025 will see news media combined with technology in exciting ways. We will start to see the beginnings of augmented reality news. AR tools, such as glasses, will allow us to engage with written news stories in fresh ways. Eventually, AR will replace hypertext links with the ability to link to new information by focusing the eyes on a piece of text.

News will continue to become more personalised and preference-based. According to Pew Research, 65 per cent of news organizations are experimenting with new AI models that utilise much more granular data than was possible before. Sadly, while this feels cosy, it will push us even more toward "content tribes", where we can avoid being unsettled, emotionally or intellectually, by new ideas.
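
Mechanically, this kind of personalisation is simple. A toy sketch (with invented topic scores, not any outlet's actual algorithm) shows why it narrows our news diet:

```python
# Each article and each reader is scored on the same topic axes.
articles = {
    "budget analysis":   {"politics": 0.9, "sport": 0.0, "tech": 0.2},
    "transfer rumours":  {"politics": 0.0, "sport": 0.9, "tech": 0.0},
    "chip breakthrough": {"politics": 0.1, "sport": 0.0, "tech": 0.9},
}
reader = {"politics": 0.8, "sport": 0.1, "tech": 0.4}  # profile learned from clicks

def score(article):
    return sum(reader[topic] * weight for topic, weight in article.items())

ranked = sorted(articles, key=lambda name: score(articles[name]), reverse=True)
print(ranked)  # the feed keeps serving what the profile already favours
```

Every click then reinforces the profile, which is how a convenience feature hardens into a content tribe.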

In 2025 we'll also start to hear about what I call "social news". Using virtual reality (VR) social spaces, some people will imbibe news in conjunction with online friends and followers. Around 65 million people in Europe will be regular VR users within five years, reports eMarketer, so it's not surprising they will want to see news delivered in this way.

This year will also see more omni-channel presentations. News outlets will no longer want to be tied to a single medium. Traditional media will find new ways to create seamless experiences across print, digital, audio and video platforms.

We'll start to see a greater convergence of legacy and social media. News articles, in print, video or audio form, will allow space for social media reactions built directly into the platform.

In the year ahead, we'll hear about immersive journalism, which uses virtual reality, augmented reality and, in time, holographic projection, to allow readers to "experience" news stories. 

From 2025, immersive technologies will start to play a growing role in society as a whole. In medicine, for example, immersive tools combined with AI will allow patients to visualise how drugs, surgeries or other treatments will work in their bodies. 

Immersive tools will become even more helpful as we move beyond traditional haptics, to allow digital representations of not just the senses of sight, sound and touch, but also taste and smell. A few years ago, City, University of London began to research the use of mobile phones as taste and smell emulators.

In the realm of news reporting, a war zone report viewed through a VR headset might allow people to emotionally connect with victims. According to Pew Research, more than 60 per cent of news organizations already experiment with immersive technologies.

Of course, AI can only offer an approximate guess at the actual environment it portrays. Any use of VR or AI-generated coverage will need to carry clear warnings to that effect. There is a real danger of crossing the line into deepfake territory, where titillation becomes more important than information.

Also in the year ahead, we can expect to hear about more people with severe injuries who access the news via neural implants linked to computers.

 

 

Social Media: Bans and Innovations

 

2025 will be a big year for social media on more than just the news front.

Live TV or streaming shows will have real-time audience interaction. Some will integrate directly with major social media platforms. And through augmented reality, social media experiences will integrate seamlessly with our physical world. For example, localised news stories will project onto the lenses of signed-up AR wearers as they approach certain locations. AR users will also be able to leave virtual messages in real-world locations, perhaps informing others about what to see, or avoid, in the area.
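
Under the hood, location-pinned content needs little more than a distance check against the user's coordinates. A minimal sketch, with invented messages and coordinates:

```python
import math

# Virtual messages pinned to real-world coordinates (values invented).
messages = [
    {"text": "Great coffee here", "lat": 51.5074, "lon": -0.1278},
    {"text": "Avoid the queue after 5pm", "lat": 51.5010, "lon": -0.1246},
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in metres between two latitude/longitude points."""
    r = 6_371_000  # Earth's radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby(user_lat, user_lon, radius_m=200):
    return [m["text"] for m in messages
            if haversine_m(user_lat, user_lon, m["lat"], m["lon"]) <= radius_m]

print(nearby(51.5072, -0.1280))  # what would appear on the lenses here
```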

Perhaps not surprisingly, 2025 will be a big year for generative AI, too. It will soon be able to produce deepfake videos, movies and TV shows in real time. Audiences will be able, on the spot, to request alternatives to a scene they've just watched, or an ending they didn't like. Before long, we'll construct entire movies, using only basic plotline prompts, in the same way we create AI images or short videos today. We may not all be doing this by the end of 2025, but it won't be far away.

Meanwhile, some governments, frustrated by big tech's failure to regulate its content, will explore ways to restrict social media access for children.

The Australian government recently introduced legislation to ban social media access for under-16s. Other Western governments will watch this project, gauging the feasibility of doing something similar.

It's high time that governments acted to protect the young on social media. Children are not in a position to make informed decisions about the content they consume or produce online. However, while we might applaud the intent of the Australian legislation, there are important questions to be asked about its implementation.

Introducing a total ban on socials for children is likely to boost the amount of data collected about them. How can online algorithms gauge a person's age without linking their name to other personal information such as a legal record of their birth date? Will algorithms also seek out contact details for parents, including perhaps addresses and phone numbers?

Will governments or nervous big tech companies insist on more than one form of verification? How much other data might be collected in that process? How secure will that data be against hacking and use in crimes like identity theft?

And how will age verification work in the age of biometrics? Big tech companies are already experimenting with the use of AI-powered facial and gait recognition as replacements for passwords. Do we want children identified in those ways?

Another urgent area for discussion is how children will be protected in other areas of the internet. A teenager doesn't necessarily require social media access to find online pornography or to be exposed to online scams and abuse.

What's more, the cloud has no borders. In a democracy, there's only so much one government can do to block near-global content. And once governments start blocking people, will they stop at children?

To be effective, an Australia-like ban would need support from near-global safeguards covering online content. This raises the vexed question of censorship. Will safeguarding agreements stifle free speech or technology innovation?

I maintain that we must always seek to balance individual rights with social responsibilities. Even in the most robust debates, we need to exercise our freedom with respect for the essential humanity of others and their right to disagree without being harassed or maligned.

When it comes to safeguarding children, AI might well become part of the solution. It could help us build real-time filters to block harmful content for children. However, it might also offer children subtle workarounds that allow them, without detection, to access social content deemed to be harmful.

Governments will need to think carefully about all of this in 2025 and beyond. They should also, though, devise programmes and provide resources to help parents and teachers train children in data literacy. Our schools, universities and workplaces need programmes that teach people of all ages how to face the online experience with skills in critical and analytical thinking.

This training should start in primary schools. Data literacy - with at least a basic component of data science - should stand alone as a required subject. This will help young people understand how their data is used. It will enable them to see through scams, avoid ID fraud and respond well to bullying and false narratives.

 

 

Big Tech Breakups

 

In 2025, technology companies will face new regulations on other fronts, too. US government agencies are investigating the possibility of breaking up big tech monopolies, including those that own social platforms.

Antitrust investigations could mean that Google and Meta are broken up. This might mean that Google is required to sell off the Android operating system, or Meta could be forced to divest itself of Instagram or WhatsApp.

At the very least, companies like these could be required to change their business practices. New restrictions could be placed on how they collect and use data and how they compete with other companies. For a long time, big tech giants have restricted innovation on the part of smaller companies by buying huge numbers of patents for products they have no intention of developing.

2025 will not just open new fronts in the war for freedom of expression; it will see a new battle for freedom of innovation.

Any forced big tech breakup, especially in the US, would impact the technology industry as a whole. It would increase competition in the smartphone market, which might mean lower prices and more innovation. But it could also lead to fragmentation in smartphone ecosystems, making it more difficult for developers to create apps that work across all devices on one platform.

The media industry would also be affected. If big tech groups are forced to sell news aggregation services, it could increase competition in the news market, giving consumers more choices. It might also lead to a decline in the quality of news reporting because smaller news organizations won't have the resources to compete.

In 2025, governments must also ensure that breaking up tech companies doesn't lead to a more fragmented internet. Different companies might end up controlling different parts of the internet infrastructure. Consumers might find it harder to seamlessly access information and services.

Another tech battle looms in 2025. TikTok, the Chinese-owned platform that focuses on short-form video content, has been singled out by US authorities for its mental health impact on young people. Anyone who's used it knows that the platform's algorithm is highly addictive. The makers have been criticised for pushing harmful content that encourages eating disorders, self-harm and even suicide. TikTok says it has taken steps to address these issues, but its algorithms remain opaque to outsiders and its content moderation remains patchy.

A US ban on TikTok would likely have a ripple effect, impacting social media in the UK and Europe. We might see a migration of TikTok users to other platforms, including some that are even less well-monitored. Harmful content could be distributed across many smaller platforms making it harder to identify and remove.

Whatever the outcome regarding TikTok, we'll need inter-governmental cooperation to address online harm, particularly for young people, and to develop effective regulatory frameworks. National governments must have the courage to stand up to powerful vested interests in the tech sector. They must insist on more oversight than tech companies are willing or able to place upon themselves.

Regulation is the responsibility of governments, not profit-seeking multinational corporations. International agreements will need to grow from the bottom up. The internet is largely borderless, but it must be held accountable to democratically elected governments, not company shareholders.

 

 

AI Warfare and Cyber Warfare

 

In the event of a US-brokered deal regarding Ukraine, 2025 will bring an increase in cyberwarfare activities initiated by countries like Russia, China, North Korea and Iran.

These nations are already engaged in cyber attacks against the West, though thankfully not yet at the scale of shutting down core infrastructure. This year we will hear more about unmanned submarines and underwater drones cutting undersea cables, and about AI-powered malware interfering with elections, financial institutions, mobile networks and business activities.

Meanwhile, significant disinformation campaigns and reputational attacks will be launched against institutions and prominent public figures in free nations.

At the same time, AI might offer us new means of protecting against cyber attacks. This will be particularly true with AI that's powered by quantum computers. Some military experts expect to see the first successful quantum-based cyber attack by 2028. China is already investing heavily in this, aiming to achieve quantum supremacy soon.

Meanwhile, AI will drive the growing field of cyber espionage. The UN expects a 300 per cent increase in cyber espionage over the next five years. China is leveraging its huge technology resources to gain an advantage in this area. Arguably, it has already achieved this in the area of corporate spying.

Of course, artificial intelligence, quantum and blockchain technologies might offer some security in an increasingly dangerous world. Digital shields, powered by machine learning, might well adapt to new types of attacks. Countries like Israel and the US will be at the forefront of a cyber defensive revolution. The push for cyber defences will likely kick off a new type of AI arms race.

Machine learning will also help security agencies and businesses build more foolproof digital security systems. Yet no matter how sophisticated the technology, we will still need trained human eyes and ears to help spot inconsistencies and fraudulent activity online. In 2025 almost all machine intelligence systems will benefit from having humans in the loop.
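
One simple way to picture "humans in the loop" is triage: the model acts alone only on clear-cut cases and routes everything uncertain to an analyst. The scores and thresholds below are invented for illustration:

```python
def route_event(event, anomaly_score):
    """Route a security event based on a model's anomaly score (0 to 1)."""
    if anomaly_score >= 0.95:
        return "block"                   # clear-cut attack signature
    if anomaly_score >= 0.60:
        return "queue_for_human_review"  # machine is unsure: ask a person
    return "allow"

for event, score in [("login from new device", 0.72),
                     ("credential-stuffing burst", 0.99),
                     ("routine payment", 0.05)]:
    print(event, "->", route_event(event, score))
```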

The line between war and peace might well become increasingly blurred in the digital age. Free societies might need to adapt to a new normal in which a fairly persistent state of low-level conflict exists, with intermittent threats to services and infrastructure. Digital communications and banking services would be among the first targeted in major cyber attacks.

For this reason, among others, we must exercise caution when it comes to Central Bank Digital Currencies (CBDCs). We should not be drawn into a complete abandonment of cash, which carries no personal data.

Thankfully, we can expect governments to discover the need for data literacy programmes in schools, universities and workspaces. Training in critical and analytical thinking skills helps people understand how their data is collected and used online - and who controls that process. It also helps us identify scams, deepfakes and false narratives. 

Technology alone won't save us from cyber threats or criminal activities. Human minds will need to be trained to correctly interpret what they see and hear.

 

 

Cryptocurrencies and CBDCs

 

AI-powered cyber attacks could significantly impact the economy, not just in terms of its metrics, but its very form and nature.
In 2025, some pundits will advocate a switch to digital currencies as a remedy for cost-of-living pressures and other financial crises.

Yet for all their promise of blockchain-based security, easy access and quick investment profits, private enterprise cryptocurrencies are subject to wild swings in value. Their worth at any given moment is determined largely by the vagaries of human whim. History tells us again and again that the wisdom of the crowd is not always wise or reliable.

In 2025 we'll hear that the remedy to this volatility is to launch a Central Bank Digital Currency (CBDC). This is the government-owned, controlled and backed version of Bitcoin et al, offering the supposed benefits of digital currency without the fluctuations in value. The idea of a digital pound has been floated by the Bank of England.

CBDC advocates often speak of it as the first step in launching a Universal Basic Income (UBI). This too will be a subject of conversation in 2025, especially as we move to higher levels of job automation.

The idea is that the government issues every adult individual with a base monthly income, perhaps several thousand pounds, whether they work or not. The individual is then free to augment that income with work if they wish to - and if they can find it.
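
A back-of-envelope sum hints at the scale involved. The payment level and population figure below are illustrative assumptions, not figures from this report:

```python
# Rough cost of a UK-style UBI (all inputs are assumptions).
uk_adults = 53_000_000        # approximate UK adult population
monthly_payment_gbp = 1_000   # a deliberately modest base income

annual_cost = uk_adults * monthly_payment_gbp * 12
print(f"~£{annual_cost / 1e9:.0f}bn per year")  # ~£636bn
# For scale: total UK public spending runs to roughly £1,200bn a year.
```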

There are significant holes in this arrangement, which are beyond the scope of this report. I can offer this, though: while automation will certainly change the job market, there is no evidence that it will simultaneously block the introduction of new types of work for humans.

There are precedents in history. The introduction of the printing press, for example, forever changed the face of publishing and helped give birth to revolutions in social structure such as the Reformation and modern democracies. Along the way, it reduced demand for calligraphers, but it created many new skilled occupations, such as typesetting.

Moreover, human beings thrive on work, and productivity in labour provides part of our sense of worth and identity. A UBI arguably robs people of the drive to work or set up businesses and other enterprises. It makes the individual too reliant on the state as provider, removing the freedom to succeed on one's own merits. Even so, discussions about a possible UBI will continue this year.

CBDCs make us over-reliant on digital systems. They set us up for disaster when a nation or sector of industry undergoes cyber attacks. In recent years we've seen several examples of outages that have prevented people from accessing their money online.

Going fully cashless would mean that we're at the mercy of digital connectivity for the very basics of life. Cash may be messy and inconvenient at times, but it has advantages. It doesn't carry with it personal data; its weightiness helps us understand how much we're spending; and we can still pay with cash even in the midst of data outages, natural disasters or cyber-attacks.

In 2025 governments need to do more to prevent a total elimination of cash in the economy.

 

 

Robots: Nano and Auto

 

In 2025, more hospitals will experiment with AI-powered surgical robots. Whilst not replacing human surgeons, bots can analyse important medical data in real time during surgery, much faster than humans. What's more, studies show that robots can perform certain types of surgery more precisely than human surgeons.

Schools will begin to investigate using AI-powered robotic tutors, which can tailor the teaching experience to individual student needs. These will become indispensable teaching aids, especially for students who experience learning difficulties.

Meanwhile, many teachers will want to incorporate AI-driven virtual reality experiences, perhaps especially for subjects like history and languages. Immersion in a virtual 3D environment will help students understand subjects more quickly.

In the coming year, we will continue to see developments in the field of nanorobotics. Nanobots are microscopic machines, built from the atomic level up, which can be loaded with chemicals and injected into the bloodstream to identify and destroy harmful cells while leaving healthy cells intact. MIT has already developed nanobots smaller than human cells. These could revolutionise drug delivery within the human body.

We can also expect to see more so-called soft robots. Their design is inspired by nature. They are more flexible and compliant than standard robots. In hospitals and rehabilitation centres, they can adapt to a patient's movements, providing gentle support during therapy. Soft robotic exoskeleton suits can improve mobility in stroke patients by up to 20 per cent.

Collaborative robots (or cobots) will become our helpers in the workspace. The number of robots involved in industry is expected to grow by 12 per cent annually from here, with cobots leading the charge.

And then there are autonomous robots. The University of Oxford predicts that within five years the average British household will own at least three autonomous robots for mundane tasks. 

The development of autonomous robots will owe much to the production of AI agents. Agents are AI platforms that can automatically access and utilise many other tools without human prompting and use them to make autonomous decisions. Essentially they will allow us to set an overall goal and leave the machine to find the best ways to achieve it, from start to finish. This will no doubt rescue us from many time-consuming mundane tasks. 
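
In skeletal form, the agent pattern is a loop: pick a tool, act, check progress, repeat. The "planner" and the single tool below are invented stand-ins, not any vendor's product:

```python
def plan_next_step(goal, history):
    """A real agent would consult a language model here; we fake one rule:
    search once, then declare the goal met."""
    return None if history else ("web_search", goal)

TOOLS = {"web_search": lambda query: f"results for '{query}'"}

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):      # hard cap keeps the human in charge
        step = plan_next_step(goal, history)
        if step is None:            # agent judges the goal achieved
            break
        tool_name, tool_input = step
        history.append(TOOLS[tool_name](tool_input))
    return history

print(run_agent("find three venues for a 20-person workshop"))
```

The hard cap on steps is the important design point: autonomy is granted within limits a human has set in advance.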

AI agents will also become a major selling point for humanoid cobots, especially in the workspace. However, this level of automation will require that we pay special attention to human skills we do not wish to lose. We will also need to understand and mitigate the mental health impact of humanlike machines. 

One such impact is the famous “uncanny valley effect”. When we first interact with a machine that looks and acts like a human, our initial reaction is mostly positive, if only because of the novelty. However, as the machine’s capabilities become more and more human-like, we feel a growing unease. That uncanny feeling can lead to anxiety and other mental health issues. As we grow more accustomed to the machine, our levels of comfort might increase again, but it seems inevitable that, for at least some humans, the uncanny valley will become a plain, with long-term consequences.

Robot designers will need to study the human impact of their creations very carefully, as will governments. We need regional and international agreements on ethical standards for AI and robots, and regulations to back these up. 

The rapidly developing world of robotics also raises other important questions. How do we ensure that AI-powered robots make ethical decisions in complex situations? As robots become more integrated into our lives, how can we trust them to keep our data private? Will collaborative robots be protected against cyber attacks? How will we ensure that they remain subservient to human agency? In the age of robots, safeguarding will take on a whole new meaning.

Thankfully, major international organisations, such as the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, are already publishing design guidelines for AI robotics. They emphasise transparency, accountability, and the importance of human values in robotic systems. They will have their work cut out for them.

 

 

Human Implants: Human-Machine Merge?

 

The year ahead will be one of breakthroughs with human implants.

The next generation of cochlear implants, one of the most prominent types of implants, will use AI algorithms to process and transmit sound. This will produce something very close to natural hearing experiences.

Meanwhile, new retinal implants will restore functional vision to people with certain types of blindness. They'll incorporate nanotechnology to create less invasive devices with more precise vision.

We're about to witness breakthroughs in smart prosthetics, too. These will see a merging of man and machine, which allows users to feel texture, pressure and temperature.

European researchers are developing biodegradable materials for temporary prosthetics, to reduce the need for multiple surgeries, especially in growing children.

Another type of implant we will hear more about is the spinal cord stimulator, which is used to treat chronic pain. By 2027, we could see AI-enhanced spinal stimulators that automatically adjust the level of stimulation they provide based on the user's movements and pain levels. They could provide much more effective pain management.
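
The closed-loop idea can be pictured as simple proportional control: stimulation rises when a sensed pain proxy rises and falls when it falls. All numbers below are invented; real devices involve clinical calibration and safety interlocks:

```python
def adjust_stimulation(current_level, pain_score, target=2.0,
                       gain=0.5, max_level=10.0):
    """pain_score: 0 (none) to 10 (severe), inferred from movement and
    physiological sensors. Returns the new stimulation level."""
    error = pain_score - target
    new_level = current_level + gain * error       # raise output as pain rises
    return min(max(new_level, 0.0), max_level)     # clamp to a safe range

level = 3.0
for pain in [2.0, 5.5, 7.0, 3.0, 1.0]:  # simulated readings during movement
    level = adjust_stimulation(level, pain)
    print(f"pain={pain:.1f} -> stimulation={level:.2f}")
```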

Of course, the type of implant we hear most about is neural implants, brain-computer interfaces that read and interpret signals directly from the brain. In the next couple of years, companies like Neuralink will produce neural implants as small as a grain of sand, using nanotechnology. This will dramatically reduce the invasiveness of implantation procedures.

We can expect to hear about implants created using biodegradable materials, which can dissolve safely in the body after they've served their purpose. This would reduce the number of surgeries patients need to endure.

The UK is positioning itself for a leading role in implant technology, bridging the gap between research and practical applications.

As with any rapidly developing technology, there are many ethical questions to be considered here, not least data security. As we rely more on implants, they and we will become potential targets for cyber attacks, through hacking and unauthorized access.

There are also questions about consent: how do we ensure that people have given informed consent to receive an implant? How do we ensure that the availability or cost of implants does not create a new technology gap, between those who have access to the technology and those who don't?

 

 

Digitisation, De-Influencing, Definition

 

In 2025, we'll continue to see a growing digitisation of the human experience. The raison d'être of big tech companies is to encourage us to create mountains of data that they can collate, analyse and monetise in various ways. So we'll continue to see efforts to bring more and more of our human experience under the digital umbrella.

This will become much more of an issue in the year ahead, as we further blur the line between humanity and technology with our use of implants, nanobots and other physically invasive technologies. Not to mention all the other digital tools we use to navigate our daily lives.

Two hundred years ago, we invited technology into our workspaces during the Industrial Revolution. In the middle of last century, we invited technology into our homes, with the invention of all kinds of household appliances.

In this century, we have turned digital gadgets, such as smartphones and Fitbits, into extensions of our human physiology. And now we are on the threshold of inviting technologies into our bodies through implants and nanobots.

Important ethical questions will inevitably arise. Where do we now draw the line between humanity and technology - where does man end and machine begin? What does it fundamentally mean to be human in the age of machine learning and AI? Are humans simply biological versions of sophisticated computers, or are we something more, at the level of soul or spirit?

These are not insignificant questions, as they touch on the very heart of human angst, aspiration and experience. In the coming years, people will respond to questions like these in interesting ways.

For example, some will decide to become techno-refuseniks. They will say, "I want the benefits of technology but I want to set clear limits for its incursion into my life. I want to reduce my data footprint to protect my privacy and autonomy and take more control of who uses my data and how. I also want to limit the impact of digital tech on the environment."

Training AI models like the ones used for ChatGPT requires energy equivalent to the output of three nuclear power plants. If anyone ever manages to build an Artificial General Intelligence, capable of doing anything a human can, it will require many times more power. The same will be true of quantum computing, which requires extreme levels of cooling.

For some Gen-Zs and possibly Gen-Alphas this will involve a push toward de-influencing. Social media influencers have made good money from promoting products as part of their lifestyle narrative. In fact, in many cases, their lifestyle narratives are built to fit the products they promote, rather than vice versa. They're no longer organic representations of natural life choices.

Despite their videos of sophisticated living, today's influencers are, in many cases, simply salespeople by another name. Selling is a worthwhile profession, but we should call it that. We should not label online salespeople in a way that makes them sound like notable thinkers, seers or shapers of generations.

What is now called "influencing" started out as a niche type of short-form life-casting done by enthusiastic hobbyists. It's now become a full-time career, within a lucrative industry.

As part of their de-screening, some younger people will unfollow the influencers. They will demand authenticity and originality from those they engage with online. Other Gen-Z refuseniks, true to their reformist generational trait, will band together to form quite militant pro-privacy groups. The global data economy is now worth more than $3 trillion. Young people know that it is driven by self-surveillance, as we hand over swathes of personal information with every online purchase, search or social media scroll.

Some of the refuseniks might become neo-luddites, who treat even everyday technologies like mortal enemies. We can expect some to take physical action against big tech companies.

At the other extreme, of course, will be people who make it their profession to offer their minds and bodies for experiments using implants and prosthetics.

Most people, of course, will fall somewhere between those extremes. But we must engage with technology thoughtfully, researching the benefits and downsides of new tools before we use them. 

That may sound time-consuming but it will be enormously important in 2025 and beyond, especially when we consider the impact our technology choices will have on our children. What one generation tolerates, the next will often treat as completely normal - before pushing the envelope even further.

 

 

Meaning Deficit and Utopian Fantasies

 

Ask most people what the plague of our time is and they’ll answer the obvious: Covid-19 (and its variants). Our research suggests an even more potent pandemic is alive in the world - one that will have a longer-lasting impact than coronavirus. I call it the “plague of fluidity”.

Ours is an age where almost nothing can be taken as certain. There are, it seems, very few fixed moral or cultural points of reference. Even the most basic characteristics of life, including our definition of gender, are subject to debate, reinterpretation, and then more debate. Arguably, this fluidity now extends to our definition of the human being, as digital technologies merge aspects of human physiology with machines remotely connected or built-in. 

This is truly the “trans” generation. Not in the sense of transgenderism as a political or social philosophy, but where that prefix is given its original Latin meaning - “across” or “beyond”. So much of life now travels “across” traditional bounds, “beyond” where we have been, moving to who knows where. The destination is not fixed, everything is in constant flux. The groups that agitate most for radical change today, especially in societal mores or values, often find that they quickly become the new conservatives. They find themselves targeted by even more vociferous campaigners who have little respect for their efforts and don’t think they went far enough. 

Linked to this unflagging but exhausting fluidity is a meaning deficit. Viktor Frankl, the Austrian psychotherapist who survived the horrors of the Nazi death camps, found that people can endure a great deal of pain provided they find some purpose beyond the pain.

If mental health statistics are reliable, the twenty-first century sees many intelligent and well-educated people undergoing psychoemotional crises. Their angst can be linked to any number of medically identified causes. Arguably, though, behind many mental health problems lurks a fundamental lack of purpose. 

This is not to argue away the complexity of psychological maladies or to suggest that their solution is found in self-help mantras, far from it. Yet a core part of surviving any problem is a sense of what the battle is for - and what good it might eventually produce. (As someone once wrote, nobody takes their own life if they have a sufficiently strong reason to live.)

A sense of meaning, of course, is not merely a personal phenomenon. Our sense of purpose is partly shaped by the culture around us. The word culture has at its root “cult” which, in its anthropological sense, means a system of belief and practices usually built around a religion. All cultures are, to a degree, derived from worldviews, shared ideas of where people fit within the wider or cosmic scheme of things. Answers to those questions help provide meaning, both collectively and personally.

In the case of Western societies, the gradual erosion of a shared religious foundation has led to a quest for purpose by other means. For many, technology facilitates that search and, in some cases, is itself the focal point of meaning. Some of us approach tech in a quasi-religious way, with an almost spiritual awe. We marvel at the capacities of tech tools in the way our forebears marvelled at miracles. We treat gadgets with the respect we should perhaps reserve for our fellows.

For many people, the quest for meaning in technology leads to utopianism. They share a common worldview with the heads of big tech companies. Because of their monopolistic behaviour, big tech leaders often appear driven purely by the profit motive. However, the digital technology sector is in many ways fundamentally utopian in its outlook. It sees itself, somewhat pompously, as humanity’s one great hope for building a better society, the one thing standing between us and the gradual decline of our race. It views technology as the bulwark against dystopia.
 
This helps explain big tech's efforts to extend the reach of digitisation. To save us, big tech believes that it must first digitise as much of our behaviour as we'll allow. Doing this will presumably allow engineers and others to map human behaviour and then design, suggest or promote “better” ways of behaving and thinking.

This is one reason why we will see growing interest in the use of neural implants. Politicos and business leaders in particular will watch closely the potential for mind-mapping - tracking how individuals psychoemotionally respond to particular ideas or situations. Big data garnered from “reading” masses of individual minds might suggest policies or products that attract people’s interest. (The data might also help leaders “nudge” people into more politically expedient or profitable ways of thinking.) 

The utopian leanings of big tech can be seen in its interest in space travel, mass communication, transhumanism and global internet provision. Yes, these are profitable areas, but there is a level of high-mindedness here - even if altruism often gets buried beneath corporate hyper-competitiveness.

The problem, for big tech and for us, is that while utopianism offers some hope, it’s not a hope that accommodates the reality of flawed human nature. It won’t matter if we build a new civilisation on Mars if humans on another planet still carry, in their nature, the potential to treat each other and their environment badly. 

In 2025, we can expect reports about tech refuseniks turning to spirituality and faith, in groups that stress physical interaction. Other people who remain more tolerant of technology will use it to facilitate spiritual learning and experience. A boom in subscriptions to prayer apps and the like is already underway. According to PitchBook Data, US venture capital funding for religious, primarily Christian, apps exploded during Covid lockdowns. It went from $6.1 million in 2016 to $48.5 million in 2020 and $175.3 million in 2021. Meanwhile, a 2021 Indonesian survey on media use during Ramadan found that 42% of respondents planned to use religious apps during the holiday.

Religious or not, we will face an important question in 2025 and beyond. Should we look to machine intelligence for a utopian future, or should we grit our teeth and resign ourselves to a dystopian tomorrow?

The answer, of course, is neither. The future will not be a product of technological determinism, where machines create our future for us - whether for good or bad. For all the hype and excitement about technology, our choices will still play the most significant role in shaping our collective future and that of our environment. 

Even on an individual level, good choices made today make a difference. Wherever it’s in our power to do so, we need to continually make choices about how much we rely on AI, for example. 

Individually and collectively, it will be important that we set limits for how much of our decision-making and experience we give over to machines, intelligent or not. We will constantly need to draw lines between human autonomy and machine automation.

After all, the key to handling artificial intelligence lies in its name. Artificial shares a root with artifice - a contrivance, something crafted to appear what it is not. Machines might appear almost human in their capacities, but this is just a mask.

Personally and collectively we need to vote with our ballots and buy with our wallets, pressuring governments and tech groups to act responsibly in setting limits for tech development and use - without killing innovation.

 

 

Quantum Computing

 

Since IBM unveiled its first commercial quantum computer in 2019, the Q System One, various sectors such as healthcare and security have watched to see what this once purely theoretical technology will mean for them.

Quantum computers can perform functions in seconds that would take today's machines thousands of years.

They could help us find a cure for cancer in a fraction of the time we'd need otherwise. They could also provide highly detailed and accurate weather forecasts many months in advance, and help us build smarter robots and safer, more responsive driverless cars.

Today's mainstream computers work on binary code. The easiest way to visualise how they work is to imagine a large array of on/off switches. Each switch - each "bit" of information - can have a value of either "one" or "zero", or "on" or "off", but it can't be both simultaneously.

In a quantum computer, every bit of information - every "qubit" - can be "one", "zero", or anything in between, at the same time. This in-between state is called superposition. It means that quantum machines can make millions of calculations at once, instead of doing them sequentially, one at a time.
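
The difference can be illustrated with a toy simulation in Python. This is classical code mimicking the statistics of one qubit, not a real quantum machine:

```python
import numpy as np

# A classical bit is 0 or 1. A qubit's state is two amplitudes
# (alpha, beta) with |alpha|^2 + |beta|^2 = 1; measuring it gives 0 with
# probability |alpha|^2 and 1 with probability |beta|^2.
alpha = beta = 1 / np.sqrt(2)   # an equal superposition of 0 and 1
assert np.isclose(abs(alpha)**2 + abs(beta)**2, 1.0)

rng = np.random.default_rng(42)
samples = rng.choice([0, 1], size=10_000, p=[abs(alpha)**2, abs(beta)**2])
print(samples.mean())  # ~0.5: half the measurements come out as 1
```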

This will revolutionise things like drug development. Designing a drug compound to solve a particular problem means sifting through millions of combinations of drug molecules to find one that works. It's like searching for a specific grain of sand on a beach.

Today's computers do that by checking each grain individually, one after the other. But a quantum system would look at millions of grains all at once. Imagine the impact that would have on our capacity to face new forms of disease, not to mention pandemics.
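
That "all the grains at once" picture is usually formalised as Grover's algorithm, which finds an item among N unsorted possibilities in roughly the square root of N steps, where a classical search needs on the order of N. A quick comparison:

```python
import math

# Classical unstructured search checks ~N/2 items on average;
# Grover's algorithm needs about (pi/4) * sqrt(N) iterations.
for n in [1_000_000, 1_000_000_000_000]:
    classical = n // 2
    grover = math.ceil(math.pi / 4 * math.sqrt(n))
    print(f"N={n:>16,}: classical ~{classical:,} checks vs quantum ~{grover:,}")
```
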
Quantum computing could also generate incredibly complex encryption systems to make public services and governments more secure.

Quantum-powered AI could make cyber-encryption much more secure and provide predictive models for everything from climate change to migration patterns. It might hasten the development of something close to artificial general intelligence, where a machine could think as a human does and perform any task at least as well as a human being. For all the hype, there's still no sign of anything like that today.

 

 

Space Exploration

 

As we launch into 2025, Jeff Bezos and his Blue Origin company are waiting for regulatory approval for a super-heavy rocket. They hope it will eventually allow humankind to set up industries in space.

Bezos said recently that he dreams of taking polluting industries off-planet so that Earth becomes something like a "residential and light industry" zone. In his vision, all heavy industry would relocate to space.

There are obvious questions about whether this is realistic or desirable. What will we do with the pollutants we produce in space? How will we prevent the accumulation of space junk? Will enough humans want to work in space, even if that only means overseeing automated robots? Nevertheless, Bezos will continue to pursue his vision for this and space tourism in 2025.

Meanwhile, Elon Musk will continue to push toward his ultimate goal: to use SpaceX to colonize Mars and other planets. He aims to do so partly to provide a human escape hatch should some disaster strike the earth.

In 2025, some of SpaceX's goals will remain highly speculative, but there is serious intent and money behind its ventures and there will be significant steps forward. Musk's trial-and-error approach and his acceptance of expensive failure as part of eventual success have changed how rockets are built and tested.

Meanwhile, national space agencies like NASA, the European Space Agency, the UK Space Agency and others in China and India will work individually and together to explore areas like lunar settlements and asteroid mining.

In the year ahead, we'll almost certainly hear more about the possible weaponisation of space. We already use orbiting satellites for espionage, and imagining robotic weapons like swarming drones, stored in and deployed from space, is no longer the stuff of science fiction.

China is aggressively pursuing its goal of placing astronauts on the moon by 2030 but has also focussed on space as a potential arena for war. It has developed anti-satellite (ASAT) weapons and has doubled its stock of active satellites since 2018. In 2025 we can expect the US to counter China's push for supremacy in space weaponry.

We'll also hear more about the possible environmental impact of rocket technologies - on air quality, carbon levels and wildlife.
There will be exciting improvements in rocket design, enabling quick-turnaround relaunches of used rockets, and new opportunities for space tourism. Innovative earth-based technologies will also emerge, as spin-offs from the second-generation space race.

It's worth reminding ourselves how many tech tools we now take for granted were developed because of human competitive instincts regarding space. These include GPS (Global Positioning System), which was originally built for military navigation; satellite communications; life-saving water purification systems; and artificial limbs, which have been greatly improved by NASA's expertise with robotics and shock-absorption materials.

On a more day-to-day level, space engineering has given us scratch-resistant lenses, freeze-dried foods, cordless tools, memory foam and infrared ear thermometers. And the list goes on.

These products have made our lives easier, or at least more efficient on a practical level. Space exploration has also, at least in its non-military applications, lifted the human soul. It shows that we can achieve great things, if we have the will and the skill to do so. In 2025, it will provide a stark reminder of how much more we might still achieve, if we work together for the common good.
We should watch space technology very closely in the coming year and beyond. So much else will emerge from it.

 

--//--

 


MEDIA NOTE:

2030PLUS is a futures forum dedicated to exploring how emerging trends might shape the future of society, technology, and governance. 2030Plus aims to prepare and inform for a healthy and ethical future through research, public discourse, and strategic foresight. 

MAL FLETCHER is a futurist, social commentator and author whose expert commentary features on British and international media, including the BBC (TV and radio), Sky, GBNews, LBC, TalkTV, CNBC, ABC, SABC and more. He is the founder and chairman of the 2030Plus global futures forum.

FOR INTERVIEWS (phone, VOIP, in-studio) please contact:

E: media@2030plus.com

W: https://2030plus.com 

Media Outline available here (Dropbox PDF)