The Darker Side of AI: Defence Enhancer or Attack Enabler?

Dec 10, 2025

Author: Jade Reilly

An Intro to the Uses of AI

AI is becoming deeply integrated into the everyday lives of ordinary people, and its presence grows more visible by the day. It’s everywhere: from AI-generated images on social media, to TV adverts, to conversations with AI assistants embedded in shopping apps.

In that context, its usefulness and efficiency feel almost undeniable. But what happens when we examine AI through a darker, more unsettling lens?

In cybersecurity, AI sits in a precarious middle ground. It can be engineered as a defence enhancer, helping organisations build stronger and more adaptive protection against attacks. Yet the same technology can also be weaponised - enabling criminals to conduct cyber-attacks with unprecedented precision.

AI can analyse, predict, detect, mimic, impersonate, guess passwords, generate synthetic voices, create deepfakes, and in many cases, leave no trace at all. It’s no exaggeration to call AI the ultimate double-edged sword of the digital age.


AI in Cyber-Defence

AI-driven cyber-defence often begins with understanding the weaknesses attackers exploit most during cyberattacks. Its modern applications include password protection and authentication, phishing detection, behavioural analytics, and anomaly recognition.

In today’s landscape, AI is providing a powerful layer of defence across cybersecurity operations.
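To make one of those capabilities concrete, here is a deliberately simplified sketch of anomaly recognition: flagging activity that deviates sharply from a user's historical baseline. Real behavioural-analytics platforms use far richer models; the z-score test, threshold, and login counts below are invented purely for illustration.

```python
# Toy illustration of anomaly recognition: flag observations that
# deviate sharply from a historical baseline using a z-score test.
# Production behavioural analytics uses far richer models; the
# threshold and data here are hypothetical.

from statistics import mean, stdev

def flag_anomalies(history, recent, threshold=3.0):
    """Return recent values more than `threshold` standard
    deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in recent if abs(x - mu) > threshold * sigma]

# Hypothetical daily login counts for one account.
baseline = [4, 5, 3, 6, 5, 4, 5, 6, 4, 5]
today = [5, 42, 4]  # 42 logins in one day stands out

print(flag_anomalies(baseline, today))  # -> [42]
```

The point is not the statistics but the shape of the approach: learn what "normal" looks like for each user or system, then surface deviations for investigation.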

Organisations like Darktrace [The State of AI Cybersecurity, 2025] view AI’s role in defence positively, reporting that:

“95% agree that AI-powered cybersecurity solutions significantly improve the speed and efficiency of prevention, detection, response and recovery.”

Beyond the sentiment, the technical capabilities are compelling. AI can “detect and prevent existing types of attacks by analysing patterns and using them as the basis for training artificial neural networks such as deep learning. AI has also been proven capable of detecting known forms of attacks such as SQL injections or cross-site scripting" [Michele Daryanani, KPMG].
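In the spirit of that quote, here is a minimal sketch of pattern-based attack detection. Real systems train neural networks on large labelled corpora; this toy version just matches a handful of well-known SQL-injection and cross-site-scripting signatures, all of which are simplified examples rather than a production rule set.

```python
# Minimal sketch of signature-based attack detection. Production
# detectors learn patterns from labelled data; this toy version
# matches a few well-known SQLi and XSS signatures by regex.

import re

SIGNATURES = {
    "sql_injection": [
        r"(?i)\bunion\b.+\bselect\b",     # UNION-based injection
        r"(?i)'\s*or\s+'?1'?\s*=\s*'?1",  # classic ' OR '1'='1 tautology
    ],
    "xss": [
        r"(?i)<script\b",                 # inline script tag
        r"(?i)\bon\w+\s*=",               # event-handler attribute
    ],
}

def classify(payload: str):
    """Return the names of attack categories whose patterns match."""
    return [name for name, patterns in SIGNATURES.items()
            if any(re.search(p, payload) for p in patterns)]

print(classify("id=1' OR '1'='1"))            # -> ['sql_injection']
print(classify("<script>alert(1)</script>"))  # -> ['xss']
print(classify("plain harmless query"))       # -> []
```

A fixed signature list is exactly what machine-learned detectors improve upon: trained models can generalise to obfuscated or previously unseen variants that hand-written rules miss.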

AI as a defensive tool is fast, adaptable, and astoundingly efficient - which makes its offensive use even more concerning.


AI in Cyber-Crime (Deepfaking)

Deepfaking is one of the most disturbing phenomena to emerge from the golden age of AI. A deepfake is a hyper-realistic, fabricated or manipulated image, video, or audio file created using artificial intelligence.

And the data speaks for itself:

“A major threat that deepfake poses is non-consensual pornography, which accounts for up to 96% of deepfakes on the internet. Most of this targets celebrities. Deepfake technology is also used to create hoax instances of revenge porn” [Fortinet, What is a Deep Fake?].

Because deepfakes can be nearly seamless, criminals have quickly exploited them to deceive, impersonate, and manipulate.

A striking example came in 2024, when an employee at a Hong Kong branch of a UK engineering firm was tricked into transferring £20m after cybercriminals staged a video call using AI-generated likenesses of senior company officers [Dan Milmo (2024), The Guardian].

And deepfakes aren’t the only weapon emerging. In 2025, the makers of an AI chatbot claimed they had detected state-sponsored Chinese hackers using their tool to automate attacks on around 30 global organisations [Joe Tidy (2025), BBC News].

These cases reveal the expanding ambitions behind AI-powered cyber-crime - from financial theft to espionage. As offensive capabilities grow more sophisticated, AI amplifies both opportunity and threat.


Conclusions?

As the line between defensive innovation and cyber vulnerability continues to blur, the need for visionary talent becomes more critical than ever.

At Techfellow, our work in recruiting the top 1% of tech talent places us directly in this evolving landscape - connecting exceptional engineers and cybersecurity specialists with organisations committed to building secure and ethical AI-driven systems.

The case studies above show that AI already enables advanced cyber-attacks. Yet it is also proving to be one of the most effective tools for strengthening cyber-defence and protecting critical systems. Both realities are true - and both are accelerating.

But don’t just take our word for it. Tune into our podcast, The Defender's Journal, to hear leading voices in security share their perspectives, experiences, and concerns around AI’s dual role.


So why does this matter? Why should you care?

AI affects every one of us - directly or indirectly, by choice or by circumstance.

The same technologies capable of strengthening national security can also be repurposed to undermine it. This dual-use nature of AI means everyone is connected to how these systems are developed, governed, and deployed.

From safeguarding personal data, to ensuring ethical accountability, to preventing misuse by malicious actors - understanding AI’s role in both defence and attack is no longer an issue reserved for experts. It’s now a shared responsibility shaping the safety, stability, and trust of the digital world we all depend on.
