Artificial Intelligence has permanently altered the cybersecurity landscape. It was once enough to install a firewall, configure antivirus software, and train staff to avoid suspicious emails. Today, those measures are necessary but nowhere near sufficient. The adversaries organisations face are increasingly equipped with the same powerful AI tools that defenders use — and in many cases, they are using them more aggressively.
The dual-use nature of AI in cybersecurity creates a technological arms race with no clear finish line. Understanding where AI is being deployed — by both sides — is no longer optional for security professionals. It is foundational knowledge for anyone serious about protecting systems, data, and people in the modern threat landscape.
How Attackers Are Using AI: The Offensive Playbook
The democratisation of AI has been a gift to cybercriminals. Tools that once required nation-state resources are now available to independent threat actors operating from anywhere in the world. The offensive use of AI in cybersecurity takes several distinct forms.
AI-Powered Phishing and Social Engineering
Traditional phishing attacks were often easy to spot — grammatical errors, generic salutations, implausible scenarios. Large language models have eliminated most of those telltale signs. Attackers now use AI to generate highly personalised spear-phishing emails by harvesting information from LinkedIn profiles, corporate websites, and social media. The result is a message that references your recent project, your manager's name, and your organisation's specific processes. Some studies report click-through rates up to 60% higher for AI-crafted phishing emails than for manually written ones.
Beyond email, AI enables voice cloning attacks — a potent form of "vishing", or voice phishing — where attackers synthesise a convincing audio impersonation of an executive to authorise fraudulent wire transfers or credential resets. Several high-profile cases in 2024 resulted in losses exceeding AU$25 million each.
- AI-generated deepfake audio and video used in CEO fraud and identity verification bypass
- Automated vulnerability discovery tools that scan and exploit systems at machine speed
- Polymorphic malware that rewrites its own code using generative AI to evade signature detection
- LLM-assisted social engineering scripts adapted in real time during phone calls
- Automated credential-stuffing attacks enhanced by predictive models trained on leaked password datasets
Automated Vulnerability Exploitation
AI dramatically reduces the time between a vulnerability's public disclosure and its exploitation in the wild. Historically, defenders had a window of weeks to patch critical vulnerabilities. In 2025, that window is often measured in hours. AI-driven reconnaissance tools can fingerprint an organisation's technology stack, identify exposed services, correlate them with known CVEs, and automatically generate working exploits — all without human intervention.
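The same correlation step that attackers automate — matching a fingerprinted technology stack against known CVEs — is one defenders can run against their own inventory first. The sketch below is illustrative only: the inventory, CVE feed, and field names are hypothetical stand-ins, and real version matching is considerably more nuanced (vendor ranges, backported patches, CPE matching).

```python
# Minimal sketch: correlating a service inventory with a known-CVE feed.
# All data and field names below are illustrative, not a real feed format.

inventory = [
    {"host": "web-01", "product": "nginx", "version": "1.18.0"},
    {"host": "app-02", "product": "log4j", "version": "2.14.1"},
]

cve_feed = [
    {"cve": "CVE-2021-44228", "product": "log4j",
     "affected_below": "2.15.0", "severity": 10.0},
    {"cve": "CVE-2021-23017", "product": "nginx",
     "affected_below": "1.20.1", "severity": 8.1},
]

def version_tuple(v):
    """Naive dotted-version parse; real matching needs vendor-aware logic."""
    return tuple(int(part) for part in v.split("."))

def correlate(inventory, feed):
    findings = []
    for item in inventory:
        for vuln in feed:
            if (item["product"] == vuln["product"]
                    and version_tuple(item["version"])
                        < version_tuple(vuln["affected_below"])):
                findings.append((item["host"], vuln["cve"], vuln["severity"]))
    # Highest-severity exposures first — the same ordering an attacker's
    # automated tooling would use to pick targets
    return sorted(findings, key=lambda f: -f[2])
```

Running this continuously against an up-to-date feed shrinks the gap between disclosure and patching — the exact window AI-driven attack tooling now exploits.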
AI on the Defensive Side: How Organisations Are Fighting Back
The news is not uniformly grim. AI is also transforming defensive capabilities in ways that would have been unimaginable five years ago. The advantage of AI for defenders lies in its ability to process enormous volumes of data, identify subtle patterns, and respond to threats at a speed no human analyst can match.
"The question is no longer whether AI will be part of your security stack — it's whether you'll be using it before your adversaries use it against you."
— SISTMR Cybersecurity Research Team, 2025

Behavioural Analytics and Anomaly Detection
Machine learning models trained on normal network behaviour can identify deviations that indicate compromise — even when an attacker uses legitimate credentials and tools. This approach, often called User and Entity Behaviour Analytics (UEBA), can detect insider threats, compromised accounts, and lateral movement that traditional rule-based systems miss entirely.
For example, if a user who normally accesses files from Sydney at 9 am suddenly authenticates from Eastern Europe at 3 am and begins bulk-downloading sensitive documents, a well-tuned UEBA system will flag this in seconds and can automatically quarantine the session pending human review.
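The core of that detection logic can be sketched in a few lines. This toy example scores a login event against a user's learned baseline of hours and locations; the features, thresholds, and data are illustrative assumptions — production UEBA systems model hundreds of behavioural signals.

```python
from statistics import mean, stdev

# Toy UEBA sketch: flag logins that deviate from a user's learned baseline.
# Features and the z-score threshold are illustrative, not production values.

baseline_logins = [
    {"hour": 9, "country": "AU"}, {"hour": 10, "country": "AU"},
    {"hour": 9, "country": "AU"}, {"hour": 8, "country": "AU"},
    {"hour": 9, "country": "AU"},
]

def is_anomalous(event, history, z_threshold=3.0):
    hours = [e["hour"] for e in history]
    mu, sigma = mean(hours), stdev(hours)
    known_countries = {e["country"] for e in history}
    # How many standard deviations this login's hour sits from the norm
    z = abs(event["hour"] - mu) / sigma if sigma else float("inf")
    # Unseen geography OR an extreme time-of-day deviation trips the alert
    return event["country"] not in known_countries or z > z_threshold
```

A 3 am authentication from an unfamiliar country trips both conditions at once; a routine 9 am Sydney login passes silently, which is what keeps false-positive volumes manageable.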
AI-Driven Threat Intelligence
Security platforms now use natural language processing to continuously scan the dark web, threat actor forums, paste sites, and vulnerability databases. By correlating this intelligence with an organisation's specific technology footprint, these systems can proactively alert teams to threats before they arrive at the perimeter. This shifts security posture from reactive to genuinely predictive.
- Real-time anomaly detection across network, endpoint, cloud, and identity data streams simultaneously
- Automated threat hunting that runs continuously without analyst fatigue
- AI-assisted incident response playbooks reported to reduce mean time to respond (MTTR) by over 55%
- Predictive patching systems that prioritise vulnerabilities most likely to be exploited in your environment
- Natural language interfaces that allow non-technical staff to query security data and understand risk
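Predictive patching, in particular, comes down to a ranking problem: of the vulnerabilities relevant to your footprint, which are most likely to be exploited first? A minimal sketch, assuming a hypothetical intelligence feed with a "chatter" signal (how actively a CVE is discussed in attacker forums) — the field names, weights, and data are all illustrative:

```python
# Hypothetical sketch of predictive patch prioritisation: rank CVEs by a
# blend of base severity and exploit chatter, filtered to our own stack.
# Weights, fields, and feed contents are illustrative assumptions.

our_stack = {"nginx", "openssl", "log4j"}

intel = [
    {"cve": "CVE-A", "product": "log4j",    "cvss": 10.0, "chatter": 0.9},
    {"cve": "CVE-B", "product": "exchange", "cvss": 9.8,  "chatter": 0.8},
    {"cve": "CVE-C", "product": "openssl",  "cvss": 7.4,  "chatter": 0.2},
]

def prioritise(intel, stack, w_cvss=0.5, w_chatter=0.5):
    # Ignore vulnerabilities in products we don't run at all
    relevant = [v for v in intel if v["product"] in stack]
    # Blend normalised CVSS with observed attacker interest
    score = lambda v: w_cvss * (v["cvss"] / 10) + w_chatter * v["chatter"]
    return sorted(relevant, key=score, reverse=True)
```

Note that the highest-CVSS vulnerability overall (CVE-B) never appears in the output, because it affects software the organisation doesn't run — that footprint filter is what makes the alerting proactive rather than noisy.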
Adversarial AI: When the Defence Becomes the Attack Surface
One of the most concerning developments in AI security is the emergence of adversarial attacks against AI systems themselves. As organisations deploy machine learning models to detect fraud, malware, and intrusions, adversaries are learning to craft inputs specifically designed to fool those models.
Adversarial examples — carefully crafted inputs that cause a model to misclassify a malicious file as benign — have moved from academic research papers into real-world attacks. A malware sample can be subtly modified to evade an AI-based antivirus engine while remaining fully functional. Security teams must now validate not just whether their AI works, but whether it can be deliberately deceived.
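The mechanics can be illustrated with a deliberately simplified model. Suppose a detector scores files with a linear combination of feature counts: an attacker who can probe the score simply pads the file with features the model weights as benign — without touching the malicious payload — until the score drops below the threshold. The weights, features, and threshold below are entirely illustrative, not drawn from any real engine.

```python
# Toy illustration of adversarial evasion against a linear "malware score".
# All weights, features, and the threshold are illustrative assumptions.

weights = {
    "obfuscated_strings": 0.8,
    "crypto_api_calls": 0.5,
    "benign_library_imports": -0.3,  # the model treats these as benign signal
}

def malware_score(features):
    return sum(weights.get(f, 0.0) * count for f, count in features.items())

sample = {"obfuscated_strings": 3, "crypto_api_calls": 2}  # detected: 3.4
threshold = 1.0

# The attacker appends harmless imports — functionality unchanged — until
# the detector's score slips under the decision threshold
evaded = dict(sample)
while malware_score(evaded) >= threshold:
    evaded["benign_library_imports"] = evaded.get("benign_library_imports", 0) + 1
```

Real models are non-linear and far harder to probe, but the principle is identical — which is why adversarial robustness testing now belongs in the validation plan for any security-critical model.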
Regulatory Landscape: AI Security Obligations in Australia
Australia's approach to AI in cybersecurity is evolving rapidly. The Australian Cyber Security Centre (ACSC) has issued updated guidance on the use of AI tools in critical infrastructure environments, and the Privacy Act amendments have expanded obligations around automated decision-making that processes personal data.
- The Notifiable Data Breaches (NDB) scheme requires entities to assess suspected breaches — including those involving AI-processed personal data — within 30 days, and to notify affected individuals and the OAIC as soon as practicable
- The Security of Critical Infrastructure Act 2018 (SOCI Act, amended 2022) imposes positive security obligations across 11 defined sectors, extending to AI systems used within them
- ACSC's Essential Eight framework now includes specific guidance for AI-assisted system hardening
- ISO/IEC 42001 — the AI Management System standard — is increasingly required in government procurement
What to Expect: The Next 24 Months
The pace of development in AI-enabled cybersecurity shows no sign of slowing. Several trends will shape the threat landscape through 2026 and beyond.
Agentic AI attacks — autonomous AI agents that conduct multi-step attack campaigns with minimal human direction — are transitioning from theoretical concern to operational reality. Early examples of agentic attack frameworks have been identified in the wild, capable of conducting reconnaissance, finding entry points, establishing persistence, and exfiltrating data as a coherent autonomous operation.
AI-generated synthetic identities will make identity verification significantly harder. Deepfake technology has reached a point where real-time video calls can be convincingly spoofed, undermining KYC (Know Your Customer) processes in financial services and defeating many multi-factor authentication mechanisms based on biometrics.
Quantum-AI convergence remains a longer-term concern, but organisations handling data with a 10-year confidentiality requirement must already consider post-quantum encryption, as adversaries are harvesting encrypted data today intending to decrypt it once quantum systems become capable enough.
Practical Steps: Building an AI-Ready Security Programme
Understanding the AI threat landscape is valuable. Translating that understanding into concrete security improvements is essential. Whether you lead a large enterprise or a small organisation, the following steps represent the highest-value investments in AI-era cybersecurity.
- Audit your AI exposure: Identify every AI tool in use across your organisation — including shadow IT — and assess the data each processes
- Invest in UEBA: Deploy user and entity behaviour analytics to catch compromised accounts and insider threats that signature-based tools miss
- Run AI-specific tabletop exercises: Simulate deepfake CEO fraud, AI-accelerated phishing campaigns, and adversarial model attacks
- Establish an AI governance framework: Define who owns AI risk, how models are tested, and how AI-generated outputs are validated before action is taken
- Upskill your team: Security professionals who understand machine learning fundamentals are significantly more effective at evaluating and deploying AI defensive tools
- Get certified: SISTMR offers free cybersecurity certifications covering AI security fundamentals — a strong foundation for any professional
Conclusion
AI has not made cybersecurity harder — it has made it different. The organisations that thrive in this environment will be those that embrace AI as a core component of their security programme, invest in the skills to use it well, and maintain the human judgment needed to oversee it wisely. Those who treat AI-driven threats as a future concern rather than a present reality are already behind.
The adversaries are not waiting. Neither should you.