ISJ hears exclusively from Greg Rankin about how AI can reduce human-driven incidents in cyber-defence systems.
The security threat landscape has shifted. Cyber-criminals are now using AI to automate reconnaissance, personalise phishing content and adapt social engineering in real time.
These tools enable attackers to scale psychological manipulation with unprecedented speed and precision.
An employee now faces adversaries who can deploy highly believable and tailored messages, often in seconds and at virtually no cost.
In many cases, criminals are adopting and weaponising AI faster, and with far higher returns, than organisations can justify investing in the AI tools needed to defend against it.
While AI is being utilised in cyber-defence, it is primarily in technical domains such as threat detection, anomaly identification and vulnerability analysis.
What has not happened at scale is the use of AI to support the human layer, where employees make split-second decisions that often determine whether an attack succeeds or fails.
The result is an imbalance: attackers are escalating with AI, while defenders are still relying on manual processes, slow interventions and static training that employees often do not retain at the point of attack.
In practical terms, companies are asking their employees to bring a knife to a gunfight.
That is beginning to change.
A new class of cybersecurity-focused AI tools is emerging that places round-the-clock security expertise directly at the user’s fingertips, guiding decisions in the moment and reducing the likelihood of costly errors that no amount of perimeter technology can fully prevent.
These systems go beyond scripted responses and instead rely on behavioural science to adapt and personalise guidance, delivering the right advice to the right user at the right moment.
This increases trust, relevance and the likelihood that users will take the secure action when it counts.
Despite sharp investment in preventive controls, zero trust architectures and continuous monitoring, most cyber-incidents still originate from a single point of failure: humans.
Widely reported industry data indicates that more than 80% of all security incidents stem from human decisions made under pressure, distraction, or uncertainty.
Security teams have spent decades attempting to train users in safer online behaviour. Yet even well-trained employees still fall for convincing social engineering.
The issue is not effort; it is timing. Awareness training is episodic. Attacks are continuous. Training delivers information long before it is needed.
Attacks exploit emotion in the moment.
Under real pressure, no employee pauses to search through training modules or recall a policy slide from onboarding.
The brain defaults to speed and familiarity, and this is precisely the reflex attackers design for.
The challenge is intensified by silence. Many employees hesitate to contact security or IT for help, fearing judgment, delay or embarrassment.
Instead, they decide alone at exactly the moment they need support.
Static education can raise knowledge, but only trusted, real-time guidance can influence behaviour in the moment of risk.
Traditional chatbots are a non-starter for security because they rely on generic large language models that produce answers quickly and confidently, but without awareness of context, risk or the psychology of the person asking the question.
Safe AI must operate differently.
Its core architecture rests on two principles: accurate information first, personalised influence second.
Complicating matters is a growing mistrust of AI itself. Employees have seen chatbots hallucinate, offer incorrect guidance or respond with absolute confidence when caution was required.
The result is a dangerous gap: humans are being targeted at the moment they are most vulnerable, and AI is not yet trusted to assist them at the decision point.
Dr James Norrie, DPM, LL.M, Founder and CEO of cyberconIQ, explains: “When a decision carries real risk, information alone is not enough.
“Influence is what changes behaviour and you can’t influence without trust.”
Safe AI begins by grounding every answer in a curated, organisation-approved knowledge base: policies, standards, playbooks and threat intelligence behind the firewall.
Retrieval-augmented generation (RAG) ensures the system cites only authoritative information, avoiding guesswork and reducing or eliminating hallucination.
The information layer is hardened with provenance, calibration and clear disclosure when uncertainty exists.
This step is essential because without accuracy, there can be no trust and without trust, there can be no influence.
That is why generic AI responses often fall flat with the humans they are meant to guide.
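In code, the ‘accuracy first’ principle can be sketched in a few lines of Python. The example below is purely illustrative: the two policy documents, the keyword-overlap retriever and the threshold are assumptions standing in for a production RAG pipeline, not cyberconIQ’s implementation. What it shows is the pattern itself: answer only from approved sources, cite the source and disclose uncertainty rather than guess.

```python
from dataclasses import dataclass

@dataclass
class PolicyDoc:
    doc_id: str   # provenance: which approved document the answer came from
    title: str
    text: str

# Hypothetical organisation-approved knowledge base kept behind the firewall.
KNOWLEDGE_BASE = [
    PolicyDoc("SEC-012", "Payment approvals",
              "Wire transfer requests must be verified by phone with the requester "
              "using a number from the company directory before approval."),
    PolicyDoc("SEC-031", "Suspicious email handling",
              "Do not click links in unexpected emails. Forward the message to "
              "the security team and wait for confirmation before acting."),
]

def retrieve(question: str, min_overlap: int = 2):
    """Toy keyword-overlap retriever standing in for production vector search."""
    terms = set(question.lower().split())
    scored = [(len(terms & set(doc.text.lower().split())), doc) for doc in KNOWLEDGE_BASE]
    best_score, best_doc = max(scored, key=lambda pair: pair[0])
    return best_doc if best_score >= min_overlap else None

def grounded_answer(question: str) -> str:
    """Answer only from approved sources; disclose uncertainty instead of guessing."""
    doc = retrieve(question)
    if doc is None:
        # Calibration: no authoritative source found, so say so rather than improvise.
        return ("I don't have an approved source for that. "
                "Please contact the security team before acting.")
    # Provenance: every answer cites the policy it was drawn from.
    return f"{doc.text} (Source: {doc.doc_id}, '{doc.title}')"

print(grounded_answer("Should I approve this urgent wire transfer request?"))
```

In a real deployment the toy retriever would be replaced by search over the organisation’s own policies and threat intelligence, but the contract stays the same: no approved source, no answer.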
Once answer accuracy is established, Safe AI must adapt its communication to the individual.
Companies like cyberconIQ make personalisation programmable by mapping how each user naturally navigates risk, rules and reward: the underlying drivers of human decision-making.
This is not tone-shifting; it is behavioural alignment.
Norrie notes: “People do not respond to security guidance the same way. Some want rules, some want reasons and some prefer a challenge question.
“AI must adapt and create room for truly personalised responses if it expects to be believed.”
The goal is simple: the facts remain constant, but the framing changes to fit the listener.
By communicating in a way that aligns with the user’s natural reasoning style, Safe AI measurably strengthens users’ trust in its guidance.
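A simplified sketch of that idea follows. The three styles echo Norrie’s ‘rules, reasons or a challenge question’ distinction, but the profile names and template wording are illustrative assumptions rather than cyberconIQ’s behavioural model: the fact never changes, only its framing does.

```python
# Hypothetical reasoning-style profiles; the styles and framings below are
# assumptions for illustration, not a published behavioural taxonomy.
STYLE_FRAMINGS = {
    "rules": "Policy SEC-031 requires it: {fact}",
    "reasons": "Because spoofed senders are the most common entry point, {fact_lower}",
    "challenge": "Before you act, can you verify the sender another way? {fact}",
}

def frame_guidance(fact: str, user_style: str) -> str:
    """Keep the fact constant; adapt only the framing to the user's style."""
    template = STYLE_FRAMINGS.get(user_style, "{fact}")
    return template.format(fact=fact, fact_lower=fact[0].lower() + fact[1:])

fact = "Report this message to the security team before replying."
for style in STYLE_FRAMINGS:
    print(f"{style:>9}: {frame_guidance(fact, style)}")
```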
Safe AI must also adjust dynamically based on the severity of the situation.
When the stakes are high, it slows down, providing additional sources, counterarguments or explicit confirmation to prevent snap decisions.
When an action is reversible, it reduces friction and accelerates.
This mirrors competent human judgment: cautious when consequences escalate, efficient when they do not.
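That calibration can also be expressed as a small sketch. The severity labels, confirmation flag and example actions below are hypothetical; the point is simply that the same assistant adds friction when consequences escalate and removes it when a mistake is cheap to undo.

```python
from dataclasses import dataclass

@dataclass
class Guidance:
    advice: str
    extra_sources: list
    requires_confirmation: bool

def calibrated_guidance(action: str, severity: str, reversible: bool) -> Guidance:
    """Slow down when stakes are high; reduce friction when mistakes are cheap."""
    if severity == "high" or not reversible:
        # High stakes: add sources and force an explicit confirmation step
        # to interrupt the snap decision the attacker is counting on.
        return Guidance(
            advice=f"Pause before you {action}. Verify the request out-of-band first.",
            extra_sources=["SEC-012 Payment approvals", "Known invoice-fraud patterns"],
            requires_confirmation=True,
        )
    # Reversible, low-stakes action: keep the path fast and frictionless.
    return Guidance(advice=f"Go ahead and {action}.",
                    extra_sources=[], requires_confirmation=False)

print(calibrated_guidance("approve this wire transfer", "high", reversible=False))
print(calibrated_guidance("archive this newsletter", "low", reversible=True))
```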
By combining accuracy, personalisation and calibrated pacing, Safe AI becomes a credible, judgment-free guide.
It is an always-available security coach who meets employees at the exact moment their decisions matter most.
While creating an AI influence engine can improve human decision-making in almost any situation, an urgent application is in cybersecurity.
The defensive perimeter is no longer limited to networks, devices or applications.
It now includes the precise moment a human decides whether to trust a message, click a link, reply to an email or approve a request that could compromise the organisation.
That moment of decision is where influence must occur.
Avoiding AI will not stop adversaries from using it. It only ensures that employees remain outmatched unless they are equipped with AI tools capable of defending them in real time.
The next era of cybersecurity belongs to organisations that deploy trusted, personalised AI capable of guiding human decisions safely in real time.
Early deployments of this approach have already demonstrated meaningful reductions in claims and as much as a 95% drop in recurring security-related incidents.
The shift from awareness to action becomes measurable when guidance is both trusted and available at the moment of risk.