Looking ahead to 2026 with Illumio


As part of an online miniseries, Trevor Dearing, Director of Critical Infrastructure, and Michael Adjei, Director of Systems Engineering, at Illumio discuss their industry predictions for 2026.

Can you tell me a bit about yourself, your job role and how long you have been at the company?

TD: I’ve worked in networking and security for over 40 years and have witnessed the emergence of nearly all major technologies, including Ethernet switching, routing, VPNs, firewalls and the cloud.

I originally started as a development engineer, but I am now the Director of Critical Infrastructure at Illumio, where I have been since February 2020.

In my role, I focus on simplifying segmentation in zero-trust and highly regulated environments.

MA: I’m the Director of Systems Engineering at Illumio, where I am responsible for overseeing the NEMEA systems and sales engineering team, providing technical product oversight.

I have been at Illumio since March 2020 and have 25 years’ experience in network and cybersecurity across various roles.

What are some of the key trends and predictions you think we will see in the security industry in 2026?

TD: If 2025 was the year supply chain attacks made headlines, 2026 will be the year they become business as usual.

Attackers have realised that by targeting a single trusted service provider, they can cripple dozens of customers overnight.

Attackers do not need to go through the front door when a supplier holds the keys. This reality will drive a major rethink of how organisations manage third-party relationships and focus on resilience.

Businesses will have to accept that they cannot outsource accountability; resilience will need to depend on shared visibility and shared responsibility.

On top of this, the sectors that matter most to society, such as food, energy, water and transport, remain chronically underfunded in terms of cybersecurity.

Many still operate on five-year investment cycles, making it impossible to respond to fast-moving threats.

2026 will bring increasing pressure to address this. Expect governments and regulators to demand ring-fenced budgets for cybersecurity within national infrastructure, treating it as a continuous operational expense rather than a periodic upgrade.

This year has shown that cybersecurity is not just a technology problem but an economic one as well. Critical services cannot be defended on a shoestring and attackers are well aware of this.

MA: The rapid adoption of agentic AI will drive a surge in autonomous connections between agents, systems and applications.

This hyperconnectivity will amplify existing API sprawl, overwhelming security teams and creating blind spots across digital infrastructure.

Without robust oversight, these unsupervised pathways will become prime targets for exploitation, turning what was once a barely manageable API problem into a systemic vulnerability.

The rush to implement agentic AI will also result in insufficient supervision of how agents interact with other systems.

Organisations will struggle to understand what access agents have to their systems and whether they are interacting with customer and sensitive data appropriately.

The use of agents will also mean that people relinquish part of their identity to autonomous AI.

Agents may assume people’s identities, accessing usernames, passwords and tokens to log in to systems for automated convenience.

As a result, cyber-criminals could target the autonomous capabilities of agentic AI, using them to commit cyber-attacks by compromising agent-to-agent communication.

This approach could make agents appear culpable in potential mass exploitation incidents, allowing the true attacker to remain concealed.

The novelty of agentic AI, coupled with a security-as-an-afterthought approach and the upward trajectory of mass adoption, is likely to fuel these risks.

What is one piece of advice you would give organisations and professionals as they head into 2026?

TD: Resilience has long been treated as a nice-to-have within cybersecurity rather than as a fundamental business outcome. That will change next year, as resilience becomes a baseline business expectation.

Organisations must move towards anti-fragility: the ability not only to withstand shocks but to emerge stronger from them.

They need to formalise post-incident learning and create “after-action” teams tasked with studying what happened, testing new defences and building back better.

The goal next year must be to keep incidents small, respond quickly and learn from each one.

MA: The risks associated with agentic AI make it essential to bring together stakeholders from various sectors to develop comprehensive security standards and create regulatory frameworks that ensure AI systems remain safe and reliable.

Resilience must be built into the planning and deployment of all agents. This includes implementing safe defaults, validating inputs and enforcing restrictive permissions and escalation protocols.
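To make those controls concrete, here is a minimal sketch of what deny-by-default permissions and input validation for an agent tool call might look like. All names (`AgentPolicy`, the tool names, the size limit) are illustrative assumptions, not part of any real agent framework or Illumio product.

```python
# Hypothetical sketch: gate every agent tool call behind a
# deny-by-default permission check plus basic input validation.
from dataclasses import dataclass

@dataclass
class AgentPolicy:
    # Safe default: a newly created agent holds no permissions at all.
    allowed_tools: frozenset = frozenset()
    max_payload_bytes: int = 4096  # reject oversized inputs outright

    def authorize(self, tool: str, payload: str) -> bool:
        """Return True only if the tool is explicitly allowed and the
        payload passes simple validation; everything else is denied."""
        if tool not in self.allowed_tools:
            return False
        if len(payload.encode()) > self.max_payload_bytes:
            return False
        return True

# An agent granted only read access cannot invoke a destructive tool,
# containing the blast radius if the agent itself is compromised.
policy = AgentPolicy(allowed_tools=frozenset({"read_customer_record"}))
print(policy.authorize("read_customer_record", "id=42"))    # allowed
print(policy.authorize("delete_customer_record", "id=42"))  # denied
```

The design choice illustrated is containment: because the default is an empty permission set, any capability an agent gains must be granted explicitly, which keeps an exploited agent's reach small.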

Organisations must also act individually and adopt a containment mindset to mitigate the inherent risks of AI.

This approach ensures that even if attackers exploit AI agents, the impact of such attacks is minimised, allowing organisations to embrace AI’s benefits without compromising security.
