ISJ hears from the Chief Technical Officer of Milestone Systems


A vision for security technology in 2025 and beyond – a conversation with Rahul Yadav, Chief Technical Officer, Milestone Systems.

As AI transforms the security technology landscape, new tech use models are emerging that will fundamentally change how we think about video security and automation.

We sat down with Rahul Yadav to explore his insights on the technological developments shaping 2025 and beyond. 

What emerging trends do you anticipate will have the biggest impact in 2025? 

We’re entering what many are calling the ‘Era of Agentics’, where AI systems will operate with unprecedented autonomy.

These AI agents are different from traditional systems; they can understand context, make decisions and take actions independently without following prescribed steps. 

What makes these agents unique is their ability to learn and adapt. They’re not just waiting for commands but can actively plan and execute multi-step tasks.

I’ve watched this technology evolve from simple chatbots to sophisticated systems that can handle complex security operations with minimal human intervention. 

The truly revolutionary aspect is how these agents will augment human capabilities. While traditional AI focuses on analysis and recommendations, these new systems will increasingly take autonomous actions when appropriate, marking a shift from passive to proactive security management.

Think of Tesla’s full self-driving capability – it’s a perfect example of an AI system that understands its environment and acts based on understanding, without constant human direction. 
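The perceive-plan-act loop described above can be sketched in a few lines of code. This is a minimal illustration, not a real product API: the class, handler functions and sample event are all hypothetical names invented for the example, standing in for the far richer context-understanding a production agent would have.

```python
# Hypothetical sketch of an agentic loop for security operations: the agent
# receives an event, plans a multi-step response based on context, and
# executes the steps autonomously. All names here are illustrative.

def detect_motion(event):
    return f"analysed motion in {event['camera']}"

def check_access_logs(event):
    return f"checked access logs near {event['camera']}"

def notify_operator(event):
    return f"escalated {event['type']} to a human operator"

class SecurityAgent:
    """Plans and executes multi-step responses rather than single actions."""

    # Plans keyed by event type; a learning-based agent would derive these
    # from experience instead of a fixed table.
    PLAYBOOK = {
        "perimeter_breach": [detect_motion, check_access_logs, notify_operator],
        "loitering": [detect_motion],
    }

    def handle(self, event):
        # Plan: choose a sequence of steps based on the event's context.
        steps = self.PLAYBOOK.get(event["type"], [notify_operator])
        # Act: execute each step, keeping an audit trail of what was done.
        return [step(event) for step in steps]

agent = SecurityAgent()
actions = agent.handle({"type": "perimeter_breach", "camera": "cam-12"})
print(actions)
```

The point of the sketch is the shape of the loop: the agent selects and chains several actions on its own, with the human operator entering only at the escalation step.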

How do you see this impacting the human element in security operations? 

There’s an interesting evolution happening in how we measure the capability of these systems. We’ve traditionally talked about IQ for intelligence and EQ for emotional intelligence, but now we’re adding what I call AQ – Action Quotient.

This represents how effectively AI systems can take autonomous actions on our behalf. 

However, this doesn’t mean that humans are becoming obsolete. As Microsoft’s CEO aptly noted: “It’s not AI that will replace you, but someone using AI who will.”

The key is learning to work alongside these systems effectively. Security professionals will need to develop new skills focused on managing and directing AI systems rather than performing routine monitoring tasks themselves. 

What this means is that a security operations center that currently requires 25-30 operators might function more efficiently with a smaller team of professionals working in partnership with AI agents.

The human role will evolve to focus on high-level decision making and handling complex situations that require judgment and empathy.

Event management, incident reporting and routine monitoring tasks could be largely automated, freeing human operators to focus on strategic oversight and complex decision-making. 

Can you elaborate on the technical evolution that’s making this possible? 

We’re seeing the emergence of three crucial model types that will reshape our industry. First, there are Small Language Models (SLMs) designed for specific applications.

Then we have Vision Language Models (VLMs) specifically optimised for video processing. Finally, Large Multimodal Models (LMMs) can handle multiple types of data simultaneously, which is crucial for comprehensive operations. 

This evolution represents a fundamental shift from traditional analytics to learning-based systems. These models improve their responses over time through experience, similar to human learning.

We’re also seeing a transition from CPU-based to GPU-focused architectures, which is changing how we approach system design and programming.

Unlike traditional CPU processing that handles tasks sequentially, GPU processing can perform thousands of calculations simultaneously, making it ideal for the parallel processing demands of AI and video analysis. 
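The reason video analysis suits GPUs is that per-pixel work is independent: each pixel's result does not depend on its neighbours, so thousands can be computed at once. The toy example below, written in pure Python purely for illustration, contrasts a CPU-style sequential loop with a GPU-style "one kernel per pixel" formulation of the same frame-differencing step; a real deployment would use CUDA or a GPU framework rather than this sketch.

```python
# Illustrative only: simple frame differencing (a basic motion-detection
# primitive) expressed two ways. The frames are tiny lists of grayscale
# pixel values; real frames would be large arrays processed on a GPU.

def threshold_sequential(frame, prev, tol=30):
    """CPU-style: visit each pixel one after another."""
    changed = []
    for a, b in zip(frame, prev):
        changed.append(abs(a - b) > tol)
    return changed

def threshold_parallel_style(frame, prev, tol=30):
    """GPU-style: one independent computation per pixel, expressed as a map.
    Because no pixel depends on another, all of these could run at once."""
    return list(map(lambda ab: abs(ab[0] - ab[1]) > tol, zip(frame, prev)))

prev  = [10, 200, 55, 90]   # previous frame (grayscale pixel values)
frame = [12, 120, 58, 180]  # current frame

# Both formulations produce identical results; only the execution model differs.
print(threshold_sequential(frame, prev))  # prints [False, True, False, True]
```

The sequential version fixes an order of execution that the hardware must follow; the map version leaves the order unspecified, which is exactly the freedom a GPU exploits.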

What’s particularly exciting is the democratisation of these capabilities. While major tech companies invest hundreds of millions in training large base models, security companies can leverage these foundations to create specialised applications with more modest investments.  

A mid-sized security operation can establish effective AI capabilities with a GPU infrastructure investment of $200,000-$300,000 – a fraction of what major tech companies spend on their large-scale AI systems. This accessibility is key for driving innovation across organisations of all sizes. 

With AI regulation becoming more prominent, how do you see this affecting innovation in the security industry? 

Responsible technology development will be a crucial competitive advantage in 2025, but we need to strike a delicate balance.

Being overly cautious can stifle innovation just as much as being reckless can. Companies that don’t strike a balance between innovation and responsibility risk losing their market position.

The key is taking calculated risks while maintaining ethical standards and user trust. 

Think of it this way: Companies need to be pragmatic without compromising their principles. Just as consumers choose trusted brands for their personal devices, organisations will increasingly select security technology partners based on their track record of responsible innovation and ethical AI deployment.

The challenge lies in maintaining this balance while keeping pace with rapid technological advancement. You wouldn’t trust a self-driving car from a company with a questionable reputation, and the same principle applies to security technology. 

Looking ahead, how do you see video management systems evolving to accommodate these changes? 

Traditional video management systems are transforming into intelligent platforms that can automate complex workflows and security responses.

What’s especially interesting is the potential for hybrid systems that combine on-premises capabilities with cloud-based AI services. 

At Milestone, we’re exploring how to responsibly leverage video data by developing frameworks for ethical data usage so we can train AI models on responsibly sourced data, creating more reliable and unbiased systems.

This approach is attractive to major technology partners because it offers a unique source of ethically obtained training data, unlike models trained on scraped internet content. 

The future of video management isn’t just about storing and retrieving video data; it’s about creating intelligent systems that can understand and act on that data in real-time, while maintaining the highest standards of privacy and security.

These systems will be capable of coordinating complex responses across multiple subsystems, from access control to emergency communications, creating more comprehensive and effective security solutions. 

Any final thoughts on what security professionals should be focusing on as we move toward 2025? 

The key is to stay adaptable and embrace these changes while maintaining a focus on responsible innovation.

Security professionals should be investing in materials and training to understand these new technologies and how they can be used effectively and responsibly.

The companies that will thrive are those that can balance innovation with trust, creating solutions that not only push technological boundaries but do so in ways that respect privacy and maintain security. 

Remember, we’re not just upgrading our technology – we’re fundamentally changing how we approach security operations.

The future belongs to those who can effectively partner with AI while maintaining the human judgment and empathy that are crucial to true security. 
