PureTech Systems: The future of perimeter protection


Eve Goode
International Security Journal discusses the future of perimeter protection with Larry Bowe, Founder & CEO, PureTech Systems, Inc.
How are global security threats shaping current trends in perimeter protection?
Global threat levels have undeniably increased and we’re seeing that impact ripple across every sector, especially when it comes to protecting critical infrastructure.
One of the biggest wake-up calls has been the vulnerability of the US electric grid. The realisation that a single transformer station can be compromised and take down power to a large part of a city is driving a renewed urgency around hardening these sites.
We’re also seeing an evolution in the nature of threats. Cybersecurity continues to be a dominant concern.
While it’s not our core focus, it’s intrinsically tied to the overall security conversation.
Perimeter protection can no longer be siloed; physical and digital threats are increasingly intertwined.
Another rising concern is the proliferation of drones. For many electric utilities, the question isn’t just if drones are flying over substations and power stations, it’s why.
Are these hobbyists, bad actors or part of a larger reconnaissance effort? The ambiguity alone is driving investment in advanced detection and assessment tools.
Organisations want more than alerts – they want actionable intelligence to understand intent and risk in real time.
What’s clear is that the global security landscape is shifting fast. Geopolitical unrest, terrorism and new technologies are forcing organisations to rethink how they define and defend their perimeters.
Even if we can’t predict every emerging threat, the appetite for proactive, integrated security solutions has never been stronger.
How do you define autonomous perimeter protection today?
I’ve spent quite a bit of time thinking about that. One useful analogy comes from the autonomous vehicle space, where there’s a well-established scale from Level 0 to Level 5 – ranging from zero autonomy to full autonomy, where the vehicle operates entirely without human input.
I think that same framework applies well to perimeter protection.
At Level 0, we’re talking about manual monitoring – human operators watching video feeds and responding in real time.
Full autonomy, Level 5, would mean a system that not only detects and identifies threats but also determines the appropriate response and acts on it, entirely without human intervention.
We’re not quite there yet, but we’re getting closer every day.
Today, I’d say we’re operating around Level 4. We can detect intrusions with a high degree of accuracy. We can instantly cue cameras to track and maintain visual on a subject, report their geolocation in real time and even trigger automated deterrents – sirens, strobes, loudspeakers – all without a human lifting a finger.
Where human oversight still plays a role is in the decision to escalate or intervene.
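For readers who find the analogy useful, the scale maps naturally onto a simple enumeration. The sketch below is purely illustrative – the level names and the Level 4 human-in-the-loop rule are editorial shorthand, not a formal standard:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    # Hypothetical shorthand for the Level 0-5 scale discussed above.
    MANUAL = 0        # operators watch feeds and respond themselves
    ASSISTED = 1      # basic alerts cue the operator
    PARTIAL = 2       # analytics filter alarms; humans verify everything
    CONDITIONAL = 3   # system tracks and deters; humans confirm actions
    HIGH = 4          # detect, track, geolocate and deter automatically
    FULL = 5          # system also decides and executes the response

def requires_human(level: AutonomyLevel) -> bool:
    # At Level 4, a person still decides whether to escalate;
    # only Level 5 removes the human from the loop entirely.
    return level < AutonomyLevel.FULL
```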
Getting to Level 5 – true autonomy – raises some complex questions. Once a threat is detected, how do you respond? Do you deploy a robot? If so, what does that robot do? Engage? Contain? Deter? These are not just technical challenges, but ethical and operational ones as well.
While full autonomy may remain just over the horizon, we’ve already built systems that dramatically reduce the burden on human operators, accelerate response times and provide unmatched situational awareness.
The goal is to empower security teams to make better, faster decisions – while continuing to push the boundaries of what’s possible through automation.
How are AI and machine learning helping to reduce false alarms and improve detection?
AI and machine learning have fundamentally reshaped what’s possible in perimeter protection, particularly when it comes to reducing nuisance alarms and improving detection accuracy.
At our core, we’ve always approached video analytics differently. Over a decade ago, we began transforming video streams into high-fidelity sensors, making them operate more like a passive radar.
This includes extracting spatial data from images to pinpoint the real-time geo-location of an intruder.
That early innovation gave us a unique edge in understanding key attributes of moving objects, including their actual size, speed and location in the real world – well beyond simple motion detection.
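One common way to recover real-world position from a fixed camera is a ground-plane homography. The sketch below is a generic illustration of that idea, not PureTech’s actual pipeline; the matrix values are made up and would come from per-camera calibration:

```python
import numpy as np

# Hypothetical 3x3 homography mapping image pixels to ground-plane
# metres; real values come from per-camera calibration.
H = np.array([[0.05, 0.0, -10.0],
              [0.0, 0.08, -24.0],
              [0.0, 0.001, 1.0]])

def pixel_to_ground(u: float, v: float) -> tuple[float, float]:
    # Project an image point (u, v) onto the ground plane.
    x, y, w = H @ np.array([u, v, 1.0])
    return (x / w, y / w)

# With two geolocated samples 0.5 s apart, ground speed follows directly:
p1 = pixel_to_ground(400, 500)
p2 = pixel_to_ground(420, 520)
speed_mps = np.hypot(p2[0] - p1[0], p2[1] - p1[1]) / 0.5
```

Once detections live in map coordinates rather than pixels, size and speed become physically meaningful quantities rather than pixel counts.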
This spatial awareness allows us to filter out noise and eliminate common sources of nuisance alarms – like animals, wind-blown objects or lighting changes – because they don’t match the expected profile of a human presence.
And that’s critical, because perimeter protection often involves incredibly short decision windows. In many cases, you’ve got only one or two seconds to assess and respond.
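That kind of profile-based filtering can be sketched in a few lines. The thresholds below are hypothetical examples of a “human-sized, human-speed” gate, not production values:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    height_m: float   # real-world height estimated from geolocation
    speed_mps: float  # ground speed from the geolocated track

def matches_human_profile(d: Detection) -> bool:
    # Illustrative thresholds: a walking or running person is roughly
    # 0.5-2.5 m tall and moves under ~8 m/s; a site would tune these.
    return 0.5 <= d.height_m <= 2.5 and d.speed_mps <= 8.0

# A bird-sized object and a vehicle-speed object are filtered out:
detections = [Detection(1.8, 1.5), Detection(0.3, 6.0), Detection(1.7, 30.0)]
alarms = [d for d in detections if matches_human_profile(d)]
```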
From the beginning, we’ve leveraged AI principles like adaptive background learning – our patented multi-modal model continuously learns and adjusts to environmental changes, filtering out shadows, weather events and other false triggers.
But about seven years ago, we took it further by layering on machine learning, specifically deep neural networks. We were one of the first in our space to train AI models on perimeter-specific data.
Today, we’ve amassed a vast library of human and vehicle detection examples – tens of thousands of real-world scenarios – and we continue to refine those models with a dedicated in-house team.
One of the key breakthroughs has been our ability to detect targets even with minimal pixel data.
Through a multi-classifier “voting” architecture, we cross-reference geo-referenced size and movement data with neural network outputs.
When both algorithms agree it’s a human, the confidence level is extremely high.
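A minimal sketch of that voting idea, with a hypothetical confidence threshold (the real architecture is more elaborate):

```python
def vote_human(nn_confidence: float, size_ok: bool, speed_ok: bool,
               nn_threshold: float = 0.8) -> bool:
    # Declare a human only when the geo-referenced size/speed check
    # and the neural network classifier independently agree.
    geo_vote = size_ok and speed_ok
    nn_vote = nn_confidence >= nn_threshold
    return geo_vote and nn_vote
```

Requiring agreement between two independent classifiers is what keeps confidence high even when each individual signal is working from minimal pixel data.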
The result? Fewer false positives, faster response times and a dramatic reduction in operator fatigue. AI isn’t just making systems smarter, it’s making security teams more effective, focused and empowered.
How do you leverage technologies like video analytics, sensor fusion and geospatial intelligence?
It comes down to applying the right technologies in concert – each one playing to its strengths, and together, creating a more comprehensive and reliable picture of what’s happening on the ground.
For example, traditional video cameras excel at detecting lateral movement – objects moving across the field of view.
But they can struggle with targets approaching head-on. Radar, on the other hand, is incredibly effective at detecting direct approaches but may miss lateral movement.
Then there’s LiDAR, which brings precise 3D spatial awareness into the mix – mapping environments with remarkable accuracy and adding depth perception that enhances both detection and classification.Â
The real power emerges when you fuse these sensors together. By layering their outputs – video, radar, LiDAR – you significantly reduce the chances of missed detections while also filtering out nuisance alarms.
Advances in sensor affordability have made this kind of fusion not only possible, but scalable across more sites and use cases than ever before.
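As a toy illustration of that layering, detections from independent sensors can be gated by geographic proximity and confirmed only when two or more agree. The sensor names, positions and one-metre gate below are all hypothetical:

```python
from itertools import combinations
from math import hypot

# Each sensor reports candidate targets as (x, y) ground positions;
# the lone radar return at (40, 40) has no corroboration.
reports = {
    "video": [(12.0, 4.0)],
    "radar": [(12.3, 4.2), (40.0, 40.0)],
    "lidar": [(11.9, 3.8)],
}

def confirmed_targets(reports: dict, gate_m: float = 1.0) -> list:
    # Confirm a target when two sensors report positions within
    # gate_m metres of each other; emit the midpoint of each pair.
    confirmed = []
    for (s1, pts1), (s2, pts2) in combinations(reports.items(), 2):
        for p1 in pts1:
            for p2 in pts2:
                if hypot(p1[0] - p2[0], p1[1] - p2[1]) <= gate_m:
                    confirmed.append(((p1[0] + p2[0]) / 2,
                                      (p1[1] + p2[1]) / 2))
    return confirmed
```

A production tracker would cluster overlapping confirmations into a single track; here each agreeing sensor pair simply yields one confirmation, and the uncorroborated return is dropped.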
Geospatial intelligence is what unifies it all. We’re able to continuously track and report the exact location of a moving target, even across expansive or complex environments.
That level of precision enables faster, more confident response decisions – and opens the door for advanced automation.
For example, drones can be autonomously tasked to follow and monitor an intruder in real time based on live geospatial data.
Ultimately, it’s about giving security teams actionable, real-time intelligence, not just more data. When you combine sensor fusion with advanced analytics and geospatial awareness, you shift from reactive monitoring to proactive threat management.
What’s next for you and for autonomous perimeter protection?
The next frontier in autonomous perimeter protection is being shaped by the convergence of edge computing, AI and large language models – and it’s moving fast.
Today, most LLMs run in the cloud, which presents challenges for high-security environments where connectivity to the public internet simply isn’t viable. But that’s changing.
Companies like Ambarella are pushing the boundaries by enabling these models to run directly on-chip. Once you bring advanced reasoning and contextual querying to the edge, you unlock entirely new possibilities – faster decision-making, localised intelligence and significantly more autonomous systems.
It’s not just an incremental improvement; it’s a leap forward in how we think about automation and threat response.
We’ve already demonstrated how automation can deliver tangible results. In one recent deployment, we helped a client cut nuisance alarms by 90% across hundreds of cameras and thousands of miles, with substantial annual savings from streamlined infrastructure and reduced manual intervention. And that’s just the beginning.
Operationally, the opportunity to automate tasks that once required constant human oversight is massive. From smart alert triaging to automated tracking and response coordination, AI will reshape not just the perimeter, but the workflows around it.
We’re also seeing increased adoption of autonomous drones, which can now automatically launch, track an intruder and maintain visual and lighting coverage from above.
It’s a highly effective deterrent and a powerful example of what’s possible when AI, geospatial intelligence and sensor fusion work together seamlessly.
What’s next? More autonomy. Smarter systems. Better coordination across sensors and platforms. And a shift toward truly intelligent, adaptive security ecosystems that operate with minimal human input.
It’s an exciting time – not just for us, but for the entire industry. We’re proud to be at the forefront of that transformation.