Unlocking the potential of AI for physical security


It’s time to put AI to work on physical security, writes Tim Norris, VP, Global Product Marketing, Brivo.

Many headlines claim AI will soon radically disrupt the entire world, performing almost every task we can think of. Just as many cynics have pointed out the limitations of AI – it can make errors and produce “hallucinations”, confidently stated but false outputs.

Then there are those who say we’re headed for the type of nightmare only seen in science fiction.

As always, the truth lies somewhere in the middle. AI won’t do everything and neither will it control us all.

It is a technology just like any other, but with the potential to change how people interact with software and how we deal with large sets of data.

As more physical security systems become cloud-based, AI could have a great deal of impact. But does this binary thinking around AI extend to physical security too?

Physical security experts are bullish on AI

We wanted to find out if there was an industry consensus on this evolving technology, so for our 2024 Top Global Security Trends report we asked experts in decision-making roles across Europe and North America.

Many of these experts welcome the introduction of AI into their physical security systems and expect considerable change.

When asked which areas would be disrupted most by AI, 55% of experts placed general IT at the top. This is understandable given the wide applications of the technology.

Yet physical security was a very close second, at 48%. Businesses expect AI to change security more than almost every other aspect of their business.

This isn’t just speculation. Concrete action is being taken to make AI’s role in security a reality, including spending money.

More than half of the bigger US businesses we surveyed (those with over 5,000 employees) will invest at least $3m in AI and automation over the next three years. Smaller businesses are also making plans.

While these are not on the same scale, a quarter reported a similar planned investment.

The common concerns about AI are generally not held by physical security experts.

63% predicted that AI will require only minimal human oversight, and 57% disagreed with the idea that AI integrations will drive up business costs.

Similarly, 58% did not see it as a threat to jobs, and 66% rejected the idea that AI “hallucinations” will be a problem. A majority see the benefits far outweighing any risks.

AI’s biggest impacts on physical security

The popular understanding of AI may make it difficult to see where it will be applied to security, because most of the AI tools that are public facing are generalised.

The AI that will be embedded in physical security software will be more specialised and created with this specific purpose in mind, for example, for anomaly detection.

Automated security systems can streamline processes and ensure employees work more productively. Yet human intelligence is sometimes lost when security guards are replaced with automation.

Traditionally, access technology couldn’t tell when there were deviations from the norm, such as a change in behaviour.

But, over time, AI can build up an idea of normality and flag anomalies.

Of course, not every deviance from the norm is a problem – sometimes, someone will be required to go to an area they rarely venture to or work on a weekend.

But AI can flag these anomalies for human review.

Without AI, searching for these abnormal occurrences is incredibly difficult and time-consuming, and spotting patterns is nearly impossible. AI makes this much less labour-intensive.

It will also have a much better chance of success than its human counterpart.
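As a sketch of the idea only, baseline-and-flag anomaly detection can be as simple as comparing each access event against a user’s historical pattern. The data, names and threshold below are all hypothetical, and a real system would build far richer behavioural models:

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical access history: (user, hour_of_day) pairs.
history = [
    ("alice", 9), ("alice", 10), ("alice", 9), ("alice", 8), ("alice", 9),
    ("bob", 14), ("bob", 15), ("bob", 14), ("bob", 13), ("bob", 15),
]

# Build each user's "normal" access-hour profile from history.
profiles = defaultdict(list)
for user, hour in history:
    profiles[user].append(hour)

def is_anomalous(user, hour, threshold=3.0):
    """Flag an access whose hour deviates strongly from the user's baseline."""
    hours = profiles.get(user)
    if not hours or len(hours) < 2:
        return True  # no baseline yet: route straight to human review
    mu, sigma = mean(hours), stdev(hours)
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > threshold

# A 3 a.m. badge-in by alice deviates far from her 8-10 a.m. baseline.
print(is_anomalous("alice", 3))   # flagged for review
print(is_anomalous("alice", 9))   # within normal range
```

The key point matches the article: the system does not decide; it surfaces unusual events so a person can judge whether a weekend visit or a rarely used door is legitimate.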

Finding anomalies is time consuming partly because of the expertise needed to generate reports.

Anyone looking for information from data created by a security system needs to know how to extract it using the right selection criteria.

This is difficult enough, but much worse if this information is needed under time pressure in an emergency situation.

AI will make it possible to use natural language processing (NLP) for queries, turning simple questions into the requests needed.

Asking queries can even be done by voice, just as we ask Alexa or Siri to search for information online.

This saves on training time, but it also means those without specialist training can interrogate data, add users and perform other administrative tasks.
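To illustrate the principle only – the event records, question patterns and matching logic below are invented, and a real product would use a language model rather than keyword matching – a natural-language question can be reduced to a filter over access data:

```python
import re

# Hypothetical event log a security system might expose.
events = [
    {"user": "alice", "door": "server room", "day": "Saturday"},
    {"user": "bob", "door": "lobby", "day": "Monday"},
    {"user": "carol", "door": "server room", "day": "Tuesday"},
]

def answer(question):
    """Turn a plain-English question into a filter over the event log.
    This toy version just extracts known keywords with regular
    expressions; an NLP-driven system would parse free-form queries."""
    q = question.lower()
    results = events
    door = re.search(r"(server room|lobby)", q)
    if door:
        results = [e for e in results if e["door"] == door.group(1)]
    day = re.search(r"(saturday|sunday|monday|tuesday)", q)
    if day:
        results = [e for e in results if e["day"].lower() == day.group(1)]
    return [e["user"] for e in results]

print(answer("Who entered the server room on Saturday?"))  # ['alice']
```

The point is the interface, not the mechanism: the person asking never needs to know the selection criteria, table names or report syntax underneath.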

Skill shortages are a major problem today and making systems easier to install and maintain helps.

AI can also make the integration of systems easier, interrogating data from different systems and using it to flag problems and make decisions.

AI works best with big data sets and video provides a great deal of data.

Sifting through it to find issues can be incredibly time-consuming for a person, but AI will be able to detect problems such as tailgating, where a single access pass or credential admits more than one person.
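As a hypothetical sketch of how such a check might combine access and video data (the event records and field names are invented for illustration), tailgating reduces to comparing badge swipes against a people count from video analytics at the same door:

```python
# Hypothetical entry events pairing badge swipes with a people count
# reported by video analytics at the same door.
entries = [
    {"door": "lab", "badges_swiped": 1, "people_detected": 1},
    {"door": "lab", "badges_swiped": 1, "people_detected": 2},  # tailgating
    {"door": "lobby", "badges_swiped": 3, "people_detected": 3},
]

def find_tailgating(entries):
    """Flag entries where more people passed through than badges were swiped."""
    return [e for e in entries if e["people_detected"] > e["badges_swiped"]]

for event in find_tailgating(entries):
    print(f"Possible tailgating at {event['door']}: "
          f"{event['people_detected']} people, {event['badges_swiped']} badge(s)")
```

The hard part in practice is the people count itself, which is where AI-driven video analysis does the heavy lifting; the comparison on top of it is simple.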

But that doesn’t mean there aren’t challenges to be overcome.

Overcoming the barriers to implementing AI

Despite high investment, only 36% of businesses today say they have “strong” confidence in their ability to make the best use of AI. Why is that?

Our survey discovered that experts pointed to two major issues with implementing AI. Despite promised investment, many still see budget as a challenge.

As a developing technology, AI doesn’t have a fixed price tag and businesses won’t know if their allocated budgets will be enough to make the most of it.

Plus, it is a challenging time for many industries which makes investments a risk.

There is also the problem of data and compliance. Data needs to be collected and input into AI systems and organisations are concerned they lack the skills to make this happen.

Collecting data means asking where data will be stored, whether it meets regulatory standards and more.

These are understandable concerns to have, and in many ways, they are positive. Organisations are not thinking about whether or not they will implement AI.

Instead, they’re thinking about the details, including whether they have budgeted enough, how they will get data into these systems and how they can ensure compliance.

Introducing AI to physical security is a matter of “when”, not “if”.

Security experts are keen to integrate it, have set aside resources to do so and are realistic about the challenges – challenges they will nevertheless need to address.

There are high expectations of AI, tempered with some lack of confidence in AI proficiency.

To unlock the potential of AI in streamlining security processes and reducing risk, skills and regulatory understanding need to be priorities.
