
Op-Ed: AI in physical security – opportunities, risks, and shared responsibility

Artificial intelligence (AI) has the capacity to elicit all sorts of different reactions.

George Moawad
Tue, 03 Oct 2023

For some, it is a powerful tool; others view it with fear and trepidation. In the world of physical security, AI can be weaponised, or it can be a powerful enabler. Knowing where it fits, and how it can and should be leveraged, means examining its potential use cases, its risks, and the shared responsibility that comes with it.

With the Australian government investigating the regulation of AI, or more specifically, the outcomes AI produces, it’s important to understand what we are really looking at in order to evaluate the threats and opportunities. Generative AI tools such as ChatGPT and Google Bard show that great strides have been made in the development of these systems, but they also have significant limitations.

Physical and digital security practitioners face a major problem today: the number of sensors, alerts, and logs they need to process to determine whether a threat is real is overwhelming. The AI tools we see today are complex systems that use probabilities to determine whether specific words or inputs belong together.


But this is not true intelligence.

We can look at the journey to AI as having four distinct stages. The first is automation, where systems mimic human action. The second, intelligent automation, mimics or augments human judgement by executing simple “if this, then that” actions. The third, cognitive automation, augments human intelligence by working with more complex inputs, such as questions. Generative AI, like ChatGPT, sits between intelligent automation and cognitive automation. At the highest level sits artificial general intelligence – systems that mimic human intelligence.
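
As a rough illustration of the “if this, then that” level, here is a minimal sketch in Python. The event types, fields, and actions are hypothetical and invented purely for illustration; they do not come from any particular product.

```python
# Minimal sketch of an "if this, then that" automation rule.
# Event types, fields, and actions are hypothetical, for illustration only.

def handle_event(event: dict) -> str:
    """Map a simple sensor event to a fixed action. No learning is involved."""
    if event.get("type") == "door_opened" and event.get("after_hours"):
        return "notify_security_operator"
    if event.get("type") == "badge_scan" and not event.get("badge_valid"):
        return "deny_entry_and_log"
    return "no_action"

print(handle_event({"type": "door_opened", "after_hours": True}))
# -> notify_security_operator
```

Each rule is written by a person in advance; the system simply follows it.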

Those first three levels are not AI. Generative AI, for example, is not intelligent. It analyses a question and then provides a response based on the frequency and probability that certain words occur together. This is a sophisticated form of machine and deep learning.
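
To make that idea concrete, the toy Python sketch below picks each next word purely from how often word pairs occur together in a tiny sample text. It only illustrates the principle; a real large language model is trained on vastly more data and uses far more sophisticated probability estimates.

```python
# Toy illustration of generating text from the frequency of word pairs.
# A real large language model is vastly more complex; this only shows the principle.
import random
from collections import defaultdict

sample_text = "the camera detects motion the camera records video the operator reviews video"
words = sample_text.split()

# Count which words follow which in the sample text.
pairs = defaultdict(list)
for current_word, next_word in zip(words, words[1:]):
    pairs[current_word].append(next_word)

# Generate text by repeatedly sampling a likely next word.
word = "the"
output = [word]
for _ in range(5):
    word = random.choice(pairs.get(word, ["<end>"]))
    if word == "<end>":
        break
    output.append(word)

print(" ".join(output))  # e.g. "the camera records video the operator"
```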

AI in physical security

Machine learning and deep learning are important elements of modern physical security systems. Machine learning uses statistical techniques to solve problems, make predictions, or improve the efficiency of specific tasks by using data collected by physical security devices such as cameras, doors, or other sensors. Deep learning analyses the relationship between inputs and outputs to gain new insights.

For example, when license plate recognition software collects data about vehicles, sensors collect information on people, and intrusion alerts detect that a door has been opened or a barrier breached, the system can provide a security operator with an alert that considers the full context of this series of connected events. License plate recognition combined with the use of a security tag could indicate that an authorised person has parked their car and entered the premises, whereas a breached barrier and the accompanying camera footage could be indicators of a breach that needs to be investigated.
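
A heavily simplified Python sketch of that kind of correlation logic follows. The event sources, field names, and rules are invented for illustration; real security platforms implement this far more robustly.

```python
# Simplified sketch of correlating physical security events into one contextual alert.
# Event sources, field names, and rules are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Event:
    source: str   # e.g. "lpr", "access_control", "intrusion"
    detail: str
    authorised: bool

def correlate(events: list[Event]) -> str:
    """Combine individual events into a single, contextual assessment."""
    plate_ok = any(e.source == "lpr" and e.authorised for e in events)
    badge_ok = any(e.source == "access_control" and e.authorised for e in events)
    barrier_breach = any(e.source == "intrusion" and not e.authorised for e in events)

    if barrier_breach:
        return "ALERT: possible breach - review camera footage"
    if plate_ok and badge_ok:
        return "OK: authorised person parked and entered the premises"
    return "REVIEW: incomplete context - escalate to operator"

events = [
    Event("lpr", "plate ABC123 recognised", True),
    Event("access_control", "security tag accepted at main door", True),
]
print(correlate(events))  # -> OK: authorised person parked and entered the premises
```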

While that may look like intelligence, it is machine and deep learning working to understand inputs, put them into context, and deliver actionable information for security teams to analyse and ultimately respond to. The system has not learnt this independently. The software that underpins it is created by people who program the logic that understands the context.

This is why generative AI tools that use large language models, such as ChatGPT, are not suitable for security applications. The output from those models is not reliable and could lead to false positives or negatives that can waste resources and miss important indicators of compromise.

But machine learning and deep learning can be immensely valuable. They can scan through hundreds of hours of video to find specific patterns, count the number of people in an area to manage occupancy, monitor queues, and alert staff to overcrowding, all in a matter of seconds or minutes and far faster and more accurately than is humanly possible. Retailers, for instance, can use that data to improve sales conversions. Stadium operators can use it to control crowd flow, and transit authorities can better understand and address peak travel times.
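
As a sketch of the occupancy use case, the snippet below adds up hypothetical per-camera people counts and raises a warning as a venue approaches capacity. The camera names, counts, and thresholds are made up; in practice they would come from video analytics.

```python
# Sketch of an occupancy check fed by hypothetical per-camera people counts.
# Camera names, counts, and thresholds are illustrative only.

def check_occupancy(counts_per_camera: dict[str, int], capacity: int) -> str:
    total = sum(counts_per_camera.values())
    if total >= capacity:
        return f"OVERCROWDING: {total}/{capacity} - alert staff"
    if total >= 0.9 * capacity:
        return f"WARNING: {total}/{capacity} - approaching capacity"
    return f"OK: {total}/{capacity}"

print(check_occupancy({"gate_a": 180, "gate_b": 240, "concourse": 95}, capacity=550))
# -> WARNING: 515/550 - approaching capacity
```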

These tools cannot replace people but can help them be more efficient and effective. Rather than scrubbing through footage and analysing logs, machine learning systems can find anomalies and correlate them. But humans will always be needed as these systems can make mistakes that only people can detect and understand.

Responsible and ethical use of AI data in security

Security systems rely on data and are prone to the “garbage in, garbage out” maxim. Without lots of high-quality data, the outputs these systems generate are unreliable. However, that data must be ethically collected in compliance with local laws. In Australia, those laws vary across each state and territory, and federally.

AI has the potential to be a powerful tool in physical security. But it is not a silver bullet that will remove all physical security risks. It complements the work of security personnel and helps them become more efficient and effective by reducing the time taken to analyse data so they can focus on real threats and respond faster when required.

But using AI requires policies and procedures to ensure data is collected and managed ethically, with people overseeing the results to confirm the models are delivering sensible outputs.

And no – this was not written by AI.


George Moawad is the country manager for Australia and New Zealand at Genetec.
