It’s Time to Get Real About AI in Physical Security

There’s a lot of noise around AI right now. Some people are excited. Some are overwhelmed. Others are skeptical, waiting to see what happens before they dip their toe in. And then there are folks like me—40 years in this industry, a healthy mix of cautious and curious, but above all, practical.

When I recently sat down for a webinar hosted by SecurityInfoWatch, I didn’t want to talk about AI as a buzzword. I wanted to talk about it the way we do inside PSLA: with a real-world, boots-on-the-ground perspective. What works. What doesn’t. Where we’re seeing value, and where the hype just isn’t living up to reality.

Because the truth is, AI is already here. It’s in the platforms you manage, the cameras you deploy, and the tools your team is already using. But the real question is: Are you using it intentionally—or is it just another feature you hope is helping in the background?

The Hype Is Loud. The Opportunity Is Real.

AI is everywhere in this industry right now. Every product pitch seems to come with a label that says “AI-powered.” But during the webinar, I made this clear: We need to go deeper. Features are great—but what problems are we solving?

The most exciting part of AI isn’t what it can detect—it’s what it can help us understand. I’m talking about insights, context, and decision-making. That’s the gold.

One project that comes to mind involved a large parking lot in San Diego. People were parking there and not entering any of the businesses. Enforcement was a headache, signage didn’t help, and the client was frustrated. So we deployed AI-driven video analytics, not just to count vehicles, but to track parking duration and detect whether people entered any of the shops.

The result? Real data. Actionable insights. And a business owner who finally had the information they needed to make smart changes.
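For readers who want a feel for the mechanics, here’s a minimal sketch of the kind of dwell-time logic behind a deployment like that. The event format, field names, and threshold are hypothetical, invented purely for illustration; a real analytics pipeline and its schema will look different.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    # Hypothetical event emitted by a video-analytics pipeline:
    # a tracked vehicle arrives or departs, and a person linked to
    # that vehicle's track may be seen entering a storefront.
    @dataclass
    class LotEvent:
        track_id: str        # analytics-assigned vehicle track ID
        kind: str            # "arrive", "depart", or "shop_entry"
        timestamp: datetime

    def flag_non_patrons(events: list[LotEvent],
                         min_dwell: timedelta = timedelta(minutes=30)) -> list[str]:
        """Return track IDs that parked longer than min_dwell without
        any associated storefront entry, i.e., likely non-patrons."""
        arrivals: dict[str, datetime] = {}
        shoppers: set[str] = set()
        flagged: list[str] = []
        for ev in sorted(events, key=lambda e: e.timestamp):
            if ev.kind == "arrive":
                arrivals[ev.track_id] = ev.timestamp
            elif ev.kind == "shop_entry":
                shoppers.add(ev.track_id)
            elif ev.kind == "depart" and ev.track_id in arrivals:
                dwell = ev.timestamp - arrivals.pop(ev.track_id)
                if dwell >= min_dwell and ev.track_id not in shoppers:
                    flagged.append(ev.track_id)
        return flagged

The point isn’t the code. The point is that once the analytics emit structured events, questions like “who parks here without shopping?” become simple queries the business can act on.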

That’s AI in physical security—but applied as business intelligence. That’s where the conversation needs to go.

It Has to Be Responsible

Here’s where I get a little passionate. AI is powerful—but it’s not plug-and-play magic. And if we don’t approach it responsibly, we’re doing more harm than good.

We need to be asking the hard questions:

  • Where is the data going?

  • Who has access to it?

  • Is it being stored securely?

  • Are the analytics being validated or blindly trusted?

At PSLA, we treat privacy and cybersecurity as core pillars of every deployment—especially when AI is in the mix. Every new data stream is a potential vulnerability if it’s not secured. So yes, we go through the checklists. We test. We validate. Because it’s our job to make sure that the very tools we’re using to protect people don’t become the thing that puts them at risk.

The Role of the Integrator Is Evolving

Here’s the truth: If you’re an integrator and you’re still thinking like an installer, you’re going to get left behind.

Our clients aren’t just asking us for cameras or access control anymore. They’re asking for outcomes. And that means we need to get more strategic.

At PSLA, we built an internal lab where we test AI analytics in real-world conditions. We break stuff on purpose. We push platforms to their limits. Because the only way we’re going to stay ahead is by knowing what works—and what doesn’t—before it’s ever installed at a client site.

The era of “install and hope” is over. This is a time for consultative integration. If you want to be relevant, you’ve got to speak the language of risk, data, and business strategy—not just specs and resolutions.

Don’t Forget the Cyber Side

We can’t talk about AI without talking about cybersecurity.

AI systems generate more data, more endpoints, and more opportunities for attackers. Think about what would happen if someone tampered with your video feed in real time—or injected deepfake footage into your surveillance archive.

It’s not science fiction anymore.

This is why we’re pushing for zero-trust architecture, data encryption, and secure networks for all physical security systems. Just because a camera is doing smart analytics doesn’t mean it gets a free pass on cyber hygiene.

If the data can’t be trusted, the system is worthless.
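To make “trusted data” concrete, here’s a hedged sketch of one common integrity technique: hash-chaining recorded video segments so that any later tampering with the archive breaks the chain and becomes detectable. This is a simplified illustration, not any vendor’s API; a production system would layer digital signatures and secure key storage on top.

    import hashlib

    def chain_segments(segments: list[bytes]) -> list[str]:
        """Compute a hash chain over video segments: each digest
        covers the segment plus the previous digest, so altering any
        segment invalidates every digest after it."""
        chain, prev = [], b""
        for seg in segments:
            digest = hashlib.sha256(prev + seg).hexdigest()
            chain.append(digest)
            prev = bytes.fromhex(digest)
        return chain

    def verify_chain(segments: list[bytes], chain: list[str]) -> bool:
        """Recompute the chain and compare; False means the archive
        no longer matches what was originally recorded."""
        return chain_segments(segments) == chain

Pair that kind of integrity check with encryption in transit and at rest, and “zero trust” stops being a slogan and becomes something you can verify.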


Where Do We Go from Here?

If you’re reading this as a security professional—whether you’re on the integration side, the end-user side, or the tech side—here’s what I’d challenge you to do:

Start asking better questions. Not “what’s the newest feature?” but “what problem can we solve?” Not “what’s the cheapest option?” but “what gives us the most insight and greatest value?”

Build two roadmaps:

  • One for what you’re deploying now—your core, your standards, your bread and butter.

  • And one for what you’re testing, learning, and pushing forward on the AI front.

The goal here isn’t just to use AI—it’s to use it well. To use it in ways that are secure, purpose-driven, and tied to real outcomes.

That’s the path forward. That’s how we take this from hype to something that actually matters.

And at PSLA, that’s exactly what we’re doing every day.

Gary Hoffner

Gary Hoffner is the Vice President of PSLA Security, also known as Photo-Scan of Los Angeles.

https://www.linkedin.com/in/gary-hoffner-49a04b1a/