Why AI security bots still depend on human decision-making


Hyundai's Boston Dynamics Spot robot dogs on display at the New York Auto Show media day at the Jacob K. Javits Convention Center on April 1, 2026, in New York City.
Across global deployments, one principle applies: machines monitor, humans maintain authority. Photo by Michael M. Santiago/Getty Images

The companies building the world's most powerful artificial intelligence are entrusting its protection to robots. Today, Boston Dynamics robot dogs, which cost up to $300,000 each, already patrol data centers across the United States, guarding the infrastructure that powers Big Tech's generative AI. The AI is, in fact, guarded by bots that themselves run on AI.

There is an important detail in this arrangement: these robots do not make a single decision. Their role is strictly observational: monitoring, patrolling and detecting anomalies. They take no autonomous action and use no force. Any response to a perceived threat remains firmly in human hands.

This restraint reflects a conscious choice by developers and operators worldwide, not the policy of any single company or jurisdiction. It is a common principle that has emerged independently across regions, from Dubai to New York, from Chinese cities to American data centers. Wherever these systems are deployed, the same boundary holds: robots observe, humans decide.

Infrastructure boom and new demand for surveillance robots

American technology companies are investing hundreds of billions of dollars in new data centers. As the infrastructure expands, demand for autonomous security systems is growing in parallel. The global security robotics market is projected to reach $19.18 billion in 2026 and double to $45.31 billion by 2033, a compound annual growth rate (CAGR) of 13.1 percent.
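Those figures are internally consistent. As a quick sanity check (a minimal Python sketch using only the numbers quoted above, not the market report's own model), compounding the 2026 base at 13.1 percent for seven years lands almost exactly on the 2033 projection:

```python
# Sanity check of the projection quoted above: $19.18B in 2026 compounding
# at 13.1 percent annually through 2033. Figures come from the article;
# this is an illustration, not the market report's methodology.
base_2026 = 19.18          # market size in billions of dollars
cagr = 0.131               # 13.1 percent compound annual growth rate
years = 2033 - 2026        # seven compounding periods

projected_2033 = base_2026 * (1 + cagr) ** years
print(f"Projected 2033 market: ${projected_2033:.2f}B")
# -> Projected 2033 market: $45.40B, in line with the cited $45.31B
```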

The reasoning is straightforward. Securing modern infrastructure with only human personnel is increasingly inefficient: the volume of repetitive monitoring tasks is too high and the demand for continuous supervision is too great. Robots, on the other hand, can handle both patrols and industrial inspections, including environments that are difficult or dangerous for humans.

At the same time, the deployment of autonomous systems in public spaces has been increasing over the past few years. In October 2025, Dubai Police introduced an autonomous patrolling robot at the Global Village entertainment complex. The robot moves independently through crowds, captures 360-degree video and transmits it to a control center.

In January, a traffic-directing robot made its debut at a busy intersection in Wuhu, China. In the UK, Nottinghamshire Police is testing a robotic dog for armed sieges and hostage scenarios, where the robot goes in first to assess conditions while all force decisions remain with the officers.

In these different countries and systems, one principle remains constant: robots monitor, people decide.

Why does the boundary fall here?

At first glance, the idea that robots observe while humans act seems to run counter to the public narrative of many AI companies, which have long suggested a path to fully autonomous systems. In practice, however, current technology does not support that transition without introducing unacceptable risk.

The language models that underpin most modern intelligent systems have no grounding in the physical world. They do not understand reality in the human sense; they operate on tokens and probabilistic patterns derived from large data sets. These systems excel at tasks that exist entirely within text and structured code. But when faced with the unpredictable complexity of the real world, text is not enough. In these situations, models can "hallucinate," confidently producing results that sound plausible but are inaccurate. In a chatbot interface, this is manageable as long as the results are reviewed. In infrastructure management or public safety, the consequences are far more severe.

Consider a police robot patrolling the streets at 3 a.m., encountering situations no training data could fully anticipate. A person is lying on the pavement: an assault victim, someone unconscious, or someone drunk? The visual cues may be nearly identical, but the required responses are completely different. Or take someone aggressively trying to open a car door: a thief stealing the vehicle, or an owner locked out of it?

A misjudgment in these contexts can escalate into conflict, civil rights violations or reputational crises for cities and operators.

The self-driving car threshold as an analogy

A useful analogy for the bar these systems must clear comes from self-driving cars. It was not enough for industry pioneers like Waymo to show the public that their systems were, on average, no worse than human drivers. Regulators sought clear statistical superiority: fewer accidents and fewer incidents over equivalent distances.

That threshold remains a matter of debate, but the principle is clear: the greater the potential harm from an error, the higher the bar for proven safety should be. This is especially true for a robot that may one day be given the right to use force. If armed robotic police systems are ever to be introduced, they will need to demonstrate not merely reliability comparable to that of a human officer, but a manyfold superiority across all key metrics under real-world, non-laboratory conditions.
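To make that bar concrete, here is a minimal sketch with entirely invented numbers (none of these figures come from the article, Waymo, or any real deployment). It compares hypothetical incident rates for human officers and a robotic system, then applies a crude Poisson upper bound to ask whether the robot's apparent advantage would survive basic statistical scrutiny:

```python
import math

# Illustrative only: every number here is invented. The point is what
# "manyfold superiority" would have to look like statistically before
# autonomy is even worth discussing.

human_incidents, human_hours = 120, 1_000_000   # hypothetical human baseline
robot_incidents, robot_hours = 15, 1_000_000    # hypothetical robot record

human_rate = human_incidents / human_hours
robot_rate = robot_incidents / robot_hours
superiority = human_rate / robot_rate           # how many times safer, nominally

# Crude Poisson check: even the robot's approximate 95% upper bound on
# incidents should still beat the human rate by a wide margin.
robot_upper_rate = (robot_incidents + 1.96 * math.sqrt(robot_incidents)) / robot_hours

print(f"Human rate:  {human_rate:.6f} incidents per hour")
print(f"Robot rate:  {robot_rate:.6f} incidents per hour")
print(f"Nominal superiority: {superiority:.1f}x")
print(f"Robot upper bound still below human rate? {robot_upper_rate < human_rate}")
```

Even this toy example shows why the bar is high: proving a manyfold advantage requires enormous real-world exposure, a million patrol hours in this sketch, not a laboratory demo.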

For now, we are still a long way from this threshold. Modern robotic police officers are deliberately unarmed and function primarily as replacements for patrol cars.

Responsible autonomy as the modern norm

The private sector has already learned the hard way that overestimating AI's ability to make decisions comes at a high cost. Swedish fintech company Klarna claimed in 2024 that an AI chatbot was handling the work of about 700 customer service employees, only to begin quietly rehiring humans the following year.

Across industries, companies whose systems performed well in controlled demonstrations have run into dirty data, non-standard requirements and hidden operating costs. Many continue to lose about 40 percent of their expected productivity gains to the manual correction of AI-generated errors.

As long as AI does not have a grounded model of reality and hallucinations remain a systemic feature rather than an exception, critical decisions must remain under human control. Robots guarding data centers, autonomous patrols in Dubai and mechanical police dogs should not give the impression that we are moving towards a world where machines make decisions instead of people. They signal something more pragmatic: a functional division of labor between humans and machines.

The authority to make final judgments should rest with responsible individuals. Only under that condition can progress remain stable and sustainable.
