How to deploy physical AI effectively | Tech Radar
We are living in a material world: Physical AI thinks where the action is
Most of today’s enterprise AI still operates within the boundaries of cloud datacenters.
It handles digital tasks such as analysis and personalization well, but it struggles when intelligence must be applied in the physical world, where decisions need to be instant and IT infrastructure is shifting toward the edge.
Models are therefore becoming smaller and more specialized, running on edge hardware and responding to constantly changing data streams.
Vice president of software development, AI and edge at Couchbase.
Physical AI embeds intelligence directly into vehicles, warehouses, aircraft, retail spaces and industrial systems.
It’s designed for environments where connectivity drops, latency matters and operations cannot stop because a network link has failed.
As organizations deploy more sensors and edge devices, this model is becoming an operational requirement.
Every physical AI application depends on access to consistent local data, regardless of network quality. Decisions draw on maps, sensor inputs, telemetry, contextual information and model states, all of which must remain available even when devices, vehicles or machines are disconnected from the cloud for hours.
This creates three core technical requirements. First, latency must approach zero. Even the shortest round trip to the cloud is too slow for millisecond-critical decisions. An autonomous vehicle detecting a sudden obstacle, a warehouse robot identifying a missing item or a smart manufacturing system responding to equipment changes cannot wait for a remote API response; the decisions must be made locally.
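The latency argument above can be sketched in code. This is a hypothetical illustration, not any vendor's implementation: the "model" is a stand-in distance rule, and the budget figure is an assumed example, but it shows the architectural point that a millisecond-critical decision path cannot contain a network round trip.

```python
import time

# Hypothetical sketch: a millisecond-budget decision loop. A cloud round trip
# typically costs tens to hundreds of milliseconds, so the entire decision
# path runs locally. The threshold rule stands in for an optimized local model.
DECISION_BUDGET_S = 0.005  # assumed 5 ms budget for illustration

def detect_obstacle(sensor_reading_m: float, threshold_m: float = 2.0) -> bool:
    """Local inference stand-in: flag an object closer than `threshold_m` meters."""
    return sensor_reading_m < threshold_m

def decide(sensor_reading_m: float) -> str:
    """Make the decision entirely on-device, inside the latency budget."""
    start = time.perf_counter()
    action = "brake" if detect_obstacle(sensor_reading_m) else "continue"
    elapsed = time.perf_counter() - start
    assert elapsed < DECISION_BUDGET_S, "local path must stay inside the budget"
    return action

print(decide(1.2))  # a close obstacle triggers "brake" with no network call
```

The same structure applies whether the local model is a threshold, a distilled neural network, or a hardware-accelerated vision pipeline: the deadline is enforceable only because nothing in the path waits on a remote API.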
Second, data must remain available despite weak connectivity. Many operational environments have volatile connections, so physical AI systems must continue to function offline. This “offline-first” approach ensures that data storage, inference and decision logic remain operational even when cloud access is unavailable.
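The "offline-first" pattern can be sketched with a local outbox: every write lands in on-device storage first, and a sync pass drains it only when a link exists. This is a minimal illustration using SQLite; the table layout and function names are assumptions for the sketch, not a specific product's API.

```python
import sqlite3

# Hypothetical offline-first sketch: readings are persisted locally regardless
# of network state, and unsynced rows are pushed whenever connectivity returns.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, synced INTEGER DEFAULT 0)"
)

def record(payload: str) -> None:
    """Write locally first; never block on the network."""
    db.execute("INSERT INTO outbox (payload) VALUES (?)", (payload,))
    db.commit()

def sync(online: bool) -> int:
    """Drain the outbox to the cloud when a link is available; return rows pushed."""
    if not online:
        return 0
    rows = db.execute("SELECT id FROM outbox WHERE synced = 0").fetchall()
    # ...upload each row to the backend here...
    db.executemany("UPDATE outbox SET synced = 1 WHERE id = ?", rows)
    db.commit()
    return len(rows)

record("shelf_scan: aisle 4 low stock")
record("telemetry: temp 21.5C")
print(sync(online=False))  # 0: offline, data stays safely queued
print(sync(online=True))   # 2: link restored, backlog drains
```

Inference and decision logic read the same local store, so losing the uplink degrades only freshness of the cloud copy, never the device's ability to operate.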
Third, the compute must be efficient. Edge hardware is inherently constrained, which means models must be small, specialized and optimized, often with hardware acceleration. Databases and the broader AI stack need to be lightweight, performant and resource efficient. In this architecture, the database is an integral part of the AI pipeline, delivering the data models required to make decisions at the source.
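One common way models are made small enough for constrained edge hardware is quantization. The pure-Python sketch below shows the idea behind symmetric int8 quantization, which cuts weight storage 4x versus float32 at the cost of bounded rounding error; it is an illustration of the technique, not a production pipeline.

```python
# Hypothetical sketch: symmetric int8 quantization of model weights.
# Each float maps to an integer in [-127, 127] via a single scale factor,
# so a weight costs 1 byte instead of 4.

def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Map floats to int8 codes with one shared scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(codes: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from the int8 codes."""
    return [c * scale for c in codes]

weights = [0.81, -0.35, 0.07, -1.27]
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(codes)    # int8 codes, one byte each
print(max_err)  # rounding error bounded by roughly scale / 2
```

Real toolchains add per-channel scales, calibration and hardware-specific kernels, but the trade is the same: smaller, faster models with a small, predictable accuracy cost.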
Why cloud-only AI breaks down outside controlled environments
Autonomous vehicles move through patchy mobile coverage. Warehouses experience RF interference. Aircraft and cruise ships operate for long periods with limited bandwidth. Even modern manufacturing sites regularly experience dead zones.
In these conditions, the assumption that AI can wait for a round trip to the cloud breaks down; latency becomes the limiting factor. Physical AI relies on local processing and local data because that's the only way to guarantee consistent, reliable operation.
In autonomous and connected vehicles, edge inference is essential. One self-driving car company, for example, generates large volumes of sensor data that must be processed immediately. Cloud dependency simply isn't viable; even non-autonomous features rely on local storage and offline capability to function reliably.
Aviation shows many of the same constraints. Airlines want to improve crew workflows, maintenance, logistics and passenger experience with AI, but aircraft operate with intermittent connectivity. Data must be collected and stored locally, shared between onboard systems and synced efficiently when the aircraft reconnects.
Retail and logistics offer some of the most accessible examples. At Pepsi, edge devices in warehouses run vision models to analyze shelf stock and initiate replenishment automatically. The intelligence matters, but the practical challenge is managing data locally and syncing it reliably when connectivity allows.
Cruise lines face similar constraints. Operators need to support real-time transactions, personalization and on-board operations on vessels that may not have stable connectivity for days. Across these sectors, the pattern is consistent: AI works only when it operates where the data is generated.
Why so many AI proof-of-concepts struggle to scale
A recent MIT report found that only about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L. The reasons are well documented: organizations expect immediate ROI, teams underestimate the complexity of deploying and maintaining AI systems, and architectures are built around cloud assumptions that don't hold in real-world environments. The right data architecture doesn't solve every challenge, but it does address one of the most common points of failure: the gap between lab conditions and operational reality.
Moving to a physical AI model requires designing systems around the actual behavior of physical environments: local processing for time-sensitive decisions, persistent local storage so devices function during outages, lightweight edge databases and optimized models that match hardware constraints, and efficient synchronization to ensure data consistency when connectivity returns. Getting this layer right determines whether AI systems can operate reliably at the edge.
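The synchronization step above has to reconcile writes that happened on a disconnected device with writes that reached the cloud in the meantime. A minimal sketch of one common strategy, last-write-wins by timestamp, is shown below; production sync engines use richer mechanisms (revision trees, CRDTs), and the record names here are invented for illustration.

```python
# Hypothetical sketch of conflict resolution on reconnect: each record carries
# a (value, timestamp) pair, and the merge keeps the newest write per key.

def merge(local: dict, remote: dict) -> dict:
    """Merge two {key: (value, timestamp)} stores, newest write wins per key."""
    merged = dict(remote)
    for key, (value, ts) in local.items():
        if key not in merged or ts > merged[key][1]:
            merged[key] = (value, ts)
    return merged

# Illustrative state after an outage: the edge updated bay_7 most recently,
# while the cloud holds the newer write for door_2.
edge  = {"bay_7": ("restock", 105), "door_2": ("closed", 90)}
cloud = {"bay_7": ("ok", 100),      "door_2": ("open", 95)}
print(merge(edge, cloud))
# bay_7 keeps the edge value (105 > 100); door_2 keeps the cloud value (95 > 90)
```

Whatever the strategy, the requirement is the same: after reconnection, edge and cloud converge on one consistent view without losing decisions made while offline.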
Automotive, aviation, logistics, manufacturing and travel businesses are already adopting this model because their environments demand it. The cloud remains vital, but the assumption that every AI workload must be cloud-first doesn't fit these environments' requirements.
As more of the enterprise becomes instrumented and autonomous, AI will increasingly need to work at the point of action, not the point of aggregation. The organizations that recognize this early are the ones most likely to deploy AI systems that behave predictably, consistently and safely in the environments that matter.