Lens AI
AI-NATIVE AUTONOMY
Robot Dog Intelligence Platform
NVIDIA Inception Program

AI-Native Autonomy Architecture for multi-vendor robot dogs — connecting any quadruped across all venues through real-time Fast LLM perception, Strategy LLM reasoning, and autonomous fleet coordination from edge to cloud. The robot learns, adapts, and decides on its own.

System Architecture

AI-Native Autonomy Stack

PhysicalAI operates as the AI-native autonomy architecture above all robot hardware — from protocol-agnostic device integration, through on-robot reactive intelligence with direct motor control, to scheduled strategic reasoning that writes memory and pushes new policies across your entire fleet.

HARDWARE
ABSTRACTION LAYER

Any robot, sensor, or camera merged into one unified system, regardless of how incompatible they are at the code level.

ROS2 Bridge · WebSocket · MQTT / DDS · gRPC · DJI SDK · Unitree API
01 / 05
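The abstraction layer above can be sketched as a single adapter interface that every vendor protocol implements. This is a minimal illustration, not the platform's actual API; the `RobotAdapter` and `UnitreeAdapter` names and the state fields are assumptions for the example.

```python
from abc import ABC, abstractmethod

class RobotAdapter(ABC):
    """Uniform interface: every vendor protocol gets its own adapter behind it."""

    @abstractmethod
    def read_state(self) -> dict: ...

    @abstractmethod
    def send_command(self, command: dict) -> None: ...

class UnitreeAdapter(RobotAdapter):
    """Hypothetical adapter wrapping a Unitree-style API."""

    def __init__(self):
        self.last_command = None

    def read_state(self) -> dict:
        # Stubbed state; a real adapter would translate vendor telemetry here.
        return {"battery": 0.82, "pose": (0.0, 0.0, 0.0)}

    def send_command(self, command: dict) -> None:
        self.last_command = command

# The rest of the stack only ever sees RobotAdapter, never the vendor SDK.
fleet: list[RobotAdapter] = [UnitreeAdapter()]
fleet[0].send_command({"gait": "trot", "velocity": 0.5})
```

The design choice is that incompatibility lives entirely inside each adapter; the layers above issue one command vocabulary to every quadruped.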

ON-ROBOT
FAST LLM RUNTIME

A reactive engine that either observes or controls. Multiple instances run per robot, each wired to different sensors. Each reads context, reasons about what's happening, and outputs action codes in milliseconds. It writes logs but cannot write memory.

Multi-Instance · Sensor Fusion · State Tracing · Log Writer · Edge Inference · ~120–350ms
02 / 05
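A per-sensor Fast LLM instance with log-only writes might look like the sketch below. The class name, the dict-based policy, and the `"ESCALATE"` sentinel are illustrative assumptions; a real instance would run an edge-inference model rather than a lookup table.

```python
import time

class FastLLMInstance:
    """One reactive instance per sensor feed. Appends to its own log;
    it has no handle to the memory layer at all."""

    def __init__(self, sensor_name, policy):
        self.sensor_name = sensor_name
        self.policy = policy          # maps observation -> action code
        self.log = []                 # append-only log, streamed to cloud

    def step(self, observation):
        # Unknown observations produce the escalation sentinel.
        action = self.policy.get(observation, "ESCALATE")
        self.log.append({"t": time.time(),
                         "sensor": self.sensor_name,
                         "obs": observation,
                         "action": action})
        return action

# Hypothetical thermal-camera instance on one robot.
thermal = FastLLMInstance("thermal_cam", {"clear": "CONTINUE_PATROL"})
action = thermal.step("heat_signature")   # not in policy -> escalation
```

The key structural point from the text is encoded directly: the instance owns a log it can append to, and nothing else.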

REACTIVE CONTROL
& STRATEGY ESCALATION

120–350ms per cycle. Reads raw sensor data, reasons, and fires motor commands before a human could blink. When it hits an unknown state, it escalates to the Strategy LLM for deeper reasoning.

ROS Bridge Control · Direct Motor Commands · Code Block State · Cloud Logging · Strategy LLM Call · Safe Mode Fallback
03 / 05
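One reactive cycle, with escalation and the safe-mode fallback, can be outlined as below. This is a hedged sketch of the control flow the card describes; the function names, the `TimeoutError` failure mode, and the action strings are assumptions for illustration.

```python
def control_cycle(frame, fast_policy, escalate, safe_mode):
    """One 120-350 ms cycle: act on known states, escalate unknowns,
    fall back to safe mode if escalation fails."""
    action = fast_policy.get(frame["state"])
    if action is not None:
        return action                 # direct motor command, no round trip
    try:
        return escalate(frame)        # Strategy LLM call for deeper reasoning
    except TimeoutError:
        return safe_mode()            # keep the robot in a stable posture

# Hypothetical policy and an escalation path with the cloud link down.
policy = {"path_clear": "FWD_0.5", "obstacle_ahead": "STOP"}

def escalate(frame):
    raise TimeoutError                # simulate an unreachable Strategy LLM

result = control_cycle({"state": "unknown_object"}, policy, escalate,
                       lambda: "SAFE_SIT")
```

Known states resolve to motor commands within the cycle; only genuinely unknown states pay the cost of an escalation, and a failed escalation still ends in a safe posture.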

STRATEGY
LLM

The per-robot brain. It creates, modifies, and deletes Fast LLMs, deciding what each one watches and how it reacts. It simulates every change in a sandbox before deploying, reads all logs, and connects cause and effect across time.

Scheduled Reads · Memory Writer · Sandbox Simulations · Dynamic Frequency · Fleet-Wide Reasoning · Policy Injection
04 / 05
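The sandbox-before-deploy loop described above can be reduced to a small sketch. The `StrategyLLM` class, the replay-based sandbox check, and the method names are assumptions; in the real system the sandbox would be a simulation, not a membership test.

```python
class StrategyLLM:
    """Scheduled reasoning: revise Fast LLM policies, sandbox-test them,
    then inject the survivors fleet-wide."""

    def __init__(self):
        self.deployed = {}            # Fast LLM instance name -> active policy

    def sandbox_ok(self, policy, scenarios):
        # Replay logged scenarios: a candidate must resolve all of them
        # itself, without falling back to escalation.
        return all(s in policy for s in scenarios)

    def push_policy(self, name, policy, scenarios):
        if not self.sandbox_ok(policy, scenarios):
            return False              # rejected in simulation, never deployed
        self.deployed[name] = policy  # policy injection
        return True

brain = StrategyLLM()
ok = brain.push_policy("thermal",
                       {"clear": "PATROL", "heat": "ALERT"},
                       scenarios=["clear", "heat"])
```

The ordering matters: a policy that fails in the sandbox never touches a live robot, which is the safety property the card is claiming.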

SCENARIOS
MEMORY LAYER

Persistent memory across all robots and sessions. Written exclusively by the Strategy LLM. Memory enhances strategic reasoning — it doesn't gate operations.

Persistent Memory · Telemetry Lake · Model Registry · Cross-Site Analytics · Training Pipelines · Compliance Export
05 / 05
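The write-exclusivity rule ("written exclusively by the Strategy LLM") can be made concrete with a gated store. This is a minimal sketch; the class, the string-token writer check, and the entry shape are all assumptions, and a production system would use real credentials rather than a label.

```python
class ScenarioMemory:
    """Persistent cross-robot store; write access is restricted to the
    Strategy LLM, while every layer may read."""

    _WRITER = "strategy_llm"          # hypothetical writer token

    def __init__(self):
        self._entries = []

    def write(self, entry, writer):
        if writer != self._WRITER:
            raise PermissionError("only the Strategy LLM may write memory")
        self._entries.append(entry)

    def read(self):
        return list(self._entries)    # reads never block operations

mem = ScenarioMemory()
mem.write({"lesson": "thermal false-positives near HVAC vents"},
          writer="strategy_llm")
```

Note that `read` always succeeds: memory enhances reasoning but, as the text says, does not gate operations.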

REASONING LAYER

Three Engines. One Brain.

Three reasoning systems working at different speeds — from millisecond reflexes to fleet-wide strategy. Each layer perceives, reasons, and acts autonomously.

FAST LLM

The Reflex Agent

Observe or control. Reads sensor context, reasons on what's happening, and outputs action codes in milliseconds. Behaviors defined by its commander.

When it can't resolve something, it escalates to the Strategy LLM for deeper reasoning.

Fast LLM Architecture

STRATEGY LLM

The Reasoning Layer

The per-robot brain. It creates, modifies, and deletes Fast LLMs, deciding what each one watches and how it reacts.

Simulates in a sandbox before deploying. Reads all logs, connects cause and effect across time.

Strategy LLM Architecture

SUPERVISOR

Fleet Command

The fleet layer. Sees across all Strategy LLMs, all robots, all sensors. Makes fleet-wide decisions.

Safety gate on every action. Where individual robot intelligence becomes collective intelligence.

Supervisor LLM Architecture
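The "safety gate on every action" claim can be sketched as a rule check that every fleet-level action passes before execution. The function, the rule lambdas, and the zone/speed fields are illustrative assumptions, not the Supervisor's actual rule set.

```python
def safety_gate(action, rules):
    """Supervisor check: an action executes only if every rule approves it."""
    return "APPROVED" if all(rule(action) for rule in rules) else "REJECTED"

# Hypothetical fleet rules.
rules = [
    lambda a: a.get("zone") != "restricted",   # never command restricted-zone entry
    lambda a: a.get("speed_mps", 0) <= 1.5,    # cap speed near crowds
]

verdict = safety_gate({"zone": "concourse", "speed_mps": 1.0}, rules)
```

Because the gate sits above all Strategy LLMs, one rule change constrains every robot at once, which is where per-robot intelligence becomes collective.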

Command Center

Real-Time Fleet Operations

Live monitoring of all robot units across venue zones — every Fast LLM instance streaming state, every Strategy LLM decision logged, fleet-wide position, battery, and mission status in real time.

SoFi Stadium venue map
SPOT-01 · SPOT-02 · SPOT-03 · SPOT-04 · SPOT-05 · SPOT-06 · GR-01
Robot Fleet
7 Units
Edge 38ms · WAN Nominal · --:--:-- UTC

Autonomous Response Pipeline

FAST LLM · ACT · ESCALATE · STRATEGY LLM

UNATTENDED OBJECT
RESTRICTED AREA INTRUSION
CROWD PRESSURE ANOMALY
NIGHT PERIMETER BREACH CUE
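Each scenario above travels the same Fast LLM → act → escalate → Strategy LLM path. A hypothetical routing table makes the split explicit; the event keys and action strings are invented for illustration.

```python
# Events the Fast LLM resolves in-cycle; everything else escalates.
REFLEX_ACTIONS = {
    "unattended_object": "HOLD_AND_OBSERVE",
    "restricted_area_intrusion": "APPROACH_AND_ANNOUNCE",
}

def route(event):
    """Return (handling layer, action) for an incoming event."""
    if event in REFLEX_ACTIONS:
        return ("FAST_LLM", REFLEX_ACTIONS[event])    # reflex, milliseconds
    return ("STRATEGY_LLM", "ESCALATE")               # deeper reasoning

decision = route("crowd_pressure_anomaly")
```

Ambiguous events like a crowd-pressure anomaly are exactly the ones worth the escalation latency; the reflex table handles the unambiguous ones.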

Platform Expansion

Every Venue.
Every Airport.

PhysicalAI deploys across all 11 FIFA 2026 LA venues and 3 regional airports, then extends to stadiums, campuses, warehouses, and critical infrastructure worldwide.

  • SoFi Stadium

    Phase 1 — Active

    Main venue. 70,240 capacity. Full fleet of 20 units. Primary command center GPU cluster deployed.

  • LAX Airport

    Phase 1 — Active

    International + domestic terminals. 8 robot units. Integrated with TSA coordination protocols.

  • Rose Bowl

    Phase 2 — Planned

    Pasadena venue. 88,565 capacity. Edge mission node deployed. Fleet onboarding Q1 2026.

  • Dignity Health

    Phase 2 — Planned

    Carson venue. Integration with existing arena security infrastructure. 12 units allocated.

  • Long Beach Airport

    Phase 2 — Planned

    Secondary fan arrival hub. 6 robot units. Fan zone and ground transport monitoring.

  • Fan Zones × 5

    Phase 2 — Planned

    Downtown LA, Hollywood, Santa Monica, Inglewood, Anaheim. Mobile observation units per zone.

  • Warehouses

    Phase 5 — Future

    Post-event platform expansion. Industrial patrol, hazard detection, inventory anomaly workflows.

  • Smart City

    Phase 5 — Future

    UAV coordination, wheeled robots, fixed camera fusion. The universal Physical AI Brain for any hardware.

Live Telemetry

Fast LLM Log Stream

Real-time streaming logs from every Fast LLM instance across all robots. Every perception cycle, every motor command, every state change, every Strategy LLM escalation — logged to cloud, queryable in real-time.
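A queryable log stream of this kind can be sketched with a flat record shape and an exact-match filter. The record fields (`robot`, `instance`, `event`, `latency_ms`) are assumptions about the schema, made up for this example.

```python
# Hypothetical record shape for the streamed Fast LLM log.
records = [
    {"robot": "SPOT-02", "instance": "thermal", "event": "escalation",    "latency_ms": 212},
    {"robot": "SPOT-02", "instance": "lidar",   "event": "motor_command", "latency_ms": 141},
    {"robot": "GR-01",   "instance": "thermal", "event": "motor_command", "latency_ms": 188},
]

def query(stream, **filters):
    """Filter log records by exact field match,
    e.g. all escalations, or all events from one robot."""
    return [r for r in stream
            if all(r.get(k) == v for k, v in filters.items())]

escalations = query(records, event="escalation")
```

Usage: `query(records, robot="SPOT-02", event="motor_command")` narrows to one robot's motor commands; in a real deployment the same filters would be pushed down to the telemetry lake rather than applied in memory.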

STREAMING BRAIN.RUNTIME