
Edge AI for drones, delivery & HAPS

Frontier AI inside the drone power budget.

Onboard vision-language reasoning, real-time perception, and comms-denied autonomy — running on the platform within a sub-10W envelope, so the model survives the same flight as the airframe. No tethered GPU. No cloud round-trip.

Use cases

Where on-platform AI earns its keep

  • GPS-degraded & BVLOS autonomy

    Visual-inertial navigation, terrain-relative localization, and route planning running on-platform with deterministic latency — beyond the link, beyond GNSS.

  • Onboard vision-language perception

    Run a VLM on the drone. Identify targets, classify scenes, and ground free-form instructions in what the camera sees — without offloading frames.

  • On-platform model inference

    LLM and vision-language models inside the AeroScale V1 envelope. The model lives where the camera lives — no chase vehicle, no GPU rack on the ground.

  • Comms-denied operation

    Decisions stay onboard, so behaviour does not degrade when the uplink drops.

  • Battery-budget AI

    A SWaP-tuned compute envelope: 4.5 W typical, 24 g module — designed to disappear into a payload bay or strap to an airframe.

  • Aerial-grade qualification

    MIL-STD-810H vibration, 30,000 ft altitude qualified, −40 to +85 °C operating. Secure boot and per-module attestation.
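The battery-budget claim is easy to sanity-check. A rough sketch, assuming a hypothetical 30-minute flight on a 4S 5000 mAh pack (both flight time and pack size are assumptions for illustration, not from the datasheet; only the 4.5 W draw is):

```python
# Rough energy-budget check for the 4.5 W module figure.
# Assumed (NOT from the spec): a 4S 5000 mAh pack (~14.8 V nominal)
# and a 30-minute flight.
PACK_WH = 14.8 * 5.0        # ~74 Wh of pack energy (assumed)
FLIGHT_HOURS = 0.5          # 30-minute flight (assumed)
MODULE_W = 4.5              # typical power envelope from the spec table

energy_used_wh = MODULE_W * FLIGHT_HOURS    # 2.25 Wh over the flight
share_of_pack = energy_used_wh / PACK_WH    # ~3% of the pack

print(f"{energy_used_wh:.2f} Wh, {share_of_pack:.1%} of the pack")
```

Under those assumed numbers the module consumes about 3% of the pack per flight, which is the sense in which the AI "fits inside the battery budget".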

The matched module

AeroScale V1

FPGA-based · aerial-grade

AeroScale V1 puts large language and vision-language models into autonomous drones, delivery platforms, and HAPS systems, within a sub-10W envelope so the AI fits the battery budget rather than the battery fitting the AI. Built on the Invotet Unified Engine running in an FPGA fabric you can buy today, not a chip-down ASIC waiting on tape-out.

Throughput: 38 GOPS
Power envelope: 4.5 W
Operating range: −40 to +85 °C
Weight: 24 g
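Dividing the two headline figures gives the compute efficiency the table implies:

```python
# Efficiency implied by the spec table: throughput per watt.
gops = 38.0    # Throughput (GOPS)
watts = 4.5    # Power envelope (W)
print(f"{gops / watts:.1f} GOPS/W")  # prints 8.4 GOPS/W
```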

Why this module, this vertical

The properties that map to your platform.

Up to 20× efficiency

A unified compute engine — systolic and vector processing in one — purpose-built for transformer workloads. Smallest logic footprint, highest utilization, up to 20× more efficient than NVIDIA Jetson.

Sustainable autonomy

Frontier-class models inside a sub-10W envelope. AI fits inside the battery or solar budget — Size, Weight, and Power optimized for every module.

Transformer-grade fidelity

BF16-native execution preserves training-equivalent accuracy at 95% sustained utilization, with native flash attention and hardware tensor-parallel sync. A cycle-accurate hardware trace buffer and compile-graph-to-hardware specialization make every inference verifiable and tuned to the workload.
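For context on what BF16-native buys: BF16 keeps FP32's 8-bit exponent and truncates the mantissa to 7 bits, so dynamic range is preserved and relative rounding error stays below about 0.4%. A minimal numpy simulation of that rounding (an illustration of the number format, not of the module's datapath):

```python
import numpy as np

def to_bf16(x: np.ndarray) -> np.ndarray:
    """Simulate BF16 by rounding float32 to its top 16 bits (round-to-nearest-even)."""
    bits = x.astype(np.float32).view(np.uint32)
    rounded = bits + 0x7FFF + ((bits >> 16) & 1)   # round-to-nearest-even
    return (rounded & 0xFFFF0000).view(np.float32)

x = np.linspace(-3.0, 3.0, 1001, dtype=np.float32)
err = np.abs(to_bf16(x) - x)
# BF16 shares FP32's exponent, so range is intact; the 7-bit mantissa
# bounds relative rounding error at roughly 2**-8 (~0.4%).
print(err.max())
```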

GPT-native logic

Matrix multiplication, softmax, element-wise operations, and the rest of the transformer operator set run natively in purpose-built logic — no general-purpose emulation tax.
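That operator set is compact; a minimal numpy attention block (illustrative only, not Invotet code) exercises every operation named above:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # The core transformer loop: matmul, element-wise scale, softmax, matmul.
    scores = q @ k.T / np.sqrt(q.shape[-1])   # matrix multiply + element-wise scale
    return softmax(scores) @ v                # softmax + matrix multiply

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention(q, k, v)
print(out.shape)  # (4, 8)
```

Running these few operations natively in dedicated logic, rather than emulating them on general-purpose cores, is the "no emulation tax" claim.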

Invotet SDK

Compile once. Deploy to every Invotet module.

A unified Python SDK that ingests PyTorch, ONNX, and HuggingFace checkpoints, quantizes for Invotet modules, and ships a deterministic runtime to the device. No CUDA in the loop.

  • Framework

    PyTorch

    Traced or torch.export graphs compile directly, with no model rewrite.

  • Framework

    ONNX

    Standards-based interchange — compile any ONNX-exported model.

  • Framework

    HuggingFace

    transformers checkpoints land on Invotet through a one-line loader.
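Put together, the claimed ingest-quantize-deploy flow might look like the sketch below. Every identifier in it (the invotet_sdk module, its compile and deploy calls, the target and device names) is hypothetical, written to illustrate the shape of the pipeline rather than the shipped API:

```python
# Hypothetical sketch only: invotet_sdk, compile(), deploy(), the
# "aeroscale-v1" target, and the device path are illustrative names.
import invotet_sdk as ivt
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("my-org/my-vlm")  # HuggingFace checkpoint
artifact = ivt.compile(model, target="aeroscale-v1")           # quantize + compile once
ivt.deploy(artifact, device="/dev/ivt0")                       # deterministic runtime on-device
```

"Compile once, deploy to every Invotet module" would then come down to swapping the target name, with the same compiled graph semantics on each.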

Talk to the team that ships the modules.

Most aerial conversations start with a sample unit and a flight envelope. Tell us the airframe, the payload budget, and the workload — we will line up an AeroScale V1 eval kit and route the right datasheet the same day.