
Edge AI for satellites, rovers & HAPS

Inference on-orbit. No downlink, no waiting, no compromise.

Single-event-upset tolerance, redundant compute fabric, and a thermal envelope qualified for sustained vacuum. Run frontier models on the platform — for satellites, lunar rovers, and high-altitude systems where the downlink is the bottleneck.

Use cases

Where on-platform AI earns its keep

  • On-orbit inference

    Triage Earth-observation frames before downlink. Send the conclusions, not the raw pixels — and free the bandwidth budget for what actually matters.

  • Onboard scene understanding

    Vision-language reasoning on rovers and orbital platforms: caption a scene, identify anomalies, and act without a ground station in the loop.

  • Lunar & planetary autonomy

    Long-duration autonomy across communication windows that can stretch from seconds to hours. The model survives the gap.

  • Anomaly & fault detection

    Run live anomaly detectors on raw telemetry. Catch failures inside the platform before the next ground pass.

  • Radiation-tolerant compute

    TMR + ECC + scrub for SEU mitigation. 20 krad TID. Sustained operation through the regimes that retire commercial silicon.

  • Vacuum-qualified thermal envelope

    Operation from −55 °C to +105 °C, MIL-STD-883 thermal cycling, and packaging vacuum-qualified to 10⁻⁶ Torr.
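The "TMR + ECC + scrub" bullet above can be made concrete with a minimal sketch: triple modular redundancy runs three copies of a computation and takes a bitwise 2-of-3 majority, so a single-event upset in any one copy is outvoted. This is a generic illustration of the technique, not Invotet's fabric logic.

```python
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority: any single corrupted copy is outvoted."""
    return (a & b) | (a & c) | (b & c)

# Three copies of the same result; one suffers a single-event upset.
good = 0b1011_0010
flipped = good ^ (1 << 4)  # SEU flips bit 4 in one copy

assert majority_vote(good, good, flipped) == good
```

The same voter works at any word width, which is why it maps cheaply onto FPGA fabric; ECC and scrubbing then handle upsets that accumulate in memory between votes.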

The matched module

AstroCore S

FPGA-based · space-grade

AstroCore S brings on-board AI to satellites, lunar rovers, and high-altitude platforms. Built on the Invotet Unified Engine in a radiation-tolerant FPGA fabric — single-event-upset tolerance, redundant compute, and a thermal envelope qualified for sustained vacuum operation. Deploy-ready hardware, not a chip-down ASIC waiting on tape-out.

Throughput: 38 GOPS
Power envelope: 4.5 W
Operating range: −55 to +105 °C
Radiation tolerance: 20 krad TID

Why this module, this vertical

The properties that map to your platform.

Sustained autonomy

Frontier-class models inside a sub-15 W envelope, so AI fits the battery or solar budget. Every module is optimized for size, weight, and power (SWaP).

Transformer-grade fidelity

BF16-native execution preserves training-equivalent accuracy at 95% sustained utilization, with native flash attention and hardware tensor-parallel sync. A cycle-accurate hardware trace buffer makes every inference verifiable, and graph-to-hardware compilation specializes the fabric to each workload.
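The accuracy claim above rests on how bfloat16 is constructed: it keeps float32's full 8-bit exponent and truncates the mantissa to 7 bits, so dynamic range (and with it training-time numerical behavior) survives at half the width. A minimal sketch of the conversion with round-to-nearest-even, independent of any Invotet internals (NaN payloads not handled):

```python
import struct

def float_to_bf16_bits(x: float) -> int:
    """Round a float32 value to bfloat16 (round-to-nearest-even); return the 16 raw bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    rounding = 0x7FFF + ((bits >> 16) & 1)  # ties round to even
    return ((bits + rounding) >> 16) & 0xFFFF

def bf16_bits_to_float(h: int) -> float:
    """Widen 16 bf16 bits back to float32 by zero-filling the low mantissa bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", h << 16))
    return x

# Range survives; only fine mantissa detail is dropped.
print(bf16_bits_to_float(float_to_bf16_bits(3.14159)))  # 3.140625
```

Because the exponent field is untouched, values that fit in float32 never overflow in bf16; the cost is mantissa precision, which transformer workloads tolerate well.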

Mil-spec environment

Operate from −55 °C to +105 °C. Survive thermal cycling, vacuum, and radiation regimes that disqualify commercial silicon.

Secure by design

Hardware root of trust, signed firmware, and per-module attestation keep both the model and the device tamper-evident.

Invotet SDK

Compile once. Deploy to every Invotet module.

A unified Python SDK that ingests PyTorch, ONNX, and HuggingFace checkpoints, quantizes for Invotet modules, and ships a deterministic runtime to the device. No CUDA in the loop.

  • Framework

    PyTorch

    Checkpoints captured via tracing or torch.export compile directly, with no model rewrite.

  • Framework

    ONNX

    Standards-based interchange — compile any ONNX-exported model.

  • Framework

    HuggingFace

    transformers checkpoints land on Invotet through a one-line loader.

Talk to the team that ships the modules.

Most space conversations start with a qualification report and a thermal/radiation profile. Tell us your bus, the orbit, and the mission window — we will route the right datasheet and qual evidence the same day.