Leopard

Data processing unit for in-orbit Artificial Intelligence

Leopard DPU

Leopard is a Data Processing Unit which enables mission designers to:

  • Apply Artificial Intelligence solutions in nano and small satellites.

  • Support the capture, management and processing of data in orbit.

Leopard redefines the current approach to remote sensing. Instead of sending huge, unprocessed data sets to ground stations, Leopard uses Deep Neural Networks to process data on board and sends only the most important and valuable insights to the ground. By reducing the time and cost of data transfer and processing, it lets you focus on a rapid response to any detected phenomena.
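
As a rough illustration of this insights-only downlink model, the sketch below compares the size of a raw scene with the size of a handful of detection records produced on board. The run_detector helper, the scene dimensions and the detection format are hypothetical stand-ins for illustration, not Leopard's actual software.

```python
# Minimal, hypothetical sketch of the "insights only" idea: instead of
# downlinking a full raw scene, run a detector in orbit and downlink only
# compact detection records. run_detector is a stand-in, not a real API.
import json
import numpy as np

def run_detector(scene: np.ndarray) -> list[dict]:
    """Placeholder for an on-board DNN detector (hypothetical)."""
    # Pretend the network found two objects of interest.
    return [
        {"class": "ship", "score": 0.97, "bbox": [1024, 2048, 1088, 2112]},
        {"class": "ship", "score": 0.91, "bbox": [4096, 512, 4160, 576]},
    ]

# A raw multispectral scene: 8000 x 8000 pixels, 4 bands, 16-bit samples.
scene = np.zeros((8000, 8000, 4), dtype=np.uint16)
raw_bytes = scene.nbytes                      # ~512 MB to downlink unprocessed

detections = [d for d in run_detector(scene) if d["score"] > 0.9]
insight_bytes = len(json.dumps(detections).encode())  # a few hundred bytes

print(f"raw scene: {raw_bytes / 1e6:.0f} MB, insights: {insight_bytes} B")
```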

Leopard integrates a powerful FPGA to accelerate the execution of deep learning algorithms, delivering a throughput of 1200 Giga Operations Per Second (GOPS). A number of hardware and software measures protect the computer against the effects of radiation.

With its small size (1U form factor), wide voltage range and universal interfaces, it is compatible with most CubeSat platforms. Its scalable architecture makes it possible to create larger and more powerful versions dedicated to bigger platforms as well. 

Key advantages

Huge processing power in a small form-factor

Powerful FPGA to accelerate the execution of deep learning algorithms.

Artificial Intelligence available on board

Processing data directly in orbit using Deep Neural Networks and ML libraries such as Caffe or TensorFlow (see the example sketch after these advantages).

Freedom, flexibility and security

Open-source operating system (Linux), support of multiple Neural Network models, single-level cell (SLC) memory and rad-tolerant design
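
To illustrate the TensorFlow path mentioned above, here is a minimal sketch of on-board inference on one image tile. The tiny network, the 512x512x4 tile size and the segmentation task are assumptions for illustration; in practice the model deployed on the Deep Learning Accelerator would be a trained Caffe or TensorFlow model.

```python
# Hedged sketch: TensorFlow inference on a single image tile, as it might
# run in the 64-bit Linux environment. The tiny network below is a stand-in
# for a trained model; the 512x512x4 tile size is an assumption.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(512, 512, 4)),               # 4 spectral bands
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 1, activation="sigmoid"),       # per-pixel mask
])

tile = np.random.rand(1, 512, 512, 4).astype("float32")       # one normalised tile
mask = model.predict(tile)                                    # on-board inference
print("segmentation output shape:", mask.shape)               # (1, 512, 512, 1)
```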

TECHNICAL SPECIFICATION

PROCESSING CORES
  • 2x4 ARM A53 CPU
  • 2x1 ARM R5 lock-step CPU
  • FPGA architecture for custom function implementation
  • A radiation-hardened Payload Controller

MEMORY
  • 2x 8/16 GiB DDR4 with Error Detection and Correction (EDAC)
  • 2x 16 GiB SLC flash-based filesystem storage (EDAC)
  • 500/1000 GiB SLC/MLC flash-based data storage

INTERFACES
  • Control interfaces: CAN, GPIO
  • Data interfaces: High-Speed LVDS, SPI, RS422/485
  • Additional custom interfaces

SPECIFICATIONS
  • Voltage: +6.5 V to +24 V
  • Power consumption: < 40 W
  • Computational throughput for neural network processing: > 1200 GOPS
  • Operating temperature: -40 °C to +90 °C
  • Radiation tolerance: > 20 kRad (Si)

SOFTWARE ECOSYSTEM
  • 64-bit Linux
  • Deep Learning Accelerator fed with Caffe or TensorFlow models
  • Flexible architecture
  • Reconfigurable FPGA

FORM FACTOR
  • 1x1x1U
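
As a rough sketch of how the CAN control interface might be exercised from the Linux software ecosystem, the snippet below listens for a telecommand frame and replies with a housekeeping frame. It assumes the bus appears as a SocketCAN device named can0 and uses the third-party python-can library; the message identifiers and payload layout are invented for illustration and do not describe Leopard's flight software.

```python
# Hedged sketch: listening for telecommands on a CAN control interface,
# assuming it appears as the SocketCAN device "can0" under Linux and that
# the python-can package is installed. Message IDs below are invented.
import can

bus = can.interface.Bus(channel="can0", interface="socketcan")

HOUSEKEEPING_REQUEST_ID = 0x0F0   # hypothetical telecommand identifier

try:
    while True:
        msg = bus.recv(timeout=1.0)           # block up to 1 s for a frame
        if msg is None:
            continue
        if msg.arbitration_id == HOUSEKEEPING_REQUEST_ID:
            # Reply with a (made-up) housekeeping frame: status byte + temperature.
            reply = can.Message(arbitration_id=0x0F1,
                                data=[0x01, 0x2A],
                                is_extended_id=False)
            bus.send(reply)
finally:
    bus.shutdown()
```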

Do you want to find out more?
Contact us!

Key applications

Image segmentation and object detection

Signal quality enhancement

Data compression and encryption

Spacecraft autonomy

Optical navigation

And many more!

Contact

Write, call us or meet us directly!