Intuition-1

Processing of hyperspectral images in orbit


Intuition-1 will be a 6U-class satellite with a data processing unit enabling on-board processing of data acquired by a hyperspectral instrument operating in the visible and near-infrared range. The applied neural networks can be reconfigured during the mission to adapt to current needs. Thanks to neural-network-based analysis and processing of images in orbit, the amount of data sent to the ground station will be reduced by up to 100 times.

The ground station used for communication with the future satellite will be constructed in Gliwice, at the same location as one of the two PW-Sat2 ground stations.

Hyperspectral instrument

The light wavelength range between 470 nm and 900 nm, divided into 150 channels, will allow a detailed analysis of the research area

Leopard DPU

On-board image and data processing, and reconfiguration of the satellite for a new purpose

AI-powered algorithms

Optical data acquisition, preparation and manipulation, classification, segmentation, compression, and much more

The hyperspectral camera

The hyperspectral camera will use up to 150 spectral bands in the range of 470 nm – 900 nm. Every scene captured by the camera will contain an image divided into multiple frames, each registering a different spectral range. This means that to obtain a hyperspectral image of a given area, the frames from all spectral ranges will have to be assembled and processed by the data processing unit – Leopard – which will also store images in non-volatile memory.
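The frame-assembly step described above can be sketched as stacking one frame per spectral band into a single data cube. This is a minimal illustration only: the frame dimensions and the even spacing of band centres are assumptions, not published mission parameters.

```python
import numpy as np

# Hypothetical frame size for illustration; the actual sensor
# resolution of Intuition-1 is not stated in the text.
BANDS, HEIGHT, WIDTH = 150, 128, 128

# Assumed band centres: 150 channels spread over 470-900 nm.
wavelengths_nm = np.linspace(470, 900, BANDS)

def assemble_cube(frames):
    """Stack per-band frames (each HEIGHT x WIDTH) into a
    hyperspectral cube of shape (bands, height, width).

    `frames` is an iterable of 2-D arrays, one per spectral band,
    ordered from the shortest to the longest wavelength.
    """
    cube = np.stack(list(frames), axis=0)
    if cube.shape[0] != BANDS:
        raise ValueError(f"expected {BANDS} frames, got {cube.shape[0]}")
    return cube

# Example: simulated frames for one scene.
frames = [np.random.rand(HEIGHT, WIDTH).astype(np.float32)
          for _ in range(BANDS)]
cube = assemble_cube(frames)
print(cube.shape)  # (150, 128, 128)
```

A unit like Leopard would then operate on this cube directly, without ever downlinking the individual frames.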

Leopard

Leopard DPU on-board the Intuition-1 will enable segmentation and classification of hyperspectral imagery right in Earth’s orbit.

Thanks to in-orbit processing of the collected imagery, at least 100 times less data needs to be transferred to the ground station. End-users will thus gain faster access to any relevant information gathered by the satellite. Images will be captured daily, allowing continuous monitoring of events such as floods.
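The order of magnitude of this reduction can be checked with back-of-the-envelope arithmetic: downlinking a per-pixel class map instead of the full spectral cube shrinks the payload by roughly the number of bands times the ratio of bit depths. The scene size, sensor bit depth, and class count below are assumptions for illustration, not mission figures.

```python
# Data volume of a raw hyperspectral cube vs. an on-board
# classification map (all parameters are assumed values).
bands = 150
pixels = 1024 * 1024            # hypothetical scene size
raw_bits_per_pixel = 12         # assumed sensor bit depth per band
raw_bits = pixels * bands * raw_bits_per_pixel

classes = 16                    # hypothetical number of land-cover classes
map_bits_per_pixel = 4          # 4 bits are enough to index 16 classes
map_bits = pixels * map_bits_per_pixel

print(f"reduction factor: {raw_bits // map_bits}x")  # 450x
```

Even with generous headroom for metadata and compression overhead, the result comfortably exceeds the "at least 100 times" figure quoted above.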

Intended on-board operations

  • Pre-processing – correction of geometric errors arising from light refraction in the Earth’s atmosphere and the optical system, vibrations of the optical system, radial distortion, parallax, scattering, sunspots, solar irradiance, and camera lens impurities.

  • Applying a convolutional neural network to analyse a selected area and detect monitored events and materials.
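The second step can be sketched as a per-pixel spectral classifier. A 1×1 convolution over the band axis reduces, for each pixel, the 150 band values to a vector of class scores; a real on-board network would stack several such layers with spatial kernels and non-linearities. The cube size, class count, and random weights below are stand-ins for illustration, not the mission's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cube: 150 bands over a 32x32 pixel tile (hypothetical sizes).
cube = rng.random((150, 32, 32), dtype=np.float32)

# One 1x1 "spectral" convolution layer: each class score is a linear
# combination of the 150 band values at that pixel. The weights here
# are random stand-ins, not trained values.
n_classes = 8
weights = rng.standard_normal((n_classes, 150)).astype(np.float32)

scores = np.einsum("cb,bhw->chw", weights, cube)   # (classes, H, W)
label_map = scores.argmax(axis=0)                  # per-pixel class index

print(label_map.shape)  # (32, 32)
```

The `label_map` array is exactly the kind of compact product that would be downlinked instead of the raw cube.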

Use of the data collected

Agriculture

Land coverage classification, crop forecasting, crop maps, soil maps, plant disease detection, biomass monitoring, and weed mapping

Forestry

Forest classification, identifying species and the condition of forests, forestation planning

Environmental protection

Water and soil pollution maps, land development management and analysis
