
Project: Machine Learning Datasets for Computer Vision in Space


Autonomous vision-based spaceborne navigation is an enabling technology for future on-orbit servicing and space logistics missions. While computer vision in general has benefited from Machine Learning (ML), training and validating spaceborne ML models are extremely challenging due to the impracticality of acquiring a large-scale labeled dataset of images of the desired target in the space environment. In response to these challenges, the Stanford Space Rendezvous Laboratory (SLAB) has created a number of datasets to train and test ML models for monocular pose estimation and tracking of a known, non-cooperative spacecraft.

Spacecraft Pose Estimation Dataset (SPEED)

SPEED is the first publicly available benchmark dataset for spacecraft pose estimation. It consists entirely of synthetic images of the Tango spacecraft from the PRISMA mission, rendered using OpenGL. SPEED was used for the international Satellite Pose Estimation Challenge (SPEC2019), co-organized by SLAB and the Advanced Concepts Team (ACT) of the European Space Agency (ESA) in 2019.
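Each SPEED image is labeled with the target's relative pose, i.e. an orientation (unit quaternion) and a relative position vector. A common way to score a predicted pose against the ground truth in SPEC-style evaluations combines the quaternion angular error with a normalized translation error. The sketch below assumes scalar-first unit quaternions and is illustrative only, not the official challenge scoring code:

```python
import numpy as np

def pose_score(q_est, t_est, q_gt, t_gt):
    """Combined pose error: orientation angle (rad) + normalized translation error.

    q_est, q_gt: unit quaternions as length-4 arrays (scalar-first assumed).
    t_est, t_gt: relative position 3-vectors in the camera frame.
    """
    q_est = np.asarray(q_est, dtype=float)
    q_gt = np.asarray(q_gt, dtype=float)
    # Angle between the two orientations; |dot| handles the q / -q double cover.
    dot = np.clip(abs(np.dot(q_est, q_gt)), 0.0, 1.0)
    e_rot = 2.0 * np.arccos(dot)
    # Translation error normalized by the ground-truth range to the target.
    e_trans = np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt)) / np.linalg.norm(t_gt)
    return e_rot + e_trans

# A perfect estimate scores 0.
print(pose_score([1, 0, 0, 0], [0, 0, 10], [1, 0, 0, 0], [0, 0, 10]))  # → 0.0
```

Because the translation term is normalized by range, the metric weights near-field and far-field frames comparably.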

See ESA’s Kelvins website for SPEC2019 here.

The SPEED dataset is available on the Kelvins platform above and at the Stanford Digital Repository:

Sharma S., Park T. H., D’Amico S.;
Spacecraft Pose Estimation Dataset (SPEED);
Stanford Digital Repository (2020).

Next-Generation Spacecraft Pose Estimation Dataset (SPEED+)

The synthetic imagery of SPEED is easy to mass-produce but fails to capture the visual features and illumination variability inherent to spaceborne images. To bridge the gap between current practices and the intended applications in future space missions, SPEED+ was introduced.

The core of SPEED+ is the Hardware-In-the-Loop (HIL) imagery created in the Testbed for Rendezvous and Optical Navigation (TRON) facility at SLAB. TRON is a first-of-its-kind robotic testbed capable of capturing an arbitrary number of target images with accurate, maximally diverse pose labels under high-fidelity spaceborne illumination conditions. For SPEED+, 9,531 HIL images of a mockup model of the Tango spacecraft were captured to validate the robustness of ML models trained on synthetic images across the domain gap.

SPEED+ was used in the second international Satellite Pose Estimation Competition (SPEC2021) co-hosted once again by SLAB and ACT in 2021. See ESA’s Kelvins website for SPEC2021 here.

The binary segmentation masks for SPEED+ images are also available through the SPNv2 GitHub repository here.

The SPEED+ dataset is available at the Stanford Digital Repository:

Park, T. H., Märtens, M., Lecuyer, G., Izzo, D., D’Amico, S.;
Next Generation Spacecraft Pose Estimation Dataset (SPEED+);
Stanford Digital Repository (2021).

Example images from different domains of SPEED+.

Satellite Hardware-In-the-loop Rendezvous Trajectories (SHIRT) Dataset

The Satellite Hardware-In-the-loop Rendezvous Trajectories (SHIRT) dataset extends SPEED+ with sequential images of the target mockup satellite along simulated rendezvous trajectories. As in SPEED+, the dataset contains both synthetic imagery from OpenGL and HIL lightbox images from TRON, with identical pose labels for two representative rendezvous scenarios: ROE1 and ROE2. In ROE1, the servicer maintains the along-track separation typical of a standard v-bar hold point while the target spins about one principal axis; in ROE2, the servicer slowly approaches the target as it tumbles about two principal axes. As shown in the GIF below, the synthetic and lightbox images for the same trajectory share geometric and illumination consistency while exhibiting stark differences in visual features.
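Because SHIRT images are sequential, they are naturally consumed in trajectory order, e.g. to feed a pose-tracking filter frame by frame, or to compare the two domains image by image. The directory layout and file names below are hypothetical, chosen only to illustrate iterating a trajectory's synthetic and lightbox frames in lockstep, which is possible because both domains share identical pose labels:

```python
from pathlib import Path

def iter_trajectory_pairs(root, trajectory="roe1"):
    """Yield (synthetic, lightbox) image paths for one trajectory, in frame order.

    Assumes a hypothetical layout: <root>/<trajectory>/{synthetic,lightbox}/*.jpg
    with matching file names across the two domains.
    """
    synth_dir = Path(root) / trajectory / "synthetic"
    light_dir = Path(root) / trajectory / "lightbox"
    for synth in sorted(synth_dir.glob("*.jpg")):
        lightbox = light_dir / synth.name
        if lightbox.exists():  # same frame index, same pose label, different domain
            yield synth, lightbox
```

Sorting by file name stands in for frame order here; a real loader would read the dataset's own index or label files.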

See the project website for SHIRT here.

The SHIRT dataset is available at the Stanford Digital Repository:

Park, T. H., D’Amico, S.;
SHIRT: Satellite Hardware-In-The-Loop Rendezvous Trajectory Dataset;
Stanford Digital Repository (2022).

 

Spacecraft PosE Estimation Dataset of a 3U CubeSat using Unreal Engine (SPEED-UE-Cube)

SPEED-UE-Cube is a derivative of the SPEED/SPEED+ datasets that instead models a 3U CubeSat and is rendered using Unreal Engine 5. It models spaceborne imagery of the CubeSat and consists of two subsets: a training dataset of 30,000 images with an 80/20 training/validation split, and a trajectory dataset of 1,186 images depicting a rendezvous scenario between the CubeSat and a servicer spacecraft. The dataset is released under a CC-BY-4.0 license and is therefore available for commercial use.
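The 80/20 training/validation split above corresponds to 24,000 training and 6,000 validation images out of 30,000. As a minimal sketch (the released dataset may define its own split; this only shows one reproducible way to draw such a split from image indices):

```python
import random

def train_val_split(n_images, val_fraction=0.2, seed=0):
    """Shuffle indices 0..n_images-1 and split off a validation fraction."""
    indices = list(range(n_images))
    random.Random(seed).shuffle(indices)  # fixed seed => reproducible split
    n_val = int(n_images * val_fraction)
    return indices[n_val:], indices[:n_val]  # (train, val)

train_idx, val_idx = train_val_split(30_000)
print(len(train_idx), len(val_idx))  # → 24000 6000
```

Fixing the shuffle seed keeps the split stable across runs, which matters when comparing models trained on the same data.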

Sample training images from SPEED-UE-Cube.
Trajectory images from SPEED-UE-Cube.

SPEED-UE-Cube is available for download from MathWorks and the Stanford Digital Repository:

Park, T. H., Ahmed, Z., Bhattacharjee, A., Fazel-Rezai, R., Graves, R., Saarela, O., Teramoto, R., Vemulapalli, K., D'Amico, S.;
Spacecraft Pose Estimation Dataset of a 3U CubeSat using Unreal Engine (SPEED-UE-Cube);
Stanford Digital Repository (2024).  

 

Spacecraft Pose Estimation and 3D Reconstruction (SPE3R) Dataset

While the aforementioned datasets consist of images of a single target, the SPE3R dataset comprises watertight 3D mesh models of 64 unique satellites selectively acquired from the NASA 3D Resources and the ESA Science Satellite Fleet. Each model is accompanied by 1,000 images, binary masks, and corresponding pose labels to support simultaneous 3D structure characterization and pose estimation. The images and binary masks are rendered using a custom synthetic scene developed in Unreal Engine 5.
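Conceptually, each SPE3R sample ties a rendered image and its binary mask to a pose label and to the shared mesh model of its satellite. A minimal in-memory representation of that pairing (the field names below are assumptions for illustration, not the dataset's actual schema) could look like:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SPE3RSample:
    """One of the 1,000 rendered views of a satellite mesh model."""
    model_id: str        # which of the 64 mesh models, e.g. "satellite_017"
    image_path: str      # rendered RGB image
    mask_path: str       # binary segmentation mask
    rotation: tuple      # orientation label (e.g. unit quaternion, scalar-first)
    translation: tuple   # relative position label in the camera frame

def samples_per_model(samples):
    """Group samples by mesh model, e.g. for per-satellite train/test splits."""
    groups = {}
    for s in samples:
        groups.setdefault(s.model_id, []).append(s)
    return groups
```

Grouping by model is the natural unit for evaluating 3D reconstruction, since all 1,000 views of one satellite share a single ground-truth mesh.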

The SPE3R dataset is available at the Stanford Digital Repository: 

Park, T. H., D’Amico, S.;
SPE3R: Synthetic Dataset for Satellite Pose Estimation and 3D Reconstruction;
Stanford Digital Repository (2024).

 

 




Related Publications

Ahmed, Z., Park, T. H., Bhattacharjee, A., Fazel-Rezai, R., Graves, R., Saarela, O., Teramoto, R., Vemulapalli, K., D'Amico, S.;
SPEED-UE-Cube: A Machine Learning Dataset for Autonomous, Vision-Based Spacecraft Navigation;
46th Rocky Mountain AAS Guidance, Navigation and Control Conference, Breckenridge, Colorado, February 2-7, 2024.

Park, T. H., D'Amico, S.; 
Rapid Abstraction of Spacecraft 3D Structure from Single 2D Image;
AIAA SCITECH 2024 Forum, 2024. DOI: 10.2514/6.2024-2768

Park, T. H., D’Amico, S.;
Adaptive Neural Network-based Unscented Kalman Filter for Robust Pose Tracking of Noncooperative Spacecraft;
Journal of Guidance, Control, and Dynamics, Vol. 46, No. 9, pp. 1671-1688 (2023). DOI: 10.2514/1.G007387

Park, T. H., Märtens, M., Jawaid, M., Wang, Z., Chen, B., Chin, T.-J., Izzo, D., D'Amico, S.;
Satellite Pose Estimation Competition 2021: Results and Analyses;
Acta Astronautica, Vol. 204, pp. 640-665 (2023), ISSN 0094-5765. DOI: 10.1016/j.actaastro.2023.01.00

Park, T. H., Märtens, M., Lecuyer, G., Izzo, D., D’Amico, S.;
SPEED+: Next-Generation Dataset for Spacecraft Pose Estimation across Domain Gap;
2022 IEEE Aerospace Conference (AERO), 2022, pp. 1-15, doi: 10.1109/AERO53065.2022.9843439.

Park, T. H., Bosse, J., D’Amico, S.;
Robotic Testbed for Rendezvous and Optical Navigation: Multi-Source Calibration and Machine Learning Use Cases;
2021 AAS/AIAA Astrodynamics Specialist Conference, Big Sky, Virtual, August 9-11 (2021).

Kisantal M., Sharma S., Park T. H., Izzo D., Märtens M., D'Amico S.;
Satellite Pose Estimation Challenge: Dataset, Competition Design and Results;
IEEE Transactions on Aerospace and Electronic Systems, Vol. 56, No. 5, pp. 4083-4098 (2020). DOI: 10.1109/TAES.2020.2989063

Sharma S., D'Amico S.;
Neural Network-Based Pose Estimation for Noncooperative Spacecraft Rendezvous;
IEEE Transactions on Aerospace and Electronic Systems (2020). DOI: 10.1109/TAES.2020.2999148.

Sharma S., D'Amico S.;
Pose Estimation for Non-Cooperative Spacecraft Rendezvous Using Neural Networks;
29th AAS/AIAA Space Flight Mechanics Meeting, Ka'anapali, Maui, HI, January 13-17 (2019).

Sharma S.;
Pose Estimation of Uncooperative Spacecraft using Monocular Vision and Deep Learning;
Stanford University, PhD Thesis (2019).

Sharma S., Koenig A., Sullivan J., D'Amico S.;
Verification of Light-box Devices for Earth Albedo Simulation;
Technical Note, Stanford Space Rendezvous Lab (SLAB), January (2016).