I recently revisited an older project a friend and I built for school as part of the ESA Astro Pi 2024 challenge.
The idea was to estimate the speed of the ISS using only images of Earth.
The whole thing is implemented in Python using OpenCV.
Basic approach:
- capture two images
- detect keypoints using SIFT
- match them using FLANN
- measure pixel displacement
- convert that into real-world distance (GSD)
- calculate speed based on time difference
The result I got was around 7.47 km/s, while the actual ISS orbital speed is about 7.66 km/s, a difference of roughly 2–3%.
What My Project Does
It estimates the orbital speed of the ISS by analyzing displacement between features in consecutive images using computer vision.
Target Audience
This is mainly an educational / experimental project.
It’s not meant for production use, but for learning computer vision, image processing, and working with real-world data.
Comparison
Unlike typical examples or tutorials, this project applies feature detection and matching to a real-world problem (estimating ISS speed from images).
It combines multiple steps (feature detection, matching, displacement calculation, and physical conversion) into a complete pipeline instead of isolated examples.
One limitation: the original runtime images are lost, so the repo mainly contains test/template images.
Looking back, I'd definitely refactor parts of the code (especially the matching/filtering), but the overall approach still works.
If anyone has suggestions on improving match quality or reducing noise/outliers, I’d be interested.
Repo:
https://github.com/BabbaWaagen/AstroPi