Capture low resolution images and run a simple onboard model to score frames for usefulness (cloud cover, motion blur, target present). Downlink only top scored images plus compressed thumbnails to reduce bandwidth.
This is an intermediate-level project with an estimated timeline of 12-18 months using a 0.5U form factor.
Downlink bandwidth is the scarcest resource on most small satellites. A camera might capture hundreds of images per orbit, but only a fraction can be transmitted during the brief minutes of ground station contact. Smart image triage solves this by scoring every captured frame for usefulness before committing it to the downlink queue. A simple onboard model evaluates each image for cloud cover percentage, motion blur, whether the target region is in frame, and overall image quality. High-scoring frames get full-resolution downlink priority; medium-scoring frames are compressed to small thumbnails; and low-scoring frames are discarded entirely. The net effect is a dramatic increase in the useful data yield per downlink pass. Unlike full image classification, triage does not need to identify what is in the image, only whether the image is worth sending. This makes the model simpler, faster, and more robust, and puts it well within reach of a student team using standard machine learning tools. The experiment validates triage effectiveness by also downlinking a random sample of untriaged images for ground comparison, confirming that the onboard scoring accurately identifies the most valuable frames.
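The three-tier decision described above (full-resolution downlink, thumbnail only, or discard) can be sketched as a single scoring cutoff function. The threshold values here are illustrative placeholders, not figures from the project spec:

```python
def triage(score, full_threshold=70, thumb_threshold=30):
    """Map a 0-100 usefulness score to a downlink action.

    Thresholds are hypothetical defaults; a real mission would tune
    them against ground-truth labels from the untriaged sample.
    """
    if score >= full_threshold:
        return "full_res"    # priority slot in the downlink queue
    if score >= thumb_threshold:
        return "thumbnail"   # compressed 64x48 thumbnail only
    return "discard"         # never queued for transmission
```

For example, `triage(85)` returns `"full_res"` while `triage(12)` returns `"discard"`. Keeping the thresholds as parameters lets the ground team uplink new values as the scoring model is calibrated on orbit.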
ArduCAM OV2640 (2MP, ~$15) captures images at scheduled intervals. ESP32-S3 co-processor runs a simple CNN or scoring function: cloud cover percentage (histogram-based), blur detection (Laplacian variance), target-of-interest flag (land vs ocean). Score each frame 0-100. Downlink only frames scoring above threshold plus compressed 64x48 thumbnails of all frames for ground verification. Implement JPEG quality scaling: high-score frames at quality 90, low-score at quality 10. Log scoring metadata for ground comparison.
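A minimal sketch of the scoring function, using pure Python on a grayscale pixel grid so the logic is visible; a flight build would use `cv2.Laplacian(img, cv2.CV_64F).var()` for the blur metric and NumPy histograms for the cloud estimate. The brightness cutoff, blur floor, and 50/50 weighting are assumptions for illustration:

```python
def laplacian_variance(img):
    """Blur metric: variance of the 4-neighbour Laplacian over the
    interior pixels. img is a 2-D list of grayscale values (0-255).
    Low variance means few sharp edges, i.e. a blurred frame."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1]
                   + img[y][x+1] - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def cloud_fraction(img, bright=200):
    """Rough cloud estimate: fraction of near-white pixels.
    The brightness cutoff (200) is a placeholder threshold."""
    flat = [p for row in img for p in row]
    return sum(p >= bright for p in flat) / len(flat)

def frame_score(img, blur_floor=100.0):
    """Combine the metrics into a 0-100 usefulness score.
    blur_floor and the equal weights are tuning assumptions."""
    sharp = min(laplacian_variance(img) / blur_floor, 1.0)  # 1.0 = sharp
    clear = 1.0 - cloud_fraction(img)                       # 1.0 = cloud-free
    return round(100 * (0.5 * sharp + 0.5 * clear))
```

The score then drives the JPEG quality scaling: frames above the downlink threshold are encoded at quality 90, the rest at quality 10 before thumbnailing.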
PhiSat-1 proved the concept at professional scale, reducing downlink by 50%+ with onboard cloud detection. The student version simplifies to histogram-based scoring (no deep learning required for the MVP). Cloud cover can be estimated from the blue/white pixel ratio with >80% accuracy using simple thresholds. Blur detection via Laplacian variance is a single OpenCV function. Bandwidth reduction is the primary metric: aim for a 3-5x reduction in downlinked data volume. Pairs naturally with project 14 (earth camera) as the imaging data source. Cost: $200-$500 for camera + co-processor. Complexity: intermediate.
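The 3-5x target can be sanity-checked with back-of-envelope arithmetic. The per-frame sizes below (~300 kB for a full 2MP JPEG, ~3 kB for a 64x48 thumbnail) are illustrative assumptions, not measured values; note that every frame still ships a thumbnail for ground verification, so the thumbnail cost is a fixed overhead:

```python
def downlink_ratio(frac_full, full_kb=300.0, thumb_kb=3.0):
    """Reduction factor vs sending every frame at full resolution.

    frac_full: fraction of frames scoring above the downlink threshold.
    Sizes are placeholder assumptions for a 2MP JPEG and a 64x48 thumb.
    """
    # Triaged cost per frame: selected full frames plus a thumbnail of everything.
    per_frame_triaged = frac_full * full_kb + thumb_kb
    return full_kb / per_frame_triaged
```

With 20% of frames passing triage, `downlink_ratio(0.2)` works out to roughly 4.8x, inside the 3-5x goal; at 30% it drops to about 3.2x, showing how sensitive the metric is to the score threshold.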
This project spans 2 disciplines, making it suitable for interdisciplinary student teams.
Ready to take on this project? Here's a general roadmap that applies to most CubeSat missions:
Connect with a Blackwing chapter for mentorship, platform access, and a path to orbit.