Run a lightweight anomaly detector on housekeeping telemetry (power, thermal, ADCS, comms) to flag outliers and trending failures. Compare ML alerts against simple threshold rules and downlink only flagged windows for analysis.
This is an intermediate-level project with an estimated timeline of 12-18 months using a 0.5U form factor.
Satellites are expensive to build and impossible to repair, so catching problems early, before they cascade into mission-ending failures, is enormously valuable. Traditional health monitoring uses fixed thresholds: if battery voltage drops below X, trigger an alert. But threshold-based monitoring misses subtle trends, ignores correlations between subsystems, and generates false alarms when normal operating patterns shift with seasons or mission phases. A machine learning anomaly detector learns what normal looks like by studying the satellite's own telemetry during healthy operation, then flags deviations from that learned baseline. The model runs directly on the satellite, examining housekeeping data streams in real time (power bus voltage and current, thermal readings, attitude sensor outputs, radio signal metrics) and marking windows that deviate significantly from expected patterns. Only flagged windows are prioritized for downlink and ground analysis, reducing operator workload. The experiment compares ML-based anomaly detection against traditional threshold rules on the same telemetry, quantifying whether the ML approach catches problems that thresholds miss and whether it reduces false-alarm rates. This is a software-only payload requiring no additional hardware, making it one of the lowest-cost and lowest-risk experiments available.
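To make the comparison concrete, here is a minimal sketch of the threshold-rule baseline the ML detector is measured against. The channel names and limits are illustrative assumptions, not flight values.

```python
# Threshold-rule baseline sketch. Channel names and limits are illustrative
# assumptions, not flight values.
THRESHOLDS = {
    "battery_voltage_v": (6.4, 8.4),    # assumed safe bus voltage range
    "board_temp_c":      (-20.0, 60.0), # assumed board temperature limits
    "gyro_rate_dps":     (-5.0, 5.0),   # assumed body-rate limits
}

def threshold_alerts(sample: dict) -> list[str]:
    """Return the names of channels that violate their fixed limits."""
    alerts = []
    for channel, (low, high) in THRESHOLDS.items():
        value = sample.get(channel)
        if value is not None and not (low <= value <= high):
            alerts.append(channel)
    return alerts

# Example: a slightly low bus voltage trips the rule; nothing else does.
print(threshold_alerts({"battery_voltage_v": 6.1, "board_temp_c": 21.0}))
```

Rules like this are easy to reason about but evaluate each channel in isolation, which is exactly the gap the learned baseline is meant to close.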
Software-only payload running on existing PyCubed OBC or ESP32-S3 co-processor. Ingest housekeeping telemetry streams (battery voltage, solar current, temperature array, gyro rates, radio RSSI) at 1-10 Hz. Train a lightweight autoencoder or isolation forest model on ground using simulated nominal telemetry from flatsat testing. Deploy quantized model (INT8, <50 KB) onboard. Flag anomalous windows (>3σ deviation from learned normal). Downlink only flagged telemetry windows plus model confidence scores. Compare ML alerts against simple threshold rules to quantify ML value-add.
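A ground-side training sketch for the isolation forest option, assuming nominal flatsat telemetry is logged as a samples-by-channels array at roughly 1 Hz. The window length, feature choice, file name, and flagging cutoff are all assumptions for illustration, not a fixed pipeline.

```python
# Ground-side training sketch (assumed data layout and hypothetical file name).
import numpy as np
from sklearn.ensemble import IsolationForest

def windowed_features(telemetry: np.ndarray, window: int = 60) -> np.ndarray:
    """Summarize each window of samples by per-channel mean and std."""
    n = telemetry.shape[0] // window
    chunks = telemetry[: n * window].reshape(n, window, -1)
    return np.hstack([chunks.mean(axis=1), chunks.std(axis=1)])

# Columns assumed: battery V, solar I, temperatures, gyro rates, RSSI.
nominal = np.load("flatsat_nominal_telemetry.npy")   # hypothetical log file
X = windowed_features(nominal)

model = IsolationForest(n_estimators=100, contamination=0.01, random_state=0)
model.fit(X)

# score_samples: higher means more normal. Flag windows scoring well below the
# training distribution, a stand-in for the ">3 sigma from learned normal" rule.
scores = model.score_samples(X)
flag_cutoff = scores.mean() - 3 * scores.std()
print("flag windows with score <", flag_cutoff)
```

The trained model (or an autoencoder equivalent) would then be quantized and exported for the onboard target; Edge Impulse or a similar toolchain can handle that step.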
Extends project 4 concept to non-imaging domain. ESA Phi-Sat demonstrated value of onboard ML for data triage. Edge Impulse anomaly detection pipeline supports time-series data natively. Key advantage: no additional hardware; it runs entirely on existing bus sensors and compute. Produces publishable comparison: ML anomaly detection vs threshold rules on real space telemetry. Industry trend: satellite operators increasingly interested in autonomous health monitoring. Could detect early signs of battery degradation, thermal runaway, or ADCS issues before they become critical. Cost: $0-$200 (ESP32-S3 co-processor if needed). Complexity: intermediate; ML model training requires some data science skill, but Edge Impulse lowers the barrier significantly.
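The on-orbit comparison itself can be a small bookkeeping loop: score each telemetry window with both detectors, record agreements and disagreements, and downlink only flagged windows. The sketch below assumes the ML score and threshold hits come from the routines above; the function and field names are placeholders, not a defined interface.

```python
# Per-window comparison sketch (hypothetical field names; the ML score comes
# from whatever quantized inference runtime is deployed onboard).
def classify_window(ml_score: float, flag_cutoff: float, threshold_hits: list[str]) -> dict:
    ml_flag = ml_score < flag_cutoff        # learned model calls it anomalous
    rule_flag = len(threshold_hits) > 0     # fixed limits call it anomalous
    return {
        "ml_flag": ml_flag,
        "rule_flag": rule_flag,
        "ml_only": ml_flag and not rule_flag,    # candidate ML value-add catches
        "rule_only": rule_flag and not ml_flag,  # candidate ML misses
        "downlink": ml_flag or rule_flag,        # only flagged windows go down
    }
```

Counting the "ml_only" and "rule_only" cases over the mission is the raw material for the ML-vs-threshold comparison described above.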
This project spans 3 disciplines, making it suitable for interdisciplinary student teams.
Ready to take on this project? Here's a general roadmap that applies to most CubeSat missions:
Connect with a Blackwing chapter for mentorship, platform access, and a path to orbit.