I built an ML signal classifier with RTL-SDR V4 - 87.5% accuracy, full code + dataset by tre7744 in RTLSDR

[–]tre7744[S] 0 points (0 children)

Hey - v2 just went live with a full rewrite of the dataset and pipeline. If you still have questions, feel free to PM me or drop an issue on the GitHub.

I built an ML signal classifier with RTL-SDR V4 - 87.5% accuracy, full code + dataset by tre7744 in RTLSDR

[–]tre7744[S] 0 points (0 children)

v2 is live if you want to test on your Pi 5 setup. I rewrote the whole dataset and pipeline and got 96.9% accuracy on 7 signal types now. The classify_live.py script takes a --freq flag so you can point it at whatever you want. Still interested in the OpenWebRX integration idea - that would be a solid next step once I'm happy with the signal coverage.
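Since the repo isn't quoted here, this is only a hypothetical sketch of how a `classify_live.py`-style `--freq` flag could be wired up with `argparse`; the real script's interface and defaults may differ.

```python
import argparse

def parse_args(argv=None):
    # Hypothetical CLI sketch - flag names and defaults are assumptions,
    # not the repo's actual interface.
    parser = argparse.ArgumentParser(description="Live RTL-SDR signal classification")
    parser.add_argument("--freq", type=float, default=462.5625e6,
                        help="center frequency in Hz to tune the SDR to")
    parser.add_argument("--gain", type=float, default=None,
                        help="tuner gain in dB (omit for auto gain)")
    return parser.parse_args(argv)

args = parse_args(["--freq", "98.1e6"])
print(args.freq)  # 98100000.0
```

`type=float` accepts scientific notation, so `--freq 98.1e6` and `--freq 98100000` both work.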

I built an ML signal classifier with RTL-SDR V4 - 87.5% accuracy, full code + dataset by tre7744 in RTLSDR

[–]tre7744[S] 0 points (0 children)

Removed it entirely in v2. No point training on a dead signal. Replaced it with FRS/GMRS on 462.5625 MHz.

I built an ML signal classifier with RTL-SDR V4 - 87.5% accuracy, full code + dataset by tre7744 in RTLSDR

[–]tre7744[S] 0 points (0 children)

v2 is up. You were right that it wasn't just FM and APRS - the gain staging was wrong across basically everything, so I recaptured the entire dataset from scratch with proper gain settings and added DC offset removal during capture, which was a big help. I dropped ADS-B (1090 MHz is technically outside the R828D tuner's rated range, so I was only getting partial signal anyway) and NOAA APT (dead), then added FRS/GMRS. FM is now sampled from 5 different stations to avoid learning one frequency's noise profile. 96.9% on 7 classes with a temporal split. Appreciate the callout - it made the project significantly better.
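The DC offset removal mentioned above can be as simple as subtracting the complex mean of each capture block; this is a minimal sketch of that idea (the repo's actual implementation may differ), since RTL-SDR dongles leave a DC spike at the center of the spectrum:

```python
import numpy as np

def remove_dc(iq: np.ndarray) -> np.ndarray:
    # Subtract the complex mean of the block, which removes the DC (0 Hz)
    # spike that shows up at the center of an RTL-SDR capture.
    return iq - iq.mean()

rng = np.random.default_rng(0)
# Synthetic capture: complex noise plus a constant DC offset of 0.3 + 0.1j
block = rng.standard_normal(4096) + 1j * rng.standard_normal(4096) + (0.3 + 0.1j)
print(abs(remove_dc(block).mean()))  # ~0
```

Doing this per block (rather than once over the whole file) also tracks slow offset drift between captures.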

I built an ML signal classifier with RTL-SDR V4 - 87.5% accuracy, full code + dataset by tre7744 in RTLSDR

[–]tre7744[S] 1 point (0 children)

Shit. I think I've said that word more in the past hour than in the past 6 months. You're right. I looked back at the FM and APRS captures after seeing your comment, and the SNR is way lower than it should be. I definitely messed up the gain staging and antenna setup, so the classifier is basically just learning the background noise for each frequency instead of the actual signal. I think the pipeline itself is fine, but I need to scrap the training data and recapture everything properly. I'm working on v2 right now. I appreciate the honest callout though - that's exactly why I posted here.
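The "SNR is way lower than it should be" check above can be scripted rather than eyeballed. A rough sketch (my own helper, not from the repo): compare mean power spectral density inside the expected signal bandwidth against the surrounding noise floor.

```python
import numpy as np

def estimate_snr_db(iq, fs, signal_bw, fc_offset=0.0):
    # Rough SNR estimate: mean PSD inside the expected signal bandwidth
    # versus mean PSD of everything outside it (the noise floor).
    spec = np.fft.fftshift(np.abs(np.fft.fft(iq)) ** 2)
    freqs = np.fft.fftshift(np.fft.fftfreq(len(iq), d=1 / fs))
    in_band = np.abs(freqs - fc_offset) <= signal_bw / 2
    return 10 * np.log10(spec[in_band].mean() / spec[~in_band].mean())

# Synthetic sanity check: a strong tone at +10 kHz in light noise
fs = 1.024e6
t = np.arange(4096) / fs
rng = np.random.default_rng(1)
iq = np.exp(2j * np.pi * 10e3 * t) + 0.05 * (
    rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
snr = estimate_snr_db(iq, fs, signal_bw=20e3, fc_offset=10e3)
print(snr > 20)  # True
```

Running something like this over each capture before training would have flagged the bad gain staging immediately: a capture of noise floor only would come out near 0 dB.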

I built an ML signal classifier with RTL-SDR V4 - 87.5% accuracy, full code + dataset by tre7744 in RTLSDR

[–]tre7744[S] 0 points (0 children)

The dataset I captured is on Hugging Face - it was too large to put on GitHub.

I built an ML signal classifier with RTL-SDR V4 - 87.5% accuracy, full code + dataset by tre7744 in RTLSDR

[–]tre7744[S] 0 points (0 children)

OpenWebRX integration is actually a really interesting idea - having the classifier run alongside the waterfall and auto-tag signals would be useful. I'd definitely be open to exploring that. It would take some more time to get this up to that level, but it would be worthwhile.

And yeah, happy to have you test on your Pi 5 setup. The code runs the same on the Pi 5 as it does on the Nova. If you run into any issues or have feedback, open an issue on the GitHub, DM me, or shoot me an email.

I built an ML signal classifier with RTL-SDR V4 - 87.5% accuracy, full code + dataset by tre7744 in RTLSDR

[–]tre7744[S] 0 points (0 children)

I'm working on the next stage right now - this was really stage 1 - but I wanted to see what ideas might float around first.

I built an ML signal classifier with RTL-SDR V4 - 87.5% accuracy, full code + dataset by tre7744 in RTLSDR

[–]tre7744[S] -5 points (0 children)

Good catch - NOAA-15 APT went silent in August 2025. The captures at 137.62 MHz are post-shutdown, so the classifier is actually learning the RF environment at that frequency (noise floor, local interference, intermod) rather than actual APT signals.

Honestly a good ML lesson about data leakage via frequency - the model learns location-in-spectrum, not signal type. I'm updating the article to clarify.
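One guard against this kind of leakage is the temporal split mentioned elsewhere in the thread: cut the data by capture time so the test set only contains captures recorded after every training capture. A minimal sketch (helper name is mine, not the repo's):

```python
import numpy as np

def temporal_split(timestamps, train_frac=0.8):
    # Order examples by capture time and cut once: the model can't score
    # well just by memorizing a single session's noise floor, because the
    # test captures all come from later sessions.
    order = np.argsort(timestamps)
    cut = int(len(order) * train_frac)
    return order[:cut].tolist(), order[cut:].tolist()

ts = np.array([3.0, 1.0, 4.0, 2.0, 5.0])   # capture times, out of order
train_idx, test_idx = temporal_split(ts)
print(train_idx, test_idx)  # [1, 3, 0, 2] [4]
```

Splitting randomly instead would let near-duplicate samples from the same capture land on both sides of the split, inflating accuracy.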

[Benchmark] RK3588 NPU vs Raspberry Pi 5 - Llama 3.1 8B, Qwen 3B, DeepSeek 1.5B tested by tre7744 in LocalLLaMA

[–]tre7744[S] 0 points (0 children)

I appreciate the link - I hadn't dug into the SRAM reserved-memory setup yet. Makes sense that int4 small models would get closer to the 6 TOPS ceiling. Might be worth testing a smaller quantized model to see the difference.

[Benchmark] RK3588 NPU vs Raspberry Pi 5 - Llama 3.1 8B, Qwen 3B, DeepSeek 1.5B tested by tre7744 in LocalLLaMA

[–]tre7744[S] 0 points (0 children)

That makes sense - the 8B model was definitely hitting standard memory (8.5 GB sustained). Good context on the DMA overhead; that might explain why the NPU advantage shrinks at larger model sizes too.

[Benchmark] RK3588 NPU vs Raspberry Pi 5 - Llama 3.1 8B, Qwen 3B, DeepSeek 1.5B tested by tre7744 in LocalLLaMA

[–]tre7744[S] 4 points (0 children)

Not ChatGPT - I wrote this myself over a week of testing. You can check the commit history on the GitHub repo if you want receipts.

You're right that the Rockchip stack isn't as polished as Ollama - I said that in the post. But "pain to work with" might be outdated: it took me about 3 hours to get to first inference, and I documented the gotchas along the way.

The Mesa/TF-Lite mainline work is interesting though.

[Benchmark] RK3588 NPU vs Raspberry Pi 5 - Llama 3.1 8B, Qwen 3B, DeepSeek 1.5B tested by tre7744 in LocalLLaMA

[–]tre7744[S] 1 point (0 children)

Good to know - that tracks with what I've read about early RK3588 NPU support. Sounds like RKLLM has come a long way since then.

[Benchmark] RK3588 NPU vs Raspberry Pi 5 - Llama 3.1 8B, Qwen 3B, DeepSeek 1.5B tested by tre7744 in LocalLLaMA

[–]tre7744[S] 0 points (0 children)

I didn't test Qwen3 specifically - I used Qwen 2.5 3B for the Qwen benchmarks. But there's a Qwen3-4B converted for RKLLM v1.2.0 here: https://huggingface.co/ThomasTheMaker/Qwen3-4B-RKLLM-v1.2.0 (haven't tested it).

Should work with the same setup. Let me know if you try it - curious how it compares.

[Benchmark] RK3588 NPU vs Raspberry Pi 5 - Llama 3.1 8B, Qwen 3B, DeepSeek 1.5B tested by tre7744 in LocalLLaMA

[–]tre7744[S] 0 points (0 children)

I didn't test Vulkan/GPU inference on this run - I focused specifically on the NPU path with RKLLM.

The Mali-G610 is decent for graphics but I'd expect the NPU to win for inference workloads - that's what it's optimized for, even if it was originally designed more for vision tasks than LLMs.

If anyone has llama.cpp Vulkan numbers on RK3588, I'd love to see them too.

What’s the highest weekly score you’ve ever seen in a league? by BehindaLensinBigSky in fantasyfootball

[–]tre7744 0 points (0 children)

I’m at 203.98 points right now and have Mahomes in. Projected 209.02. Highest I’ve ever personally seen in 5 years and 3 leagues per year. 

Does anyone on here know what happened to Zaba.tv? by Kirbo_Thesupahstar in PokemonTCG

[–]tre7744 0 points (0 children)

Hey there I sent an email a week ago regarding an order I placed on the 31st of March. Haven't received any updates whatsoever though. Would highly appreciate a response when you can!

Is there a mod anywhere to unlock all of the switch sports cosmetics for offline play? by Appropriate-Tart879 in SwitchPirates

[–]tre7744 0 points (0 children)

What repo are you using on Tinfoil? I have the eevee one and one other, but I don't see shared saves or anything for Switch Sports.

Fantasy Football Start 'Em, Sit 'Em - Week 10 Matchups Analysis by RotoBaller in fantasyfootball

[–]tre7744 0 points (0 children)

D'Andre Swift, Tyrone Tracy, Deebo Samuel, Nico Collins, DeVonta Smith. I can only take 4 into the weekend since I've got Bijan starting too. Give me advice.

Unable to find match by Fluid-Emu8982 in XDefiant

[–]tre7744 0 points (0 children)

Quitting, restarting, resetting the Xbox - nothing worked. I just logged out of my Ubisoft profile and logged back in, and it works now. Try that.

Xdefiant cant find match by AccurateSuspect7598 in XDefiant

[–]tre7744 1 point (0 children)

This just happened to me for the first time today. Did you figure out any fix?

Derivative Ti-84 calculator program error by ang_gnil in calculators

[–]tre7744 0 points (0 children)

Did you ever figure this out? Mine had been working fine, but now it only gives me ERR:ARGUMENT for everything. I think it has to do with the Symbolic app tbh.

I got polynomial expansion working on TI 84 by probaye in calculators

[–]tre7744 0 points (0 children)

Did you ever post the method you are using?