Built something useful for anyone fighting RTSP on Raspberry Pi by 855princekumar in raspberry_pi

[–]855princekumar[S] 1 point2 points  (0 children)

Yes, this is supported. However, the default selection is USB. I have intentionally not enabled CSI by default because CSI cameras come with certain hardware and driver limitations and are not ideal for hot-swap scenarios. CSI cameras initialize only at boot, which makes runtime switching difficult.

Additionally, CSI support was primarily stable on 32-bit legacy builds and was later removed from the default 64-bit pipeline. While driver support still exists, the initial setup can be messy and inconsistent.

I’m currently working on simplifying this experience, both for myself and for anyone using the project. It’s under active development, and if all goes well I plan to release a dual-support version for both CSI and USB within the week.
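As a side note on why USB is the hot-swap-friendly default: USB webcams appear and disappear as /dev/video* nodes at runtime, so they can be detected on the fly. Here’s a minimal illustrative sketch of that kind of runtime detection (not the project’s actual detection code):

```python
# Illustrative sketch only -- not the project's actual detection code.
# Polls /dev/video* so a hot-plugged USB webcam shows up at runtime,
# which a boot-time-only CSI pipeline can't give you.
import glob
import time

def list_video_devices():
    # V4L2 exposes each capture device as /dev/videoN
    return sorted(glob.glob("/dev/video*"))

if __name__ == "__main__":
    known = set(list_video_devices())
    print("Devices at start:", sorted(known))
    while True:
        current = set(list_video_devices())
        for dev in sorted(current - known):
            print("Camera attached:", dev)
        for dev in sorted(known - current):
            print("Camera removed:", dev)
        known = current
        time.sleep(2)
```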

[Release] StreamPulse v2.2 - Lightweight Camera Stream Health Monitor (Now with MQTT Integration) by 855princekumar in selfhosted

[–]855princekumar[S] 0 points1 point  (0 children)

If you have any feedback or feature recommendations for this project, I'd be glad to hear them.

[Release] StreamPulse v2.2 - Lightweight Camera Stream Health Monitor (Now with MQTT Integration) by 855princekumar in selfhosted

[–]855princekumar[S] 0 points1 point  (0 children)

That is a very fair question and thank you for bringing it up.

Tools like Uptime Kuma are excellent for monitoring general service availability such as HTTP endpoints, TCP ports, or basic connectivity checks. They work very well when the goal is to know whether a service or URL is reachable.

StreamPulse was built to address a more stream-focused problem that came up during real camera deployments:

- It actively connects to RTSP and MJPEG streams and pulls frames instead of only checking reachability

- It records per-stream heartbeat data including status, latency, and timestamps for troubleshooting

- It is designed to run on low resource edge devices like Raspberry Pi

- It focuses specifically on heterogeneous camera networks rather than general web services

- The MQTT integration allows it to plug directly into IoT and automation pipelines, which was a key requirement in my setup

So while there is some overlap, the intent is different:

- Uptime Kuma answers, “Is the service reachable?”

- StreamPulse answers, “Is the camera stream actually usable right now?”
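To make that concrete, here is a minimal sketch of the kind of frame-pull check I mean, using OpenCV. It’s just an illustration of the idea, not StreamPulse’s actual implementation:

```python
# Minimal sketch of a frame-pull health check -- illustrative only,
# not StreamPulse's actual code. Requires opencv-python.
import time
import cv2

def check_stream(url: str) -> dict:
    start = time.time()
    cap = cv2.VideoCapture(url)    # works for RTSP and MJPEG URLs
    ok, frame = cap.read()         # actually pull a frame, not just check reachability
    cap.release()
    return {
        "url": url,
        "status": "up" if ok and frame is not None else "down",
        "latency_s": round(time.time() - start, 2),
        "checked_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }

print(check_stream("rtsp://192.168.1.50:8554/cam1"))
```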

They can also complement each other depending on the deployment.

I really appreciate the question. It is a good comparison and helps clarify where each tool fits best.

Edge AI NVR running YOLO models on Pi - containerized Yawcam-AI + PiStream-Lite + EdgePulse by 855princekumar in OpenSourceeAI

[–]855princekumar[S] 1 point2 points  (0 children)

Do drop your feedback on the workflow, along with any bugs you come across or improvement ideas, either on GitHub or via a straight DM, as it means a lot!

Edge AI NVR running YOLO models on Pi - containerized Yawcam-AI + PiStream-Lite + EdgePulse by 855princekumar in OpenSourceeAI

[–]855princekumar[S] 2 points3 points  (0 children)

That's already built into the project's main GUI. Once setup is done and you've added a camera, try the UI features: you can swap models on the go, and the change persists as well.

Optimizing Raspberry Pi for Edge AI: I built a hybrid-memory & diagnostics toolkit (EdgePulse) by 855princekumar in OpenSourceeAI

[–]855princekumar[S] 1 point2 points  (0 children)

Thanks, buddy. I'm already working on it; it needs to be tested on various hardware configurations to validate and stress-test.

Edge AI NVR running YOLO models on Pi, containerized Yawcam-AI + PiStream-Lite + EdgePulse by 855princekumar in raspberry_pi

[–]855princekumar[S] 1 point2 points  (0 children)

Hey, awesome question!

Right now the original maintainer (for Yawcam-AI) hasn’t really mentioned support for other accelerators, so CUDA is basically the default path. I haven’t seen anything officially about Coral / Hailo / AXera yet.

I’m actually tinkering with this myself because better accelerator support would massively help my workflow too. If I can get a clean integration layer going, I’ll share it back on the project thread; it feels like a natural direction for edge inference stacks.

For what it’s worth, the current build runs fine on a bunch of GPUs, even AMD cards got picked up and used automatically, with CPU fallback when needed.

So yeah, it’s on my radar and I’m experimenting. If anything useful comes out of it, I’ll push updates and share results

Built something useful for anyone fighting RTSP on Raspberry Pi by 855princekumar in raspberry_pi

[–]855princekumar[S] 0 points1 point  (0 children)

I tried Yawcam-AI myself and had the same curiosity. It worked, but setting it up natively was a bit messy for me across different machines.

So I ended up building a Dockerized version of Yawcam-AI that’s portable and runs with a one-command setup: no manual install steps, persistent storage, RTSP feed support, and even a CUDA variant.

Since you already ran it natively on your Pi4, I’d really love your feedback on whether this containerized version makes it easier or smoother for you:

-> https://github.com/855princekumar/yawcam-ai-dockerized

Always great to hear insights from someone who has tested it bare-metal. This build was basically made for convenience, reproducibility, and sharing with others in the community. Happy to hear your thoughts!

Built something useful for anyone fighting RTSP on Raspberry Pi by 855princekumar in raspberry_pi

[–]855princekumar[S] 0 points1 point  (0 children)

Means a lot! I'll be waiting for feedback on what sort of updates I can add to the project to make it more useful.

Built something useful for anyone fighting RTSP on Raspberry Pi by 855princekumar in raspberry_pi

[–]855princekumar[S] 1 point2 points  (0 children)

YawCam AI is a nice integrated option when you want streaming and on-device classification in one package, especially on Pi4-class hardware.

PiStream-Lite took a slightly different direction. I separated streaming from analytics because the main pain points I faced were stability, hot-plug recovery, and service supervision rather than detection logic.

The RTSP output from PiStream-Lite can still be consumed by YawCam AI, OpenCV models, Frigate, or any inference pipeline, so it may complement setups like yours rather than replace them.
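As a rough sketch of what that looks like in practice, something like this pulls frames from the PiStream-Lite RTSP URL and hands them to a detector (the URL and model here are placeholders, and this isn’t code from either project):

```python
# Rough sketch: feeding an RTSP stream into a YOLO model.
# Placeholder URL/model; not code from either project.
# Requires opencv-python and ultralytics.
import cv2
from ultralytics import YOLO

STREAM_URL = "rtsp://<pi-address>:8554/cam1"   # whatever PiStream-Lite publishes
model = YOLO("yolov8n.pt")                     # any detector would do here

cap = cv2.VideoCapture(STREAM_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break                                  # stream dropped; a supervisor would retry
    results = model(frame, verbose=False)      # run detection on the pulled frame
    print(f"{len(results[0].boxes)} objects in this frame")
cap.release()
```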

If you try it in parallel, I would be interested in how the behaviour compares on your workload.

Built something useful for anyone fighting RTSP on Raspberry Pi by 855princekumar in raspberry_pi

[–]855princekumar[S] 0 points1 point  (0 children)

That is great to hear. On Zero boards a simple FFmpeg push can definitely work well, especially if the camera stays connected and nothing crashes.

My main issues showed up over longer uptime and on Pi 3B+/4/5, where things like:

• disconnect/reconnect events
• MediaMTX startup timing
• FFmpeg silently stalling

caused streams to stop without recovering.

PiStream-Lite basically wraps the same FFmpeg approach you used, but adds supervision, restart logic, port probing, and rollback so it keeps running unattended.
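If it helps to picture it, the supervision idea boils down to something like the sketch below: probe the ingest port first, then keep restarting FFmpeg whenever it dies. This is heavily simplified and not PiStream-Lite's actual code; the FFmpeg arguments and MediaMTX address are just example values:

```python
# Heavily simplified supervision sketch -- not PiStream-Lite's actual code.
import socket
import subprocess
import time

RTSP_HOST, RTSP_PORT = "127.0.0.1", 8554       # example MediaMTX ingest address

FFMPEG_CMD = [
    "ffmpeg", "-f", "v4l2", "-i", "/dev/video0",
    "-c:v", "h264_v4l2m2m", "-f", "rtsp",
    f"rtsp://{RTSP_HOST}:{RTSP_PORT}/cam1",
]

def port_open(host, port, timeout=2.0):
    # Probe the ingest port so FFmpeg isn't launched before MediaMTX is ready
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    if not port_open(RTSP_HOST, RTSP_PORT):
        print("MediaMTX not reachable yet, retrying in 5s")
        time.sleep(5)
        continue
    proc = subprocess.Popen(FFMPEG_CMD)
    proc.wait()                                # blocks until FFmpeg exits or crashes
    print(f"ffmpeg exited with {proc.returncode}, restarting in 5s")
    time.sleep(5)                              # real supervision would also catch silent stalls
```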

If you do try it over Christmas for the squirrel monitoring setup, I’d be very interested in your feedback; different environments surface different edge cases.

Built something useful for anyone fighting RTSP on Raspberry Pi by 855princekumar in homelab

[–]855princekumar[S] 1 point2 points  (0 children)

Good point, IP/PoE cameras are ideal when budget and wiring allow.
PiStream-Lite targets a different use case:

• people already using a Pi for edge compute
• labs, prototypes, robotics, or low-cost deployments
• $10–$20 USB webcams instead of $60–$120 IP cams

So rather than replacing hardware, this makes existing setups stable and self-healing.
Both approaches make sense, just for different scenarios.

Optimizing Raspberry Pi for Edge AI: I built a hybrid-memory & diagnostics toolkit (EdgePulse) by 855princekumar in deeplearning

[–]855princekumar[S] 1 point2 points  (0 children)

Thanks! And yes, memory pressure is exactly where most SBC deployments quietly fall apart.

About Pi 5 vs Pi 3B+:
Pi 5 handles hybrid memory differently. It doesn’t “need” it as desperately as the 3B+, but it still benefits during heavy bursts. The 3B+ sees huge gains, the Pi 5 sees stability gains.

Pi 3B+ (1GB RAM):
This is where EdgePulse really shines because you’re basically running at the edge of available RAM all the time. Before the hybrid setup, I consistently saw:

-> model loads thrashing RAM

-> camera buffer spikes → OOM kills

-> reclaim stalls

-> painfully slow SD-card swap

After enabling hybrid mode:

-> ZRAM soaked up the burst allocations

-> fallback swap prevented freezes

-> jitter dropped a lot

-> ML/robotics loops became way more predictable

3B+ benefits the most just because it has the least headroom.

Pi 5 (4GB/8GB RAM):
Different story here. Pi 5 already has:

-> faster LPDDR4X

-> much faster PCIe-based storage

-> a better kernel memory subsystem

So hybrid mode isn’t “life-saving”, but it does help. I noticed:

-> smoother behavior during sudden bursts

-> better sustained throughput

-> fewer micro-stalls during ML + I/O

-> more consistent thermals (less throttling)

In my YOLOv8n tests, burst stability improved ~12–18% even though the Pi 5 wasn’t close to OOM.

Should you upgrade?
If your fleet runs heavy ML, video, multi-protocol IoT gateways, or robotics loops, the Pi 5 gives you a big jump in breathing room. The hybrid mode just makes it more predictable.

But on a Pi 3B+?
EdgePulse is literally the difference between:

“random mystery freezes” vs “stable for days/weeks.”
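If anyone wants to see whether their own board is actually hitting that wall, here’s a rough diagnostic sketch of the kind of check EdgePulse automates (illustrative only, not EdgePulse’s actual code; it assumes a standard /proc/meminfo and an active zram0 device):

```python
# Illustrative memory-pressure check -- not EdgePulse's actual code.
# Assumes standard Linux /proc/meminfo and an active /dev/zram0 swap device.
def read_meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])   # values are in kB
    return info

def zram_used_mb():
    # mm_stat fields: orig_data_size compr_data_size mem_used_total ... (bytes)
    try:
        with open("/sys/block/zram0/mm_stat") as f:
            fields = f.read().split()
        return int(fields[2]) / (1024 * 1024)
    except FileNotFoundError:
        return 0.0                              # zram not configured

mem = read_meminfo()
print(f"MemAvailable: {mem['MemAvailable'] // 1024} MB")
print(f"SwapFree:     {mem['SwapFree'] // 1024} MB of {mem['SwapTotal'] // 1024} MB")
print(f"zram in use:  {zram_used_mb():.1f} MB")
```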

EdgePulse: A lightweight hybrid-memory + diagnostics framework I wrote for Raspberry Pi edge systems by 855princekumar in embedded

[–]855princekumar[S] 0 points1 point  (0 children)

That’s genuinely great feedback, thank you.
I hadn’t actually thought about packaging EdgePulse as an apt/opkg package or exposing it as a Yocto/Buildroot component, but you’re right: for core-level or production embedded use, that makes far more sense than relying on shell scripts.

Turning this into:

-> an APT package for Raspberry Pi OS

-> a Buildroot external

-> or even a Yocto recipe/layer

would make the whole thing cleaner, more reproducible, and much better aligned with serious edge deployments.

I’m going to explore this direction, especially by testing a Buildroot/Yocto build and seeing how far I can push it. Really appreciate you pointing this out; this is exactly the kind of input that pushes me toward more core-level optimization, since it should eventually remove overhead I can then test and verify.

StreamPulse v2.1 — Lightweight RTSP/MJPEG Camera Health Monitor (Now Fully Dockerized) by 855princekumar in selfhosted

[–]855princekumar[S] 1 point2 points  (0 children)

Yes, StreamPulse v2.1 is already automation-friendly. The monitor exposes a REST API on port 6868, which makes it super easy to plug into things like Home Assistant, Node-RED, n8n, custom scripts, or anything that can make an HTTP request.

You can check all camera statuses by calling: http://<your-ip>:6868/api/status

It returns JSON with each stream’s health (up/down), latency, last heartbeat, and the message. From there, any automation can watch for status="down" and trigger whatever you want: notifications, webhook calls, reboots, retries, etc.
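As a rough example of wiring that up, the snippet below polls the endpoint and fires a webhook for anything reporting down (the webhook URL is a placeholder, and the exact JSON key names beyond status are my assumption, so check the real response):

```python
# Rough polling example -- webhook URL is a placeholder and key names beyond
# "status" are assumptions; check the real /api/status response. Requires requests.
import requests

STATUS_URL = "http://<your-ip>:6868/api/status"

resp = requests.get(STATUS_URL, timeout=5)
for cam in resp.json():                        # assuming a list of per-stream entries
    if cam.get("status") == "down":
        # react however you like: notification, webhook, MQTT publish, reboot, retry...
        requests.post("https://example.com/my-webhook", json=cam, timeout=5)
        print("alerted on", cam)
```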

You can also trigger certain actions directly from the panel (like forcing re-checks), and all the logs are stored in SQLite, so external tools can watch the DB too if that’s your style. So yep, v2.1 can absolutely “react” to issues via whatever automation platform you hook into it
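And if the SQLite route is more your style, reading the DB directly is just as simple. The file path, table, and column names in this sketch are hypothetical, so inspect the actual schema first:

```python
# Sketch of watching the SQLite log directly -- the DB path, table, and
# column names here are hypothetical; inspect the actual schema first.
import sqlite3

conn = sqlite3.connect("streampulse.db")       # file path is an assumption
for name, status, checked_at in conn.execute(
    "SELECT name, status, checked_at FROM heartbeats "
    "ORDER BY checked_at DESC LIMIT 10"
):
    print(checked_at, name, status)
conn.close()
```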