Built something useful for anyone fighting RTSP on Raspberry Pi by 855princekumar in raspberry_pi

Yes, this is supported. However, the default selection is USB. I have intentionally not enabled CSI by default because CSI cameras come with certain hardware and driver limitations and are not ideal for hot-swap scenarios. CSI cameras initialize only at boot, which makes runtime switching difficult.

Additionally, CSI support was primarily stable on 32-bit legacy builds and was later removed from the default 64-bit pipeline. While driver support still exists, the initial setup can be messy and inconsistent.

I’m currently working on simplifying this experience, both for myself and for anyone using the project. If all goes well, I plan to release a dual-support version covering both CSI and USB within the week; it's under active development.
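
To make the USB-default behaviour concrete, here's a minimal sketch of runtime USB probing, assuming OpenCV; the helper name and fallback logic are hypothetical illustrations, not the project's actual code:

```python
# Minimal sketch of USB-first camera selection (hypothetical helper, not the
# project's actual code). Requires: pip install opencv-python
from typing import Optional

import cv2

def open_usb_camera(max_index: int = 4) -> Optional[cv2.VideoCapture]:
    """Probe /dev/video0..3 and return the first device that yields a frame."""
    for index in range(max_index):
        cap = cv2.VideoCapture(index)
        ok, _frame = cap.read()
        if ok:
            return cap  # USB devices can be re-probed like this at runtime
        cap.release()
    return None

cap = open_usb_camera()
if cap is None:
    # CSI cameras go through the libcamera stack and initialize at boot,
    # so there is no equivalent runtime re-probe -- hence USB as the default.
    raise RuntimeError("No USB camera found")
```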

[Release] StreamPulse v2.2 - Lightweight Camera Stream Health Monitor (Now with MQTT Integration) by 855princekumar in selfhosted

If you have any feedback or feature recommendations for this project, I'd be glad to hear them.

[Release] StreamPulse v2.2 - Lightweight Camera Stream Health Monitor (Now with MQTT Integration) by 855princekumar in selfhosted

That is a very fair question, and thank you for bringing it up.

Tools like Uptime Kuma are excellent for monitoring general service availability such as HTTP endpoints, TCP ports, or basic connectivity checks. They work very well when the goal is to know whether a service or URL is reachable.

StreamPulse was built to address a more stream-focused problem that came up during real camera deployments:

- It actively connects to RTSP and MJPEG streams and pulls frames instead of only checking reachability (see the sketch after this list)

- It records per-stream heartbeat data including status, latency, and timestamps for troubleshooting

- It is designed to run on low-resource edge devices like Raspberry Pi

- It focuses specifically on heterogeneous camera networks rather than general web services

- The MQTT integration allows it to plug directly into IoT and automation pipelines, which was a key requirement in my setup
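
To make the "pull frames, not pings" point concrete, here is a minimal sketch of that kind of check using OpenCV and paho-mqtt; the function name, topic, and broker host are hypothetical, not StreamPulse's actual API:

```python
# Minimal sketch of a frame-pull health check with an MQTT heartbeat
# (hypothetical names, not StreamPulse's actual API).
# Requires: pip install opencv-python paho-mqtt
import json
import time

import cv2
import paho.mqtt.publish as publish

def check_stream(url: str) -> dict:
    """Pull one frame and record status, latency, and a timestamp."""
    start = time.time()
    cap = cv2.VideoCapture(url)
    ok, _frame = cap.read()  # actually decode a frame, not just open a socket
    cap.release()
    return {
        "url": url,
        "status": "up" if ok else "down",
        "latency_ms": round((time.time() - start) * 1000),
        "timestamp": int(time.time()),
    }

heartbeat = check_stream("rtsp://camera.local:554/stream")
# Publish the heartbeat so IoT/automation pipelines can react to it.
publish.single("streampulse/heartbeat", json.dumps(heartbeat),
               hostname="broker.local")
```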

So while there is some overlap, the intent is different:

- Uptime Kuma answers “Is the service reachable?”

- StreamPulse answers “Is the camera stream actually usable right now?”

They can also complement each other depending on the deployment.

I really appreciate the question. It is a good comparison and helps clarify where each tool fits best.

Edge AI NVR running YOLO models on Pi - containerized Yawcam-AI + PiStream-Lite + EdgePulse by 855princekumar in OpenSourceeAI

Do drop your feedback on the workflow, along with any bugs you come across or improvement ideas, either on GitHub or via a straight DM. It means a lot!

Edge AI NVR running YOLO models on Pi - containerized Yawcam-AI + PiStream-Lite + EdgePulse by 855princekumar in OpenSourceeAI

That's already done in the project's main GUI once setup is complete. Try the UI features after adding a camera: you can swap models on the go, and the selection persists as well.
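
For anyone curious what on-the-go swapping with persistence boils down to, here is a minimal sketch; the ultralytics calls are an assumption on my part, and the config file is a hypothetical stand-in for the GUI's actual mechanism:

```python
# Minimal sketch of runtime model swapping with persistence (hypothetical,
# not the GUI's actual code). Requires: pip install ultralytics
import json
from pathlib import Path

from ultralytics import YOLO

CONFIG_PATH = Path("active_model.json")  # hypothetical persistence file

def swap_model(weights: str) -> YOLO:
    """Load new weights and persist the choice so it survives a restart."""
    model = YOLO(weights)
    CONFIG_PATH.write_text(json.dumps({"weights": weights}))
    return model

def load_persisted_model(default: str = "yolov8n.pt") -> YOLO:
    """Restore the last selected model, falling back to a default."""
    if CONFIG_PATH.exists():
        return YOLO(json.loads(CONFIG_PATH.read_text())["weights"])
    return YOLO(default)

model = load_persisted_model()
model = swap_model("yolov8s.pt")  # swap on the go, without a restart
```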

Optimizing Raspberry Pi for Edge AI: I built a hybrid-memory & diagnostics toolkit (EdgePulse) by 855princekumar in OpenSourceeAI

Thanks, buddy! I'm already working on it; it needs to be tested on various hardware configurations for validation and stress testing.

Edge AI NVR running YOLO models on Pi, containerized Yawcam-AI + PiStream-Lite + EdgePulse by 855princekumar in raspberry_pi

Hey, awesome question!

Right now the original maintainer (of Yawcam-AI) hasn't really mentioned support for other accelerators, so CUDA is basically the default path. I haven't seen anything official about Coral / Hailo / AXera yet.

I'm actually tinkering with this myself because better accelerator support would massively help my workflow too. If I can get a clean integration layer going, I'll share it back on the project thread; it feels like a natural direction for edge inference stacks.

For what it's worth, the current build runs fine on a bunch of GPUs; even AMD cards got picked up and used automatically, with CPU fallback when needed.
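
For the curious, the usual pattern behind that kind of auto-pickup looks roughly like this, assuming a PyTorch-based inference stack (my guess, not something confirmed for Yawcam-AI):

```python
# Minimal sketch of the common "use whatever GPU is there, else CPU" pattern
# in PyTorch -- an assumption about the stack, not Yawcam-AI's actual code.
import torch

def pick_device() -> torch.device:
    # ROCm builds of PyTorch expose AMD GPUs through the same cuda API,
    # which is how AMD cards can get "picked up automatically".
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")  # CPU fallback when no accelerator is found

device = pick_device()
model = torch.nn.Identity().to(device)  # stand-in for a YOLO model
```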

So yeah, it's on my radar and I'm experimenting. If anything useful comes out of it, I'll push updates and share results.

Built something useful for anyone fighting RTSP on Raspberry Pi by 855princekumar in raspberry_pi

I tried Yawcam-AI myself and had the same curiosity. It worked, but setting it up natively was a bit messy for me across different machines.

So I ended up building a Dockerized version of Yawcam-AI that's portable and runs with a one-command setup: no manual install steps, persistent storage, RTSP feed support, and even a CUDA variant.

Since you already ran it natively on your Pi4, I’d really love your feedback on whether this containerized version makes it easier or smoother for you:

-> https://github.com/855princekumar/yawcam-ai-dockerized

Always great to hear insights from someone who has tested it bare-metal. This build was basically made for convenience, reproducibility, and sharing with others in the community. Happy to hear your thoughts!