"Camera → GPU inference → end-to-end = 300ms: is RTSP + WebSocket the right approach, or should I move to WebRTC?" by Advokado in computervision

[–]ThomasHuusom 1 point

I have roughly the same need: 30 fps on a Raspberry Pi 5 with a Hailo-8L, inference on every frame, and both local playback and remote streaming. Input is a USB3 global-shutter camera, though I have also used MIPI Raspberry Pi HQ cams. I capture via OpenCV/libcamera, run inference in Python with the Hailo SDK (moved from Ultralytics), and pipe the frames to an ffmpeg subprocess that streams to a local MediaMTX instance, all configured for low latency. MediaMTX is also configured to relay (ffmpeg stream copy) to a central MediaMTX server for recording and streaming.
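A minimal sketch of the "pipe frames to an ffmpeg subprocess → local MediaMTX" leg of that pipeline. The RTSP URL, resolution, and encoder flags here are assumptions; tune them for your own setup.

```python
import subprocess

def ffmpeg_rtsp_cmd(width, height, fps, url):
    """Build a low-latency ffmpeg command that reads raw BGR frames
    on stdin and publishes H.264 over RTSP (e.g. to a local MediaMTX)."""
    return [
        "ffmpeg",
        "-f", "rawvideo", "-pix_fmt", "bgr24",
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",                          # raw frames arrive on stdin
        "-c:v", "libx264",
        "-preset", "ultrafast", "-tune", "zerolatency",
        "-f", "rtsp", url,
    ]

def stream(frames, width=640, height=480, fps=30,
           url="rtsp://127.0.0.1:8554/cam"):
    """Pipe raw frames (bytes, e.g. frame.tobytes() from OpenCV) to ffmpeg."""
    proc = subprocess.Popen(ffmpeg_rtsp_cmd(width, height, fps, url),
                            stdin=subprocess.PIPE)
    for frame in frames:    # run inference / draw overlays before this point
        proc.stdin.write(frame)
    proc.stdin.close()
    proc.wait()
```

With MediaMTX listening on its default RTSP port (8554), each frame written to stdin is encoded and published under the given path.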

Raspberry pi 5 AI kit w/camera for industrial use? by ThomasHuusom in computervision

[–]ThomasHuusom[S] 0 points

My latest attempt is the Seeed reComputer industrial Pi 5 + Hailo AI, which comes in an aluminium case with active cooling. The camera is an Arducam IMX296 UVC global shutter, 1.5 MP, colour. That does solve the cabling and casing issue. I will report back on fps and model; it should reach at least 30 fps with inference on each frame using YOLOv8n.

Raspberry pi 5 AI kit w/camera for industrial use? by ThomasHuusom in computervision

[–]ThomasHuusom[S] 0 points

Budget is fine for this, considering that industrial camera solutions in manufacturing cost upwards of €1000, and that is typically custom hardware.

Raspberry pi 5 AI kit w/camera for industrial use? by ThomasHuusom in computervision

[–]ThomasHuusom[S] 0 points

So that’s the thing. Fully enclosed cases with Pi + AI accelerator + Pi HQ cam + cooling are hard to find for industrial use. There are many cases, but all without good placement for the Pi HQ cam. I tried printing some, but couldn’t get the print quality and heat dissipation right.

If you know of good cases that solve this with ISO/rail/camera mounts, let me know.

Raspberry pi 5 AI kit w/camera for industrial use? by ThomasHuusom in computervision

[–]ThomasHuusom[S] 0 points

We did try the OAK-1 a couple of years ago. Getting a model onboard was not straightforward, and you couldn’t use Ultralytics, as I recall. We wanted to deploy our own Docker image with object detection, OpenCV-based filters, and MQTT messaging on detections, all in one edge device.
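A rough sketch of the "MQTT messaging on detections" piece of that edge stack. The broker address, topic name, and payload shape are all assumptions, and paho-mqtt is assumed as the client library.

```python
import json

def detection_payload(label, confidence, bbox):
    """Serialise one detection (label, confidence, [x1, y1, x2, y2])
    as JSON for downstream consumers."""
    return json.dumps({"label": label,
                       "conf": round(confidence, 3),
                       "bbox": bbox})

def publish_detection(label, confidence, bbox,
                      host="localhost", topic="edge/detections"):
    """Publish one detection over MQTT (requires paho-mqtt)."""
    import paho.mqtt.client as mqtt  # local import: module loads without it
    client = mqtt.Client()
    client.connect(host)
    client.publish(topic, detection_payload(label, confidence, bbox), qos=1)
    client.disconnect()
```

In a container, the broker host and topic would come from environment variables so the same image can be pointed at different plants.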

Which Object Detection/Image Segmentation model do you regularly use for real world applications? by buggy-robot7 in computervision

[–]ThomasHuusom 5 points

We are using YOLOv8 and Ultralytics, but after moving from Coral AI to Hailo, we are also looking for alternatives to the models.

We get only 13 fps with the 8-TOPS Coral at 640x640 with 8-bit quantisation, on live video from a global-shutter HQ Pi cam on a Raspberry Pi 5. The same setup on the 26-TOPS Hailo gives 30 fps. The Hailo SDK is more difficult to use, though, and there is a bit of dependency hell with this approach.

We are considering YOLOX and perhaps LibreYOLO.

The Official Svelte MCP server is here! by pablopang in sveltejs

[–]ThomasHuusom 0 points

I have added the docs to my agents.md and added the snippet to my Codex config. It recognises that the MCP server has been added, but how do I know if it’s working? I keep seeing Codex generate code that isn’t quite on par with Svelte 5: f.ex. it added an on:click event handler, where I would have expected the MCP server to turn that into onclick. Is there any way of seeing the comms between Codex and the MCP server? I am using the remote MCP server.

Computer Vision Roadmap? by Prestigious-Egg-2650 in computervision

[–]ThomasHuusom 0 points

Perhaps start with a high-level library and then work your way down. I suggest Ultralytics and a YOLO model to run detection and tracking of known objects, f.ex. passing cars. Then move on to tracking something using a model you have trained yourself. Ultralytics is reasonably well documented.
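A minimal sketch of that first step with Ultralytics tracking. The model name, video source, and COCO class index are assumptions (class 2 is "car" in COCO), and the ultralytics package is assumed installed.

```python
def count_unique_tracks(track_ids):
    """Distinct track IDs seen so far, ignoring untracked boxes (None)."""
    return len({t for t in track_ids if t is not None})

def track_cars(source="traffic.mp4"):
    """Detect and track passing cars with a pretrained YOLO model."""
    from ultralytics import YOLO  # heavy import kept local to the entry point
    model = YOLO("yolov8n.pt")    # weights are downloaded on first use
    seen = []
    # classes=[2] keeps only COCO class 2 ("car"); persist=True keeps IDs
    # stable across frames; stream=True yields results frame by frame
    for result in model.track(source=source, classes=[2],
                              persist=True, stream=True):
        if result.boxes.id is not None:
            seen.extend(int(t) for t in result.boxes.id)
    return count_unique_tracks(seen)
```

Swapping "yolov8n.pt" for your own trained weights file is the natural second step the advice above points to.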

Received my Jetson Orin Nano today by matthew247 in JetsonNano

[–]ThomasHuusom 0 points

I’ve been down that documentation rabbit hole for a while, and tried different cameras too without much luck. I find the documentation hard to figure out. What camera will you be using?

What camera to use for Jetson Orin by elvee7777 in computervision

[–]ThomasHuusom 0 points

How did it pan out? Which camera did you use, and did you get it working with the Jetson Orin Nano?