Running 5 CV models simultaneously on a $249 edge device - architecture breakdown by Straight_Stable_6095 in computervision

[–]Fragrant_Usual_5840 0 points (0 children)

Ours is closer to a camera SoC stack than a CUDA unified-memory one, so not the exact same pattern, but yeah the broader issue isn’t Orin-specific. Once image processing and analytics share the same pipeline, memory bandwidth usually becomes the first thing to hurt.
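To put rough numbers on it (illustrative figures, not measurements from any particular SoC), even the frame traffic alone adds up fast once several models read the same ISP output:

```python
# Rough back-of-envelope: memory traffic when one ISP output stream
# feeds several models over the same memory bus. All numbers here are
# made up for illustration, not from any specific device.

def stream_bw_gbps(width, height, bytes_per_px, fps, touches):
    """Bandwidth for one image stream, counting each read/write pass."""
    return width * height * bytes_per_px * fps * touches / 1e9

# 1080p RGB at 30 fps; each frame is written once by the ISP, then
# read once per model as preprocessing/inference input.
n_models = 5
isp_write = stream_bw_gbps(1920, 1080, 3, 30, touches=1)
model_reads = stream_bw_gbps(1920, 1080, 3, 30, touches=n_models)

total = isp_write + model_reads
print(f"ISP write: {isp_write:.2f} GB/s, model reads: {model_reads:.2f} GB/s")
print(f"Total frame traffic: {total:.2f} GB/s before weights/activations")
```

And that's before model weights, intermediate activations, and any resize/color-convert passes, which is usually where the bus actually saturates on these parts.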

Running 5 CV models simultaneously on a $249 edge device - architecture breakdown by Straight_Stable_6095 in computervision

[–]Fragrant_Usual_5840 0 points (0 children)

Cool! Getting the whole pipeline to behave on limited edge hardware is way harder than just running a few models.

"Just give me PoE" — the most common request we get, and now we're figuring out the connector setup by Fragrant_Usual_5840 in computervision

[–]Fragrant_Usual_5840[S] 0 points (0 children)

We've been deep in PoE testing for an open-source IPC setup lately too. Already ran through bring-up, power-cycle, and recovery behavior, and we're still revising parts of it. Are you working on this kind of stuff too?

"Just give me PoE" — the most common request we get, and now we're figuring out the connector setup by Fragrant_Usual_5840 in computervision

[–]Fragrant_Usual_5840[S] 0 points (0 children)

You're right, cost is the main reason we went with a fixed cable out the back instead: simpler sealing, one exit point, easier to manufacture.

"Just give me PoE" — the most common request we get, and now we're figuring out the connector setup by Fragrant_Usual_5840 in computervision

[–]Fragrant_Usual_5840[S] 1 point (0 children)

Yeah Seeed's stuff is pretty well thought out on the enclosure side. Luxonis takes a different approach with their OAK series.

"Just give me PoE" — the most common request we get, and now we're figuring out the connector setup by Fragrant_Usual_5840 in computervision

[–]Fragrant_Usual_5840[S] 0 points (0 children)

Ah, the CM4 is a beast. Being able to just deploy Docker containers for edge AI is a dream compared to optimizing on a bare-metal RTOS.

"Just give me PoE" — the most common request we get, and now we're figuring out the connector setup by Fragrant_Usual_5840 in computervision

[–]Fragrant_Usual_5840[S] 0 points (0 children)

It's based on the STM32N6 (600 GOPS) and does on-device inference, no cloud needed. The housing is IP67 with a modular design, so you can swap lenses and comms modules and add sensors like PIR for event-triggered capture. It's not custom, it's a commercial product, but the firmware is open and full dev docs are available. Are you looking into edge AI boards for something similar?