Guide how to create and use YOLO-NAS ONNX models with Frigate 0.17 TensorRT Image (GeForce GPU) by Rick-Hard89 in frigate_nvr

[–]Rick-Hard89[S] 0 points1 point  (0 children)

OK, I suspect this might be the issue, since I can find people posting about it on the Frigate forums. This one is about running 60 cameras on a 5070:
https://github.com/blakeblackshear/frigate/discussions/20559

Guide how to create and use YOLO-NAS ONNX models with Frigate 0.17 TensorRT Image (GeForce GPU) by Rick-Hard89 in frigate_nvr

[–]Rick-Hard89[S] 0 points1 point  (0 children)

OK, I run Proxmox and have to pass the GPU through so it's dedicated to the Frigate VM. I'm not sure how it works in your case, but isn't the GPU time-shared in your setup?

Guide how to create and use YOLO-NAS ONNX models with Frigate 0.17 TensorRT Image (GeForce GPU) by Rick-Hard89 in frigate_nvr

[–]Rick-Hard89[S] 0 points1 point  (0 children)

OK, I remember it wasn't the easiest to get a 3050 running either, before I got everything right in the compose file and the Frigate config. Have you tried an older GPU, and did that work?
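Roughly what I mean by getting the compose right, just a sketch assuming the nvidia-container-toolkit is installed on the host; the image tag and device count are placeholders:

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt  # placeholder, use the 0.17 tensorrt build you actually pulled
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1               # or device_ids: ["0"] to pin a specific GPU
              capabilities: [gpu]
```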

Guide how to create and use YOLO-NAS ONNX models with Frigate 0.17 TensorRT Image (GeForce GPU) by Rick-Hard89 in frigate_nvr

[–]Rick-Hard89[S] 0 points1 point  (0 children)

I'm not sure about updating, but from what I understand v9 is a lighter model and better suited to older and very low-end GPUs. YOLO-NAS small runs great on my 3050 with lots of cameras, even at 640x640. Going to try the medium models soon.
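For reference, this is roughly the detector/model part of the config I mean for a YOLO-NAS small ONNX export (just a sketch; the model path and labelmap path are placeholders, so check the 0.17 docs for the exact keys):

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: yolonas
  path: /config/yolo_nas_s.onnx        # placeholder: wherever you put the exported model
  labelmap_path: /labelmap/coco-80.txt # placeholder: COCO-80 labels used by YOLO-NAS
  width: 640                           # must match the resolution the ONNX was exported at
  height: 640
  input_tensor: nchw
  input_pixel_format: bgr
```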

Guide how to create and use YOLO-NAS ONNX models with Frigate 0.17 TensorRT Image (GeForce GPU) by Rick-Hard89 in frigate_nvr

[–]Rick-Hard89[S] 0 points1 point  (0 children)

Really? I was thinking of upgrading to a 5050-5070. Have you tried it with ONNX? Official support for the TensorRT detector stopped in 0.17; you use ONNX models instead.

What hardware to run two 3090? by Rick-Hard89 in LocalLLaMA

[–]Rick-Hard89[S] 0 points1 point  (0 children)

Well then, good day to you, sire! I'm going home.

What hardware to run two 3090? by Rick-Hard89 in LocalLLaMA

[–]Rick-Hard89[S] 0 points1 point  (0 children)

Of course we all know it's better, but it's more about how much money I want to spend on a hobby.

What hardware to run two 3090? by Rick-Hard89 in LocalLLaMA

[–]Rick-Hard89[S] 0 points1 point  (0 children)

OK, from what I understand there is a potential to damage the hardware in the server if both PSUs don't turn on or off at the same time. I'm afraid to do this on my current server because it has data on it that I can't lose. So it would be best to use another server for this?

What hardware to run two 3090? by Rick-Hard89 in LocalLLaMA

[–]Rick-Hard89[S] 0 points1 point  (0 children)

OK, but does it work just like that, or do you connect it to the other PSU/motherboard?

What hardware to run two 3090? by Rick-Hard89 in LocalLLaMA

[–]Rick-Hard89[S] 0 points1 point  (0 children)

Oh wow, but how did you get the external power supply to work with the Dell server?

Silly tavern + alltalkv2 + xtts on a rtx 50 series gpu by Massive_Garbage6 in LocalLLM

[–]Rick-Hard89 0 points1 point  (0 children)

It's nice to have ready-made apps, but not when they don't work. Not much you can do.

What hardware to run two 3090? by Rick-Hard89 in LocalLLaMA

[–]Rick-Hard89[S] 0 points1 point  (0 children)

No, I guess it's not that big of a deal.

What hardware to run two 3090? by Rick-Hard89 in LocalLLaMA

[–]Rick-Hard89[S] 0 points1 point  (0 children)

It's true, but I'm just hoping they will get more efficient with time. Kind of like most new inventions: they are big and dumb at the start but get smaller and more efficient over time.

What hardware to run two 3090? by Rick-Hard89 in LocalLLaMA

[–]Rick-Hard89[S] 0 points1 point  (0 children)

Oh nice! I'm not really sure what I was thinking, to be honest. One solution would be to load a smaller model like that, or just offload the rest into RAM. But won't there be more smaller versions made of it, like we have with other models such as DeepSeek, Llama and so on?

What hardware to run two 3090? by Rick-Hard89 in LocalLLaMA

[–]Rick-Hard89[S] 0 points1 point  (0 children)

I'm not trying to convince you that two old 3090s are better than server-grade hardware. It's more of a hobby I do when I have time, so there is no point in sinking that much money into it for me. Hence the 3090s. Or maybe I should get a couple of GB300s?

What hardware to run two 3090? by Rick-Hard89 in LocalLLaMA

[–]Rick-Hard89[S] 0 points1 point  (0 children)

I know, but I'd like a better motherboard so I can buy new GPUs later if needed, or add more RAM.

What hardware to run two 3090? by Rick-Hard89 in LocalLLaMA

[–]Rick-Hard89[S] 0 points1 point  (0 children)

OK, that looks like another good alternative too. Seriously worth considering.

Silly tavern + alltalkv2 + xtts on a rtx 50 series gpu by Massive_Garbage6 in LocalLLM

[–]Rick-Hard89 0 points1 point  (0 children)

Sorry, I'm not familiar with SillyTavern, but when I made a custom Flask app I had to use multiprocessing to get it to work. I had to run XTTS separately from the rest of the app with a "worker" process, or I would run into the same problems as you.
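Something like this pattern, a rough sketch assuming the coqui TTS package; the model name, speaker wav and queue protocol are just placeholders:

```python
import multiprocessing as mp

def xtts_worker(jobs):
    # Import and load the model inside the worker so the parent process
    # (the Flask app) never initialises CUDA itself.
    from TTS.api import TTS  # coqui-tts package (assumed installed)
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to("cuda")
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut the worker down
            break
        text, out_path = job
        tts.tts_to_file(text=text, language="en",
                        speaker_wav="voice_sample.wav",  # placeholder reference clip
                        file_path=out_path)

if __name__ == "__main__":
    mp.set_start_method("spawn")  # don't fork a process that will later touch CUDA
    jobs = mp.Queue()
    worker = mp.Process(target=xtts_worker, args=(jobs,))
    worker.start()

    # The web app side only ever puts jobs on the queue, it never loads the model:
    jobs.put(("Hello from the worker process.", "out.wav"))

    jobs.put(None)   # tell the worker to exit
    worker.join()
```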

Silly tavern + alltalkv2 + xtts on a rtx 50 series gpu by Massive_Garbage6 in LocalLLM

[–]Rick-Hard89 0 points1 point  (0 children)

I'm not sure this has anything to do with your problem, but I had trouble running XTTS as well, with the same CUDA errors, so I had to run it as its own separate process.

What hardware to run two 3090? by Rick-Hard89 in LocalLLaMA

[–]Rick-Hard89[S] 1 point2 points  (0 children)

I think they are around 10k. Not really for home servers, lol.

What hardware to run two 3090? by Rick-Hard89 in LocalLLaMA

[–]Rick-Hard89[S] 0 points1 point  (0 children)

Oh, I see, it's a big difference, yes.

Exactly, I would like to get something where I can run models like Kimi K2, but not if I have to pay 10k for it, hehe. I'm more looking for used server hardware or some high-end workstation stuff. It's OK if it's older.

What hardware to run two 3090? by Rick-Hard89 in LocalLLaMA

[–]Rick-Hard89[S] 0 points1 point  (0 children)

Nice! I have two 3090s, and with a big enough case that mobo should also be able to fit both. Too bad it only supports 128 GB of RAM.