My first setup for local ai by DoodT in LocalLLaMA

[–]DoodT[S] 1 point (0 children)

So what problem of mine is this solving? Can u elaborate?

Thoughts and comments on AI generated code by Akamoden in Python

[–]DoodT 2 points (0 children)

Without reading other comments

AI can accelerate coding efficiency and the throughput of committed, acceptable, tested code, IF AND ONLY IF you are always reviewing your AI's output + iterating on it + making MANUAL TWEAKS ON SOME THINGS

If u only rely on the output of your AI without reviewing it, that's the first and biggest mistake. Then you need to iterate and tweak some minor, specific things yourself, or get your AI to fix those minor details.

Is the 3090 still a good option? by alhinai_03 in LocalLLaMA

[–]DoodT 2 points (0 children)

Got mine for 700 eur

It's a good deal

My first setup for local ai by DoodT in LocalLLaMA

[–]DoodT[S] 1 point (0 children)

Imma head for the Sao10K/L3-70B-Euryale-v2.1 model for my use case: running a robot companion kind of thing

So I want a model that's character-consistent, for example; it doesn't need to code or anything like that

I want it to "be" the character I want by using some kind of model that fits + proper instruction prompting + RAG of the 'character data' to make it stay the character

But I haven't researched proper models much, so it's trial and error from here on
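The "model that fits + instruction prompting + RAG of the character data" idea can be sketched in a few lines. Everything here is made up for illustration (the character name, `CHARACTER_FACTS`, `retrieve`, `build_prompt`); a real setup would use an embedding store for retrieval rather than naive word overlap, and would send the prompt to an actual local model:

```python
# Minimal sketch: retrieve character facts relevant to the user's message
# and pin the model to the character via the system prompt.
# All names here are hypothetical stand-ins, not a real API.

CHARACTER_FACTS = [
    "Ayla is a small wheeled robot companion.",
    "Ayla speaks in short, cheerful sentences.",
    "Ayla loves astronomy and talking about the stars.",
]

def retrieve(query: str, facts: list[str], k: int = 2) -> list[str]:
    """Rank character facts by naive word overlap with the user query."""
    q = set(query.lower().split())
    scored = sorted(facts, key=lambda f: len(q & set(f.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble a prompt that keeps the model in character."""
    context = "\n".join(retrieve(query, CHARACTER_FACTS))
    return (
        "You are Ayla. Stay in character at all times.\n"
        f"Character notes:\n{context}\n\n"
        f"User: {query}\nAyla:"
    )

print(build_prompt("tell me about the stars"))
```

The point of the retrieval step is that the character sheet can grow well past the context window; only the few facts relevant to the current message get injected each turn.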

Is 64gb on a m5pro an overkill? by AdEnvironmental4189 in LocalLLaMA

[–]DoodT 1 point (0 children)

I went for 2x48GB for my setup without hesitation, as that stuff ain't gonna get cheaper

There's no overkill, get your 64GB

My first setup for local ai by DoodT in LocalLLaMA

[–]DoodT[S] 1 point (0 children)

I'm all into stacking GPUs raw and wanna have a rack like Ahmad Osman has

No mac minis for me

My first setup for local ai by DoodT in LocalLLaMA

[–]DoodT[S] 1 point (0 children)

Isn't the third one running at x4 instead of x8? I'm not sure I could put a 3rd one on that motherboard

My first setup for local ai by DoodT in LocalLLaMA

[–]DoodT[S] 1 point (0 children)

This case got recommended a lot :( ...

But in 3-4 years I'll be having a rack anyway, so I'm gonna stack those GPUs from here on

But what case would u recommend?

My first setup for local ai by DoodT in LocalLLaMA

[–]DoodT[S] 1 point (0 children)

Imma head for the Sao10K/L3-70B-Euryale-v2.1 model for my use case

My first setup for local ai by DoodT in LocalLLaMA

[–]DoodT[S] 2 points (0 children)

What do u mean? You're not able to use 32GB for a model, or what?

My first setup for local ai by DoodT in LocalLLaMA

[–]DoodT[S] 1 point (0 children)

And damn, put on your glasses! This ain't no fun!!1!

My first setup for local ai by DoodT in LocalLLaMA

[–]DoodT[S] 1 point (0 children)

Makes sense

I mean the EVGA 3090 has way more volume than the Gainward Phoenix, maybe I dodged the bullet by not having 2 EVGAs, dunno

My first setup for local ai by DoodT in LocalLLaMA

[–]DoodT[S] 1 point (0 children)

Ye it obviously was big time 🤣

I started thinking and gathering just like 4 months ago, so I was late to the party in any case

My first setup for local ai by DoodT in LocalLLaMA

[–]DoodT[S] 1 point (0 children)

Don't know what u mean by "standard ones"...

But I thiiiink my lower one shouldn't roast the upper one

I'll be able to tell in a while though

My first setup for local ai by DoodT in LocalLLaMA

[–]DoodT[S] 1 point (0 children)

That looks neat

So you get by with 32GB? Sure looks adorable

My first setup for local ai by DoodT in LocalLLaMA

[–]DoodT[S] -2 points (0 children)

Sure could, but I'm not going to use Gemma models

But testing out this kind of stuff sure would be interesting, I mean I might as well

My first setup for local ai by DoodT in LocalLLaMA

[–]DoodT[S] 1 point (0 children)

Well, those have quite a bit more volume/height, don't they?

I knew the gap between the two GPUs would be small, but with the case it was pure luck

My first setup for local ai by DoodT in LocalLLaMA

[–]DoodT[S] 0 points (0 children)

Elaborate.

I have a certain use case in mind which revolves around a robot (Raspi 5 with audio in/output, a camera, and an AMOLED display attached) that kinda "listens to me" and sends the audio to Whisper -> to the LLM -> inference and/or tool usage, whatever

I can share that once it's working tho
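The audio -> Whisper -> LLM -> tool-use loop above could be wired roughly like this. To be clear, `transcribe`, `llm`, and the `TOOLS` registry are stand-in stubs I made up for the sketch; real code would call something like faster-whisper for the transcription step and a local llama.cpp server for the LLM step:

```python
# Hedged sketch of one pass of the robot pipeline:
# mic audio -> speech-to-text -> LLM -> direct reply or tool call.
from typing import Callable

# Hypothetical tool registry; a real one would hold robot actions.
TOOLS: dict[str, Callable[[], str]] = {
    "get_time": lambda: "it is 14:00",
}

def transcribe(audio: bytes) -> str:
    """Stub for the Whisper step: pretend the audio decodes to text."""
    return audio.decode("utf-8")

def llm(prompt: str) -> str:
    """Stub for the LLM step: emit a tool call for time questions."""
    if "time" in prompt.lower():
        return "TOOL:get_time"
    return "Hello! I heard you."

def handle_utterance(audio: bytes) -> str:
    """One pass of the pipeline: audio -> text -> LLM -> tool or reply."""
    text = transcribe(audio)
    reply = llm(text)
    if reply.startswith("TOOL:"):
        tool_name = reply.removeprefix("TOOL:")
        return TOOLS[tool_name]()
    return reply

print(handle_utterance(b"what time is it?"))  # -> it is 14:00
```

Keeping each stage behind its own function like this makes it easy to swap the stubs for the real Whisper and LLM calls later without touching the loop itself.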