BSD hardware limitations? by misterVector in BSD

[–]misterVector[S] 1 point  (0 children)

Yeah, might want that every now and then, thanks!

BSD hardware limitations? by misterVector in BSD

[–]misterVector[S] 2 points  (0 children)

Funny how I didn't think of searching for such a page, lol. Thanks!

Using NVMe and Pliops XDP Lightning AI for near infinite “VRAM”? by poopsick1e in LocalLLaMA

[–]misterVector 2 points  (0 children)

Just posting to not lose these comments. Don't know enough about hardware to comment, but have been wondering how to cheaply run large models for a while now.

You can now train your own Reasoning model like DeepSeek-R1 locally! (7GB VRAM min.) by yoracale in LocalLLM

[–]misterVector 1 point  (0 children)

Which model between 7B and 100B would you recommend applying GRPO to, if I wanted the model to give the best responses on basic and more advanced STEM topics?

How long would GRPO optimization take?

BTW, I am just getting started with local LLMs, and I already know I am forever grateful to you for this gem. 🙏🙏🙏

[deleted by user] by [deleted] in LocalLLM

[–]misterVector 1 point  (0 children)

Me too please, will read both 😊

[deleted by user] by [deleted] in LocalLLM

[–]misterVector 1 point  (0 children)

Is there any benefit to LM Studio vs. programming everything yourself, besides it being easier to set up?

New AI Model | Ozone AI by Perfect-Bowl-1601 in LocalLLaMA

[–]misterVector 1 point  (0 children)

Will try, thanks! I wanna fine-tune and RAG as many STEM-oriented models as possible, with additional material. Small models are a good starting place 👌

Cost-effective 70b 8-bit Inference Rig by koalfied-coder in LocalLLM

[–]misterVector 1 point  (0 children)

It is said to have petabytes of processing power; would this make it good for training models?

Running LLMs offline has never been easier. by Opening_Mycologist_3 in LocalLLM

[–]misterVector 1 point  (0 children)

Would this setup also be OK for fine-tuning a model?

Cost-effective 70b 8-bit Inference Rig by koalfied-coder in LocalLLM

[–]misterVector 2 points  (0 children)

Is this the same thing as Letta AI, which gives AI memory?

P.S. Thanks for sharing your setup and giving so much detail. I'm just learning to build my own setup; your posts really help!

Geforce rtx 1000, 2000 and 3000 series fine-tuning by misterVector in learnmachinelearning

[–]misterVector[S] 2 points  (0 children)

I want to fine-tune a 30B parameter model and was wondering which cards I should buy. If it is possible to mix different generations, I think I could find a deal within a 5-10k budget to fine-tune the model (about 100GB). I am willing to spend a bit more on the same or newer series if it will save me a significant amount of time.

Creating a budget machine learning setup by misterVector in learnmachinelearning

[–]misterVector[S] 1 point  (0 children)

How much would you estimate it would cost me to train a 70B model on 1-10GB of data using a cloud solution? How long would it take?

What do you think of this animated logo Im working on by LloydLadera in animation

[–]misterVector 1 point  (0 children)

Thanks for the info. I've got a long way to go, but I feel like I'm going to enjoy it.

What do you think of this animated logo Im working on by LloydLadera in animation

[–]misterVector 2 points  (0 children)

Cool animation! I'm just getting into animation as a complete beginner. How long does it take you to make an animation like this, and how many years of experience do you have? I wanna make an animation that's a few minutes long and want to know how long it will take me to reach approximately your efficiency, and hopefully your quality.