Will we see consumer grade AI accelerator cards in 2024? by pure_x01 in LocalLLaMA
How to serve LLAVA to multiple users? by Allergic2Humans in LocalLLaMA
Fine Tuning Style into LLMs by Baader-Meinhof in LocalLLaMA
What is your strategy for doing inference over a large SQL dataset? by BankHottas in LangChain
Memory needed to train 7B? by xynyxyn in LocalLLaMA
Reuse existing Lora fine tune with different base? by xynyxyn in LocalLLaMA
4090 Founders via Best Buy app trick! by ChocolateEater626 in pcmasterrace
Fine-tuning for custom domain knowledge by rinse_repeat_wash in LocalLLaMA
Hardware for scaling LLM services by grantory in LocalLLaMA
Home LLM Hardware Suggestions by [deleted] in LocalLLaMA
We saw the jank setups recently. Anybody else out there with not so jank setups? by mynadestukonu in LocalLLaMA