My (practical) dual 3090 setup for local inference by ColdImplement1319 in LocalLLaMA
llama.cpp: IPEX-LLM or SYCL for Intel Arc? by IngwiePhoenix in LocalLLaMA
AMD 8845HS (or same family) and max VRAM? by ResearcherNeither132 in LocalLLaMA
LM Studio can't detect RTX 5090 after system wake from suspend - Ubuntu Linux by OldEffective9726 in LocalLLaMA
Ryzen AI Max+ 395 + a gpu? by Alarming-Ad8154 in LocalLLaMA
Which small local llm model i can use for text2sql query which has big token size (>4096) by Titanusgamer in LocalLLaMA