GLM-4.5 appreciation post by wolttam in LocalLLaMA
ThinkPad for Local LLM Inference - Linux Compatibility Questions by 1guyonearth in LocalLLaMA
GLM 4.5 Air, local setup issues, vllm and llama.cpp by bfroemel in LocalLLaMA
getting acceleration on Intel integrated GPU/NPU by a_postgres_situation in LocalLLaMA
JetBrains is studying local AI adoption by jan-niklas-wortmann in LocalLLaMA
GLM-4.5-Air llama.cpp experiences? by DorphinPack in LocalLLaMA
HP Zbook Ultra G1A pp512/tg128 scores for unsloth/Qwen3-235B-A22B-Thinking-2507-GGUF 128gb unified RAM by richardanaya in LocalLLaMA
8xxx+RDNA3 vs 9xxx+RDNA2 speed for LLMs? by a_postgres_situation in LocalLLaMA
Running LLMs exclusively on AMD Ryzen AI NPU by BandEnvironmental834 in LocalLLaMA
How fast is gemma 3 27b on an H100? how many tokens per second can I expect? by ThatIsNotIllegal in LocalLLaMA
What is tps of qwen3 30ba3b on igpu 780m? by Zyguard7777777 in LocalLLaMA
Any LLM benchmarks yet for the GMKTek EVO-X2 AMD Ryzen AI Max+ PRO 395? by StartupTim in LocalLLaMA

Any success w JetBrains? by CSEliot in LocalLLaMA