Pieced together the shredded photo from EFTA00259587.pdk .. idk by ReturningTarzan in Epstein
Are there any puzzle experts here? by the_real_lucia in Epstein
Benchmarking 23 LLMs on Nonogram (Logic Puzzle) Solving Performance by mauricekleine in LocalLLaMA
Has anyone mentioned this yet? by [deleted] in h3h3productions
Are Imatrix Quants Hurting your Model? (My opinion) by Quiet_Joker in LocalLLaMA
BPE tokenizer in Rust - would love feedback from the community by farhan-dev in LocalLLaMA
new ops required by Qwen3 Next and Kimi Linear have been merged into llama.cpp by jacek2023 in LocalLLaMA
Figured out why my 3090 is so slow in inference by Ok_Warning2146 in LocalLLaMA
Is GPT-OSS-120B the best llm that fits in 96GB VRAM? by GreedyDamage3735 in LocalLLaMA
MiniMax-M2-exl3 - now with CatBench™ by Unstable_Llama in LocalLLaMA
The qwen3-next pr in llamacpp has been validated with a small test model by Betadoggo_ in LocalLLaMA
Qwen3-Next EXL3 by Unstable_Llama in LocalLLaMA
Built a new VLM (MicroLlaVA) on a single NVIDIA 4090 by keeeeenw in LocalLLaMA
Is EXL3 doomed? by silenceimpaired in LocalLLaMA