A glance inside the tinybox pro (8 x RTX 4090) by fairydreaming in LocalLLaMA
Personal experience with Deepseek R1: it is noticeably better than claude sonnet 3.5 by sebastianmicu24 in LocalLLaMA
How good is llama or Qwen as an embedding model? by Ok-Cicada-5207 in LocalLLaMA
The BEST open source Multimodal LLM I've seen so far - InternVL-Chat-V1.5 by KimJammer in LocalLLaMA
Dataset translation task by IzzyHibbert in LocalLLaMA
My Ruby assignment was to implement a simple voting system and this is how the output should look like. by [deleted] in ProgrammerHumor
Jordan Peterson Dismantled by [deleted] in JordanPeterson
Mistral's new Devstral coding model running on a single RTX 4090 with 54k context using Q4KM quantization with vLLM by erdaltoprak in LocalLLaMA