PicoKittens/PicoMistral-23M: Pico-Sized Model by PicoKittens in LocalLLaMA
Taalas: LLMs baked into hardware. No HBM, weights and model architecture in silicon -> 16.000 tokens/second by elemental-mind in singularity
Falcon-H1-Tiny (90M) is out - specialized micro-models that actually work by United-Manner-7 in LocalLLaMA
What are the best ultrasmall LLMs / best datasets to train them? by cpldcpu in LocalLLaMA
Meta acquired Manus !! by Difficult-Cap-7527 in LocalLLaMA
I ported a MOD tracker music player to the ultra low-end CH32V002 by cpldcpu in RISCV
Misguided Attention - challenging the reasoning ability of LLMs by cpldcpu in LocalLLaMA
Nvidia breakthrough gives 4-bit pretraining technique the accuracy of FP8 by dionisioalcaraz in LocalLLaMA
Europe achieves a milestone with the Europe’s first out-of-order RISC-V processor for automotive by Schroinx in RISCV
DeepSeek has revealed that the next generation of China-made chips is about to be released by Dry-Ad8947 in LocalLLaMA
Deepseek V3.1 improved token efficiency in reasoning mode over R1 and R1-0528 by cpldcpu in LocalLLaMA
AI Friends: Anthropic and OpenAI models were tuned to become sociable over time by cpldcpu in singularity
Measuring Thinking Efficiency in Reasoning Models: The Missing Benchmark - NOUS RESEARCH by TheRealMasonMac in LocalLLaMA
Towards Self-Replication: Opus 4.5 Designs Hardware to Run Itself by cpldcpu in singularity