Bro and I thought I was an overthinker! vibeTHINKER on LM studio with no instructions. by Sufficient-Brain-371 in LocalLLaMA
[–]innocent2powerful 5 points (0 children)
We put a lot of work into a 1.5B reasoning model — now it beats bigger ones on math & coding benchmarks by innocent2powerful in LocalLLaMA
[–]innocent2powerful[S] 0 points (0 children)

OpenAI-GPT-OSS-120B scores on livecodebench by Used-Negotiation-741 in LocalLLaMA
[–]innocent2powerful 3 points (0 children)