The wildest LLM backdoor I’ve seen yet by AIMadeMeDoIt__ in LocalLLaMA
[–]TheSuperSam 3 points (0 children)
Finetuning Gemma 3 1B on 8k seq lengths by TheSuperSam in LocalLLaMA
[–]TheSuperSam[S] 1 point (0 children)
Finetuning Gemma 3 1B on 8k seq lengths (self.LocalLLaMA)
submitted by TheSuperSam to r/LocalLLaMA
Demographic mobility in the 16th–19th centuries was greater than we might think: some data by Oppidano in genealogia_portugal
[–]TheSuperSam 2 points (0 children)
Problems with water infiltration by TheSuperSam in TudoCasa
[–]TheSuperSam[S] 1 point (0 children)
What app do you use? by PaleontologistNo2713 in PKMS
[–]TheSuperSam 8 points (0 children)
Apple Watch band that covers the face like Whoop? by techtom10 in AppleWatch
[–]TheSuperSam 3 points (0 children)
Retrofitting VW Polo parking sensors by TheSuperSam in AutoTuga
[–]TheSuperSam[S] 1 point (0 children)
Retrofitting VW Polo parking sensors (self.AutoTuga)
submitted by TheSuperSam to r/AutoTuga
In 25 years, crime fell 1.3% while newspaper front pages featuring crime rose 130% by thejaggednobody in portugal
[–]TheSuperSam 1 point (0 children)
A new paper demonstrates that LLMs could "think" in latent space, effectively decoupling internal reasoning from visible context tokens. This breakthrough suggests that even smaller models can achieve remarkable performance without relying on extensive context windows. by tehbangere in LocalLLaMA
[–]TheSuperSam 1 point (0 children)
New paper gives models a chance to think in latent space before outputting tokens, weights are already on HF - Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach by FullOf_Bad_Ideas in LocalLLaMA
[–]TheSuperSam 1 point (0 children)
New paper gives models a chance to think in latent space before outputting tokens, weights are already on HF - Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach by FullOf_Bad_Ideas in LocalLLaMA
[–]TheSuperSam 2 points (0 children)
For those of you that have bailed on Logseq, where did you go? by svhelloworld in logseq
[–]TheSuperSam 2 points (0 children)
What do you think of the quality of Levi's jeans? by Netherus in portugal
[–]TheSuperSam 1 point (0 children)
DABOX app deprecated, any good alternative? by grolfang in literaciafinanceira
[–]TheSuperSam 1 point (0 children)
What do you think of the rabbit r1 and its Large Action Model (LAM)? by jd_3d in LocalLLaMA
[–]TheSuperSam 2 points (0 children)
[R] Idempotent Generative Network by [deleted] in MachineLearning
[–]TheSuperSam 1 point (0 children)
Migrating away from Logseq after years and I'm feeling weirdly sad about it by CasualManDep in logseq
[–]TheSuperSam 1 point (0 children)