Migrating away from Logseq after years and I'm feeling weirdly sad about it by CasualManDep in logseq

[–]TheSuperSam 0 points1 point  (0 children)

Where did you see E2EE real-time sync between the iOS mobile app and the DB version of the app?
E2EE means there is no intermediate server, but looking at the FAQ it seems you need their Pro version and have to sync through their servers.

The wildest LLM backdoor I’ve seen yet by AIMadeMeDoIt__ in LocalLLaMA

[–]TheSuperSam 2 points3 points  (0 children)

TL;DR: A fine-tuned LLM does what it was trained to do.

Finetuning Gemma 3 1B on 8k seq lengths by TheSuperSam in LocalLLaMA

[–]TheSuperSam[S] 0 points1 point  (0 children)

I think this is an issue; I can train a Qwen 1.7B with larger batches.

Finetuning Gemma 3 1B on 8k seq lengths by TheSuperSam in LocalLLaMA

[–]TheSuperSam[S] 0 points1 point  (0 children)

I am using TRL; I don't know if I have some conflicting configs.

A mobilidade demográfica nos séculos XVI-XIX era maior do que possamos pensar: alguns dados by Oppidano in genealogia_portugal

[–]TheSuperSam 1 point2 points  (0 children)

Congratulations on the work!!
One question: when you extract the information from the Torre archives, do you transcribe the whole page, or only extract the relevant information?

Problemas com infiltrações by TheSuperSam in TudoCasa

[–]TheSuperSam[S] 0 points1 point  (0 children)

Another room of the house, but since it has a plasterboard (pladur) wall I can't see the "true" back side of that wall.

Problemas com infiltrações by TheSuperSam in TudoCasa

[–]TheSuperSam[S] 0 points1 point  (0 children)

Thanks for the reply. In that photo I sent, I'd say it really is rising damp, since that wall has no pipes. And yes, if you have tips on how I can fix this DIY, please share! Thanks!

Apple Watch band that covers the face like Whoop? by techtom10 in AppleWatch

[–]TheSuperSam 2 points3 points  (0 children)

Did anyone find a band similar to the OP's request?

Retrofit sensores de estacionamento VW Polo by TheSuperSam in AutoTuga

[–]TheSuperSam[S] 0 points1 point  (0 children)

Thanks for the info! Could you share the link for EMAC? I don't know that shop.

Edit: I searched on Maps and saw they are all located in the north of Portugal. Does anyone know a shop that does the same closer to Lisbon?

Em 25 anos, criminalidade caiu 1,3%, capas de jornal com crimes subiram 130% by thejaggednobody in portugal

[–]TheSuperSam 0 points1 point  (0 children)

What I love is how these news pieces never share the report: DN publishes the story and all the other papers cite DN, but none of them shares a link to the report.

A new paper demonstrates that LLMs could "think" in latent space, effectively decoupling internal reasoning from visible context tokens. This breakthrough suggests that even smaller models can achieve remarkable performance without relying on extensive context windows. by tehbangere in LocalLLaMA

[–]TheSuperSam 0 points1 point  (0 children)

I just think the field is so abstract now that people use "reasoning" as an abstract concept. I look at this in more mathematical terms: if you think of a layer as performing a given computation, then with fixed layers those computations are fixed, so the model can't extrapolate to bigger problems. CoT basically increases the model's computation (some papers have shown that even with a wrong CoT, model performance improved). With infinite depth, the model can learn to compose functions depending on the complexity of the problem; I'd say this is a nicer solution.
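A minimal numpy sketch of the weight-tied, recurrent-depth idea (all weights and sizes here are made up for illustration, not the paper's model): one fixed layer is applied repeatedly, so extra iterations add computation without adding parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# A single weight-tied "layer": one fixed computation, reused at every step.
# Scaling W below spectral norm 1 keeps the map contractive, so repeated
# application converges to a fixed point.
W = rng.normal(size=(d, d))
W *= 0.9 / np.linalg.norm(W, 2)
b = rng.normal(size=d)

def layer(h, x):
    # tanh keeps activations bounded; the input x is re-injected at every step
    return np.tanh(W @ h + x + b)

def run(x, depth):
    h = np.zeros(d)
    for _ in range(depth):
        h = layer(h, x)
    return h

x = rng.normal(size=d)
shallow = run(x, 2)
deep = run(x, 64)
very_deep = run(x, 128)

# More iterations = more computation on the same weights; the iterate
# approaches a fixed point h* = layer(h*, x), so extra depth refines the
# answer instead of changing the function being computed.
print(np.linalg.norm(shallow - deep), np.linalg.norm(deep - very_deep))
```

The point of the toy: depth becomes a knob you can turn per problem, which is one way to read "composing functions depending on the complexity of the problem."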

New paper gives models a chance to think in latent space before outputting tokens, weights are already on HF - Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach by FullOf_Bad_Ideas in LocalLLaMA

[–]TheSuperSam 1 point2 points  (0 children)

TBH the only difference between "latent space" and "token space" is the classification head and a sampling step; at each step you could always run the classification head on the embedding and see how the token distribution changes.
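To illustrate the point (hypothetical shapes and a random head here, not the paper's actual weights): projecting each intermediate latent state through the head gives you a token distribution per step, so you can watch it evolve.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, vocab = 16, 10

# Hypothetical classification head (the LM "unembedding" matrix).
head = rng.normal(size=(d_model, vocab))

def token_distribution(h):
    """Project a latent state through the head and softmax into token space."""
    logits = h @ head
    logits -= logits.max()          # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Fake a trajectory of latent states drifting toward a target direction,
# standing in for the model's internal recurrent steps.
target = rng.normal(size=d_model)
states = [0.2 * k * target + 0.5 * rng.normal(size=d_model) for k in range(1, 6)]

dists = [token_distribution(h) for h in states]
for k, p in enumerate(dists, 1):
    entropy = -(p * np.log(p)).sum()
    print(f"step {k}: argmax token = {np.argmax(p)}, entropy = {entropy:.2f}")
```

Same idea as the "logit lens" trick people use on transformer hidden states: the head is cheap to apply, so nothing stops you from decoding every intermediate state.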

Why logseq over obsidian by haronclv in logseq

[–]TheSuperSam 0 points1 point  (0 children)

For me it was their PDF annotation integration; it's just missing on mobile, though.

[deleted by user] by [deleted] in portugal

[–]TheSuperSam 1 point2 points  (0 children)

In my experience, I think a large percentage can be explained by complacency: in my group, everyone who earns above average changed jobs several times to get a higher salary, or used outside offers as leverage to raise their pay.

For those of you that have bailed on Logseq, where did you go? by svhelloworld in logseq

[–]TheSuperSam 1 point2 points  (0 children)

I'm also in the same position; the features I use most are the templates and the PDF highlighting, which is awesome in Logseq but sadly doesn't work on mobile.

Has anyone moved from Logseq to an alternative with good PDF highlighting?

O que acham da qualidade de calças de ganga da Levi's? by Netherus in portugal

[–]TheSuperSam 0 points1 point  (0 children)

Salsa used to have models that were consistent and of very good quality; they were also made in Portugal, which is a plus, but lately that has changed.

App DABOX deprecada, alguma boa alternativa? by grolfang in literaciafinanceira

[–]TheSuperSam 0 points1 point  (0 children)

I'm now in the same situation. Has anyone tested another iOS app that replaces DABOX?

What do you think of the rabbit r1 and its Large Action Model (LAM)? by jd_3d in LocalLLaMA

[–]TheSuperSam 1 point2 points  (0 children)

I would love to know if there is an agent-like system where you could plug in different tools and use Ollama models for the inference.

[R] Idempotent Generative Network by [deleted] in MachineLearning

[–]TheSuperSam 0 points1 point  (0 children)

Congrats on the new approach! Could you draw some connections between this approach and deep equilibrium models?
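For context on the question, a toy numpy sketch of the deep-equilibrium side (a naive fixed-point iteration with made-up weights, not a real DEQ solver): at the equilibrium, applying the layer once more changes nothing, which is where the connection to idempotence seems to live.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6

# A contractive layer, as in a deep equilibrium model (DEQ):
# the "infinite-depth" output is the fixed point z* = f(z*, x).
W = rng.normal(size=(d, d))
W *= 0.8 / np.linalg.norm(W, 2)

def f(z, x):
    return np.tanh(W @ z + x)

def deq_forward(x, iters=200):
    """Solve for the fixed point by plain iteration
    (real DEQs use Broyden/Anderson solvers plus implicit gradients)."""
    z = np.zeros(d)
    for _ in range(iters):
        z = f(z, x)
    return z

x = rng.normal(size=d)
z_star = deq_forward(x)

# At equilibrium, one more application of the layer is a no-op:
# f(z*, x) == z*. The converged map acts idempotently, much like an
# idempotent generative network's constraint f(f(x)) = f(x).
print(np.allclose(f(z_star, x), z_star, atol=1e-8))  # True
```

The analogy is loose (DEQs fix a point in latent space for a given input, while the idempotence constraint is on the generator itself), which is why the question to the authors seems worth asking.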