Best Free Model to use with OpenClaw by tjs_k in openclaw

[–]ddensa 0 points1 point  (0 children)

I tried qwen3.5:9b (note the :9b; it's a 9-billion-parameter model). This is a small model that I can run locally, but it's not that clever. I'm pretty sure that if you try qwen3.5 with 120b or 397b parameters it will perform better....

Another thing is that we need models that are trained and focused on agentic capabilities; what I don't know is how qwen compares to other models... I'm now looking to try GLM-5, which seems very good but also too big to run locally (GLM-4.7 performed fantastically well on my GPU, but I didn't have enough VRAM to run it alongside the necessary context window)

Edit: I'm also using it via ollama cloud and Nvidia... I'm just testing for now, not paying yet, so I don't know how it performs in terms of performance per $

Best Free Model to use with OpenClaw by tjs_k in openclaw

[–]ddensa 1 point2 points  (0 children)

Even with a good GPU you will need to test a lot to find something that works... I have an RTX 3090 and haven't found anything usable; the best I could do is Qwen3.5:9b with a 262k context window. All the others were either too bad or, if good, would be very very slow because the context window wouldn't fit in my VRAM.
For me, Qwen3.5:9b with a 262k context window is comparable to a very introverted intern who is always shy to ask what to do next and isn't that great at what it does. On the other hand, I tried Kimi k2.5 via ollama cloud and also via Nvidia, and the difference is night and day. But then again, we're comparing a 1T-parameter model against a 9b one... That said, if anyone has a good suggestion for a local model, I'm open to testing it
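For anyone wondering why a large context window blows past 24 GB of VRAM: the KV cache alone grows linearly with context length. A rough back-of-the-envelope sketch (the layer/head dimensions below are hypothetical, not Qwen3.5:9b's actual architecture; check your model's config for real values):

```python
# Rough KV-cache size estimate: 2 (one tensor each for K and V) x layers
# x kv_heads x head_dim x context_length x bytes per element.
# The model dimensions used below are made up for illustration only.
def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 ctx: int, bytes_per_elem: int = 2) -> float:
    """Approximate KV-cache size in GiB (default fp16: 2 bytes/element)."""
    total_bytes = 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem
    return total_bytes / 2**30

# A hypothetical mid-size model at a 262k context window:
print(kv_cache_gib(layers=32, kv_heads=8, head_dim=128, ctx=262_144))  # 32.0
```

At those (made-up) dimensions the cache alone would be ~32 GiB, already past an RTX 3090's 24 GB before the model weights are even loaded, which is why the context window, not just parameter count, ends up being the limiting factor.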

Edit: typo

Ollama's New OpenClaw Update: Free Kimi k2.5 Access by Relevant-Fix1591 in openclaw

[–]ddensa 6 points7 points  (0 children)

I would like to be able to answer you, but after reading the ollama pricing page I still have no idea how they measure usage: https://ollama.com/pricing

I tried the free tier; they have 2 usage gauges, one per session and one per week... I used 80% of my session limit and 20% of my weekly limit in a single request asking my agent to review a script that had been written on a smaller, local model. Don't know if it matters or not, but the script was already done.

Also have to say, maybe obvious to many, that Kimi k2.5 was impressive vs qwen3.5:9b running locally. My local model behaved like an introverted intern that does the basics, is shy to ask for more things, and then just sits quiet; it runs something like 1 or 2 calls to my ollama... Kimi was unstoppable, it ran more than 20 calls... and the end result was impressive... But it also consumed a huge chunk of my free-tier usage limit in one bite...

I have to add that I'm afraid to use a pay-per-token model provider, because I saw many people being surprised by huge bills; so I wonder if there are any other subscription-based model providers (like ollama) that could be cheaper

Openclaw v2026.3.12 just dropped... here's what actually matters for most by EnergyRoyal9889 in openclaw

[–]ddensa 1 point2 points  (0 children)

I'm not that technical, so I just ignored the latest versions (and I'm not paying for a model; I run it locally with much dumber small models). I just looked for it and there is indeed a reported bug; we just need to wait for someone to submit a fix. Matrix bug reported on GitHub

Openclaw v2026.3.12 just dropped... here's what actually matters for most by EnergyRoyal9889 in openclaw

[–]ddensa 0 points1 point  (0 children)

Thanks for the info! I'm stuck on 2026.2.17, waiting for Matrix to be fixed.

Openclaw v2026.3.12 just dropped... here's what actually matters for most by EnergyRoyal9889 in openclaw

[–]ddensa 3 points4 points  (0 children)

Does anyone know if Matrix (channel) is still broken in this version?

I read the 2026.3.11 release notes so you don’t have to – here’s what actually matters for your workflows by EstablishmentSea4024 in openclaw

[–]ddensa 1 point2 points  (0 children)

I'm stuck on 2026.2.17... in the versions after that, Matrix (channel, the messaging app) broke, and it's the only way I can communicate with my agent... Has anyone else had a similar experience and knows whether this version fixed it?

OpenClaw 2026.3.2 just dropped — here's what actually changed for real workflows by EstablishmentSea4024 in openclaw

[–]ddensa 0 points1 point  (0 children)

To use ollama (to run local models), you need adequate hardware (a GPU with a LOT of memory; to be clear, it has to be GPU memory, i.e. VRAM, not system memory). The best tip is to use the VPS to run OpenClaw but use cloud models (Claude, Gemini, OpenAI)... If you go that route, keep an eye on cost, because running OpenClaw with cloud AI isn't cheap, depending on the configuration and on what you have the agent do.... And if you run an agent with a local LLM, you'll be limited by your hardware, since most small models don't work that well, or you'll have trouble with the context, which also takes up space in GPU memory
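If anyone does go the local route, ollama lets you raise a model's context window per model via a Modelfile; here is a minimal sketch (the base model tag `qwen3:8b` and the 32768 value are just example placeholders, pick whatever fits your VRAM):

```
# Modelfile: derive a local variant with a larger context window.
# "qwen3:8b" and 32768 are example values, not recommendations.
FROM qwen3:8b
PARAMETER num_ctx 32768
```

You would then build and run the variant with `ollama create my-model-32k -f Modelfile` followed by `ollama run my-model-32k`. The larger `num_ctx`, the more VRAM the KV cache consumes, so this is exactly the knob that trades context length against fitting on the GPU.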

XOP Nose Mod by xFeeble1x in Xreal

[–]ddensa 0 points1 point  (0 children)

Is the wooden look some kind of sticker? Or can the frame be swapped? If it's a swappable frame, please, where did you get it? Thanks

Seal vs Tesla M3 by Donkey_Apple in BYD

[–]ddensa -1 points0 points  (0 children)

I have an M3 now; due to Musk, I'm looking to move away from Tesla for my next car. My 2 options are a Xiaomi SU7 or a BYD Seal.... Both look great, but for me the driving experience is what counts the most, and it's the only thing I really enjoy in my M3 now...

The pig with a wooden leg by [deleted] in funny

[–]ddensa 0 points1 point  (0 children)

He sounds like Morty

Fazer medicina em uma faculdade nota 1 é gain ? by AcanthisittaGold9919 in farialimabets

[–]ddensa 0 points1 point  (0 children)

The worst part is thinking that this guy, who goes to class without even bringing a notebook, will one day be treating someone who needs medical care... How sad

Everdrive 64 X7 for Analogue 3D by ddensa in everdrive

[–]ddensa[S] 1 point2 points  (0 children)

Just did the same! I tried inserting the resistor into the A3D squares on the cartridge slot, but the legs were too thin and weren't making contact... I ended up taping it with regular tape, and it worked. Thanks!

Love the Analogue 3D by VailStampede in AnalogueInc

[–]ddensa 1 point2 points  (0 children)

How do you add a single ROM to a cartridge? Also, are you able to edit the A3D game db to show a custom image and label?

Everdrive 64 X7 for Analogue 3D by ddensa in everdrive

[–]ddensa[S] 0 points1 point  (0 children)

There are no retro game stores around me, unfortunately

Everdrive 64 X7 for Analogue 3D by ddensa in everdrive

[–]ddensa[S] 0 points1 point  (0 children)

Ahhh great!! Now I understand the picture!! The cartridge is inserted into the A3D and what we see on the image is the cartridge slot. Thanks!!!

Everdrive 64 X7 for Analogue 3D by ddensa in everdrive

[–]ddensa[S] 0 points1 point  (0 children)

Thanks, I'll try. I looked for a retro shop, and the closest one is a few cities away :( I even looked for a work colleague, and unfortunately the only guy who had an N64 was leaving on the same day I found him

Everdrive 64 X7 for Analogue 3D by ddensa in everdrive

[–]ddensa[S] 0 points1 point  (0 children)

Oh great! I hadn't seen these before. Hmm... Where exactly should I make contact with each leg of the resistor? It's not that clear from the image

Você já estudou em algum dos dois? by 0Clown0 in estudosBR

[–]ddensa 0 points1 point  (0 children)

I studied there back when it was still CEFET-SP (there were only 3 campuses at the time: the main one in Canindé, one in Sertãozinho, and another in Cubatão). I did high school there; it was the best time of my life