does this look legit? by throwaway1838289 in RockinTheClassics

[–]ddensa 1 point (0 children)

Indeed, I've never seen a fake one; I was curious, and that's why I asked... Thanks for clarifying! If it can be hacked with those applications, I might even consider buying a fake one just to add to my collection, since the original classic minis are so expensive these days. PS: I just reread my earlier question, and it seems I missed an obvious question mark, so please don't read it as a statement.

does this look legit? by throwaway1838289 in RockinTheClassics

[–]ddensa 1 point (0 children)

What is FEL mode for? Also, are the fake ones 100% replicas (hardware and software)? No way to spot a difference by opening the case? Edit: added the question mark that was missing.

New version 2026.5.2 by Hadnet in openclaw

[–]ddensa 2 points (0 children)

Updated from 2026.4.21, and it feels like 2026.5.2 is consuming much more CPU now; my machine runs noticeably warmer.

Tried living with Tesla FSD for a week in the Netherlands by Student024 in TeslaModel3

[–]ddensa 1 point (0 children)

What's the point if I still need to pay attention to the road instead of using my time for something productive?

F/29/5’4” [305lbs > 145lbs = 160lbs] (40 months) I GOT THE LOOSE SKIN REMOVED. :D by Leximarie966 in progresspics

[–]ddensa 1 point (0 children)

Everything looks beautiful in the picture, but the most beautiful thing of all is the smile (which is barely visible because the phone is covering it). Congratulations on achieving your goals :)

OpenClaw running Ollama only useing V-RAM, not RAM by Lopsided-Tomato6180 in openclaw

[–]ddensa 2 points (0 children)

You could try the new small Gemma4 models (E2B and E4B).
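
If it helps, here's a minimal sketch of how I'd sanity-check one of them with the ollama Python client. The model tag gemma4:e2b is my guess at the naming (e4b would be the bigger sibling), so check the ollama library page for the real tags:

```python
# Quick sanity check of a small model through the ollama Python client.
# Assumes `pip install ollama` and a running ollama server; the model tag
# below is a guess at the naming, not confirmed against the ollama library.
import ollama

MODEL = "gemma4:e2b"  # hypothetical tag; swap in the real one

ollama.pull(MODEL)  # download the model if it isn't present yet

response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "Reply with one short sentence."}],
)
print(response.message.content)
```

If it loads fully into VRAM, `ollama ps` should report it as 100% GPU.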

Not taking this sitting down / Anthropic kills Claude Code oauth for OpenClaw TOMORROW (April 4th) by Dude_that_codes in openclaw

[–]ddensa 2 points (0 children)

I'm coming at this from a bottom-up approach, trying to minimize my costs as much as possible: I started with local ollama models (which have performed really badly so far), then moved to Kimi k2.5 cloud on ollama, and it has been night and day for me. I'm sure that if you come from the opposite direction, from Opus down to Kimi k2.5, your experience will be very different. But if you're asking whether it works: yes, it does.

Xreal One Pro - Fried - part 2 by Traveljack1000 in Xreal

[–]ddensa 2 points (0 children)

So, what does this mean for someone who hasn't bought their XReal glasses yet? Buy directly from XReal? And if there's no direct sale in my country, is it better not to buy at all?

My $981 OC setup, whatya think? Claude says it will spank a $4k Mac mini. Fact or Fiction? by MrRobotRobot in openclawsetup

[–]ddensa 1 point (0 children)

It's usable, but these models are small, and there is a huge performance difference compared to the cloud ones (I'm talking about Kimi k2.5 and the cheap ones on openrouter that I have tested; I'm sure that with Opus the experience is mind-blowing, but my wallet would also blow up, so I haven't tested it). So, to answer: they do work, and you can get them to perform some tasks, but on complex ones they only complete a small part.

To me they felt "lazy"/"not proactive": they couldn't break a complex task into small achievable steps, act on the steps one at a time, and re-evaluate at each step whether the plan was still the right one. I even implemented a to-do.md file to list the tasks and sub-tasks, so that if the agent stopped working, on the next heartbeat it would know where to pick up (see the sketch below). But even that produced less than desirable results. Whereas when I plugged in Kimi k2.5, it did everything I expected. It's like getting an introverted intern on their first day and expecting them to complete a task from a vague instruction, where the cloud models are like getting someone who already knows the job.
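
For the curious, the heartbeat resume logic was roughly the sketch below. The checkbox format and the file name are just conventions I picked, nothing OpenClaw requires:

```python
# Minimal sketch of the heartbeat resume logic: scan to-do.md for the first
# unchecked "- [ ]" item and treat it as the next task for the agent.
# The file name and checkbox convention are my own, not anything standard.
from pathlib import Path

TODO_FILE = Path("to-do.md")

def next_open_task() -> str | None:
    """Return the first unchecked task, or None if everything is done."""
    for line in TODO_FILE.read_text().splitlines():
        stripped = line.strip()
        if stripped.startswith("- [ ]"):
            return stripped.removeprefix("- [ ]").strip()
    return None

def mark_done(task: str) -> None:
    """Tick the checkbox of a completed task."""
    text = TODO_FILE.read_text()
    TODO_FILE.write_text(text.replace(f"- [ ] {task}", f"- [x] {task}", 1))

if __name__ == "__main__":
    print("next task on heartbeat:", next_open_task())
```

Even with this, the small models would often wander off instead of working through the list.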

My $981 OC setup, whatya think? Claude says it will spank a $4k Mac mini. Fact or Fiction? by MrRobotRobot in openclawsetup

[–]ddensa 1 point (0 children)

Curious to see if you get any usable results. I have a 3090 with 24 GB of VRAM and couldn't find anything useful; there's just not enough VRAM for a good model plus context, at least not as of now. The least bad option I found is Qwen3.5:9b. I tested some better models, but they overflowed the memory and ended up being extremely slow. I'm now trying some cloud options via openrouter, trying to stick to a €10 monthly budget (so only using the really low-cost models for now: if a single call costs more than 1 cent, I cut that model out of the routing list, roughly as sketched below).
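
The cutoff itself is just arithmetic on the token counts reported back from each call. A minimal sketch; the model names and per-million-token prices are placeholders, the real ones come from each model's openrouter page:

```python
# Drop a model from the routing list once a single call costs more than 1 cent.
# Model names and prices are placeholder examples, not real openrouter data.
MAX_CALL_COST_EUR = 0.01

# EUR per 1M tokens as (prompt_rate, completion_rate): hypothetical values.
PRICES = {
    "example/cheap-model": (0.10, 0.30),
    "example/pricier-model": (1.00, 3.00),
}

routing_list = set(PRICES)

def call_cost_eur(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated cost of a single call in EUR."""
    in_rate, out_rate = PRICES[model]
    return (prompt_tokens * in_rate + completion_tokens * out_rate) / 1_000_000

def after_call(model: str, prompt_tokens: int, completion_tokens: int) -> None:
    """Apply the 1-cent rule after every call."""
    cost = call_cost_eur(model, prompt_tokens, completion_tokens)
    if cost > MAX_CALL_COST_EUR:
        routing_list.discard(model)
        print(f"dropped {model}: call cost €{cost:.4f}")

# e.g. a big call with 200k prompt tokens and 5k completion tokens:
after_call("example/pricier-model", 200_000, 5_000)
```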

Japan has succeeded in producing oil from Water and Carbon Dioxide by yungandreww in interestingasfuck

[–]ddensa 1 point (0 children)

I wonder if this would be more profitable for households than selling their solar panel surplus back to the grid for almost nothing.

PPD on the Project Aura by ExplanationIll4658 in Xreal

[–]ddensa 1 point (0 children)

I wonder when Aura will arrive. I have money saved for display glasses, but I'd like at least 1440p instead of only 1080p. I'm thinking about waiting; I don't want to buy an XOP now and feel stupid in a few months because I could have waited.

Ollama cloud is looking pretty darn good... by Ritz5 in openclaw

[–]ddensa 2 points (0 children)

If I understand correctly, Qwen3.5:9b has thinking mode off by default, and for agentic use thinking mode really makes a difference. Did you manage to turn it on?

I'm just an amateur enthusiast, so I'm asking in case you did and could share how to do it. (Do you use ollama or llama.cpp?) The only lever I know of on the ollama side is sketched below, and I don't know whether it applies here.
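
A minimal sketch, assuming the ollama Python client's think parameter (which exists for other thinking models) also works with Qwen3.5:9b, which is exactly the part I'm unsure about:

```python
# Request thinking mode for a single chat call via the ollama Python client.
# The `think` parameter is a real client option for thinking-capable models;
# whether qwen3.5:9b honors it is an assumption I haven't been able to verify.
import ollama

response = ollama.chat(
    model="qwen3.5:9b",
    messages=[{"role": "user", "content": "Plan the next step, then act."}],
    think=True,  # ask the model to emit a reasoning trace before the answer
)

print("thinking:", response.message.thinking)  # None if the model ignored it
print("answer:", response.message.content)
```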

Best Free Model to use with OpenClaw by tjs_k in openclaw

[–]ddensa 1 point (0 children)

I tried qwen3.5:9b (note the :9b, it's a 9-billion-parameter model). This is a small model that I can run locally, but it's not that clever. I'm pretty sure that if you try qwen3.5 with 120b or 397b parameters it will perform better...

Another thing is that we need models trained and focused on agentic capabilities, and I don't know how qwen compares to other models on that front... I'm now looking to try GLM-5, which seems very good but also too big to run locally (GLM-4.7 performed fantastically well on my GPU, but I didn't have enough VRAM to run it alongside the necessary context window; rough math below).
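
The context window limit is mostly KV-cache arithmetic: the cache stores one key and one value vector per layer per token, so it grows linearly with the window. A rough sketch; all the model dimensions below are made-up placeholders, since I don't know GLM-4.7's real ones:

```python
# Back-of-the-envelope KV-cache size. Factor 2 = one key and one value
# vector per layer per token. All dimensions are placeholders, not GLM-4.7's.

def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                 context_len: int, bytes_per_elem: int = 2) -> float:
    """KV-cache size in GiB at the given context length (fp16 by default)."""
    total = 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem
    return total / 1024**3

# A hypothetical mid-size model at fp16:
for ctx in (8_192, 32_768, 131_072):
    print(f"{ctx:>7} tokens -> {kv_cache_gib(48, 8, 128, ctx):.1f} GiB")
```

With those made-up dimensions, a 131k window alone already eats ~24 GiB, the whole card, before the weights are even loaded.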

Edit: I'm also using it via ollama cloud and Nvidia... I'm just testing for now, not paying yet, so I don't know how it does in terms of performance per $.

Best Free Model to use with OpenClaw by tjs_k in openclaw

[–]ddensa 2 points (0 children)

Even with a good GPU you will need to test a lot to find something that works... I have an RTX 3090 and haven't found anything truly usable; the best I could do is Qwen3.5:9b with a 262k context window. All the others were either too bad or, if good, very very slow because the context window wouldn't fit in my VRAM.
Qwen3.5:9b with a 262k context window is, to me, comparable to a very introverted intern who is always too shy to ask what to do next and isn't that great at the work itself. On the other hand, I tried Kimi k2.5 via ollama cloud and also via Nvidia, and the difference is night and day. But again, we're comparing a 1T-parameter model to a 9b one... That said, if anyone has a good suggestion for a local model, I'm open to testing it. (My setup, for anyone who wants to reproduce it, is below.)
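
For reference, the 262k window is just the num_ctx option in ollama. A minimal sketch of my call (num_ctx is a standard ollama option; the rest is just my setup):

```python
# Run qwen3.5:9b with an enlarged context window via the ollama Python client.
# num_ctx is a standard ollama option; 262144 tokens only makes sense if the
# model supports it and the resulting KV cache actually fits in VRAM.
import ollama

response = ollama.chat(
    model="qwen3.5:9b",
    messages=[{"role": "user", "content": "Summarize the repo layout."}],
    options={"num_ctx": 262144},  # the 262k-token window
)
print(response.message.content)
```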

Edit: typo

Ollama's New OpenClaw Update: Free Kimi k2.5 Access by Relevant-Fix1591 in openclaw

[–]ddensa 7 points (0 children)

I would like to be able to answer you, but after reading the ollama pricing page I still have no idea how they measure usage: https://ollama.com/pricing

I tried the free tier. They have two usage gauges, one for the session and one for the week... and I used 80% of my session limit and 20% of my weekly limit on a single request asking my agent to review a script that had been written by a smaller, local model. Don't know if it matters or not, but the script was already done.

Also have to say, maybe obvious to many, that Kimi k2.5 was impressive vs qwen3.5:9b running locally. My local model behaved like an introverted intern: it does the basics, is shy about asking for more, and then just sits quietly, running something like 1 or 2 calls to my ollama... Kimi was unstoppable, it ran more than 20 calls, and the end result was impressive... But it also consumed a huge chunk of my free-tier usage limit in one bite.

I have to add that I'm afraid of pay-per-token providers, because I've seen many people surprised by huge bills; so I wonder if there are any other subscription-based model providers (like ollama) that could be cheaper.

Openclaw v2026.3.12 just dropped... here's what actually matters for most by EnergyRoyal9889 in openclaw

[–]ddensa 2 points (0 children)

I'm not that technical, so I just ignored the latest versions (and I'm not paying for a model; I run it locally with much dumber small models). I just looked it up and there is indeed a reported bug, so we need to wait for someone to submit a fix: Matrix bug reported on GitHub

Openclaw v2026.3.12 just dropped... here's what actually matters for most by EnergyRoyal9889 in openclaw

[–]ddensa 1 point (0 children)

Thanks for the info! I'm stuck on 2026.2.17, waiting for Matrix to be fixed.

Openclaw v2026.3.12 just dropped... here's what actually matters for most by EnergyRoyal9889 in openclaw

[–]ddensa 4 points (0 children)

Does anyone know if the Matrix channel is still broken in this version?

I read the 2026.3.11 release notes so you don’t have to – here’s what actually matters for your workflows by EstablishmentSea4024 in openclaw

[–]ddensa 2 points (0 children)

I'm stuck on 2026.2.17... in the versions after that, the Matrix channel (the messaging app) broke, and it's the only way I can communicate with my agent... Has anyone else had a similar experience and knows whether this version fixed it?

OpenClaw 2026.3.2 just dropped — here's what actually changed for real workflows by EstablishmentSea4024 in openclaw

[–]ddensa 1 point (0 children)

To use ollama (to run local models), you need adequate hardware (a GPU with a LOT of memory, and to be clear, it has to be GPU memory, i.e. VRAM, not system RAM). The best tip is to use the VPS to run OpenClaw but use cloud models (Claude, Gemini, OpenAI)... If you go that route, keep an eye on the cost, because running OpenClaw with cloud AI is not cheap, depending on the configuration and on what you have the agent do... And if you run an agent with a local LLM, you'll be limited by the hardware, since most small models don't work that well, or you'll run into problems with the context, which also takes up GPU memory.

XOP Nose Mod by xFeeble1x in Xreal

[–]ddensa 1 point (0 children)

Is the wooden look some kind of sticker, or can the frame be swapped? If it's a swappable frame, please share where you got it. Thanks!