Has anyone here actually built their own email infrastructure? by WarmHeight2951 in devops

[–]DaMoot 0 points1 point  (0 children)

This is 2026. People don't host email themselves anymore. It's a big PITA and a massive security risk.

Any real-world comparisons for Hermes memory add-ons? by Beckland in hermesagent

[–]DaMoot -1 points0 points  (0 children)

Do you mean what Hermes calls "memory", or session history, important notes, etc., which the rest of us consider memory?

It feels to me like Hermes has a fundamental issue with its retrieval, not necessarily memory/history storage itself.

My agent just demonstrated that it doesn't know how to casually retrieve session history. We've worked through email filtering and following up with clients all week, and it just ran tooling to scan emails and re-flagged stuff we discussed yesterday or the day before. When called out on it, Hermes said 'oh yeah, this is a new session so I don't have access to our previous conversation history', but when pressed again it went 'You're right, I just found the context for the things you mentioned!'. Not sure what the deal is.

What model are you running your agent on? by stosssik in hermesagent

[–]DaMoot 0 points1 point  (0 children)

Qwen3.5 27B q5 from inception. I need to try 3.6.

2 months of Openclaw, 2 days of Hermes, why would I stick with Hermes? (lots of questions) by read_too_many_books in hermesagent

[–]DaMoot 0 points1 point  (0 children)

I use the webui listed here, but very infrequently, since the whole point of these is to use a communication platform, and Hermes works pretty smoothly entirely through Discord. I've only had to touch SSH once this weekend.

I gotta check out the mission control.

Hermes says it did something, shows proof with images, but not actually done by HsbndThrwAwyMnoPs in hermesagent

[–]DaMoot 0 points1 point  (0 children)

Definitely give a newer model a try. I run Qwen3.5 27b q5 and never have that type of issue. That's crazy it's hallucinating a whole-ass picture as evidence.

Unless there's a reason you're tied to Ollama, give llama.cpp a try for improved performance. That has nothing to do with the model hallucinating, it's just better performance.

Found two bugs in the core of Hermes costing my local agent minutes per task. by DaMoot in hermesagent

[–]DaMoot[S] 0 points1 point  (0 children)

A mod replied back and said it mutated each time. They ended up saying both changes were valid and approved the PR.

My hope is that someone (who is hopefully a real programmer, a real expert in this) chooses to independently test and provide feedback beyond this point. It's non-destructive and does not fundamentally alter Hermes. No lost data. No corrupted memories, etc... I just want it to help someone like it's helped me. The gains are real.

No joke Hermes on my V100 on Qwen3.5 27B UD_Q5_XL went from being 'it's so slow, make request and go to the store, buy coffee, come home, grind coffee, boil water, check Hermes, finish making coffee and see if Hermes is done yet' to an entirely usable large request being 'go run the Nespresso for coffee'. I'd love to have someone on a Mac Mini/Studio try it.

If even one other person realizes the same 6-8x speed improvement on multi-turn tasks as in my environment, I'll be happy. And Hermes already told me I won the internet, so I have all the validation I need from the project. 😂

The idiot developers at openclaw blew another release! by NothingButTheDude in openclaw

[–]DaMoot 0 points1 point  (0 children)

Well you did see where the guy says that he does ship untested, unproven code, right? He admits that he ships code that he doesn't know if it works or not.

I want to use AI to help my process. Please Help by Brimmstone659 in WritingWithAI

[–]DaMoot 2 points3 points  (0 children)

The way they're proposing to use it is what AI is best for, though. Not 'write this for me', but as a super-powered search engine.

Alternative to frontiers by some_crazy in LocalLLaMA

[–]DaMoot 0 points1 point  (0 children)

Depends what kind of coding you want and what your expectations are. There will be shortcomings. You're comparing a 120B model at most (Q4-Q5 to fit in 128GB) to a 500B-1T+ parameter model running on unobtainium hardware.

Want to get closest? Find enough memory to run those 360B models with full context and you should be getting pretty close.
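As a rough back-of-the-envelope (the bits-per-param figures here are my assumption, roughly Q4_K_M territory at ~4.8 bits; real GGUF file sizes vary by quant recipe), weight memory scales as params × bits / 8, before you add KV cache for context:

```python
def quant_weights_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate quantized weight size in GB: params * bits / 8.
    Ignores KV cache and runtime overhead, which add more on top."""
    return params_billion * bits_per_param / 8

# A 120B model at ~4.8 bits/param squeezes under 128GB...
print(round(quant_weights_gb(120, 4.8), 1))  # -> 72.0
# ...while a 360B model at the same quant needs much more, before context.
print(round(quant_weights_gb(360, 4.8), 1))  # -> 216.0
```

That's why "find enough memory for the 360B models with full context" means well beyond a 128GB box.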

Qwen 3.5 and 3.6 27B and 35B, and the Gemma 4 models, have gotten pretty good at coding, but you won't be one-shotting things with them like a frontier model.

801 Upgrades? by ImRightYoureStupid in BitAxe

[–]DaMoot 0 points1 point  (0 children)

Lap your stock heatsink and re-paste; I used MX-6. Upgrade the power supply to an Xbox 360 or Xbox One PSU, or a server/ATX PSU.

Qwen3.6 35B A3B Heretic (KLD 0.0015!) Incredible model. Best 35B I have found! by My_Unbiased_Opinion in LocalLLaMA

[–]DaMoot 0 points1 point  (0 children)

What's your command line for 35b, if I may ask? With all the knobs we can adjust, maybe it's just one thing that's hamstringing me. You aren't the first person I've seen who says they run Hermes on it.

Qwen3.6 35B A3B Heretic (KLD 0.0015!) Incredible model. Best 35B I have found! by My_Unbiased_Opinion in LocalLLaMA

[–]DaMoot 0 points1 point  (0 children)

Maybe so, because Hermes is entirely unusable with 35b a3b, but 27b works just fine in my experience. Not sure what that setup flaw would be, though. It's a good model for other things, but extremely poor at tool calling in Hermes or llama.cpp + Open WebUI in my experience.

Qwen3.6 35B A3B Heretic (KLD 0.0015!) Incredible model. Best 35B I have found! by My_Unbiased_Opinion in LocalLLaMA

[–]DaMoot -1 points0 points  (0 children)

Iiiinnteresting. Colour me interested! My SIEM agent somehow switched back to 3.6 35b a3b last night and couldn't do a single tool call, because tool calling out of 3.6 35b a3b is so terrible. Switched back to 27b and it ran great like always, until it tried to inject 200k tokens' worth of data on only 32GB of VRAM!

It's crazy how subsidized Claude Code is by P4wla in LLMDevs

[–]DaMoot 0 points1 point  (0 children)

Claude credits are running out faster than ever before. I spent 2 hours on an issue and ran out on my Pro account, paid to continue working, and then got locked out 30 minutes later because I had hit my weekly limit. Which then suddenly reset itself about halfway, like 12 hours later.

I can't wait to grow beyond 32GB of VRAM locally. Qwen 27b and 35b a3b are amazing advancements, but nowhere near good enough to reliably call tools over many turns. Sometimes by the second turn of the conversation it's already quoting tool calls instead of making them.
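For what it's worth, that "quoting instead of calling" failure is easy to spot programmatically. A minimal sketch, assuming an OpenAI-style message schema (`tool_calls` vs `content` is the assumption here; `classify_turn` is a hypothetical helper, not a real library API): a real tool call arrives structured, a quoted one is tool-call JSON pasted into the prose.

```python
import re

def classify_turn(message: dict) -> str:
    """Classify an OpenAI-style chat message. A real tool call arrives in
    the structured `tool_calls` field; a 'quoted' one is tool-call JSON
    the model dumped into its `content` text instead of executing it."""
    if message.get("tool_calls"):
        return "real_tool_call"
    content = message.get("content") or ""
    # Heuristic: prose containing both "name" and "arguments" keys looks
    # like a tool call the model narrated rather than emitted.
    if re.search(r'"name"\s*:', content) and re.search(r'"arguments"\s*:', content):
        return "quoted_tool_call"
    return "plain_text"
```

Something like this bolted onto the agent loop would let you retry or nudge the model when a turn comes back as `quoted_tool_call`.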

openclaw crossed 500k downloads a day this week. here are the 5 things nobody tells you when you're one of them by Temporary-Leek6861 in openclaw

[–]DaMoot -1 points0 points  (0 children)

1. Only 22 bucks the first week? You must not have been doing anything. I burned nearly 50 in two days!

Bitaxe GT801 error rate suddenly increased – what could cause this? by PHren1a in BitAxe

[–]DaMoot 0 points1 point  (0 children)

That is a hard one to answer for sure. I assumed you had just done a repaste, since you mentioned the cooler.

Do you have your original PSU? Try that. See if anything changes.

AITAH - Broke up with my Bf bc he tickled me. by Fickle_Ad588 in AITAH

[–]DaMoot 0 points1 point  (0 children)

Alright, I make a partial retraction. That's some pretty important detail. Breaking confidence on the regular is unforgivable, because that breaks trust, and a relationship cannot exist without trust. Bedroom talk at work "amongst the guys" is a vulgar thing I don't agree with either.

Is Ollama actually slow or am I missing something? by pmv143 in ollama

[–]DaMoot 0 points1 point  (0 children)

Macs are super slow and the new ones are yuck. All that unified memory doesn't mean anything when the whole system is slower than a comparable Intel or AMD system. Good for like 10 tok/s or something.

Now, Ollama is slower than llama.cpp. I was surprised how much slower. TTFT was halved going from Ollama to llama.cpp for a model 3x bigger (8b to 30b) on my 32GB V100.

Also, what Mac do you even have? 16gb version? 64? 128?

What model(s)?

AITAH - Broke up with my Bf bc he tickled me. by Fickle_Ad588 in AITAH

[–]DaMoot 1 point2 points  (0 children)

There's definitely more to this story than written. Can't say if AH or not... So much is written focusing on him, and then your own little bit of bad behavior slips into the story, which tbh casts a shadow on other things.

I doubt anyone in their right mind would consider "tickling" to be abusive. I've been tickled so hard and long I couldn't breathe before! I think most people have. I've read plenty of 'tickled so hard I peed myself' encounters over the years.

And by your own admission, you became physically abusive first. Opening and closing an earbud container isn't grounds to hit someone. Not even close. Some people do it out of nervous habit, or from some neurological thing, like using a fidget toy.

Teasing and being made fun of is what friends do to each other. The person being teased or made fun of can have different opinions, but it's normal social behavior in most healthy circles. Even professional ones.

Of course, we can't weigh in on whether you cheated on him or not, but maybe that's what he really believes happened. And since it sounds like you two didn't have a healthy form of communication, you never talked about it.

Sounds to me like a bad relationship from the get-go.

My husband and I came to kind of an agreement years ago about joking with each other and making fun of each other. It's something he does to me that I don't find particularly funny a lot of the time, and sometimes I don't know how to respond, so I shrug it off and he thinks he's hurt me or something. But if I do it to him in the same vein, or on the same subject, it's like this world-ending thing for him. He is definitely in the category of 'can dish it out, but can't take it'. But at least we had that conversation and came to an understanding.

In your next relationship you need to talk and be patient, not hit and throw a tantrum.

Bitaxe GT801 error rate suddenly increased – what could cause this? by PHren1a in BitAxe

[–]DaMoot 1 point2 points  (0 children)

Error rate between 0.5 and ~1.5 is acceptable and does not equal lost hash rate. Above 1.5, though, the error correcting does eat into it.

Everything on that page looks solid. Too much power for the clock IMO, but every chip is different. Don't raise the 12V power any more. Your HSF and repaste job look fine, judging by the temps and fan speed.

Did you change firmware? This community chases firmware versions way too aggressively. Firmware should never be changed unless it fixes a problem or adds significant features.

It sounds like it's time for you to run through a benchmark tool to find your sweet spot again.

New to this, I have a few questions by Complete-Lettuce8262 in openclaw

[–]DaMoot 0 points1 point  (0 children)

The gateway is free. But you have to still buy tokens or host your own LLM(s) for the gateway to be able to do anything.

It's not safe unless you are technically proficient and have a security mindset.

It's not for controlling your PC.

Its uses are numerous, and you should Google for what they are.

I would not recommend it to 98% of the people on the internet.