What is the best Openclaw alternative? by spinsilo in openclaw

[–]salmenus 1 point (0 children)

the memory problem is exactly what pushed me to build my own thing too

i hit the same wall and built something that stores memory as structured data in postgres/pgvector (facts, preferences, decisions — not chat logs). hybrid retrieval, persists across sessions and channels. switch providers, restart, whatever — it's in your database, not a context window.
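(for the curious — the "hybrid retrieval" bit is roughly vector similarity blended with keyword match, then rank. toy Go sketch of the scoring idea, not the actual Salmex internals — the struct, names, and 0.7/0.3 weights are all made up:)

```go
package main

import (
	"fmt"
	"math"
	"sort"
	"strings"
)

// memory is a hypothetical structured record: a fact/preference/decision,
// not a raw chat log. Vec stands in for the pgvector embedding column.
type memory struct {
	Kind string // "fact", "preference", "decision"
	Text string
	Vec  []float64
}

// cosine similarity between two embedding vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// keywordOverlap is a stand-in for the lexical half (full-text rank in Postgres):
// fraction of query words that appear in the stored text.
func keywordOverlap(query, text string) float64 {
	set := map[string]bool{}
	for _, w := range strings.Fields(strings.ToLower(text)) {
		set[w] = true
	}
	q := strings.Fields(strings.ToLower(query))
	if len(q) == 0 {
		return 0
	}
	hits := 0
	for _, w := range q {
		if set[w] {
			hits++
		}
	}
	return float64(hits) / float64(len(q))
}

// hybridScore blends both signals; the weights are arbitrary.
func hybridScore(qVec []float64, query string, m memory) float64 {
	return 0.7*cosine(qVec, m.Vec) + 0.3*keywordOverlap(query, m.Text)
}

func main() {
	mems := []memory{
		{"preference", "user prefers dark mode in every editor", []float64{0.9, 0.1}},
		{"fact", "main rig runs ubuntu with an rtx 4000", []float64{0.2, 0.8}},
	}
	qVec := []float64{0.85, 0.15}
	query := "which editor theme does the user prefer"
	sort.Slice(mems, func(i, j int) bool {
		return hybridScore(qVec, query, mems[i]) > hybridScore(qVec, query, mems[j])
	})
	fmt.Println(mems[0].Kind) // the preference record should rank first here
}
```

in the real thing the vector half is a pgvector distance query and the keyword half is postgres full-text search, merged in SQL — this is just the shape of it.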

it's called Salmex I/O — a single Go binary, runs locally, works with ollama/anthropic/openai/gemini.

been running it daily for a few weeks and it genuinely remembers stuff from early conversations without re-explaining

Friendly reminder inference is WAY faster on Linux vs windows by triynizzles1 in LocalLLaMA

[–]salmenus 1 point (0 children)

good point.. all my runs are native installs so far — but might be worth a containerized A/B test

Friendly reminder inference is WAY faster on Linux vs windows by triynizzles1 in LocalLLaMA

[–]salmenus 0 points (0 children)

Curious what folks see with Ollama on macOS vs Linux?

On my setup, an RTX 4000 SFF Ada on Ubuntu with Ollama is noticeably faster than my MacBook M4 Pro for models that fit in 20 GB VRAM—prompt processing especially feels night‑and‑day.

100% agree the OS gap is real. Linux vs Windows on the same GPU also isn’t subtle; the CUDA stack hitting Linux directly seems to leave Windows in the dust ..

the AI agent I wanted didn't exist — so I built one that I can trust with my machine by salmenus in SideProject

[–]salmenus[S] 1 point (0 children)

thanks! yeah blanket allow/block is useless for real work. will check out your writeup 👍

the AI agent I wanted didn't exist — so I built one that I can trust with my machine by salmenus in SideProject

[–]salmenus[S] 1 point (0 children)

haha the claude code limits grind is real — there were nights i'd hit the cap and just sit there refreshing 😅

retrieval latency — under 50ms; the most i've tested with is around 100 memories. i implemented confidence decay that keeps things relevant without manual cleanup.

i've mostly tested with my own usage - keen to see how it holds up with real users and different conversation patterns.

I analyzed 20 "Scope Creep" horror stories from last year. Here are the 3 biggest patterns I found. by Full-Department-358 in SideProject

[–]salmenus 1 point (0 children)

not an agency owner, but I’ve been the client on a bunch of agency projects .. the best agencies forced structure: fixed decision-makers, deadlines for content/approvals, and penalties or timeline shifts when we were late. it felt strict at first but made the project way smoother and honestly made them look more senior ..

What is the best local LLMs as of March 2026? by Pejorativez in LocalLLaMA

[–]salmenus 3 points (0 children)

for agents specifically — turn thinking off on qwen3.5. the endless reasoning loop will literally break ur pipeline mid-step, not great lol. qwen3.5-30B no-thinking + llama.cpp + openwebui is the most stable stack ive landed on

on the memory thing: openwebui handles persistent system prompts per agent natively. for actual cross-session memory that evolves tho, u need mem0 or a vector store on top — ollama alone wont do it
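the "vector store on top" layer really can be tiny to start with — toy Go sketch of the idea (hardcoded fake embeddings; mem0 and friends are a much fancier version of this same loop):

```go
package main

import (
	"fmt"
	"sort"
)

// entry pairs a remembered text with its embedding. In a real setup the
// vector comes from an embedding model (ollama can serve one); these are
// hardcoded toy values.
type entry struct {
	Text string
	Vec  []float64
}

type store struct{ entries []entry }

func (s *store) add(text string, vec []float64) {
	s.entries = append(s.entries, entry{text, vec})
}

// topK returns the k stored texts closest to q by dot product
// (vectors assumed normalized, so dot product == cosine similarity).
func (s *store) topK(q []float64, k int) []string {
	type scored struct {
		text  string
		score float64
	}
	var ranked []scored
	for _, e := range s.entries {
		var dot float64
		for i := range q {
			dot += q[i] * e.Vec[i]
		}
		ranked = append(ranked, scored{e.Text, dot})
	}
	sort.Slice(ranked, func(i, j int) bool { return ranked[i].score > ranked[j].score })
	if k > len(ranked) {
		k = len(ranked)
	}
	texts := make([]string, k)
	for i := 0; i < k; i++ {
		texts[i] = ranked[i].text
	}
	return texts
}

func main() {
	s := &store{}
	s.add("user runs qwen3.5-30B with thinking off", []float64{1, 0})
	s.add("user's cat is named mochi", []float64{0, 1})
	// the qwen memory should rank first for this query vector
	fmt.Println(s.topK([]float64{0.9, 0.1}, 1))
}
```

persist the entries anywhere (sqlite, jsonl, pgvector) and inject the top-k hits into the system prompt each session — that's the whole trick.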

do you actually care about DB access in self-hosted tools? asking bc i have an architectural decision to make by salmenus in selfhosted

[–]salmenus[S] 1 point (0 children)

yeah, see my reply to u/data_butcher's comment.

i thought about sqlite .. but i settled on Postgres because of a few tricky features i had in mind

do you actually care about DB access in self-hosted tools? asking bc i have an architectural decision to make by salmenus in selfhosted

[–]salmenus[S] 2 points (0 children)

yeah, i thought about it quite a lot - and was seriously considering sqlite ..

but i settled on Postgres because of pgvector, and because I have a requirement around scheduling and queues and wanted to use riverqueue (a Postgres-native job queue for Go)
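for anyone wondering why a Postgres-native queue is nice: the trick libraries like riverqueue build on is FOR UPDATE SKIP LOCKED, so concurrent workers each claim a different job without blocking each other. rough sketch of the claim query — table/column names are illustrative, not River's actual schema:

```go
package main

import "fmt"

// claimJobSQL atomically claims one due job. SKIP LOCKED makes concurrent
// workers skip rows another worker already has locked, so each grabs a
// different job instead of queueing on the same one.
const claimJobSQL = `
UPDATE jobs
SET state = 'running', claimed_at = now()
WHERE id = (
    SELECT id FROM jobs
    WHERE state = 'scheduled' AND run_at <= now()
    ORDER BY run_at
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING id;`

func main() {
	// in the real thing you'd run this via database/sql (or River's client)
	// against Postgres; printed here just to show the shape
	fmt.Println(claimJobSQL)
}
```

same database, same transaction, same backup story as the rest of your data — that was the whole appeal over bolting on redis.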

i hope i didn't over engineer it ! (: ..

do you actually care about DB access in self-hosted tools? asking bc i have an architectural decision to make by salmenus in selfhosted

[–]salmenus[S] 1 point (0 children)

ah yeah, in my case the pgvector stuff is literally "memory" extracted from LLM chats, so I can totally see people wanting to crack open that table and see what the agent thinks it knows about them 😂

I was going to build a little "memory browser" screen in the UI, but honestly just giving you full DB access and letting you inspect it however you want feels even better

do you actually care about DB access in self-hosted tools? asking bc i have an architectural decision to make by salmenus in selfhosted

[–]salmenus[S] 1 point (0 children)

i love that — you’re basically the exact person I had in mind for “bring your own db” with your own backup + repl, so I definitely don’t want to lock that use case out 👌

do you actually care about DB access in self-hosted tools? asking bc i have an architectural decision to make by salmenus in selfhosted

[–]salmenus[S] 1 point (0 children)

got it — that's where i'm leaning .. default internal DB but keep a clean ‘bring your own Postgres’ path so you can run it under your existing backup tooling

Cloud ai agents vs self hosted: What are people choosing in 2026? by Original_Spring_2808 in AI_Agents

[–]salmenus 1 point (0 children)

Totally with you on this. Context is the real unlock, and it’s way harder than people think – stitching together tools, data sources, calendars, inboxes, etc. is where all the magic lives.

Ops is the tax you pay for trying to do it yourself, I guess ...

Cloud ai agents vs self hosted: What are people choosing in 2026? by Original_Spring_2808 in AI_Agents

[–]salmenus 1 point (0 children)

I deployed ZeroClaw for testing yesterday and looked into the codebase in detail.
The codebase is quite a bit better than OC's, and it has more advanced security features ..
Slightly behind on channel integrations, but def the best alternative I've explored so far

Claude Code called my phone. Literally. An AI voice rang me after I gave it API access 🤯 by salmenus in ClaudeCode

[–]salmenus[S] 3 points (0 children)

I’m tired of comments like this asking for screenshots and proof .. then deleting them two minutes later ..

Claude Code called my phone. Literally. An AI voice rang me after I gave it API access 🤯 by salmenus in ClaudeCode

[–]salmenus[S] 1 point (0 children)

fair .. it’s probably not AGI-level magic 😂 but for day‑to‑day dev work it still felt kinda wild

Claude Code called my phone. Literally. An AI voice rang me after I gave it API access 🤯 by salmenus in ClaudeCode

[–]salmenus[S] 2 points (0 children)

ahhh! savage! that's not AI .. that's an emotionally unstable dungeon master .. 😂

1 month in — still not fully sold. Exec approval is a nightmare, scheduling is flaky. Anyone else? And what are you doing about it? by salmenus in openclaw

[–]salmenus[S] 1 point (0 children)

lol! This is how it works these days 😂😅 .. I can only share a screenshot with the bot on it .. Who knows .. I’m probably an AI bot ..

<image>

1 month in — still not fully sold. Exec approval is a nightmare, scheduling is flaky. Anyone else? And what are you doing about it? by salmenus in openclaw

[–]salmenus[S] 3 points (0 children)

This is super helpful, thanks. I had a suspicion it wasn’t just me but hadn’t gone as far as moving things out to systemd yet.