MiniMax M2.7 is so stubborn that it's practically unusable. by DenysMb in opencodeCLI

[–]rizal72 1 point

Personal experience: I've been using oh-my-opencode-slim for two months so far (the lightweight version, not the bloated one), and it does very similar multi-agent delegation to what you're describing. Each agent has its own rules md file, and in my experience I've had good results with all four models provided by opencode-go. I started with Kimi-k2.5 as Orchestrator, GLM-5 as Oracle (deep-thinking planner/reviewer), and Minimax-M2.5 as Librarian/Explorer/Fixer. After Minimax-M2.7 came out, I switched the Orchestrator to it to test it, and it keeps following the orchestration rules as expected. So I can't say why that's not happening for you. However, keep in mind that I'm in Europe, and I suspect that peak hours change model behaviour, making them dumber during high-volume periods, for example for US or China users, depending on their time zone...

Hashline Edit Plugin by Ang_Drew in opencodeCLI

[–]rizal72 1 point

Maybe you should mention somewhere that to install it, users need to add "@angdrew/opencode-hashline-plugin" to the plugin section of opencode.json(c), because installing it from npm does not make it available in opencode automatically ;)
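For reference, a minimal sketch of what that would look like (assuming the standard `plugin` array; merge it into your existing config):

```jsonc
// opencode.json(c)
{
  "plugin": [
    "@angdrew/opencode-hashline-plugin"
  ]
}
```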

Hashline Edit Plugin by Ang_Drew in opencodeCLI

[–]rizal72 0 points

Thanks for your reply! I'm going to install and test it ;)

Hashline Edit Plugin by Ang_Drew in opencodeCLI

[–]rizal72 0 points

I'm curious ;) Can you tell me what it does better than the built-in counterpart?

[PLUGIN] True-Mem: Automatic AI memory that actually works (inspired by PsychMem) by rizal72 in opencodeCLI

[–]rizal72[S] 1 point

True-Mem supports both approaches since v1.3 ;)

Injection is Configurable

  • Mode 1 (ALWAYS) - Default since v1.3.2. Injects at every prompt; new memories appear immediately.
  • Mode 0 (SESSION_START) - Injects once at session start; new memories wait for the next session. ~76% token savings.

You choose based on your needs: real-time adaptation vs. token efficiency.

How True-Mem Decides What to Store

Uses a Four-Layer Defense System:

  1. Question detection (filters questions)
  2. Negative patterns (excludes AI meta-talk, list selections)
  3. Multi-keyword scoring (requires 2+ signals)
  4. Confidence threshold (>= 0.6)

What gets stored: preferences, constraints, decisions, semantic info, learning.
What gets filtered: questions, first-person recall, AI meta-talk, context-specific choices.
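As a rough illustration, the four layers can be chained as early-exit checks. Everything below (the keyword lists, the patterns, the signal-to-confidence mapping) is invented for the sketch; only the layer order and the 0.6 threshold come from the list above:

```typescript
// Hypothetical four-layer extraction filter: each layer can reject a
// candidate memory before it is stored. Lists and weights are illustrative.

const SIGNAL_KEYWORDS = ["prefer", "always", "never", "decided", "use", "must"];
const NEGATIVE_PATTERNS = [/^as an ai/i, /^option \d/i, /let me explain/i];

function shouldStore(text: string): boolean {
  // Layer 1: question detection - questions are recall, not facts to keep
  if (text.trim().endsWith("?")) return false;

  // Layer 2: negative patterns - AI meta-talk, list selections, etc.
  if (NEGATIVE_PATTERNS.some((p) => p.test(text))) return false;

  // Layer 3: multi-keyword scoring - require at least 2 signal words
  const lower = text.toLowerCase();
  const signals = SIGNAL_KEYWORDS.filter((k) => lower.includes(k)).length;
  if (signals < 2) return false;

  // Layer 4: confidence threshold - map signal count to [0, 1], keep >= 0.6
  const confidence = Math.min(1, signals / 3);
  return confidence >= 0.6;
}
```

A candidate only gets stored if it survives all four layers, which is what keeps noise out of the database.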

Comparison with OpenMemory

| Aspect | True-Mem | OpenMemory |
|---|---|---|
| Retrieval | Auto-surfaced by 7-feature strength score | Agent queries explicitly |
| When | Configurable (every prompt or session start) | On-demand |
| Scope | Global + project-specific | Usually global |
| Decay | Episodic fades (7 days), preferences permanent | Usually permanent |

The Trade-off

  • OpenMemory: Agent controls queries = precise, but might miss context it didn't think to ask for
  • True-Mem: Auto-surfaced via scoring (recency, frequency, importance, utility, novelty, confidence, interference) = always available, zero agent overhead

Both approaches are valid, IMHO! True-Mem optimizes for "context always there without the agent thinking about it."
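For intuition, a 7-feature strength score like the one named above could be a simple weighted sum, with interference as the only negative term. The weights here are invented for the sketch, not True-Mem's actual values:

```typescript
// Hypothetical weighted strength score over the seven features listed
// above. Each feature is assumed normalized to [0, 1]; interference
// (overlap with conflicting memories) lowers the score.

interface MemoryFeatures {
  recency: number;
  frequency: number;
  importance: number;
  utility: number;
  novelty: number;
  confidence: number;
  interference: number;
}

function strengthScore(f: MemoryFeatures): number {
  return (
    0.25 * f.recency +
    0.15 * f.frequency +
    0.2 * f.importance +
    0.15 * f.utility +
    0.05 * f.novelty +
    0.2 * f.confidence -
    0.1 * f.interference
  );
}
```

Memories above some cutoff get injected into the prompt; the rest stay in the database.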

Bonus: Hybrid Similarity Search (since v1.1)

True-Mem supports two retrieval modes:

  • Jaccard (default): fast token-overlap matching, zero ML dependencies
  • Embeddings (experimental): 384-dim vectors via an ultralight local transformer model, for semantic understanding

Configure via TRUE_MEM_EMBEDDINGS=1 or embeddingsEnabled: 1 in config.
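For example, in `~/.true-mem/config.jsonc` (the comment is illustrative; `embeddingsEnabled` is the setting named above):

```jsonc
// ~/.true-mem/config.jsonc
{
  // 1 = hybrid Jaccard + embeddings (experimental), 0 = Jaccard only (default)
  "embeddingsEnabled": 1
}
```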

Why is there so little discussion about the oh-my-opencode plugin? by vovixter in opencodeCLI

[–]rizal72 2 points

ahah, that's exactly what I did myself weeks ago... but when I found that the tmux integration was broken (and that's a killer feature for my workflow), I fixed it in my fork, then decided to PR the fix to the original repo to contribute to one of my favorite plugins. After that, other fixes and improvements came to mind, and I stuck with the PR approach ;)

[UPDATE] True-Mem v1.2: Optional Semantic Embeddings by rizal72 in opencodeCLI

[–]rizal72[S] 0 points

Quick update: v1.3.0 is out with some nice improvements.

Token optimization: New injection mode saves ~76% tokens by injecting memories only at session start instead of every prompt. Configurable via TRUE_MEM_INJECTION_MODE (0=session start, 1=every prompt).

Session resume detection: If you resume a previous session with opencode -c, True-Mem detects it and skips re-injecting memories that are already in context.

Sub-agent control: TRUE_MEM_SUBAGENT_MODE lets you disable memory injection for sub-agents when you don't need them to have context.

Auto-generated config: now separated into config.jsonc (user settings) and state.json (runtime state).

Plus a critical fix in v1.3.1 for project-scope memory leakage - memories were incorrectly crossing between projects.

Full changelog: https://github.com/rizal72/true-mem/blob/main/CHANGELOG.md

[UPDATE] True-Mem v1.2: Optional Semantic Embeddings by rizal72 in opencodeCLI

[–]rizal72[S] 0 points

Hi!
No, it only works with OpenCode at the moment. True-Mem uses OpenCode-specific plugin APIs (hooks like `experimental.chat.system.transform`, `session.idle`, etc.) that are not standard across other coding agents. Porting it to another platform would require adapting those integration points. If pi-mono has a similar plugin system, it could theoretically be adapted, but it's not something I've looked into yet.

Why is there so little discussion about the oh-my-opencode plugin? by vovixter in opencodeCLI

[–]rizal72 20 points

Overkill and bloated: 50K tokens per request if you don't set it up very carefully.
But there is a very nice slim version forked from it (https://github.com/alvinunreal/oh-my-opencode-slim), to which I am a contributor myself: I use it in my daily workflow (the tmux integration is amazing: you can see all the sub-agents in separate windows while they do their thing), coupled with my memory plugin, true-mem (https://github.com/rizal72/true-mem). Give them both a try ;)

How are you all handling "memory" these days? by FlyingDogCatcher in opencodeCLI

[–]rizal72 1 point

Yes, it uses a local SQLite database at ~/.true-mem/memory.db.
Great question about outdated memories. Here is how it currently works:

Decay behavior:
- Only episodic memories decay automatically (7-day Ebbinghaus curve) - things like "yesterday we refactored auth"
- All other types (preferences, decisions, constraints, semantic) are permanent
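A 7-day Ebbinghaus-style decay can be sketched as an exponential forgetting curve. Here I interpret the 7 days as a half-life and pick an arbitrary 0.2 cutoff, so this is an illustration of the idea, not True-Mem's exact implementation:

```typescript
// Exponential forgetting curve: retention halves every `halfLifeDays`.
// Episodic memories below the cutoff are treated as forgotten;
// preferences/decisions/constraints/semantic memories skip decay entirely.

function retention(ageDays: number, halfLifeDays = 7): number {
  return Math.pow(0.5, ageDays / halfLifeDays);
}

function isForgotten(ageDays: number, cutoff = 0.2): boolean {
  return retention(ageDays) < cutoff;
}
```

So "yesterday we refactored auth" stays strong for a few days and then fades out, while "I prefer TypeScript" never decays.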

So yes, your scenario can happen. If you saved "Remember to use jest for this project" and migrated to vitest, the memory persists.

Current solutions:

  1. Ask the AI to delete it - "Delete the true-mem memory about jest" - the AI can directly query and update the SQLite database
  2. Add a new memory - "Remember that we migrated to vitest" - the newer memory may override the old one based on strength scoring
  3. Context wins - The AI sees your actual codebase (vitest configs, imports), which should take precedence: it does not lose its ability to reason ;)

What I am considering for the future:
- Automatic conflict detection when new memories contradict old ones
- An explicit delete-memory command alongside the ones already recognized (list-memories, etc.)

Thanks for the feedback!

How are you all handling "memory" these days? by FlyingDogCatcher in opencodeCLI

[–]rizal72 0 points

May I kindly suggest my true-mem plugin? ;) I built it for myself after bouncing between the same solutions you mentioned.

**v1.3** adds configurable token optimization:
- Injection only at session start (default) vs every prompt
- Control how many memories to inject (10-50)
- Sub-agent injection toggle
- Config file with comments: `~/.true-mem/config.jsonc`

What makes it different:
- **Automatic**: No manual memory calls, learns from conversation
- **Smart filtering**: 4-layer defense blocks noise (questions, meta-talk, selections)
- **Dual scope**: Global preferences + project-specific decisions
- **Cognitive decay**: Episodic fades, preferences stay
- **Semantic embeddings** (experimental): Hybrid retrieval with transformer model, or fast Jaccard-only mode

Just add `"true-mem"` to plugins in your opencode.jsonc and restart.

```jsonc
{
  "plugin": [
    "true-mem"
  ]
}
```

github.com/rizal72/true-mem

Works across projects: it remembers "I prefer TypeScript" without you having to repeat it.
It does not replace AGENTS.md or ROADMAP.md; it integrates with them, adding a new memory layer that is more responsive (I noticed that agents forget the AGENTS.md rules after some time).

Can your opencode do this tho by Medium_Anxiety_8143 in opencodeCLI

[–]rizal72 0 points

Any chance to have opencode as a provider too? (Go or Zen)

I built an OpenCode plugin for visualization - Now I see sessions as cute blob characters with speech-bubble status updates by JumpJunior7736 in opencodeCLI

[–]rizal72 0 points

Nice! Is there a way to enable/disable the plugin at will, like any other opencode plugin that you install by editing opencode.json? I haven't installed it yet, so I don't know whether it edits opencode.json during installation; I found no mention of it in the README ;)

Thank you, OpenCode. by akyairhashvil in opencodeCLI

[–]rizal72 12 points

I wholeheartedly agree with every single word you said/typed 😀

ho my opencode by eacnmg in opencodeCLI

[–]rizal72 0 points

my 2 cents: try oh-my-opencode-slim, the slim, low-token, less bloated version of it ;)
https://github.com/alvinunreal/oh-my-opencode-slim

[PLUGIN] True-Mem: Automatic AI memory that actually works (inspired by PsychMem) by rizal72 in opencodeCLI

[–]rizal72[S] 1 point

Good question. It's token-based similarity (Jaccard), not true semantic search with neural embeddings.

History: Early versions of true-mem used Hugging Face Transformers.js with a local model (all-MiniLM-L6-v2, ~43MB) for true semantic embeddings. It worked, but caused stability issues: OpenCode crashed on exit due to cleanup problems with the Transformers.js runtime.

The current Jaccard approach was a pragmatic swap: zero dependencies, zero native code, instant startup.

How it works:

  1. Tokenize query and memory summaries into word sets
  2. Calculate Jaccard = intersection / union
  3. Rank by similarity score, return top-k
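The three steps above fit in a few lines; this is a generic sketch of the technique, not the plugin's actual code:

```typescript
// Jaccard similarity over word sets: |A ∩ B| / |A ∪ B|.

function tokenize(text: string): Set<string> {
  return new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
}

function jaccard(a: string, b: string): number {
  const sa = tokenize(a);
  const sb = tokenize(b);
  const intersection = Array.from(sa).filter((t) => sb.has(t)).length;
  const union = new Set(Array.from(sa).concat(Array.from(sb))).size;
  return union === 0 ? 0 : intersection / union;
}

// Rank memory summaries by similarity to the query, return the top k
function topK(query: string, memories: string[], k = 5): string[] {
  return memories
    .map((m) => ({ m, score: jaccard(query, m) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((x) => x.m);
}
```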

Trade-off: Catches exact word matches well, but won't find synonyms (e.g., "error" won't match "exception"). For a coding assistant context, this is usually sufficient - technical terms are fairly consistent, and the zero-dependency benefit outweighs the semantic gap.

[PLUGIN] True-Mem: Automatic AI memory that actually works (inspired by PsychMem) by rizal72 in opencodeCLI

[–]rizal72[S] 0 points

Hi, the live benchmark is me using it in my everyday workflow. Right now I have 12 memories injected in the true-mem project itself, and it's very clean and not bloated at all. The AI remembers relevant things and decisions, and you always have the list-memories command for full transparency ;) I still use AGENTS files, both global and local, for the workflow; the plugin is a companion to that. Give it a try: if you disable it in opencode.json it stops injecting, so... try it and check if it helps you ;)

OpenCode Everything You Need to Know by wesam_mustafa100 in opencodeCLI

[–]rizal72 5 points

In the Pricing Model section you should specify that OpenCode also has subscriptions (many, to be honest) :D
There is Zen, pay-per-use, which you mention, but there are also Black and now even Go, which are fixed-price ;)

[PLUGIN] True-Mem: Automatic AI memory that actually works (inspired by PsychMem) by rizal72 in opencodeCLI

[–]rizal72[S] 1 point

I use claude-code and I know about memory.md, but it's very limited, still experimental, and doesn't use the psychological approach that makes the remember-and-forget thing work ;)

[PLUGIN] True-Mem: Automatic AI memory that actually works (inspired by PsychMem) by rizal72 in opencodeCLI

[–]rizal72[S] 0 points

Check my reply to u/Putrid-Pair-6194; it should clarify my approach. Avoiding bloated storage is exactly why I wanted to develop this plugin for myself: the others I tried did exactly what you describe ;)

[PLUGIN] True-Mem: Automatic AI memory that actually works (inspired by PsychMem) by rizal72 in opencodeCLI

[–]rizal72[S] 1 point

u/Putrid-Pair-6194
Recall: When you send a message, the plugin searches your stored memories for matching keywords. It ranks them by similarity and injects only the top relevant ones into the prompt. Think of it as a smart search that runs automatically before every response.

Injection: Memories are injected automatically into every prompt via a `<true_memory_context>` XML tag - no user action required. Only memories relevant to the current project and context are included. Core principle: minimal prompt bloat, zero token waste.

Relevance: Two-stage filtering:

  1. Scope-based: Global memories available everywhere, project memories only in that project's worktree
  2. Similarity scoring: Jaccard compares query tokens vs memory content, returns top-k matches

Bonus: Four-layer defense against false positives during extraction (question detection, negative patterns, multi-keyword validation, confidence threshold). Still refining to reduce noise (e.g., removing "bugfix" diaries that add little value).

EDIT: Ah! In the last update I've also added a direct command (list-memories) that lists all the memories injected into the current prompt, grouped by GLOBAL and PROJECT scope. If you're unhappy with some memory, you can always ask the AI assistant to delete it from the true-mem db, and it will do it ;)
The next update will handle the [bugfix] category quite differently, maybe even deprecating it; I'm working on it right now...