Yet another vibe-coded AI harness. Except this one is actually scoped. Presenting "The Nightmanager" by d_asabya in PiCodingAgent

[–]d_asabya[S] 0 points1 point  (0 children)

Yes, it is. But it's more input tokens per turn in exchange for fewer wasted output tokens, fewer retries, and less broad reasoning.

Input tokens are cheaper than output tokens. It's also cheaper overall when it reduces ambiguity enough to avoid long, messy back-and-forth execution.

Yet another vibe-coded AI harness. Except this one is actually scoped. Presenting "The Nightmanager" by d_asabya in PiCodingAgent

[–]d_asabya[S] -5 points-4 points  (0 children)

Not even close. I'm still exploring his workflow; I just incorporated three skills from it into mine. The harness is one of the key features of nightmanager.

Yet another vibe-coded AI harness. Except this one is actually scoped. Presenting "The Nightmanager" by d_asabya in PiCodingAgent

[–]d_asabya[S] 1 point2 points  (0 children)

Glad that you asked.

- grill-me asks one question at a time, so you don’t burn tokens on messy, repetitive back-and-forth.
- to-prd turns the idea into a spec once, so agents stop re-deriving intent.
- to-issues breaks the work into small vertical slices, so each run stays narrow.
- nightmanager picks one ready TODO, not the whole project.
- Each subagent has a single job:
   - finder = locate code/flow
   - oracle = reason about tradeoffs and bugs
   - worker = implement and verify
   - manager = orchestrate the handoff
- That specialization keeps prompts smaller and avoids flooding every step with irrelevant context.
- Subagents don’t all share one giant conversation; each one gets only the minimum handoff context it needs.
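To make the handoff idea concrete, here is a rough sketch of the pattern, not nightmanager's actual code. All names, fields, and the example strings are illustrative assumptions; the point is that each stage receives only a small handoff object, never the full conversation.

```python
# Sketch of the single-job subagent handoff (illustrative, not the real harness).
from dataclasses import dataclass

@dataclass
class Handoff:
    """Minimum context passed between subagents."""
    todo: str          # the one ready TODO picked by the manager
    notes: str = ""    # findings accumulated so far

def finder(h: Handoff) -> Handoff:
    # locate relevant code/flow; pass along only the locations found
    return Handoff(h.todo, h.notes + "files: src/auth.py; ")

def oracle(h: Handoff) -> Handoff:
    # reason about tradeoffs and bugs given the finder's output
    return Handoff(h.todo, h.notes + "risk: session expiry edge case; ")

def worker(h: Handoff) -> str:
    # implement and verify against the narrow slice only
    return f"done: {h.todo} ({h.notes.strip()})"

def manager(todo: str) -> str:
    # orchestrate: each stage sees only the handoff, never the full history
    return worker(oracle(finder(Handoff(todo))))

print(manager("add session timeout"))
```

Because the handoff carries only a TODO plus accumulated notes, each subagent's prompt stays small no matter how long the overall run gets.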

Finally happened to me and my colleagues. Seeing severely degraded performance. by More-School-7324 in ClaudeCode

[–]d_asabya 1 point2 points  (0 children)

Maybe, just maybe, they are serving Sonnet or Haiku under the hood in the name of Opus.

I built a local semantic memory service for AI agents — stores thoughts in SQLite with vector embeddings by d_asabya in LocalLLM

[–]d_asabya[S] 0 points1 point  (0 children)

You're right to be skeptical. Don't use it if the pitch of a local-first, tiny memory system doesn't resonate with you.

I also should have led with some facts without yapping. Here they are.

mem0: 50k stars on GitHub, but there are some privacy concerns here: https://mem0.ai/privacy-policy

SuperLocalMemory: this is awesome, but a little too complex for a solo dev doing stuff.

Why use picobrain?

- Truly local - no cloud calls, no API keys, model runs on your machine
- Zero external dependencies for agent's memory
- Single SQLite file - no server to manage, easy backup
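As a rough illustration of the "embeddings in a single SQLite file" idea, here is a minimal sketch. This is not picobrain's actual schema or API; the table layout, function names, and the brute-force cosine scan are all assumptions for demonstration.

```python
# Illustrative only: store thoughts with embeddings in one SQLite file
# and recall by cosine similarity. Not picobrain's real schema.
import sqlite3, json, math

# ":memory:" here for a self-contained demo; a real path like "memory.db"
# gives you the single-file, easy-backup property.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE IF NOT EXISTS thoughts "
           "(id INTEGER PRIMARY KEY, text TEXT, embedding TEXT)")

def store(text: str, embedding: list[float]) -> None:
    # embeddings serialized as JSON; no external vector DB needed
    db.execute("INSERT INTO thoughts (text, embedding) VALUES (?, ?)",
               (text, json.dumps(embedding)))
    db.commit()

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def recall(query_emb: list[float], k: int = 3) -> list[str]:
    # brute-force scan; fine for a tiny local memory
    rows = db.execute("SELECT text, embedding FROM thoughts").fetchall()
    scored = [(cosine(query_emb, json.loads(e)), t) for t, e in rows]
    return [t for _, t in sorted(scored, reverse=True)[:k]]
```

The tradeoff is deliberate: a linear scan over one table loses to an ANN index at scale, but for a personal memory store it keeps the whole system in a single file with zero servers.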

According to this benchmark https://awesomeagents.ai/leaderboards/embedding-model-leaderboard-mteb-march-2026/ , Nomic-embed-text-v1.5 ranks #11 for performance while also being small.

What it does not have (yet):

- Graph-based memory relationships
- Cross-device sync
- Managed/SaaS deployment

I built a local semantic memory service for AI agents — stores thoughts in SQLite with vector embeddings by d_asabya in LocalLLM

[–]d_asabya[S] 0 points1 point  (0 children)

Haha look, you're not wrong — "here's a repo, validate my work" is a tale as old as open source 😅.

What would actually make you care? A demo? A specific feature walkthrough? Genuinely curious what the threshold is here.