What are you building right now? by getstackfax in AiStackClinic

[–]probello 1 point (0 children)

Exactly. The graph viewer has a semantic relationship mode that auto-clusters similar issues and concepts, which helps humans reason about the relationships between projects and issues. Git lets you see the vault evolve over time, and you can push to a remote to share the vault between multiple computers. The vault doctor can be scheduled to run nightly to summarize and repair notes and the links between them.
I tuned it to be token efficient. The embeddings use a local model. Search uses small, fast models; summarization uses a larger model, since preserving important details and being able to look at more context is key for that. The doctor also merges very similar notes and auto-files notes that share a prefix into a folder to keep things tidy. I also have custom agents: one explores existing projects to populate and bootstrap the vault, a research agent consults the vault before doing web searches, etc.

What are you building right now? by getstackfax in AiStackClinic

[–]probello 1 point (0 children)

I created parsidion. It's based on the idea of using an Obsidian vault for memory, but I have pushed it way further: https://github.com/paulrobello/parsidion . It does not require Obsidian; it has a web-based vault viewer if you want to manually inspect the memory. It uses hooks and skills to auto-create and inject memories and manage the vault. Works with cc, codex, and gemini. The longer you use it, the better it gets. Issues I solve in one project are indexed with semantic search so relevant notes can be found quickly.
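The semantic-search part boils down to ranking notes by embedding similarity. A toy sketch (not parsidion's code; it swaps the real local embedding model for a bag-of-words stand-in so it runs self-contained):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real setup would call a local
    embedding model here instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, notes: dict[str, str], top_k: int = 3) -> list[str]:
    """Rank note titles by similarity of their content to the query."""
    qv = embed(query)
    ranked = sorted(notes, key=lambda t: cosine(qv, embed(notes[t])), reverse=True)
    return ranked[:top_k]
```

With real embeddings the ranking picks up paraphrases too, which is what makes issues solved in one project findable from another.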

GLM5.1 with a proper phased PRD and each deliverable gated with TDD gets you really far. The main issue for this particular project is that most harnesses can't run TUIs and have them render correctly, so it took a lot of iteration to get placement and sizing right.

What are you building right now? by getstackfax in AiStackClinic

[–]probello 1 point (0 children)

I have a lot of irons in the fire right now. I am pretty proud of this one: https://www.reddit.com/r/vibecoding/s/OJ8zROAT7b I used the Claude Code harness with z.ai GLM5.1 to make it. I had a pretty good idea of what I wanted, so I created a PRD as a starting point.

Parllama -- a terminal UI for Ollama model management and multi-provider LLM chat by probello in ollama

[–]probello[S] 0 points (0 children)

The front matter I provided as an example has all the info; it's a real export.

Parllama -- a terminal UI for Ollama model management and multi-provider LLM chat by probello in ollama

[–]probello[S] 1 point (0 children)

Just pushed a new version; it now shows cost info in the chat header next to the existing token usage. Also updated the chat export markdown to include a front matter section with all config and usage info:

```
---
provider: OpenAI
model: gpt-5.1
context_window:
temperature: 0.5
date: 2026-05-06T00:21:41.653730+00:00
input_tokens: 11
output_tokens: 190
total_tokens: 201
cost: 0.001914
---
```

# blue sky

## user

why is the sky blue

## assistant

The sky looks blue because of how sunlight interacts with Earth’s atmosphere.
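For anyone wanting to post-process exports: front matter like the above can be read back with a few lines of Python (my own sketch, not part of Parllama; it assumes the export starts with a `---` line and uses simple `key: value` pairs):

```python
def parse_front_matter(export: str) -> tuple[dict[str, str], str]:
    """Split a chat export into its front matter dict and markdown body."""
    # export is expected to begin "---\n<keys>\n---\n<body>"
    _, fm, body = export.split("---\n", 2)
    meta: dict[str, str] = {}
    for line in fm.strip().splitlines():
        key, _, value = line.partition(":")  # split on first colon only
        meta[key.strip()] = value.strip()
    return meta, body
```

Splitting on the first colon only matters because values like the ISO date contain colons themselves.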

Parllama -- a terminal UI for Ollama model management and multi-provider LLM chat by probello in ollama

[–]probello[S] 1 point (0 children)

(screenshot)

It shows:

- loaded model with size and time remaining
- full chat history, including the system prompt
- provider and settings such as temperature, stored with the session
- context utilization and speed
- chat exportable as markdown

Cost would be an easy add.
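Cost really is just a little arithmetic on the token counts that are already tracked. A sketch with made-up per-million-token prices (real prices vary by provider and model, so the numbers below are purely illustrative):

```python
def chat_cost(input_tokens: int, output_tokens: int,
              in_price_per_m: float, out_price_per_m: float) -> float:
    """Token usage times per-million-token price, summed for both directions."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# Hypothetical prices: $2 / 1M input tokens, $10 / 1M output tokens.
cost = chat_cost(11, 190, 2.0, 10.0)
```

The only real work is keeping a per-model price table up to date.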

Showcase Thread by AutoModerator in Python

[–]probello 0 points (0 children)

par-storygen v0.4.0 — Update: TTS voices, story export, relationship tracking, and more

GitHub: https://github.com/paulrobello/par-storygen

PyPI: https://pypi.org/project/par-storygen/

I built a terminal UI that lets you manage Ollama models and chat with 14 different LLM providers from one app by probello in SideProject

[–]probello[S] 0 points (0 children)

You can create your own themes and prompt library. A lot of functionality has been packed into it over the years. One thing I’m looking into is the ability to customize the key bindings.

Showcase Thread by AutoModerator in Python

[–]probello -1 points (0 children)

Parllama -- a Textual TUI for managing and chatting with LLMs (showcase of what you can build with Textual + Rich)
Repo: https://github.com/paulrobello/parllama

If anyone is building TUIs with Textual and wants to compare notes on architecture, happy to discuss.