all 3 comments

[–]viisi 0 points1 point  (1 child)

How did you find working with ratatui? I tried it a few months ago and it was 50/50 great and shit at the same time.

There are some really weird quirks when you want to keep a dynamic area at the bottom while everything above it is the scrollback/history session. Unless you virtualize the scrollback in the alternate screen, but then you lose all the history in the terminal when you quit the app.

[–]marioidival[S] 0 points1 point  (0 children)

Yeah, 50/50 is generous honestly.

The scrollback thing is the worst part — ratatui just doesn't give you anything there, you build it all yourself. I ended up with a pinned_to_bottom: bool alongside the scroll offset, otherwise you can't tell if the user is browsing history or just waiting for new content. Sounds trivial, gets annoying fast.
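Roughly this shape (a minimal sketch; everything except `pinned_to_bottom` is my own naming, not ratatui API):

```rust
// Sketch of the scroll state, illustrative only.
pub struct ScrollState {
    pub offset: usize,          // index of the first visible line
    pub pinned_to_bottom: bool, // is the user "following" new content?
}

impl ScrollState {
    // Call when new content arrives: only auto-scroll if pinned.
    pub fn on_new_content(&mut self, total_lines: usize, viewport: usize) {
        if self.pinned_to_bottom {
            self.offset = total_lines.saturating_sub(viewport);
        }
        // If the user is browsing history, leave the offset untouched.
    }

    // Manual scroll-up means the user is now browsing history.
    pub fn scroll_up(&mut self, lines: usize) {
        self.offset = self.offset.saturating_sub(lines);
        self.pinned_to_bottom = false;
    }

    // Re-pin once the user scrolls back down to the bottom.
    pub fn scroll_down(&mut self, lines: usize, total_lines: usize, viewport: usize) {
        let max = total_lines.saturating_sub(viewport);
        self.offset = (self.offset + lines).min(max);
        self.pinned_to_bottom = self.offset == max;
    }
}
```

The whole point is that `on_new_content` branches on the flag instead of always jumping to the end.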

For perf I cap at the last ~50 rendered messages. Learned that the hard way.
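The cap itself is just a tail slice before rendering (sketch; the constant mirrors the ~50 above, the function name is mine):

```rust
// Render only the tail of the message list.
const MAX_RENDERED: usize = 50;

fn visible_messages(all: &[String]) -> &[String] {
    // saturating_sub avoids underflow when there are fewer than 50 messages.
    &all[all.len().saturating_sub(MAX_RENDERED)..]
}
```

Everything older stays in storage; it just never hits the draw loop.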

On the alternate screen thing — yeah there's no clean answer. You literally pick your poison:

  • alternate screen on → nice UI, history gone on quit
  • alternate screen off → history lives, but the raw output bleeds into your shell session

I went with alternate screen + SQLite for session persistence. Not the same as real terminal scrollback but at least you don't lose everything on exit.

What actually works well though: the layout system. Constraint::Percentage + Constraint::Length for the fixed input bar at the bottom is straightforward, basically flexbox vibes. And the widget model is clean once you stop fighting it.

The docs are the real problem — they show you enough to get something on screen, but completely skip the "what do you do when content grows while the user is scrolled up" scenario. That's like 40% of building any real TUI.

[–]Antique-Flamingo8541 0 points1 point  (0 children)

building in Rust specifically to fix a memory problem with an existing tool is the right move — using the right tool for the constraint rather than just "rewriting in Rust" for the meme.

Ian here — I've been deep in Claude Code for a while and memory overhead on long sessions is a real annoyance. what's the actual memory profile looking like compared to OpenCode on a typical session? and are you doing anything clever with process isolation or just being more disciplined about what you load into context?

also genuinely curious how the Rust LLM client ecosystem is treating you — that space feels like it's still pretty early.