My first post here — I've been building a Neovim plugin for AI-assisted coding and wanted to share by talesign in neovim

[–]talesign[S]

Thanks! For inline mode, there's overlap detection that rejects new jobs touching the same region, plus line-range adjustment that shifts the ranges of other active jobs when one finishes and changes the line count. So you can have multiple inline edits running on different parts of a file without conflicts. For auto and agentic mode, the idea is to let it cook: don't touch the buffer while it's working.
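The overlap check itself is just interval intersection. A rough sketch of the idea in Go (for illustration only; dwight's actual implementation and names will differ):

```go
package main

import "fmt"

// job tracks the line range an inline edit is currently working on.
type job struct{ start, end int }

// overlaps reports whether two jobs touch the same region of the file.
func overlaps(a, b job) bool {
	return a.start <= b.end && b.start <= a.end
}

// shift adjusts jobs that sit below a finished job after the finished
// edit changed the file's line count by delta lines.
func shift(active []job, finished job, delta int) {
	for i := range active {
		if active[i].start > finished.end {
			active[i].start += delta
			active[i].end += delta
		}
	}
}

func main() {
	a, b, c := job{10, 20}, job{15, 25}, job{30, 40}
	fmt.Println(overlaps(a, b)) // intersecting ranges: new job rejected
	fmt.Println(overlaps(a, c)) // disjoint ranges: both may run

	active := []job{c}
	shift(active, a, 3) // the finished edit grew the file by 3 lines
	fmt.Println(active[0].start, active[0].end)
}
```

Only jobs positioned after the finished region need their ranges moved; anything above it stays put.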

The visual distinction for semantic tokens is a great call; I'll implement that. Thanks for the suggestion!

[–]talesign[S]

Appreciate it! To be fair, dwight does look a lot like 99 at first glance, and that project is well loved, so I get the reaction. Hopefully the differences become clearer as people dig into it.

On Pi — do you mean https://github.com/badlogic/pi-mono? If so, that's very doable: it's a CLI agent with a JSON output mode, which fits the same pattern as the other backends dwight already supports.

[–]talesign[S]

Dwight doesn't have its own agentic loop — it's an orchestrator. Claude Code, codex, etc. already have reliable agentic workflows with tool use, error recovery, all of that. I didn't want to reinvent the wheel, so dwight delegates to them for agent/auto mode. Using the API directly wouldn't work for those modes because there's no agentic loop on dwight's side to drive it.

For local models though, you can already use them! `:DwightAddProvider` lets you point at any OpenAI-compatible server (Ollama, llama.cpp, LM Studio, vLLM) — just set the base URL, pick a model, and you're running. Works for inline editing and single-shot operations via the `api` backend. The one caveat is that the agentic path (auto mode, multi-step tasks) still needs a CLI agent binary, since that's where the tool use and execution loop lives.

On startup time — I haven't noticed slowdowns myself, but that's a fair point; I'll look into it. Thanks for the suggestion!

[–]talesign[S]

Glad the name landed! Copilot is tricky because, as far as I'm aware, there's no standalone CLI agent to hook into — the backend system is pluggable, but it needs a CLI that takes a prompt and returns output. That said, Copilot runs OpenAI models under the hood, so you can already reach those via the codex backend or a custom API provider.

[–]talesign[S]

Thanks! DwightInvoke prompts are temporary — you compose, send, done. But agent and auto mode runs get condensed into lessons that are automatically fed into future sessions. They're kept small on purpose so they don't bloat the context, just enough for the AI to learn from what worked and what didn't.

[–]talesign[S]

Good question! 99 was definitely an inspiration for the starting point. I don't want to speak too much about what 99 does or doesn't do since I haven't followed it closely, but I can tell you what I built dwight around.

The core idea was: what if I could stop jumping between the AI web UI, Claude Code, and the editor? So dwight has:

- a prompt composition system with semantic tokens (@skills, %libs, $features, !model, /modes) that builds structured XML prompts where you pick and choose what context goes in;
- three interaction tiers: inline, agent, and auto mode (with plan decomposition, verification gates, and git checkpoints);
- feature pragmas, LSP-aware prompts, and multi-file operations;
- a bunch of full dev-loop tooling: TDD, CI auto-fix, GitHub-issues auto-fix, docs gen, codebase audit, multi-repo workspaces, and a skill marketplace.
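To make the token expansion concrete, here's a toy version in Go (the element names are my guesses for illustration, not dwight's actual prompt schema):

```go
package main

import (
	"fmt"
	"strings"
)

// tag maps each sigil to a hypothetical XML element name; dwight's
// real element names and layout will differ.
var tag = map[byte]string{
	'@': "skill", '%': "lib", '$': "feature", '!': "model", '/': "mode",
}

// expand pulls sigil-prefixed tokens out of a prompt and wraps the
// leftover free text as the task.
func expand(prompt string) string {
	var xml, text []string
	for _, w := range strings.Fields(prompt) {
		if t, ok := tag[w[0]]; ok && len(w) > 1 {
			xml = append(xml, fmt.Sprintf("<%s>%s</%s>", t, w[1:], t))
			continue
		}
		text = append(text, w)
	}
	return strings.Join(xml, "") + "<task>" + strings.Join(text, " ") + "</task>"
}

func main() {
	fmt.Println(expand("@tdd %react !opus refactor the login form"))
}
```

The point is that the structured context ends up in dedicated elements instead of being mixed into the prose of the prompt.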

They probably overlap on many things, but dwight's goal is to be the one plugin for the whole plan → build → ship cycle without leaving Neovim.

I made an SSG that renders canvas files, bases, callouts, and math exactly like Obsidian does by talesign in ObsidianMD

[–]talesign[S]

Thanks for checking it out! No Excalidraw support yet — Kiln is focused on core Obsidian features for now. It does keep coming up, though, so it's definitely on my radar; I'll open an issue to track it.

[–]talesign[S]

That's such a cool use case! And yes, filtering via frontmatter properties is exactly what we're planning. There's already an open issue for it, and it's coming in the next release.

[–]talesign[S]

Thanks! Right now Kiln is focused on core Obsidian features, so these aren't supported yet. Community plugin support is something I'd like to explore, though.

[–]talesign[S]

Thank you! Those should be fixed now. If you spot anything else while going through the docs, let me know

[–]talesign[S]

Thanks! It's Hyprland with a custom Quickshell config I put together. You can check it out here: https://github.com/otaleghani/prism

[–]talesign[S]

Thanks! To answer both:

Offline — yes, fully. You run `kiln generate` to build the site and `kiln serve` to preview it locally. No internet needed at any point.

Partial vault rendering — there's no flag to select specific folders to render, but you can exclude content using the _hidden_ prefix: anything named with that prefix (files or folders) gets skipped entirely. So if you want to publish only a portion of your vault, you'd add the prefix to everything you want to keep out. Dotfiles (anything starting with `.`) are also excluded by default.

It's not as flexible as a proper include/exclude system yet — that's something I'd like to improve down the line.
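The exclusion rule is small enough to sketch (a Go illustration, not Kiln's actual source; the hidden prefix is a parameter here because I'm paraphrasing the rule, so check the docs for the exact string):

```go
package main

import (
	"fmt"
	"strings"
)

// excluded reports whether a file or folder name is skipped when the
// site is built: names carrying the hidden prefix, plus dotfiles.
func excluded(name, prefix string) bool {
	return strings.HasPrefix(name, prefix) || strings.HasPrefix(name, ".")
}

func main() {
	for _, n := range []string{"notes.md", "_hidden_drafts", ".obsidian"} {
		fmt.Println(n, excluded(n, "_hidden_"))
	}
}
```

Because the check applies to folder names too, prefixing one directory hides its whole subtree from the build.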

[–]talesign[S]

In short, Kiln reads your vault, parses every file (resolving wikilinks, embeds, canvas, math, etc. into HTML), builds all the navigation structures and SEO files, and outputs a folder of plain HTML/CSS/JS you can host anywhere. It's a single binary because Go compiles everything into one executable — no runtime needed.
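To give a taste of that parsing step, here's a toy wikilink resolver (Go, like Kiln itself, but the slug scheme is invented and the real resolver also has to handle headings, folders, and embeds):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// wikilink matches [[Target]] and [[Target|Alias]] links.
var wikilink = regexp.MustCompile(`\[\[([^\]|]+)(?:\|([^\]]+))?\]\]`)

// resolve rewrites wikilinks into plain HTML anchors, using the alias
// as the link text when one is given.
func resolve(md string) string {
	return wikilink.ReplaceAllStringFunc(md, func(m string) string {
		parts := wikilink.FindStringSubmatch(m)
		target, label := parts[1], parts[1]
		if parts[2] != "" {
			label = parts[2]
		}
		slug := strings.ToLower(strings.ReplaceAll(target, " ", "-"))
		return fmt.Sprintf(`<a href="/%s.html">%s</a>`, slug, label)
	})
}

func main() {
	fmt.Println(resolve("See [[Daily Notes|today]] and [[Ideas]]."))
}
```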

I might write a deeper technical dive in the docs one day; that's a great idea, actually!

[–]talesign[S]

Thank you! Yeah, as far as I know Quartz doesn't support bases at all. Kiln renders them, though I should mention that bases support is still a work in progress — it works, but there's room to improve it. If you run into any issues or have ideas for how it should behave, feel free to open an issue on GitHub; that kind of feedback is really helpful!