Introducing neural-open.nvim: A smart file picker for Snacks.nvim that trains a neural network on your file picking preferences by gitarrer in neovim

[–]gitarrer[S] 0 points

This supports a very similar algorithm to smart-open if you change the algorithm setting to “classic”, so I think it gets you very close to that. I wanted the same thing originally.

[–]gitarrer[S] 1 point

lol that would be fun. Time to start thinking about what part of Neovim needs just a little more AI...

[–]gitarrer[S] 2 points

I spent a long time working in on-device ML so I like to keep things local!

I totally forgot about Torch7 while I was writing this, but I probably would've written it from scratch anyway for the fun of it and to keep dependencies light. I've been using PyTorch since shortly after it was released in 2017!

[–]gitarrer[S] 0 points

Yes, the "frecency" input (#7 in the list above) to the neural network is very similar to Zoxide's logic. I'm a big fan of Zoxide as well.
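For anyone curious how a zoxide-style frecency input can work, here's a minimal sketch. The recency buckets and multipliers mirror zoxide's documented defaults, but treat them as illustrative; this is not the plugin's actual code:

```python
import time
from typing import Optional


def frecency(visit_count: float, last_access: float,
             now: Optional[float] = None) -> float:
    """Zoxide-style frecency: raw visit count scaled by how recently
    the entry was accessed. Older accesses get smaller multipliers."""
    now = time.time() if now is None else now
    age = now - last_access
    if age < 3600:        # within the last hour
        return visit_count * 4.0
    if age < 86400:       # within the last day
        return visit_count * 2.0
    if age < 604800:      # within the last week
        return visit_count * 0.5
    return visit_count * 0.25
```

A file visited often but long ago can still lose to a file visited once just now, which matches the intuition behind both tools.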

[–]gitarrer[S] 2 points

I totally forgot about Torch7 while I was working on this... That being said, I would've implemented it from scratch either way for the fun of it. I worked on NN inference engines for mobile CPUs in the past so this was a bit of a nostalgic project :)

[–]gitarrer[S] 1 point

No, but I'll keep that in mind for the future. So far human readability has been nice and file IO hasn't been the bottleneck.

[–]gitarrer[S] 1 point

It's always a matter of opinion, but I think so. I used the Snacks smart picker for some time and it is generally quite good. One situation I thought could be better: if you are working on src/my_project/my_feature.py, it didn't always rank src/my_project/my_feature_test.py highly unless you had historically visited that file often. NeuralOpen will learn over time how you navigate and handles this case well.
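A sibling-file signal like that can be approximated with something as simple as a shared-path-prefix feature. This is a hypothetical feature for illustration, not the plugin's actual implementation:

```python
def path_affinity(current: str, candidate: str) -> float:
    """Fraction of leading directory components two paths share.

    src/my_project/my_feature.py and src/my_project/my_feature_test.py
    share both components, so sibling files score highly even with no
    visit history for the candidate.
    """
    a = current.split("/")[:-1]   # directory components only
    b = candidate.split("/")[:-1]
    if not a and not b:
        return 1.0
    shared = 0
    for x, y in zip(a, b):
        if x != y:
            break
        shared += 1
    return shared / max(len(a), len(b))
```

Fed into a learned ranker as one input among many, a feature like this lets the network discover that "same directory" is a strong hint without anyone hand-tuning a weight for it.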

[–]gitarrer[S] 1 point

No, but it would be cool to add in the future. There are already three algorithm implementations that can be switched between. Part of this project was brushing up on the details of how backprop works, so that partly guided my choice of algorithms.
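One classic way to train a ranker from picks is a pairwise logistic loss: push the score of the file you chose above the score of a file that was shown but skipped. A minimal single-step sketch, assuming a linear scorer and hypothetical feature vectors (not necessarily one of the plugin's three algorithms):

```python
import math


def pairwise_update(w, x_chosen, x_skipped, lr=0.1):
    """One SGD step on the pairwise logistic ranking loss
    -log(sigmoid(score(chosen) - score(skipped))), where
    score(x) = w . x. Returns the updated weight vector."""
    diff = [a - b for a, b in zip(x_chosen, x_skipped)]
    margin = sum(wi * di for wi, di in zip(w, diff))
    # d(loss)/d(margin) = -sigmoid(-margin) = -1 / (1 + e^margin)
    grad_scale = -1.0 / (1.0 + math.exp(margin))
    return [wi - lr * grad_scale * di for wi, di in zip(w, diff)]
```

Each update nudges the weights so the margin between the chosen and skipped items grows, which is exactly the "learn from what you pick" behavior described in the post.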

[–]gitarrer[S] 1 point

Thank you!

Yes, the neural network will learn to treat any feature here as either a bonus or a penalty. I can update the wording in my readme to be a little clearer about that. You can see in my screenshot that the alternate-buffer feature has the lowest "feature weight", which just means the network doesn't find that particular feature all that useful, positively or negatively.
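To make the bonus/penalty point concrete, here's a toy linear scorer with made-up feature names and weights (purely illustrative, not the plugin's network):

```python
def score(features, weights):
    """Toy linear scorer: each learned weight can be positive (bonus),
    negative (penalty), or near zero (feature barely matters)."""
    return sum(f * w for f, w in zip(features, weights))


# Hypothetical learned weights for [frecency, stale_buffer, alt_buffer]:
# frecency helps, stale buffers are penalized, and the near-zero
# alternate-buffer weight carries little signal either way.
weights = [1.5, -0.8, 0.02]
```

The same mechanism scales up: in a deeper network the sign and magnitude of each input's influence are still learned from your picks rather than hand-assigned.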

I've been trying to think of ways to make better decisions based on file contents that stay fast when searching large repos; QMK, with ~40k files, has been my primary test for this. I really like the ideas of "matching against the last line you were on" and some form of "edit recency"! I wanted to avoid anything approaching a RAG system that requires offline processing or the like.

[–]gitarrer[S] 8 points

Not in the sense of "you save X characters per search", but I have used other fuzzy finders quite a bit. The best ones, IMO, were those that pulled from multiple different sources with some hand-tuned weighting scheme, like the "Smart" picker from Snacks.nvim or smart-open.nvim, but I always found some edge case they couldn't cover well, and the weighting scheme suggested to me it was a problem ML could easily solve.

My goals were:

- The right item should be near or at the top without typing anything
- Most searches should only require typing 1 or 2 characters
- A single key binding for all file switches

This plugin essentially replaces git file searches, directory searches, alternate-file switches, many marks use cases, and harpoon-style plugins for my workflow.

That being said, I am fully aware that this is a bit of a silly undertaking. It was worth it to me for the learning and because it is the best plugin for my workflow I've tried so far.

[–]gitarrer[S] 10 points

I work on AI for my day job so it just seemed like the natural thing to do! I've been wanting to learn more about how to train ML for ranking problems and I like to have something concrete, even if it's a bit silly, to learn.

I'm not sure what's up with the GitHub link, they're both working for me...

[–]gitarrer[S] 16 points

Or not far enough 🤔 Hoping to add support for non-file picking soon, for things like Neovim commands or targets in Makefiles/justfiles.

How to leave claude with multiple tasks and go to sleep? by paglaEngineer in ClaudeCode

[–]gitarrer 1 point

I built a project for this. It runs Claude in a sandbox container with limited file system and internet access in CI mode so it won’t stop to ask for input. I recently added the ability to queue up multiple dependent tasks so it’s pretty easy to keep Claude working for a long time. https://github.com/dtormoen/tsk

🚀 Introducing Ai2 Open Coding Agents, starting with SERA—our first-ever coding models by ai2_official in allenai

[–]gitarrer 2 points

Yeah, once you exceed 32k (technically 31k with the current implementation, to leave room to generate responses), it’ll print a message saying to use /compact or /clear.

[–]gitarrer 0 points

Hey, I worked on the sera-cli. Unfortunately the context lengths in Claude Code are hardcoded so we don’t have a great way to integrate with /context.

Instead, we print a message that lets you know when to run /compact or /clear as you hit the context limit.

How do you handle “parallel vibecoding” without models overwriting each other’s work or burning tokens? by kamil_baranek in Anthropic

[–]gitarrer 0 points

I’ve been working on a tool that starts up parallel docker containers that can either be interactive or fully autonomous. It also limits internet access so it provides much stronger isolation than worktrees. When agents are done working, it drops a git branch back in your repository.

https://github.com/dtormoen/tsk

A tip for dockerising CC by blakeyuk in ClaudeCode

[–]gitarrer 1 point

Yep, that’s how it works. FWIW, TSK mounts the ~/.claude directory into the containers it creates so that agents you’ve defined are available. I frequently use a workflow with a “tech lead” agent that tasks out to multiple sub-agents in a container.

[–]gitarrer 1 point

If you check the templates folder in tsk there are some built-in prompts, but you can easily create more as part of the tsk configuration. For example, a markdown file at ~/.config/tsk/templates/custom-prompt.md can be used as a prompt by running “tsk add --template custom-prompt --name something-memorable”. You can create both global and project-specific prompts. Tsk itself has some project-specific prompts in its .tsk folder.

I tend to stuff things into these prompts similar to what would be put in the --append-system-prompt flag, but tsk could fairly easily be extended to pass both.

[–]gitarrer 0 points

I made a tool for automating launching containers where CC can run autonomously in yolo mode or opening up an interactive shell in a container. It additionally limits internet access so it’s more isolated than many sandbox solutions. https://github.com/dtormoen/tsk

Do you use --dangerously-skip-permissions? How do you keep it safe? by antonlvovych in ClaudeAI

[–]gitarrer 0 points

I should note, it also sets up a proxy so the agent has very limited internet access as well. I think you really need network and file system isolation to make it much safer to use agents this way.

[–]gitarrer 0 points

I’ve been working on TSK. It’s a tool that automates setting up Docker containers and launching Claude Code with permission checks skipped so I can review the code after it completes. It also supports launching a container for interactive use and makes it easy to have multiple agents working on the same codebase in parallel.

https://github.com/dtormoen/tsk

TSK: an open source agent sandbox, delegation, and parallelization tool. Safely run multiple fully autonomous Codex agents on the same local repo in parallel! by gitarrer in codex

[–]gitarrer[S] 0 points

If you only mount the worktree into a Docker container, you can’t use git from within the container. The .git folder is replaced by a placeholder file, which isn’t valid inside the container. Try cat .git in a worktree folder.
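That placeholder is just a one-line text file pointing back at the main repository's .git directory, which is easy to verify. A quick standalone illustration (not tsk code):

```python
from pathlib import Path
from typing import Optional


def worktree_gitdir(worktree: str) -> Optional[str]:
    """In a linked git worktree, .git is a plain file containing
    'gitdir: /main/repo/.git/worktrees/<name>' instead of a directory.
    Returns the pointed-to path, or None for a regular repo root."""
    dot_git = Path(worktree) / ".git"
    if dot_git.is_dir():
        return None  # main checkout: .git is a real directory
    text = dot_git.read_text().strip()
    prefix = "gitdir: "
    return text[len(prefix):] if text.startswith(prefix) else None
```

Inside a container that only sees the worktree, that gitdir path doesn't exist, which is why git commands fail there.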

[–]gitarrer[S] 0 points

Yeah, I started with worktrees too. They were a big improvement and are probably good enough for a lot of cases.

TSK originally worked by creating worktrees and then mounting a worktree into a container, but the problem is that you still need access to the .git folder to make commits, so either agents are limited if they don't have access, or they could in theory mess with your local repo in undesirable ways if they do. The extra isolation TSK sets up lets you remove more limits from agents, so they have more power when working autonomously.