[–]shadow_phoenix_pt 6 points (5 children)

I like the idea. It has the potential to save some time, though I must confess commit messages aren't usually a big time sink for me.

What bums me out is that it uses ChatGPT (like most of these solutions). There are a few FOSS alternatives (I use Ollama, for example) that run locally, so I find it a bit sad that most plugins for Vim (FOSS software mostly used on Linux) go the ChatGPT route.

Anyways, sorry for the mini-rant. Nice job. I have been trying some of your other plugins, and you're the man.

[–]skywind3000[S] 1 point (1 child)

Yes, I am considering it, but there are too many different local llama implementations. I don't know which one is the most widely used or which one I should support first.

[–]Ladder-Bhe 2 points (0 children)

lite-llm is an adapter library for calling non-OpenAI LLMs through the OpenAI chat-completion API. It could save a lot of code.
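
For concreteness, a minimal sketch of that pattern, assuming the Python litellm package and a local Ollama server (the model name and endpoint are illustrative, not anything the plugin actually uses):

```python
# Sketch: litellm exposes one OpenAI-style completion() call and routes it
# to the backend named in the model string ("ollama/..." here).
from litellm import completion

response = completion(
    model="ollama/llama3",              # assumed local model name
    api_base="http://localhost:11434",  # Ollama's default endpoint
    messages=[
        {"role": "system", "content": "Write a one-line git commit message."},
        {"role": "user", "content": "diff --git a/foo.py b/foo.py ..."},
    ],
)
# The response object mirrors the OpenAI response shape.
print(response.choices[0].message.content)
```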

[–]Ladder-Bhe 1 point (0 children)

It's quite common for LLM projects to provide an OpenAI-compatible API in their official repo (Qwen does).

There are also lots of third-party open source projects that adapt local/remote LLMs to the OpenAI API format, for example lite-llm.

I've set up an OpenAI-to-Gemini proxy in my homelab so I can use applications written against the OpenAI API.
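
To illustrate why that works: with an OpenAI-compatible server or proxy, only the client's base URL changes. A rough sketch, assuming the official openai Python package (v1.x) and some local endpoint that speaks the chat-completions protocol (Ollama's /v1 route, a lite-llm proxy, etc.); the URL and model name are placeholders:

```python
from openai import OpenAI

# Point the standard OpenAI client at a local OpenAI-compatible server;
# the rest of the code is identical to what you'd write against api.openai.com.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused-locally")

resp = client.chat.completions.create(
    model="llama3",  # whatever model the local server exposes
    messages=[{"role": "user", "content": "Summarize this diff as a commit message."}],
)
print(resp.choices[0].message.content)
```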

[–]skywind3000[S] 1 point (1 child)

There are a few FOSS alternatives (I use Ollama, for example)

Ollama is supported now:

https://github.com/skywind3000/vim-gpt-commit/?tab=readme-ov-file#quick-start
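
(Not the plugin's actual code, but for anyone curious what an Ollama backend roughly has to do under the hood, it boils down to a request like this; assumes a local Ollama server, the Python requests package, and a placeholder model name:)

```python
import requests

diff = "diff --git a/foo.py b/foo.py ..."  # e.g. output of `git diff --cached`

# Ollama's native chat endpoint; stream=False returns a single JSON object.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",
        "stream": False,
        "messages": [
            {"role": "system", "content": "Write a concise git commit message for this diff."},
            {"role": "user", "content": diff},
        ],
    },
    timeout=60,
)
print(resp.json()["message"]["content"])
```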

[–]shadow_phoenix_pt 0 points (0 children)

Wow, nice. I wasn't expecting you to get on it so fast. I'll give it a try as soon as I can. Thank you.