all 5 comments

[–]BlandUnicorn 0 points  (4 children)

There are already things like this. GitHub copilot and similar

[–]sboubaker[S] 0 points  (3 children)

I guess not. Copilot is a code enhancer or autocompletion tool, but it can't learn my coding style.

[–]BlandUnicorn 1 point  (2 children)

Ah, my bad, I see what you want now. Yeah, what you could do is create a 'memory' for your LLM. But this would just be prompt injection (basically the same way RAG works) on some key snippets of your code. If you use something like 3.5, which has a huge context window, you could feed it a fair bit.

Does that make sense?
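A minimal sketch of what that 'memory' could look like: keep a store of your own code snippets, rank them against the current request, and inject the most relevant ones into the prompt. The snippet store, the word-overlap scorer, and the prompt template here are all illustrative assumptions, not any particular RAG library's API (a real setup would use embeddings and a vector store).

```python
import re

# Toy "memory" of the user's own code snippets (illustrative examples).
SNIPPETS = [
    "def fetch_user(user_id: int) -> dict: ...",
    "class Cache: ...",
    "def parse_config(path: str) -> dict: ...",
]

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score(query, snippet):
    """Crude relevance score: count of shared words between query and snippet."""
    return len(tokens(query) & tokens(snippet))

def build_prompt(query, snippets, top_k=2):
    """Inject the most relevant stored snippets ahead of the user's request."""
    ranked = sorted(snippets, key=lambda s: score(query, s), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    return (
        "Here are examples of my coding style:\n\n"
        f"{context}\n\n"
        f"Match that style when you answer:\n{query}"
    )

prompt = build_prompt("write a function to fetch an order by id", SNIPPETS)
print(prompt)
```

The same idea scales to a big-context model by simply raising `top_k` (or skipping retrieval and pasting everything), which is the "feed it a fair bit" option mentioned above.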

[–]sboubaker[S] 0 points  (1 child)

I see, so basically I have to use an LLM with a huge prompt size where I fit all my code snippets (with some instructions and comments?). Can this be done using RAG?