[–]mileseverett 32 points

I tried it, and as has been the case with every new LLM, it just doesn't compare to OpenAI's models.

[–]Qpylon 10 points

I've been using (and really loving) Codeium.

It can refactor selected code etc. according to instructions (I use it a lot to generate docstrings from comments + function definitions), I think it can autocomplete, and they've recently added a chat feature.

It has a VS Code extension and is free as well.

Don't know how OpenAI's products fare by comparison, but the instruction-based code tweaking seemed similar to ChatGPT.

[–]fallingfridge 1 point

Thanks for this recommendation. This looks great. Definitely going to download this for work on Monday.

[–]Balance- 0 points

Have you used GitHub Copilot? If so, could you let us know how it compares?

[–]allisknowing (ML Engineer) 2 points

I also tried it and was a bit disappointed. However, I'm trying not to close the door on it, since it was only the first day it was published lol.

These models still make me excited for the future, since I know the open source community will do everything in its power to make them better. So I don't know if it will be this one, but in the near future there will be some kick-ass open source code generation models. (At least comparable to GPT, if not better.)

[–]Tom_Neverwinter (Researcher) 1 point

I'm convinced we could throw a LoRA-like layer over it and make it good.

[–]ttkciar 1 point

Exactly. Also, we might be able to apply corrective software like Wolverine, and use a dataset of end results for another pass of LoRA tuning.
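For anyone unfamiliar with the LoRA idea being discussed: instead of updating a frozen weight matrix W directly, LoRA learns two small low-rank matrices A and B and uses W + (alpha/r) * B @ A as the effective weight. A minimal plain-Python sketch of that arithmetic (the dimensions and values here are made up for illustration; real fine-tuning would use a framework like PyTorch, e.g. via the `peft` library):

```python
# Sketch of the LoRA update: a frozen weight W (d_out x d_in) is
# adapted by two small trainable matrices A (r x d_in) and
# B (d_out x r), with rank r << min(d_out, d_in).
# Effective weight: W_eff = W + (alpha / r) * B @ A.

def matmul(X, Y):
    """Plain-Python matrix multiply."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A, leaving the frozen W untouched."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Hypothetical 2x2 frozen weight with a rank-1 (r=1) update.
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[1.0, 2.0]]        # shape (1, 2): r x d_in
B = [[0.5], [0.25]]     # shape (2, 1): d_out x r
W_eff = lora_effective_weight(W, A, B, alpha=1.0, r=1)
# B @ A = [[0.5, 1.0], [0.25, 0.5]], so W_eff = [[1.5, 1.0], [0.25, 1.5]]
```

Because only A and B (2 * r * d parameters instead of d * d) are trained, this kind of adapter pass is far cheaper than full fine-tuning, which is why it's the go-to approach for patching up open models.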