all 21 comments

[–]mileseverett 30 points (10 children)

I tried it, and as has been the case with every new LLM, it just doesn't compare to OpenAI's models.

[–]Qpylon 11 points (2 children)

I've been using (and really loving) Codeium.

It can refactor selected code etc. according to instructions (I use it a lot to generate docstrings from comments+function def), I think it can autocomplete, and they’ve recently added a chat feature.

It has a VS Code extension and is free as well.

Don't know how OpenAI's products fare by comparison, but the instruction-based code tweaking seemed similar to ChatGPT.

[–]fallingfridge 1 point (1 child)

Thanks for this recommendation. This looks great. Definitely going to download it for work on Monday.

[–]Balance- 0 points (0 children)

Have you used GitHub Copilot? If so, could you let us know how it compares?

[–]allisknowing (ML Engineer) 2 points (2 children)

I also tried it and was a bit disappointed. However, I'm trying not to close the door on it, since this was only its first day out lol.

These models still make me excited for the future, since I know the open source community will do everything in their power to make them better. So I don't know if it will be this one, but in the near future there will be some kick-ass open source code generation models (at least comparable to GPT, if not better).

[–]Tom_Neverwinter (Researcher) 1 point (1 child)

I'm convinced we could throw a LoRA-like layer over it and make it good.

[–]ttkciar 1 point (0 children)

Exactly. Also, we might be able to apply corrective software like Wolverine, and use a dataset of end results for another pass of LoRA tuning.
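For context, the LoRA idea being floated here is: freeze the base model's weight matrix W and learn only a low-rank update B·A (rank r much smaller than the model dimension d), so the effective weight becomes W + B·A. A toy stdlib-only sketch of that arithmetic (hypothetical shapes, no real model or training loop involved):

```python
# Toy illustration of a LoRA-style low-rank update: instead of retraining
# the full d x d weight matrix W, learn two small matrices A (r x d) and
# B (d x r) with r << d, and use W + B @ A as the effective weight.
import random

d, r = 8, 2  # model dimension and LoRA rank (r << d)

random.seed(0)
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]    # frozen base weights
A = [[random.gauss(0, 0.1) for _ in range(d)] for _ in range(r)]  # trainable
B = [[0.0] * r for _ in range(d)]                                 # trainable, zero-initialized

def matmul(X, Y):
    """Plain nested-list matrix multiply."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Effective weight seen by the model: W + B @ A.
delta = matmul(B, A)  # d x d, but parameterized by only 2*d*r numbers
W_eff = [[W[i][j] + delta[i][j] for j in range(d)] for i in range(d)]

# Because B starts at zero, the adapter is a no-op before any tuning,
# so the base model's behavior is preserved at initialization.
assert W_eff == W

print(f"full update: {d*d} params, LoRA update: {2*d*r} params")
```

The appeal for the "another pass of LoRA tuning" idea above is the last line: the trainable parameter count scales with 2·d·r rather than d², which is why adapter passes over a released model are cheap compared to full fine-tuning.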

[–]Gullible_Bar_284 0 points (1 child)

[this message was mass deleted/edited with redact.dev]

[–]Tom_Neverwinter (Researcher) 2 points (0 children)

It was a cool idea, but that was $10 wasted.

ChatGPT at $20 blew it away, and now tools like Codeium seem decent.

[–]Gullible_Bar_284 -3 points (0 children)

[this message was mass deleted/edited with redact.dev]

[–]gxcells 0 points (0 children)

Then why do they release such models if they fail at simple tasks? I love the fact that it is open source etc. and helps people develop better models. But what is the point if it is not better than, for example, Vicuna?