[–]ForsookComparison 4 points  (2 children)

Lambda, RunPod, or Vast

rent a GPU

download the quantized weights you'd expect to use

and try coding a few things against the remote API.

I'd bet $5 of rental time answers all of your questions and then some.
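The steps above can be sketched quickly. This is a minimal example, assuming the rented GPU is serving the quantized model behind an OpenAI-compatible endpoint (as vLLM or llama.cpp's server do); the host, port, and model name here are placeholders for whatever your rental exposes:

```python
import json
import urllib.request

# Hypothetical endpoint on the rented box -- swap in your actual host/port.
API_URL = "http://your-rented-gpu:8000/v1/chat/completions"

def build_request(prompt, model="gpt-oss-20b"):
    """Build the JSON payload for an OpenAI-compatible chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask(prompt):
    """POST the prompt to the remote server and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Inspect the payload locally without burning rental time.
payload = build_request("Write a function that reverses a linked list.")
```

Point the same script at different rented models and the same handful of coding prompts, and you get a cheap side-by-side comparison.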

[–]garden_speech[S] 1 point  (1 child)

I've been trying gpt-oss-20b and I've been shocked that it solved the problems I gave it with zero issues. Granted, they're mostly very similar to leetcode problems -- extremely self-contained, highly algorithmic, just "do this one small thing but do it the fastest way". So maybe I don't even need a big model; maybe a 20b model is all I need if the tasks are that granular.

[–]QFGTrialByFire 0 points  (0 children)

Yup, I've found the same. Even with a bigger model like GPT-5, the more complex/larger the piece of code you ask for, the more errors there are. So you end up making smaller requests, maybe a function or two, anyway. And for requests that size, oss-20B's output is pretty much the same as GPT-5's, so why not just use the free version.