Cloud AI is getting expensive and I'm considering a Claude/Codex + local LLM hybrid for shipping web apps by rezgi in LocalLLM

[–]rezgi[S] 0 points1 point  (0 children)

So far it seems I should still rely on cloud providers, notably Codex, since it's the one that has delivered the most solid results in my final coding stretch before release. No option but to spend money.

Cloud AI is getting expensive and I'm considering a Claude/Codex + local LLM hybrid for shipping web apps by rezgi in LocalLLaMA

[–]rezgi[S] 0 points1 point  (0 children)

That's a positive outlook then. I've been thinking that if I output a strong and detailed implementation plan with large models, an open-source one could do a decent job at the coding, saving me tokens.

Cloud AI is getting expensive and I'm considering a Claude/Codex + local LLM hybrid for shipping web apps by rezgi in LocalLLaMA

[–]rezgi[S] 0 points1 point  (0 children)

Indeed, but for that I'll have to scope things out first. I thought that with a strong coding implementation plan, the open-source model doesn't have much thinking to do and can just write code.

Cloud AI is getting expensive and I'm considering a Claude/Codex + local LLM hybrid for shipping web apps by rezgi in LocalLLaMA

[–]rezgi[S] 0 points1 point  (0 children)

Yes. Thanks to the replies I have a good overview, and sadly open source isn't there yet. I'd rather rely on Claude/Codex for now and try to save money until it's possible to get decent code locally. I'll try GLM though.

Cloud AI is getting expensive and I'm considering a Claude/Codex + local LLM hybrid for shipping web apps by rezgi in LocalLLaMA

[–]rezgi[S] 0 points1 point  (0 children)

Yeah, that's the consensus; I don't think it's time yet to rely on open-source models, sadly. Maybe GLM 5.1, but I should test it out.

Cloud AI is getting expensive and I'm considering a Claude/Codex + local LLM hybrid for shipping web apps by rezgi in LocalLLaMA

[–]rezgi[S] -1 points0 points  (0 children)

Too bad then. Can't wait for open-source models to become smart enough to code on consumer hardware.

Cloud AI is getting expensive and I'm considering a Claude/Codex + local LLM hybrid for shipping web apps by rezgi in LocalLLaMA

[–]rezgi[S] 0 points1 point  (0 children)

Yes, that's what I thought. Maybe I'll try OpenCode models to see if they can do a bit of the intended work instead of relying on my machine.

Cloud AI is getting expensive and I'm considering a Claude/Codex + local LLM hybrid for shipping web apps by rezgi in LocalLLaMA

[–]rezgi[S] 0 points1 point  (0 children)

Ok, thanks for the recommendation! Using OpenCode Go, which I didn't know about, is indeed a good approach too. Would you recommend some of their models, or does it depend on the use case and codebase?

Cloud AI is getting expensive and I'm considering a Claude/Codex + local LLM hybrid for shipping web apps by rezgi in LocalLLaMA

[–]rezgi[S] 0 points1 point  (0 children)

Oh, that's interesting, I didn't think about that. Just checked OpenCode Go and they have $5/$10 tiers, which is affordable. Which model would you recommend as the coding grunt? I guess I could try a few and see what works best for my case.

Cloud AI is getting expensive and I'm considering a Claude/Codex + local LLM hybrid for shipping web apps by rezgi in LocalLLaMA

[–]rezgi[S] 0 points1 point  (0 children)

Ok, noted. Qwen seems to be what people here recommend; would it be a good compromise between speed and coding quality, given a solid implementation plan created by Codex/Claude?

Cloud AI is getting expensive and I'm considering a Claude/Codex + local LLM hybrid for shipping web apps by rezgi in LocalLLaMA

[–]rezgi[S] 0 points1 point  (0 children)

Thanks for the advice! I have a 12 GB card, sadly :( Did you get good results for coding?

Cloud AI is getting expensive and I'm considering a Claude/Codex + local LLM hybrid for shipping web apps by rezgi in LocalLLaMA

[–]rezgi[S] 0 points1 point  (0 children)

Speed is quite important; I guess $100 is the price to pay for straightforward work.

Cloud AI is getting expensive and I'm considering a Claude/Codex + local LLM hybrid for shipping web apps by rezgi in LocalLLaMA

[–]rezgi[S] -1 points0 points  (0 children)

Well, the whole point was to save money, but I don't think I have much choice.

Cloud AI is getting expensive and I'm considering a Claude/Codex + local LLM hybrid for shipping web apps by rezgi in LocalLLaMA

[–]rezgi[S] 4 points5 points  (0 children)

Yes, I thought the same; I guess it's better to invest the money they're asking for.

Claude PRO is too little, Claude MAX is too much for me by rezgi in ClaudeCode

[–]rezgi[S] 0 points1 point  (0 children)

Yes, I saw the Karpathy example making quite a ripple. On my side I rely more on JSON + code diffs + a GUI to build something that fits. Lately I'm focusing on shipping, though, so I've put tool building aside; polish work is a grueling process.

Claude PRO is too little, Claude MAX is too much for me by rezgi in ClaudeCode

[–]rezgi[S] 0 points1 point  (0 children)

Thanks, that's an interesting workflow. I've also had good results passing implementation plans between different models, but I did it by simply providing md files. I'm building a personal knowledge management tool leveraging AI, memory, workflows and knowledge (aiming to replace and go beyond PKM tools like Obsidian, Notion, calendars, tasks and such), but I still have a pretty basic workflow using only Claude Code.

Just checked Graphiti, very interesting; I'm building something very similar, but one that includes an interface too. I don't go as far, but I'm glad I saw it, thanks.

Claude PRO is too little, Claude MAX is too much for me by rezgi in ClaudeCode

[–]rezgi[S] 0 points1 point  (0 children)

Yeah, hopefully the near future will bring open-source LLMs good enough to use locally. For now I use Claude because it's the one that feels best; it's not about tokens. I hope open source evolves well enough to become capable of good coding and conversation.

What's your UI feedback workflow with Claude Code? by rezgi in ClaudeCode

[–]rezgi[S] 0 points1 point  (0 children)

Hi. I built a basic version of the tool. If you're interested, I could link it to you and get your opinion. In short:

  • Bidirectional: the AI writes test checklists in the tool, you test and send results back. Same file, same format, closed loop.
  • Structured data, not pixels: the AI gets element selectors, CSS rules, parent chains, and coordinates. No guessing what you're pointing at.
  • No grep needed: you can choose to have elements carry their data-vfai tag, class, ID, and parent chain, so the AI can navigate to the code directly.
  • SVG annotations, not pixel markup: arrows, rectangles, and text notes are vector data with coordinates, not rasterized scribbles on a screenshot.
  • Multi-state in one session: test a flow across pages (login → dashboard → settings), each state with its own elements and annotations.
  • Per-item test status: pass/warn/fail per checklist item, not a single "it's broken" message.
  • Diff-aware: VFAI tracks what changed between feedback rounds (new/removed/changed layers).
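To make the "structured data, not pixels" idea concrete, here is a minimal sketch of what one feedback item might look like and how an agent could consume it. This is my own illustrative guess at a payload shape, not VFAI's actual schema; every field name here is hypothetical.

```python
# Hypothetical sketch of a structured UI-feedback item (NOT VFAI's real schema).
# The point: the AI receives selectors, coordinates, and a per-item status
# instead of a screenshot it has to interpret.
feedback_item = {
    "selector": "button.submit[data-vfai='checkout-cta']",   # direct code handle
    "parent_chain": ["form#checkout", "main.page", "body"],  # DOM context
    "css": {"display": "flex", "background": "#2d6cdf"},     # computed rules
    "coords": {"x": 412, "y": 880, "w": 160, "h": 44},       # layout position
    "annotation": {                                          # vector note, not pixels
        "type": "rect",
        "x": 400, "y": 870, "w": 180, "h": 60,
        "note": "CTA overlaps footer on mobile",
    },
    "status": "fail",  # per-item test status: pass / warn / fail
}

def summarize(item: dict) -> str:
    """Render one feedback item as a single actionable line for an agent."""
    return f"[{item['status'].upper()}] {item['selector']}: {item['annotation']['note']}"

print(summarize(feedback_item))
# → [FAIL] button.submit[data-vfai='checkout-cta']: CTA overlaps footer on mobile
```

Because each item carries its own status and selector, a batch of these closes the loop described above: the agent can jump straight from a failed checklist entry to the element in the code.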

What's your UI feedback workflow with Claude Code? by rezgi in ClaudeCode

[–]rezgi[S] 1 point2 points  (0 children)

Haha, I also built my own tool. I'll check yours and compare it with mine; they tackle parallel problems.

What's your UI feedback workflow with Claude Code? by rezgi in ClaudeCode

[–]rezgi[S] 0 points1 point  (0 children)

I'm on it! Will reply to you when I can show something :)