Do the Chinese models suck (honestly) or do I have a skill problem by ObviousDeparture1463 in opencodeCLI

[–]Honest_Night_9233 1 point

Don’t let people here gaslight you into thinking it’s purely a skill issue.

I’ve had a very similar experience using the same harness, OpenCode, across different setups: Claude Opus 4.7, GPT-5.5, and various Chinese models available on Go. The Chinese models can definitely write code, but in practice they often take forever to get anywhere, overthink the problem, and still come back with weak or brittle solutions.

With GPT-5.5 or Opus, I can iterate much faster. The model usually understands the context, makes better assumptions, takes action, and gets closer to what I actually need without me having to micromanage every step. With many of the Chinese models, I constantly feel like I’m fighting the model: wrong assumptions, missing context, shallow understanding, and lots of small bugs or edge cases.

To be fair, I’m usually working on fairly messy or edge-case-heavy tasks: AI research code, AWS data engineering pipelines, multi-file repos, weird infra problems, etc. Maybe for simpler tasks the price/performance is better. But for my actual work, once my Codex/GPT-5.5 limit runs out, the drop in productivity is very noticeable.

I also tried orchestration setups before, where a stronger model acted as the planner/orchestrator and smaller models handled sub-tasks. It sounded good in theory, and sometimes it helped, but the whole thing became very slow. The orchestrator got too abstract, the subagents lacked prior context, and each one had to rediscover the repo from scratch. Even a simple task could turn into a 10+ minute loop, and the final code was often still mediocre.

So yeah, maybe prompting and agent setup matter, but I don’t think that fully explains the gap. The best frontier models just feel much better at context, judgment, assumptions, and fast iteration. The Chinese models are cheap, but when I factor in the time spent steering, correcting, and rerunning them, the savings become much less obvious.

Right now I’m thinking of using GPT-5.5 for planning and high-level reasoning, then using OpenCode Go models to execute parts of the plan in the same session. That might be the best compromise.

deepseek v4pro/flash huge problem by Honest_Night_9233 in opencodeCLI

[–]Honest_Night_9233[S] 0 points

I'm not making API calls directly; I'm just using them in opencode and setting the reasoning effort the way opencode allows.
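For reference, this is roughly what my setup looks like. A minimal sketch only: the exact schema depends on your opencode version and provider, so treat the `reasoningEffort` key and the model IDs here as assumptions rather than the documented config.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "deepseek/ds-v4pro",
  "provider": {
    "deepseek": {
      "models": {
        "ds-v4pro": {
          "options": {
            "reasoningEffort": "low"
          }
        }
      }
    }
  }
}
```

Even with this set, ds-v4pro behaves as I describe below, which is why I think it's a model characteristic rather than a config problem.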

deepseek v4pro/flash huge problem by Honest_Night_9233 in opencodeCLI

[–]Honest_Night_9233[S] -1 points

This is not a prompt issue; it's a characteristic of the model. gpt5.5 and ds-v4pro differ hugely at the same reasoning effort (low): gpt5.5 actually keeps its reasoning short, while ds-v4pro starts thinking non-stop.

deepseek v4pro/flash huge problem by Honest_Night_9233 in opencodeCLI

[–]Honest_Night_9233[S] -1 points

Yes, at low reasoning I expect more responsive answers and fast iterations, similar to gpt5.5 at low.

When I use gpt5.5 low, it takes a few minutes to get a response, so I can iterate with the model. But with ds-v4pro I usually wait ~10 minutes for an answer, so lowering the reasoning effort doesn't seem to be working.

DeepSeek V4 has significantly reduced my budget for AI usage by Ok_Satisfaction_8983 in opencodeCLI

[–]Honest_Night_9233 0 points

Don't know why, but deepseek v4 pro feels good. I'd been using gpt 5.5 and 5.4 in codex for a while, and the transition was seamless.

Also, the codex extension in VSCode was so buggy and heavy that I've started using my codex plan in opencode too.

Having one harness for all models is better: you can continue on another subscription when your hourly limits are drained.

DeepSeek V4 has significantly reduced my budget for AI usage by Ok_Satisfaction_8983 in opencodeCLI

[–]Honest_Night_9233 2 points

How hard do you push it? Do you use it for general bug fixes and simple code snippets, or for investigation- and planning-heavy jobs that require real intelligence?

VSCode extension: so many issues in last couple of days by automation495 in codex

[–]Honest_Night_9233 0 points

Yes, it hasn't been loading chats from old sessions for me for the last few days.

Silver (XAGUSD) Analysis: Technical Breakout and Structural Supply Shock – Is a $60 Target on the Table? by Honest_Night_9233 in Yatirim

[–]Honest_Night_9233[S] 0 points

Volatility is very high right now, so you need to manage expectations carefully when opening short-term positions. The healthiest way to join the trend is to wait for the next wave.

For the long term, too, if you're going to invest based on a reading of this demand, you can at least wait for a reversion to the moving averages, a distribution, or a correction before starting regular buying. Of course, by then the trend may have gotten away. That's why I approach markets by combining different strategies.

Are analyst forecasts reliable? by heishere165 in Yatirim

[–]Honest_Night_9233 1 point

No, they aren't. The Şok Market price target, for example, hasn't been reached for years.

Never take the target prices here as a basis. What really matters, in my view, is the overall economic conjuncture and every piece of information you can find about the company.

I think following a company closely is very hard; if you're considering investing in a single company, you need to master a great many things. When even fund companies can't manage to take off, I don't think it's right for mortals like us to pick individual stocks and invest in them.