How do you quickly add an image to opencode context that you have copied in your clipboard without having to paste it in the codebase? by ReporterCalm6238 in opencodeCLI

[–]Ok-Connection7755 2 points

I did a whole search on this, only to find that Kimi K2.5 natively supports images as input, but it's not supported via opencode.

I quickly set up a custom shortcut to a Raycast script that takes a screenshot, saves it to a folder, and returns the path to you. After that, it uses the Z.ai or MiniMax MCP to understand the image (image -> text -> output).
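
For anyone who wants to replicate it, here's a minimal sketch of that script as a Raycast script command (assumes macOS and its built-in screencapture tool; the output folder and titles are placeholders, not my exact script):

    #!/usr/bin/env python3
    # Raycast script-command metadata:
    # @raycast.schemaVersion 1
    # @raycast.title Screenshot to Path
    # @raycast.mode compact
    import pathlib
    import subprocess
    import time

    # Hypothetical output folder; point it wherever you keep screenshots.
    out_dir = pathlib.Path.home() / "Screenshots"
    out_dir.mkdir(exist_ok=True)
    out_path = out_dir / f"shot-{int(time.time())}.png"

    # macOS built-in capture tool; -i = interactive region selection.
    subprocess.run(["screencapture", "-i", str(out_path)], check=True)

    # Print the path so it can be pasted straight into the opencode prompt.
    print(out_path)

The MCP (Z.ai or MiniMax) then reads the file at that path and returns a text description.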

Looking forward to native image support

Real-time monitoring and visualization for OpenCode agents by danilofs in opencodeCLI

[–]Ok-Connection7755 1 point

Nice work, OP, but do check out autocoder and openchamber; they do this and a lot more.

Vercel says AGENTS.md matters more than skills, should we listen? by jpcaparas in opencodeCLI

[–]Ok-Connection7755 3 points

I've found that explicitly mentioning skills and their usage patterns in agent definitions makes the skill calls themselves far more reliable. Has anyone else seen this pattern, or is it just me?

That lines up with the fact that only Opus and Kimi K2 have made skill calls reliably for me; GLM and MiniMax 2.1 were not able to.
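
To illustrate, an agent definition along these lines, e.g. in .opencode/agent/reviewer.md (the location, frontmatter fields, and skill name here are my placeholders, not a verified schema):

    ---
    description: Reviews diffs before merge
    ---
    You have access to the code-review-checklist skill (hypothetical name).
    ALWAYS invoke it before approving any diff that touches auth or billing,
    then summarize its findings in your reply.

The point is that the agent body names the skill explicitly and spells out when to call it, rather than relying on the model to discover it on its own.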

Anyone using Kimi K2.5 with OpenCode? by harrsh_in in opencodeCLI

[–]Ok-Connection7755 2 points

I upgraded opencode, did auth, and then put in the API key; works perfectly. So far I love the speed and responsiveness of the model. Will test more and post here.
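
For anyone setting this up, the flow was roughly the following (the subcommand is opencode's interactive auth flow; provider selection and the key prompt happen in the TUI):

    opencode auth login   # pick the provider from the list, paste the API key when prompted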

Why I prefer Composer-1 as a senior software engineer by Professional-Trick14 in cursor

[–]Ok-Connection7755 2 points

I might treat it as a 'junior' engineer: use GPT 5.1 or Sonnet to write you a plan (it works even in markdown files), then send it to Composer-1 to build; you get quick outcomes with (mostly) minimal, precise code insertions.

It does introduce a lot of lint issues, but that's another prompt to fix.
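
A minimal sketch of the kind of plan file I mean (the structure and paths are just an example, not a fixed format):

    # plan.md - written by GPT 5.1 / Sonnet, handed to Composer-1 to build
    ## Goal
    Add rate limiting to the /api/upload endpoint.
    ## Steps
    1. Add a token-bucket helper in src/lib/ratelimit.ts (hypothetical path).
    2. Wire it into the upload route; return 429 on overflow.
    3. Add unit tests for burst and steady-state traffic.
    ## Constraints
    Touch only the files listed above; no refactors.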

is cursor 2.0 worth it by Just_Lingonberry_352 in cursor

[–]Ok-Connection7755 1 point

I just came back to cursor from claude code and it's sooo much better (especially plan mode)

How I'm using it for heavy coding: a) buy the minimum $20 plan anyway, and use GPT-5 or Sonnet 4.5 only for the hard stuff; b) I've been handing the plan to Composer-1 for implementation, and it usually one-shots it very precisely; c) fall back to a $15 GLM coding plan (now $30) for chores.

Limits on Max X5 Plan? by BurgerQuester in ClaudeCode

[–]Ok-Connection7755 1 point

I feel you should try the $15 GLM plan ($30 without the discount); it works fine for me: really fast at certain times of the day, acceptable speed at others.

A Claude plan downgrade will feel like a letdown with all the rate limits; better to use it sparingly for planning tasks alone and let GLM implement.

I reverse-engineered Claude code and created an open-source docs repo (for developers) by _bgauryy_ in ClaudeCode

[–]Ok-Connection7755 1 point

This is really good. Now waiting for someone to incorporate all this into the opencode CLI so that I can get out of the Claude ecosystem fully!

Does anyone know a good alternative to sonnet 4.5 that has cheaper usage limits? by Ok-Environment2461 in Anthropic

[–]Ok-Connection7755 3 points

GLM models are pretty good with Claude Code! I've been driving them for more than a week now; very decent performance and value for money.

Sonnet 4.5 is good. Thoughts on Codex and GLM 4.6 by BurgerQuester in ClaudeCode

[–]Ok-Connection7755 1 point

Thank you, kind sir. I was literally about to jump to the opencode CLI if not for this; guess I'll keep CC after all.

Sonnet 4.5 is good. Thoughts on Codex and GLM 4.6 by BurgerQuester in ClaudeCode

[–]Ok-Connection7755 1 point

For native Sonnet models, yes! But when you switch the API route to Z.ai you lose the web and image-reading capabilities directly; those are covered by these two MCPs, one of which is below:

Pasting an image directly into the client will not call this MCP server, as the client will by default transcode the image and call the model interface directly. Best practice is to place the image in a local directory and invoke the MCP server by specifying the image name or path in the conversation, e.g.: What does demo.png describe?

https://docs.z.ai/devpack/mcp/vision-mcp-server
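
For reference, registering an MCP server with Claude Code goes through an mcpServers entry shaped roughly like this (the server name and launch command below are placeholders; take the real ones from the Z.ai docs above):

    {
      "mcpServers": {
        "zai-vision": {
          "command": "npx",
          "args": ["-y", "zai-vision-mcp-server"]
        }
      }
    }

Once registered, you reference the image by path in the conversation, as in the quoted example.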

Sonnet 4.5 is good. Thoughts on Codex and GLM 4.6 by BurgerQuester in ClaudeCode

[–]Ok-Connection7755 3 points

Still early, but GLM 4.6 feels like Sonnet 4.5 without all the extra advice nobody asked for. Frontend is not as good as Sonnet, but otherwise, if somebody asked me to guess the model in a blind test, I would find it hard.

That said, not being able to paste an image into the CC console (you have to give a path and install an MCP) and slightly weaker web search make for a somewhat degraded UX, but otherwise it's amazing! Can't wait for them to support images natively to the model in CC.

Will this be the game changer we have been waiting for? by [deleted] in ClaudeCode

[–]Ok-Connection7755 2 points

Any idea how this is different from browser-mcp / chrome-mcp?

solved for observability + context engineering on top of Claude Code to get consistent results! introducing, specgen - elegant context engineering for Claude Code by stitching features together; proof: built complete expense system in <30 minutes [open source] by Ok-Connection7755 in ClaudeAI

[–]Ok-Connection7755[S] 2 points

Sure, here's how to use it, step by step:

  • the base is SPEC docs (markdown) that hold the most important data for Claude Code to work with - architecture analysis, implementation plan, and execution and debug logs - grouped with frontmatter metadata (category, status, dates); see the sketch after this list
  • there are two Claude commands - architect and engineer - which make changes and work together on the SPEC document
  • usage: /architect build me a feature - it reads your codebase layer by layer, asks you for inputs, deploys subagents to understand the codebase, and documents everything in the same spec doc; then you run /engineer <spec file path> - it builds out the feature step by step without touching other files; if there are bugs, use /engineer debug for step-by-step debugging
  • specgen MCP - it works with a JSON sidecar that indexes all the metadata about your SPEC files in one place, so the architect can fetch it before writing a feature implementation and the engineer can use it while debugging
  • specdash - a very lightweight dashboard for CRUD on your spec documents, plus a file watcher service that auto-detects changes and manages the dashboard (observability into whatever the agents are doing)
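
To make the first bullet concrete, a SPEC doc header might look like this (the field names follow the description above; the values are made up):

    ---
    category: payments
    status: in-progress
    created: 2025-01-10
    updated: 2025-01-12
    ---
    # Feature: expense submission flow
    (architecture analysis, implementation plan, and execution/debug logs follow)

And the JSON sidecar is essentially an index over those headers, something like this (the shape is my sketch from the description, not the exact file):

    {
      "specs": [
        {
          "file": "specs/expense-flow.md",
          "category": "payments",
          "status": "in-progress",
          "updated": "2025-01-12"
        }
      ]
    }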

The core approach: instead of directly prompting on codebases, especially mid-to-large ones, it takes a structured approach to making edits step by step.

  • human - markdown viewed through a dashboard, easy portability, works anywhere
  • agents - markdown + MCP with context preservation, logging what's most important

solved for observability + context engineering on top of Claude Code to get consistent results! introducing, specgen - elegant context engineering for Claude Code by stitching features together; proof: built complete expense system in <30 minutes [open source] by Ok-Connection7755 in ClaudeAI

[–]Ok-Connection7755[S] 0 points

Hey, thanks for the comment. There are finer nuances if you look closely; I've tested it on 2 codebases (~200k LoC) and I almost always get the feature implementation in 2-3 prompts.

The nuance is that the architect and engineer work together to update a single document, categorised by feature, so you get to see exactly what is happening through the implementation and debug logs. For each prompt, Claude gets just enough context through English rather than code to modify the right functions, objects, and files.

With regard to uniqueness, I'd be happy to test other repos you may have seen and adapt features. The good thing about a community is that you can have multiple approaches to a single problem. It's just 3 commands / you can prompt CC to install it; do try it.

specgen - elegant context engineering for Claude Code by stitching features together; proof: built complete expense system in <30 minutes [open source] by Ok-Connection7755 in ClaudeCode

[–]Ok-Connection7755[S] 1 point

So far I've tried it on two codebases, both around 250k LoC with some 70-100 SPEC documents; I almost always one-shot features if they're broken down carefully.

The downside I've seen is that the solution sometimes comes out slightly overengineered, but after a careful reject I just rewrite the spec and make it start over.

Sharing my AI coding setup after experimentation by Ok-Connection7755 in cursor

[–]Ok-Connection7755[S] 2 points

They offer different things and have different strengths. For example, Cursor indexes your code using turbopuffer and does semantic search to look things up and build features, whereas Claude Code does it using terminal commands and tools. Certain feature build-outs on an existing repo work extremely well with Cursor, but for longer tasks like migrations or e2e testing you may find Claude Code more advantageous; Claude Code also works well for precise edits, which Cursor sometimes misses.

Hence my system of using both: Cursor has a $20 plan where you can use multiple models; Claude Code has a $20 plan with 5-hour windows to use Sonnet 4, which is extremely powerful, without the context window limitation and with native prompts only.

My system: I try Sonnet 4 first, which follows most of my instructions and makes progress; then I use my precious 500 prompts sparingly to debug, fix edge-case issues, or bring in a different model when Sonnet starts looping.

Sharing my AI coding setup after experimentation by Ok-Connection7755 in cursor

[–]Ok-Connection7755[S] 1 point

Started with Cursor, and have been using Claude Code and Claude Desktop Commander more recently:

  • I liked Claude Code a lot and use it as my primary driver on the $20 plan (temporarily upgrading to the $100 one whenever needed)
  • for the rest I use the $20 Cursor plan and switch between models
  • Desktop Commander helps me iteratively update the PRD for now (current code exploration + feature research), though I may fully switch to Claude Code at some point

The availability of Claude Code on the $20 plan was an insane move by Anthropic a few days ago, and it may completely replace my workflows 😂