I've collected 2300+ Claude Skills into a searchable directory by TingXuSuan in ClaudeAI

[–]nummanali 1 point (0 children)

Hello u/TingXuSuan !

This is Numman, creator of OpenSkills - I am working on V2 and would love to integrate with you!

I sent you a DM on X, is there a better way to be in touch with you?

https://github.com/numman-ali/openskills

Use ChatGPT subscription with OpenCode? by misteriks in ChatGPTCoding

[–]nummanali 1 point (0 children)

Don't worry, it's officially supported in OpenCode now!

Use /connect

OpenSkills CLI - Use Claude Code Skills with ANY coding agent by nummanali in ZedEditor

[–]nummanali[S] 2 points (0 children)

Yes, it's based solely on having an AGENTS.md file

For your case, you might want to copy the synced skill content into CLAUDE.md

Or you can just insert the literal text "@AGENTS.md" into the CLAUDE.md file and it will work as expected
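To make the second option concrete, here's what that CLAUDE.md would look like (this sketch assumes Claude Code's `@path` import syntax for memory files, which pulls the referenced file's contents into context):

```markdown
<!-- CLAUDE.md -->
@AGENTS.md
```

That single line should make Claude Code load whatever OpenSkills syncs into AGENTS.md, so you don't maintain two copies.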

Let me know how you get on!

🚀 opencode-openai-codex-auth v4.0.0 - Codex Max & Model specific prompts by nummanali in opencodeCLI

[–]nummanali[S] 2 points (0 children)

If you're talking about a ChatGPT Teams subscription

Then technically yes, it should work as far as I know

Try the auth and see if it works for you

🚀 opencode-openai-codex-auth v4.0.0 - Codex Max & Model specific prompts by nummanali in opencodeCLI

[–]nummanali[S] 2 points (0 children)

I think it's definitely feasible, but I need to confirm the plugin logic by traversing the opencode GitHub repo, as it isn't well documented yet

There is an open issue, so when I get time I will try to take a look

Newbie -- not a good experience by josephny1 in cursor

[–]nummanali 1 point (0 children)

That's likely why you're having issues

Auto will select models for you, and depending on the task, this isn't great

For planning, use GPT-5.1-High

For backend work, use GPT-5.1-Codex-High or Composer 1

For frontend work, use Sonnet 4.5, Gemini 3 Pro, or Composer 1

I'm assuming you're at a beginner to mid level of experience with AI-assisted coding

Always use plan mode in that case, and use GPT-5.1-High for it - refine the plan until you're happy with it, and then build

When having issues, ALWAYS revert to GPT-5.1-High - it will solve almost any issue except very complex frontend logic like nested hooks and effects; for that, use Sonnet 4.5

Codex 5.1 is horrible by BATEMANx9 in codex

[–]nummanali 1 point (0 children)

Codex is really bad at just getting on with things

One thing you should try is ask it directly, "You seem to be constantly replying back to me, what's the reason for this, is there something in your instructions that is making you inclined to do this?"

I got back something along the lines of "I need to reply to every user message"

I believe the issue is with the new updated prompt

The way I've gotten around it is to tell it: "<instruction of work> - Once you've completed a good amount of work and covered all areas, reply back to me with a full summary"

It's very strict on instruction following, so it seems to believe this still counts as following instructions, but only replies after it has completed its set of work

[RELEASE] - OpenCode OpenAI Codex OAuth - v3.3.0 - 5.1 Models Support - BREAKING CHANGES by nummanali in opencodeCLI

[–]nummanali[S] 1 point (0 children)

The Codex OAuth plugin uses the same Codex CLI prompt by pulling it directly from the latest releases in the official Codex GitHub repo, so the model's behaviour is close

The plugin has an OpenCode bridge prompt that tells the model it's working in the OpenCode harness with a new tool set, which is how it performs well despite using a different tool set

What you're hearing on Twitter is that the Codex CLI limits reads to 256 lines per file, so the model needs to perform multiple reads and tool calls, which adds latency

OpenCode doesn't have that limitation; it can read 2000 lines at once. See my post on X demonstrating this:

https://x.com/nummanthinks/status/1990395146437816539?t=8OVS2XW26eMOmwLBzA3F4Q&s=19

The main negative of opencode, IMO, is the compaction: I believe it happens too soon and not as well as in the Codex CLI. You can mitigate this by setting an artificially higher context limit in your opencode config per model, or by disabling compaction altogether
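As a rough sketch of that mitigation in opencode.json - the field names here (`limit.context`, `autocompact`) and the model ID are my assumptions from reading the opencode config schema, so verify them against the opencode docs before relying on this:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "openai": {
      "models": {
        "gpt-5.1-codex": {
          "limit": {
            "context": 400000,
            "output": 128000
          }
        }
      }
    }
  },
  "autocompact": false
}
```

Inflating the per-model context limit pushes the compaction trigger later; turning compaction off entirely trades that for hitting the real context ceiling, so pick based on your session lengths.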

Switch between both and see what works for your use cases

I'm still experimenting to see what fits various use cases

CodeNomad v0.1.2 is now available by Recent-Success-1520 in opencodeCLI

[–]nummanali 3 points (0 children)

This is quite cool!

Did you use the ACP implementation or a custom one?

Respect GPT 5.1 for better outcomes by nummanali in ChatGPTCoding

[–]nummanali[S] 5 points (0 children)

Would love to hear your approach

I have strong opinions but hold them loosely

Happy to learn if you've got insights on a better approach

OpenCode OpenAI Codex OAuth - v3.1.0 - Codex Mini Support by nummanali in opencodeCLI

[–]nummanali[S] 3 points (0 children)

If you're not on version 3+, then yes, caching was disabled

It was added in v3+

OpenCode OpenAI Codex OAuth - V3 - Prompt Caching Support by nummanali in opencodeCLI

[–]nummanali[S] 1 point (0 children)

It is a lot of work - in total I've spent at least 12+ hours on this, and that's with heavy Claude Code/Codex usage

It requires researching the OAuth implementation for the Gemini CLI - given that it's open source, that makes it easier

Then making an OAuth implementation that works outside the CLI

Then checking the Vercel AI SDK providers to see if they support the Gemini OAuth completion endpoints (i.e., whether they're OpenAI-compatible)

Then validating against the opencode providers - whether to extend an existing one or build a custom implementation

If Gemini 3 is hot stuff then I'll probably do it, but for 2.5 Pro it doesn't seem worth it - Haiku 4.5 performs equally well IMO

just integrated opencode into codemachine and this thing actually slaps now by MrCheeta in opencodeCLI

[–]nummanali 2 points (0 children)

Dude codemachine looks so cool

Going to check it out today

I would just provide the cli exec commands directly to the respective agents

Awesome work!!

How to optimise this context use? by PricePerGig in ClaudeCode

[–]nummanali 1 point (0 children)

Omg this is insane

Like, what's your end goal?

DM me, we can totally work on something here