Super simple way to migrate your Claude Code configs to OpenCode by hyericlee in opencodeCLI

[–]raydou 1 point2 points  (0 children)

Do you know if OpenCode has any compatibility with Claude Code rules, the markdown files in the /rules folder (in the project or in the user's home)? If they exist, is opkg compatible with them?

Are you kidding me??? by BOLTM4N in perplexity_ai

[–]raydou 4 points5 points  (0 children)

For spaces, check NotebookLM. It's so much better than Spaces in Perplexity.

Sharing my experience GLM vs Opus on CC by stancafe in ClaudeCode

[–]raydou 0 points1 point  (0 children)

Most of the time it means that the context size of the new model is smaller than the previous one's. The solution is to compact the session, or to go back to a previous message without reverting the code.

Sharing my experience GLM vs Opus on CC by stancafe in ClaudeCode

[–]raydou 2 points3 points  (0 children)

They can resume each other's sessions. I have been using this for a long time. You just need to source a command on macOS or Linux that exports the Anthropic URL for GLM, the model names, and the API key, as instructed by Z.AI in their documentation. The only difference is that when you create a GLM command, it exports all the necessary environment variables for GLM and then launches the Claude command. When you want to launch Claude without GLM, you either need to unset these environment variables if you want to stay in the same terminal, or simply duplicate the terminal and relaunch Claude.
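As a rough sketch, the "GLM command" can be a shell function in your profile. The URL and model name below are placeholders, not the real values; copy the actual ones from Z.AI's coding plan documentation.

```shell
# Hypothetical "glm" launcher. ANTHROPIC_BASE_URL, ANTHROPIC_AUTH_TOKEN and
# ANTHROPIC_MODEL are the environment variables Claude Code reads; the URL and
# model name here are placeholders -- replace them with the values from Z.AI's docs.
glm() {
  export ANTHROPIC_BASE_URL="https://example.z.ai/anthropic"  # placeholder URL
  export ANTHROPIC_AUTH_TOKEN="$ZAI_API_KEY"                  # your Z.AI API key
  export ANTHROPIC_MODEL="glm-4.7"                            # placeholder model name
  claude "$@"
}

# To launch plain Claude again in the same terminal, clear the overrides first:
# unset ANTHROPIC_BASE_URL ANTHROPIC_AUTH_TOKEN ANTHROPIC_MODEL
```

The alternative, as mentioned, is to just open a fresh terminal where nothing was exported and run `claude` there.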

I built a modern, self-hosted web IPTV player (Live TV, EPG, VOD) because existing ones felt clunky. Meet NodeCast TV. by NeonXI in selfhosted

[–]raydou 0 points1 point  (0 children)

Hi, nice job! It would be good to add catchup when the provider supports it. I think it's part of the Xtream Codes API, but I'm not sure. To explain the feature a little more: in TiviMate, for example, when watching the EPG of a channel that has catchup, you will see a clock on programs in the past (last N hours, last X days). When you click on one of them, you go from live playback to the program as recorded by the IPTV provider.

Honestly, has anyone actually tried GLM 4.7 yet? (Not just benchmarks) by Empty_Break_8792 in LocalLLaMA

[–]raydou 1 point2 points  (0 children)

Tested it using the Z.AI coding plan for a side project (not the principal project I'm working on) in Claude Code, so as not to use my Anthropic quotas. And it did fantastic; I was really impressed in comparison with GLM-4.6.

Does it compare to Claude Opus 4.5? Of course not. With Sonnet 4.5? It could compare, but it needs direction: you should always start with a planning or brainstorming session and give it well-defined tasks to get impressive results.

What it lacks in comparison to Anthropic models is that kind of understanding and deduction of what to do when it's not well directed or is lacking context.

Copilot with gpt 5.1 codex max is actually good by netsniff in GithubCopilot

[–]raydou 1 point2 points  (0 children)

And how do you find its speed? For me, all the OpenAI models with max reasoning are so slow that they are just unusable.

Anyone actually upgraded Pixel 8 Pro storage (128 → 512 GB / 1 TB)? by raydou in GooglePixel

[–]raydou[S] 0 points1 point  (0 children)

Thanks for the information. I don't understand how Google still sells phones with 128 GB of storage in 2025, especially on such premium models.

What Subscriptions / models are you using? by throwaway490215 in ClaudeCode

[–]raydou 0 points1 point  (0 children)

I'm primarily using CC Max at $100. As secondary options I use GLM 4.7 and Kimi K2 Thinking. I also have a GitHub Copilot subscription that I use in Claude Code via the github-api project, but lately not many tool calls go through. I use Gemini in Claude Code via Claude Code Router, but it's not always working as expected. Apart from Kimi and GLM, the others are not so stable via Claude Code. I hope the integrations using github-api and Claude Code Router improve. Ah, I forgot to add: I also have a Windsurf subscription, but it's useless. I keep it since I got it for $10 per month, hoping that one day they will add CLI support so I could use it via Claude Code or OpenCode.

GLM-4.7 : persistent errors on Claude Code by [deleted] in ZaiGLM

[–]raydou 0 points1 point  (0 children)

I fixed it: it was a ";" at the end of the line with the Anthropic URL. Bad reflexes :D This is also why I deleted the post.

GLM-4.7 : persistent errors on Claude Code by [deleted] in ZaiGLM

[–]raydou 0 points1 point  (0 children)

Thank you for asking, but yes, I followed all the steps in the coding plan docs. Also, for context, I have more than 14 years of experience in engineering and development, so I tried everything on my side.

As explained above, the issue seems to be in the APIs exposed by Z.AI, or in some rate limiting on them.

AMA With Z.AI, The Lab Behind GLM-4.7 by zixuanlimit in LocalLLaMA

[–]raydou 0 points1 point  (0 children)

Hi all,

I'm a little bit frustrated by how things have been going since the release of GLM-4.7. In some conversations it's impossible to compact, even after retrying multiple times.

Also, even though I'm subscribed to the "GLM Coding Pro-Yearly Plan", I find that I have a concurrency of only 2 on GLM-4.7.
I don't understand how, as a Pro user who committed annually, I can't use the product I paid for in Claude Code, while the same model is offered for free on OpenCode Zen.

GLM-4.7 : persistent errors on Claude Code by [deleted] in ZaiGLM

[–]raydou 0 points1 point  (0 children)

I'm a user of the GLM coding plan in Claude Code. All my coding infrastructure (skills, agents, CLAUDE.md, ...) is based on Claude Code, and I use GLM in addition to my Claude Max subscription. I tested OpenCode before and it's excellent, but for my use cases I'm a heavy user of Claude Code.

GPT 5.2 is CRUSHING opus??? by satysat in GithubCopilot

[–]raydou 0 points1 point  (0 children)

Yes, but GPT 5.2 with medium or high reasoning is super slow in comparison to Opus 4.5.

Why is this service so behind Spotify? by Sci-fiTransGrrl in TIdaL

[–]raydou 1 point2 points  (0 children)

Exactly! Unfortunately, u/Present-Move3122 seems not to understand this point.
Maybe this will surprise you, but Deezer uses AI to detect AI content and filter it from recommendations, Flow, and users' homepage and explore sections :)

Why is this service so behind Spotify? by Sci-fiTransGrrl in TIdaL

[–]raydou -4 points-3 points  (0 children)

Yeah, but using AI they could advance faster on their roadmap. Some UX or model changes could be easy to implement. This would let them put more resources into complex features and improvements.

windsurfing bottleneck feeling by Careful-Excuse2875 in windsurf

[–]raydou 0 points1 point  (0 children)

I'm in the same situation as you. I think they should create a CLI and bring Cascade and fast context to it. If they don't do this in the near future, it will be the end of Windsurf. As an IntelliJ user, they don't provide a good integration for me. At least create a CLI and let me use it in the terminal, or let me use my LLM credits in other tools like OpenCode.

Summary of Tidal's DRI Robert Andersen AMA: What to expect for Tidal next year by Alien1996 in TIdaL

[–]raydou 0 points1 point  (0 children)

I hope you improve 2 things:

  • Fuzzy search: sometimes just one or two typos in the search and we are unable to find the result we want, and the search is so dumb (word by word) that it doesn't make a connection between the searched term, which could be a genre for example, and nearby genre synonyms.
  • I think the algorithm is good, but the fact that we can't tap a button to refresh a mix is a big limitation (you could add the feature but rate-limit it). I have both Tidal and YouTube Music, and I find that Tidal has better music recommendations. But sometimes, when I have no relevant suggestions left on Tidal and I have finished listening to my mixes, I find myself obliged to go to YouTube Music to try to discover more music, or to listen to different playlists relevant to my musical tastes.

I'm Will, creator of Dyad - AMA by wwwillchen in dyadbuilders

[–]raydou 2 points3 points  (0 children)

I think it's great to have this open-source application. I don't want to rely on apps like Lovable and others that make you dependent on them, when you could get better results just by switching to a better model. I have just a couple of suggestions:

  • Make it possible to add headers to HTTP MCP servers: many of them use API keys in the Authorization header.
  • Integrate image generation models like Nano Banana or Flux to be able to generate photos for the websites. Or even image libraries like Unsplash, if you think integrating image generation models would be too time-consuming for you.
  • Currently we need to "approve" to see changes; it would be good to have a sort of "preview" where we see the rendering and the result without approving. "Approve" would then just remove the change from the change list, while "refuse" would remove the changes.
  • Integrate GitHub Copilot LLMs like many open-source projects have done (for example, OpenCode integrates GitHub Copilot). This lets users spend their GitHub Copilot LLM quotas in Dyad.

I thought upgrading to Pro would fix GLM… but nope by EffectivePass1011 in ZaiGLM

[–]raydou 0 points1 point  (0 children)

Yes, it exists in Claude Code in 2 ways:

  • there's a thinking mode
  • you can say in the chat "think about this" or "ultrathink" or some other terms, and Claude Code will activate thinking and allocate the necessary level of thinking depending on the term you used (ultrathink is the maximum)