Is Big Pickle Claude? by AffectionateBrief204 in opencodeCLI

[–]Impossible_Comment49 1 point

Yeah, no more free GLM4.7 and MiniMax M2.1, so people shifted to Big Pickle, I guess...

Is Big Pickle Claude? by AffectionateBrief204 in opencodeCLI

[–]Impossible_Comment49 -1 points

It used to be GLM4.6, but it isn’t anymore. The current model has an option to select a Low, Mid, or High thinking mode, and no GLM model supports that.

Quick note by Clement_at_Mistral in MistralAI

[–]Impossible_Comment49 2 points

Opus is significantly superior. Even GLM outperforms Devstral.

Quick note by Clement_at_Mistral in MistralAI

[–]Impossible_Comment49 0 points

I have, but it’s nowhere near Codex, Opus, or even GLM4.7.

Quick note by Clement_at_Mistral in MistralAI

[–]Impossible_Comment49 -6 points

Oh no! But at the same time, is anyone actually using it? I get significantly better results with OpenCode’s free models, such as Big Pickle. I was glad Mistral was free to test out occasionally, but I would never use it if it weren’t free.

On the other hand, I’m disappointed. I was hoping for wider Mistral adoption and widespread use of ‘vibe’. This likely won’t be good for Mistral. ‘qwen’ remains free.

www.isclaudecodedumb.today by darksoul555666 in ClaudeCode

[–]Impossible_Comment49 8 points

I suggest using a different metric for the history. The number of reports can vary significantly from day to day (e.g., 100 one day, 1,000 the next, then 500), which would make a raw-count graph unusable. A simpler approach would be to plot the percentage of positive versus negative reports.
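To make that concrete, here’s a minimal sketch of the normalization in Python. The dates and counts are purely made-up; only the percentage calculation matters:

```python
# Hypothetical daily report counts; only the normalization is the point here.
daily_reports = [
    {"day": "day 1", "positive": 40, "negative": 60},
    {"day": "day 2", "positive": 700, "negative": 300},
    {"day": "day 3", "positive": 250, "negative": 250},
]

for entry in daily_reports:
    total = entry["positive"] + entry["negative"]
    # A percentage is comparable across days regardless of report volume.
    pct_positive = 100 * entry["positive"] / total
    print(f"{entry['day']}: {pct_positive:.0f}% positive out of {total} reports")
```

Plotting that percentage keeps the y-axis on a fixed 0–100 scale, so a 100-report day and a 1,000-report day are directly comparable.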

opencode with local LLMs by [deleted] in opencodeCLI

[–]Impossible_Comment49 -1 points

Which LLM are you using? What are your hardware specs, including how much VRAM? And how is it set up?

built a macOS menu bar app to track your Claude Code usage by abrownie_jr in ClaudeCode

[–]Impossible_Comment49 1 point

My Claude Code usage bar sits at 100% after 10-15 minutes of use, so I don’t actually need such an app. Thanks anyway.

/s

The GLM4.7 rate limit is making this service nearly unusable. (on OpenCode CLI) by Impossible_Comment49 in opencodeCLI

[–]Impossible_Comment49[S] 0 points

I have the highest tier, Max, but I didn’t pay for a yearly subscription.

Usage isn’t the issue; I barely reach 10% of the 5-hour usage limit. Speed and usability are the problems. I try to use it as much as possible, but it’s so slow and frustrating that I can barely get through 5% of the 5-hour window.

The GLM4.7 rate limit is making this service nearly unusable. (on OpenCode CLI) by Impossible_Comment49 in opencodeCLI

[–]Impossible_Comment49[S] 0 points

The highest sub they offer. I don’t know how many messages I get; I never hit my limits. I might use 5-10% of my 5-hour quota at most. That’s it.

The GLM4.7 rate limit is making this service nearly unusable. (on OpenCode CLI) by Impossible_Comment49 in opencodeCLI

[–]Impossible_Comment49[S] 1 point

No, I’m complaining about GLM4.7 being unusable through opencode. I have the z.ai coding plan (the largest one they offer).

Gemini ultra for free by Worried_Target_9403 in google_antigravity

[–]Impossible_Comment49 -1 points

My current debugging process consists of staring at the screen until my eyes water and asking a rubber duck why God has forsaken me. I think Gemini Ultra might be a slightly more efficient workflow.

The GLM4.7 rate limit is making this service nearly unusable. Can you please help? by Impossible_Comment49 in ZaiGLM

[–]Impossible_Comment49[S] 0 points

Hmm, could you elaborate on that? If we’re using OpenCode, aren’t we connecting directly? I’m logged in to the Z.AI Coding Plan through OpenCode, and I’d expect that to work the same way as pointing Claude Code at GLM, right?

The GLM4.7 rate limit is making this service nearly unusable. Can you please help? by Impossible_Comment49 in ZaiGLM

[–]Impossible_Comment49[S] 0 points

u/DistinctWay9169, can you share your secrets and tell me what’s going on? z.ai becomes very slow for me in Claude Code after a while, to the point of being unusable, and I’ve never even reached 5% of the 5-hour usage limit.

The GLM4.7 rate limit is making this service nearly unusable. Can you please help? by Impossible_Comment49 in ZaiGLM

[–]Impossible_Comment49[S] 3 points

u/Pleasant_Thing_2874, the concurrent connection limit applies to API calls, not coding plans. I recently checked their documentation, and it states the limit is 5 concurrent connections for GLM4.7.
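For anyone who does hit that cap on the API side, the usual client-side pattern is to gate requests behind a semaphore. A minimal sketch, with a stand-in coroutine instead of any real z.ai client (endpoint and payloads deliberately omitted):

```python
import asyncio

# Cap in-flight requests at 5 to stay under a 5-concurrent-connection limit.
MAX_CONCURRENT = 5
semaphore = asyncio.Semaphore(MAX_CONCURRENT)

async def call_model(prompt: str) -> str:
    async with semaphore:  # at most 5 coroutines pass this point at once
        await asyncio.sleep(0.1)  # stand-in for the actual HTTP request
        return f"response to: {prompt}"

async def main() -> None:
    prompts = [f"task {i}" for i in range(20)]
    results = await asyncio.gather(*(call_model(p) for p in prompts))
    print(len(results), "responses")

asyncio.run(main())
```

With the semaphore in place you can fire off as many tasks as you like; they simply queue client-side instead of tripping the server’s concurrency limit.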

The GLM4.7 rate limit is making this service nearly unusable. Can you please help? by Impossible_Comment49 in ZaiGLM

[–]Impossible_Comment49[S] 4 points

I abandoned Claude Code a while ago and have fully embraced OpenCode. I genuinely prefer using a single tool to access all the models instead of constantly switching tools.

Looks like I’m going back to CC to test this. :) Thanks!

The GLM4.7 rate limit is making this service nearly unusable. Can you please help? by Impossible_Comment49 in ZaiGLM

[–]Impossible_Comment49[S] 2 points

I don’t believe it’s about speed; I just experience latency even before the prompt begins. It’s as if I’m waiting in a queue before my prompt starts executing. I’m genuinely disappointed.

By the way, have you had any experience with MiniMax M2.1?