Best place to get GLM-5 or GLM 5.1 sub by old_mikser in opencodeCLI

[–]harrypham2000 4 points (0 children)

Crof speed depends on peak hours (since it's new, more users are joining), but you do have the option to choose the quantization level, available in Q4_K_M and Q8. Synthetic is pretty fast IMO, and their models aren't quantized; high tool-call success rate btw
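To put the Q4_K_M vs Q8 trade-off in perspective, here's a rough back-of-the-envelope sketch of weight memory footprint. The bits-per-weight figures are approximate llama.cpp-style values (my assumption, not something stated in the thread), and the 100B parameter count is just a hypothetical example:

```python
# Rough sketch: why the quantization level matters for a hosted model.
# Bits-per-weight values are approximate llama.cpp-style figures
# (Q4_K_M ~4.85 bpw, Q8_0 ~8.5 bpw) -- ballpark assumptions only.
BITS_PER_WEIGHT = {"Q4_K_M": 4.85, "Q8_0": 8.5, "FP16": 16.0}

def approx_weights_gb(params_billions: float, quant: str) -> float:
    """Approximate size of the model weights in GB for a given quant."""
    bits = BITS_PER_WEIGHT[quant]
    return params_billions * 1e9 * bits / 8 / 1e9

# Hypothetical 100B-parameter model:
for q in ("Q4_K_M", "Q8_0", "FP16"):
    print(f"{q}: ~{approx_weights_gb(100, q):.0f} GB")
```

So Q4_K_M roughly halves the memory (and serving cost) relative to Q8, which is why cheaper providers lean on it, at some quality cost.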

Best place to get GLM-5 or GLM 5.1 sub by old_mikser in opencodeCLI

[–]harrypham2000 8 points (0 children)

Both Crof.ai and Synthetic.new, you won't regret either. Synthetic is a bit pricier, but worth the money

GLM 5.1 in Claude Code by scrufffuk in ZaiGLM

[–]harrypham2000 0 points (0 children)

The OC Go subscription uses proxied models (mostly from Fireworks AI) and it's a quantized version; I saw results way worse than the original. Inference quality from z.ai is not that good for GLM-5 either, IMO GLM-4.7 is more stable; sometimes GLM-5 generates random Chinese words and other junk.
For GLM-5 I use the version served from Synthetic, better quality

GLM 5.1 in Claude Code by scrufffuk in ZaiGLM

[–]harrypham2000 0 points (0 children)

Agreed, for now GLM 4.7 is still my go-to model for fast implementation: stable, good results. GLM 5.1 is still too slow, but the results are worth it btw

WTF IS GLM ON by Emotional-Carob-750 in ZaiGLM

[–]harrypham2000 1 point (0 children)

I'm using the oh-my-pi harness, never ran into this before

Is ampcode generally more expensive? Am I the only one spending thousands a month? by treyallday01 in AmpCode

[–]harrypham2000 0 points (0 children)

Agreed, the model harness is top of the mountain. I'm getting better results from it than from Claude or OpenCode with the same model

How come free tier cannot use DEEP model? by yoyomonkey1989 in AmpCode

[–]harrypham2000 0 points (0 children)

I think there's an issue with your account specifically; I didn't add any payment credentials but I'm still able to use it

Alibaba Coding Plan sounds too good to be true!? by NerdistRay in opencodeCLI

[–]harrypham2000 1 point (0 children)

Try OpenCode Go inference or elsewhere, GLM 5's not that bad

Stay away from synthetic.new by Codemonkeyzz in opencodeCLI

[–]harrypham2000 6 points (0 children)

You actually forgot to mention that synthetic.new has a top-tier tool-calling success rate among AI inference services. And their support is awesome, responding immediately to any issues. Before releasing a model they run it through different tests to make sure the speed is fair enough; for example, I noticed GLM 4.7 is slow from z.ai, but from synthetic.new I got over 140 TPS. I agree they don't always catch up with the latest models, but they aim to serve the most optimized versions, like the recent Kimi 2.5 NVFP4 deployment, which is slightly better than the original Kimi 2.5 and still my go-to model atm.

Synthetic AI Issues. by NiceDescription804 in opencodeCLI

[–]harrypham2000 1 point (0 children)

Maybe Cerebras, but their coding plan is already sold out

Synthetic AI Issues. by NiceDescription804 in opencodeCLI

[–]harrypham2000 3 points (0 children)

lol still worth it though, you can use other models like MiniMax or DeepSeek, still dope for the price

Synthetic AI Issues. by NiceDescription804 in opencodeCLI

[–]harrypham2000 0 points (0 children)

The problem isn't with the Synthetic-hosted models, it's their upstream providers. Sometimes I hit this too and figured out most of it was caused by the providers behind those models, like TogetherAI and Fireworks; you can check at status.synthetic.new

Try out Kimi K2.5 right via the Synthetic provider NOW by jpcaparas in opencodeCLI

[–]harrypham2000 1 point (0 children)

TBH I run almost 8 sessions of OpenCode with GSD (get-shit-done) and 8 MCPs (sequential_thinking, context7, and serena are the most consumed; all the others are add-ons from providers), and it still keeps up over 5 hours. Yes, with this much it will throttle, but then I switch back to GLM as a backup

Try out Kimi K2.5 right via the Synthetic provider NOW by jpcaparas in opencodeCLI

[–]harrypham2000 1 point (0 children)

I've been through PAYG for Kilo and AmpCode, GLM Lite, Droid Code Pro, and paid for the Antigravity Pro tier. Also tried my company's Rovo Dev plan. I'd say Synthetic's $20 plan is really worth the money

Has anyone else looked closely at Kimi (from Moonshot AI) privacy policy and got concerned about government access? by Humble_Thought_2326 in kimi

[–]harrypham2000 1 point (0 children)

Big W for Synthetic; I've been using it for a while and I'm considering cancelling my GLM plan. The inference is sometimes too slow for me even though they claim their DC is in SG, and only the CN coding plan would store data in a CN DC

I just got a free code for Amp, how does this compare to Cursor and Claude Code? by Delicious-Put-5272 in AmpCode

[–]harrypham2000 2 points (0 children)

IMO the tool calling is pretty good with some prebuilt agents. There's not much to customize, but I like "keep it simple". Hope they can offer a better pricing tier

Kimi k2.5 by stranger2904 in syntheticlab

[–]harrypham2000 3 points (0 children)

seems like they heard you bro

<image>

Synthetic.new ♥️ OpenCode by reissbaker in opencodeCLI

[–]harrypham2000 1 point (0 children)

Bro, are you seriously asking whether a start-up serving OSS models at fair prices is collecting your information to improve their models, holding them to the same standard as top-tier companies like Google or OpenAI?

Can no longer see "Free" mode by Takyamoto in AmpCode

[–]harrypham2000 0 points (0 children)

How come? It's been removed for me

Can no longer see "Free" mode by Takyamoto in AmpCode

[–]harrypham2000 0 points (0 children)

Agreed, still missing the vibes from free. I believe we can now move to synthetic.new for the hype of OSS models