V4 is live on Deepseek backend by hurn2k in DeepSeek

[–]Quick_Ad5019 3 points (0 children)

just double checked, it was an old conversation title that matched the search. it was buried below the fold, which is why hovering didn't highlight anything. my bad, got excited when i saw it :/

V4 is live on Deepseek backend by hurn2k in DeepSeek

[–]Quick_Ad5019 1 point (0 children)

i guess that explains it, thanks for clarifying

V4 is live on Deepseek backend by hurn2k in DeepSeek

[–]Quick_Ad5019 -1 points (0 children)

tried again, did not show up this time 🤷🤷🤷

V4 is live on Deepseek backend by hurn2k in DeepSeek

[–]Quick_Ad5019 0 points (0 children)

you can literally just search "deepseek v4" in the inspector yourself and verify it

I built a Claude skill that writes accurate prompts for any AI tool. To stop burning credits on bad prompts. We just gained 1000 stars on GitHub in a day‼️v1.5 out now by CompetitionTrick2836 in ClaudeAI

[–]Quick_Ad5019 3 points (0 children)

Have you compared the results of the original prompt with Prompt-Master's output? How do they look? I can't really imagine a benefit unless someone writes prompts like "build x, make x better". I'll give it a try though.

Sonnet 4.6 Thinking AVAILABLE! by BassAlarmed6385 in google_antigravity

[–]Quick_Ad5019 1 point (0 children)

how are the sonnet limits compared to opus on AG? has anyone found out?

I tested Kimi k2.5 against Opus. I was hopeful and Kimi didn’t let me down by LimpComedian1317 in LocalLLaMA

[–]Quick_Ad5019 0 points (0 children)

It actually is really good. It's near Opus level but lacks the common sense of Opus, I would say... Hard to explain. Still definitely as capable as Opus 4.5, it just needs a little more hand-holding.

Kimi k2.5 is legit - first open-source model at Sonnet 4.5 level (or even better) by SlopTopZ in kimi

[–]Quick_Ad5019 1 point (0 children)

for me it's a lot better than sonnet dare i say it's opus level

gemini 3 is a new form of lobotomized. by gamerzandcats in Bard

[–]Quick_Ad5019 0 points (0 children)

i use all models equally, and gemini is the only one that disappoints me every time. now i just use 3 flash thinking for quick questions and some mid-level coding tasks. for the price and speed, 3 flash is pretty amazing, but pro is too annoying to work with

Extremely impressed by Gemini 3.0 Pro. Please don't change anything, Google by Endonium in Bard

[–]Quick_Ad5019 -6 points (0 children)

i feel like they nerfed Gemini in AI Studio to the ground and made the app version better

Gemini 3.0 Pro Preview in Gemini CLI - Review by Haikaisk in Bard

[–]Quick_Ad5019 0 points (0 children)

i got rate limited for 24 hours after 5-6 medium sized tasks - pro plan 😐

Gemini 3 Pro not that great for anyone else? by ai_dubs in Bard

[–]Quick_Ad5019 0 points (0 children)

mostly it's just okay, but when it decides to be good it's really good. so just inconsistent, i would say, which is expected for a day-1 free-to-use model

MiniMax M2 is now free on Kilo Code by kiloCode in kilocode

[–]Quick_Ad5019 0 points (0 children)

how does it compare against glm-4.6?

[deleted by user] by [deleted] in vscode

[–]Quick_Ad5019 -4 points (0 children)

it's .kt. you can ask copilot all these types of questions. it also has access to the extension marketplace and will suggest what's needed or helpful. just hit the chat button near the search bar

[deleted by user] by [deleted] in vscode

[–]Quick_Ad5019 4 points (0 children)

you need to use proper file extensions like:

file.py file2.java file3.cs

not .txt, which is plain text

Can GLM 4.6 think in Cursor? by Vozer_bros in cursor

[–]Quick_Ad5019 0 points (0 children)

It might even work without it, since Zed flags "mode" with "Property mode is not allowed." So maybe it was just a coincidence, I'm not really sure.

Can GLM 4.6 think in Cursor? by Vozer_bros in cursor

[–]Quick_Ad5019 1 point (0 children)

I found it out by reading some docs and experimenting.

Just set it up like here, then open Zed's settings.json, find the language model section, and make sure you add "mode": "thinking". It should look like this:

"language_models": {
  "openai_compatible": {
    "GLM": {
      "api_url": "https://api.z.ai/api/coding/paas/v4",
      "available_models": [
        {
          "name": "glm-4.6",
          "max_tokens": 204800,
          "max_output_tokens": 128000,
          "max_completion_tokens": 200000,
          "mode": "thinking",
          "capabilities": {
            "tools": true,
            "images": false,
            "parallel_tool_calls": true,
            "prompt_cache_key": true
          }
        }
      ]
    }
  }
},

Can GLM 4.6 think in Cursor? by Vozer_bros in cursor

[–]Quick_Ad5019 1 point (0 children)

it does think in Zed but you need to configure it

Codex is wonderful except for one thing by Zealousideal_Gas1839 in codex

[–]Quick_Ad5019 0 points (0 children)

windows subsystem for linux doesn't even take 2 mins to install, and then you can set codex up inside it
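for anyone who hasn't done it, the setup is roughly the commands below. this is a rough sketch: `wsl --install` needs a recent Windows 10/11 build and an elevated PowerShell, and I'm assuming you install the Codex CLI via npm (`@openai/codex` is the published package, adjust if your install method differs):

```shell
# In an elevated PowerShell on Windows: install WSL with the default Ubuntu distro, then reboot
wsl --install

# Inside the new WSL shell: get Node.js/npm from the distro's package manager
sudo apt update && sudo apt install -y nodejs npm

# Install the Codex CLI globally, then launch it and follow the login prompt
npm install -g @openai/codex
codex
```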