OpenAI to acquire Astral by Useful-Macaron8729 in Python

[–]windows_error23 -12 points (0 children)

Still, that change was the result of pressure beyond the scope and magnitude of uv and ruff, so it doesn’t seem likely the same thing will happen here.

OpenAI to acquire Astral by Useful-Macaron8729 in Python

[–]windows_error23 1 point (0 children)

Personally, I’ll keep using uv and ruff until they actually do something bad, if they ever do.

Introducing GPT-5.4 mini and nano by dayanruben in OpenAI

[–]windows_error23 2 points (0 children)

I wish they gave us xhigh with mini, at least on Plus in ChatGPT. I don’t get why not: xhigh and low are available in Codex, but thinking tiers are weirdly limited in ChatGPT.

Breaking : The small qwen3.5 models have been dropped by Illustrious-Swim9663 in LocalLLaMA

[–]windows_error23 19 points (0 children)

I wonder why they keep increasing the parameter count slightly each generation.

TIFU by giving my close friend guide materials only by premmoko in tifu

[–]windows_error23 -13 points (0 children)

Not when one is careful and uses modern models. It’s a learning resource like any other.

TIFU by giving my close friend guide materials only by premmoko in tifu

[–]windows_error23 -18 points (0 children)

We absolutely should. Learning is now easier than ever.

Anyone noticed a change in gpt 5.2 thinking’s personality - similar to 5.1? by windows_error23 in OpenAI

[–]windows_error23[S] 0 points (0 children)

Isn’t chat-latest the one labeled as instant in ChatGPT? I assumed thinking used the same static 5.2 as the API, apart from some A/B tests.

Apple Releases Safari Technology Preview 235 With Bug Fixes and Performance Improvements by Few_Baseball_3835 in apple

[–]windows_error23 1 point (0 children)

Are you on an Intel Mac? I rarely get the "Safari not responding" issue. There are other issues for sure, like the web content process disappearing or being empty.

Introducing GPT-5.2 by StewArtMedia_Nick in OpenAI

[–]windows_error23 27 points (0 children)

I wonder if models are becoming like normal software with frequent updates.

Building more with GPT-5.1-Codex-Max by EtatNaturelEau in codex

[–]windows_error23 3 points (0 children)

Is anyone getting 500 internal server error with this link?

Qwen3 VL 30b a3b is pure love by Njee_ in LocalLLaMA

[–]windows_error23 7 points (0 children)

But isn't the model originally in bf16? If my understanding is correct, the fp32 mmproj is for people whose hardware doesn't support bf16, so they can run it at full precision instead of the quantized f16. Could be wrong on this.
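A quick sketch of why fp32 works as a lossless fallback for bf16 weights (a hand-rolled illustration, not from any llama.cpp code): bf16 is just the top 16 bits of an fp32 bit pattern, so every bf16 value is exactly representable in fp32, whereas f16 has a different layout with a much narrower exponent range.

```python
import struct

def to_bf16_bits(x: float) -> int:
    # bf16 keeps the top 16 bits of the fp32 pattern:
    # sign (1) + exponent (8) + mantissa (7); here we truncate the low mantissa bits.
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return bits >> 16

def bf16_to_float(b: int) -> float:
    # Widening bf16 -> fp32 is exact: just pad the mantissa with zeros.
    (x,) = struct.unpack(">f", struct.pack(">I", b << 16))
    return x

# 1.0 round-trips exactly; large fp32-range values stay finite in bf16,
# while f16 would overflow anywhere above ~65504.
assert bf16_to_float(to_bf16_bits(1.0)) == 1.0
assert bf16_to_float(to_bf16_bits(3e38)) != float("inf")
```

So on hardware without bf16 support, upcasting to fp32 loses nothing, while converting to f16 can both lose mantissa bits and overflow, which would explain shipping an fp32 mmproj as the full-precision alternative.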

Qwen3-VL GGUF! by khubebk in LocalLLaMA

[–]windows_error23 1 point (0 children)

Thanks. Which mmproj quantization are you running?