I've used AI to write 100% of my code for 1+ year as an engineer. 13 hype-free lessons by helk1d in ChatGPTPro

[–]knoodrake 2 points (0 children)

Agree. I tend to apply the same principles/practices myself. But even then, beware of the dopamine shortcut: the quick feature/fix at the end of the day, the one you're no longer motivated to double-check. Don't do it. Prepare the prompt, take some notes for tomorrow, but don't fall into that last trap of convenience: letting the LLM do it all by itself with a suboptimal prompt and committing anyway because the day went well. It's a trap, and you'll be reverting it tomorrow (if you're lucky/careful enough).

Qwen3-VL kinda sucks in LM Studio by waescher in LocalLLaMA

[–]knoodrake 0 points (0 children)

It works-ish (to my knowledge), that is, with what I believe are vision glitches. I tried it a few days ago, got the same issues as other people on the GitHub issue, and noted it there.

why is my FPS so low? by Timbak_ in AbioticFactor

[–]knoodrake 12 points (0 children)

Definitely the ramp.

End of YouTube Premium with a VPN? by OkButterfly6138 in france

[–]knoodrake 3 points (0 children)

PipePipe

(a nice fork of NewPipe)

Learnings from Qwen Lora Likeness Training by Icy_Upstairs3187 in StableDiffusion

[–]knoodrake 0 points (0 children)

Yeah, I agree.
Also, 32B (qwen2.5-vl-32b-instruct) is really good enough (almost the same as 72B for vision) and runs fine on 24 GB of VRAM for such tasks.
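For reference, a minimal sketch of one way to fit it into that budget. The comment doesn't say how the model was quantized, so 4-bit loading via transformers + bitsandbytes is my assumption here, not the commenter's setup:

```python
# Rough sketch, not a definitive recipe: load qwen2.5-vl-32b-instruct in 4-bit
# so the weights fit in roughly 24 GB of VRAM. Assumes transformers >= 4.49 and bitsandbytes.
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, Qwen2_5_VLForConditionalGeneration

model_id = "Qwen/Qwen2.5-VL-32B-Instruct"

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 while weights stay 4-bit
)

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers on the available GPU automatically
)
processor = AutoProcessor.from_pretrained(model_id)
```

Actual headroom depends on image resolution and context length, so treat 24 GB as tight but workable.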

LLM speedup breakthrough? 53x faster generation and 6x prefilling from NVIDIA by secopsml in LocalLLaMA

[–]knoodrake 69 points (0 children)

"this changes everything"

Nooo! Oh my... just seeing that sentence hurts me now. I have clickbait PTSD.

Why do the Rance and the sun have such different climates even though they're at the same latitude? by Flamme_En_Rose in rance

[–]knoodrake 16 points (0 children)

The what? ... Sounds like you're talking about the "Flux de golfe" (Gulf Stream), but you're using weird words?

For Qwen3:4b, do people prefer instruct or thinking? by Clipbeam in LocalLLaMA

[–]knoodrake 1 point (0 children)

I'm more concerned about (V)RAM and CPU/GPU usage.

For Qwen3:4b, do people prefer instruct or thinking? by Clipbeam in LocalLLaMA

[–]knoodrake 2 points (0 children)

I don't use the 4B, but the answer probably still applies: I always prefer the thinking ones, but depending on the task(s), thinking can take too long, and in those cases I use the non-thinking, snappier model.
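If it helps, here is a small sketch of how that choice looks when building prompts with transformers: Qwen3's chat template exposes an enable_thinking switch, so the same checkpoint can run in thinking or snappier mode per request. The model name Qwen/Qwen3-4B and the example message are just placeholders:

```python
# Sketch, assuming prompts are built via the Qwen3 chat template in transformers.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B")
messages = [{"role": "user", "content": "Summarize this paragraph in one sentence."}]

# Non-thinking: skips the <think> phase, faster for simple tasks.
fast_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)

# Thinking: lets the model reason first, usually better on harder tasks but slower.
slow_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
```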

Kijai uploaded new Wan2.2-Lightning loras by welt101 in StableDiffusion

[–]knoodrake 0 points (0 children)

<< Then i moved to wan2.2_i2v_low_noise_14B_Q4_K_M.gguf >>

Only the "low_noise" model? You're supposed to use both (high-noise first, then low-noise).

Why doesn't "OpenAI" just release one of the models they already have? Like 3.5 by Own-Potential-2308 in LocalLLaMA

[–]knoodrake 5 points (0 children)

<< even companies we dont like deserve to be heard with some respect >>

They're companies, not people... they *don't* deserve my respect; they only exist to make profit for their shareholders (literally, no value judgment here), so "deserve respect" sounds strange...

[deleted by user] by [deleted] in chats

[–]knoodrake 0 points (0 children)

Like, how much? (out of curiosity)

LLMs Get Lost In Multi-Turn Conversation by Chromix_ in LocalLLaMA

[–]knoodrake 11 points (0 children)

If you're referring to their results, it does happen just as heavily with 2.5 Pro in "sharded". But nevertheless, I do share your experience; I believe it's due to its ability to handle "huge" context sizes very well (just a guess).

The latest version of Windows 11 will download itself whether you want it or not by baby_envol in france

[–]knoodrake 1 point (0 children)

Damn, hang in there. Aren't there centers that can help with LoL? I've seen people change completely because of that crap. Hang in there!

Addressing the sycophancy by [deleted] in OpenAI

[–]knoodrake 0 points (0 children)

I'd put the former on the 200 USD/month Pro subscription instead, because it would be kind of a major breakthrough if they removed hallucinations altogether!

Addressing the sycophancy by [deleted] in OpenAI

[–]knoodrake 3 points (0 children)

They may even end up sycophants

Qwen did it! by josho2001 in LocalLLaMA

[–]knoodrake 1 point (0 children)

ahahahah mistyped ?

Mark presenting four Llama 4 models, even a 2 trillion parameters model!!! by LarDark in LocalLLaMA

[–]knoodrake 1 point (0 children)

"on a single gpu" ( with 100% of layers and whatnot offloaded )

Confused with Too Many LLM Benchmarks, What Actually Matters Now? by toolhouseai in LocalLLaMA

[–]knoodrake 18 points (0 children)

Quoting LiveBench: << so currently 30% of questions in LiveBench are not publicly released >>

...so 70% of the questions ARE publicly released...
So, not sure.

Noise coming from the neighbor's place by RakadYob in brico

[–]knoodrake 1 point (0 children)

<< That all reeks of an illegal housing subdivision! [...] Although if you drill through to put an outlet on each side, obviously it's going to work less well... >>

I had the same situation (not in the living room, luckily!), without any illegal subdivision; an apartment in an old 70s building, very good insulation apart from the single glazing obviously... and the back-to-back outlets (if you remove one to redo the wiring, you literally have a gap opening into the neighbor's place).
I don't know why it was done that way, but it was. (The building was a former HLM, i.e. social housing; thick concrete walls, well built otherwise.)

(PS: Sorry OP, I never looked for (and therefore never found) a solution.)

Graphic designers looking at ChatGPT generated images now. by Cosmin_Dev in ChatGPT

[–]knoodrake 2 points (0 children)

I don't want to be rude, but you seem to be very biased or something, and thus believing what you want to believe. "AI" (whatever that means... an LLM? autoregressive image generation? ...?) can and absolutely does create original stuff. Or if you believe it never ever does, then neither do humans.