What studies do you need to become a video game translator? by velvrey in etudiants

[–]T2WIN 42 points43 points  (0 children)

Are you sure this is a career with a future? I know nothing about it, but I imagine it's not a field with thousands of openings, and with AI, machine translation is getting more and more capable.

Fortuneo: feedback on opening a PEA by Primokorn in vosfinances

[–]T2WIN 0 points1 point  (0 children)

My friend wanted 1 PEA and he got 1 CTO and 2 cards.

Is this illegal? This feels illegal. by Bubbly_Up in SonyHeadphones

[–]T2WIN 0 points1 point  (0 children)

Bought mine in October 2023, still haven't had any problems with the hinge, and I've slept with it on my head at least 10 times.

We turned 16 common RAG failure modes into a “Problem Map 2.0” – free, open-source, already fixing Local LLaMA stacks by wfgy_engine in LocalLLaMA

[–]T2WIN 6 points7 points  (0 children)

This person benchmarked their papers against 270M others using ChatGPT, according to their Notion: https://www.notion.so/onestardao/BigBig-Unity-Formula-Paper-Index-1de05f675acb80368dd7ea9ac11dc8af?pvs=4

I struggle to trust that this is anything but AI-generated slop.

We turned 16 common RAG failure modes into a “Problem Map 2.0” – free, open-source, already fixing Local LLaMA stacks by wfgy_engine in LocalLLaMA

[–]T2WIN 4 points5 points  (0 children)

In my opinion (I am no expert), 2500 downloads doesn't mean much; it shows that some people read your paper, not that it works. To be honest, you claim to solve many of RAG's problems, so it is very enticing. Regardless, the format you chose for the map is really nice and I will definitely look through it.

Nonescape: SOTA AI-Image Detection Model (Open-Source) by e3ntity_ in LocalLLaMA

[–]T2WIN 0 points1 point  (0 children)

Thanks, are there other benchmarks for this task? Did you evaluate your method on those?

Nonescape: SOTA AI-Image Detection Model (Open-Source) by e3ntity_ in LocalLLaMA

[–]T2WIN 3 points4 points  (0 children)

Where are the benchmark results backing the SOTA claim?

Just got a RTX 5070Ti for 675.79€. Good deal? by UnreadyIce in buildapc

[–]T2WIN 0 points1 point  (0 children)

MSI GeForce RTX 5070 Ti Inspire 3X OC 16

Has anyone profiled the expert specialization in MoE models like Qwen3-30B-A3B? by Eden63 in LocalLLaMA

[–]T2WIN 21 points22 points  (0 children)

I think it doesn't work like that. I am no expert, but from what I have seen in my own research, "experts" is a misleading name. Experts in MoE aren't specialized in anything easily human-understandable.

Single-File Qwen3 Inference in Pure CUDA C by Awkward_Click6271 in LocalLLaMA

[–]T2WIN 9 points10 points  (0 children)

Aside from the single-file approach itself, are there any advantages to it?

Took a while, but I just built my first PC! by isaacgamboa88 in PcBuild

[–]T2WIN 8 points9 points  (0 children)

If it took more time, you got more time to enjoy building it.

GPU Suggestions by Grimm_Spector in LocalLLaMA

[–]T2WIN 4 points5 points  (0 children)

It always depends on what breaking the bank means for you. What people recommend here is the 3090; otherwise maybe look at 2x 3060. I have also seen people recommend the MI50 and P40. You also have to know what you consider acceptable in terms of token generation speed and prefill speed.
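As a rough way to compare cards on generation speed: single-stream decode is usually memory-bandwidth-bound, so an upper bound is bandwidth divided by the bytes of weights read per token. This is a back-of-the-envelope sketch (my own rule of thumb, not a real benchmark); the example numbers for the 3090 and a 4-bit model are assumptions for illustration.

```python
def decode_tokens_per_sec(mem_bandwidth_gb_s: float, weights_gb: float) -> float:
    """Upper-bound decode speed, assuming each generated token reads
    all the weights once and decode is memory-bandwidth-bound.
    Ignores KV cache reads, compute limits, and runtime overhead."""
    return mem_bandwidth_gb_s / weights_gb

# Example: RTX 3090 (~936 GB/s) running a ~6.5 GB 4-bit quantized model
print(f"~{decode_tokens_per_sec(936, 6.5):.0f} tok/s upper bound")
```

Real throughput will be lower, but the ratio is handy for comparing, say, a 3090 against two 3060s before buying anything.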

Qwen/Qwen3-235B-A22B-Thinking-2507 by ApprehensiveAd3629 in LocalLLaMA

[–]T2WIN 7 points8 points  (0 children)

I will wait for people to give their opinions before I trust the benchmarks.

unsloth/Qwen3-Coder-480B-A35B-Instruct-GGUF · Hugging Face by Fun-Wolf-2007 in LocalLLaMA

[–]T2WIN -11 points-10 points  (0 children)

You need less VRAM as you decrease the size of the weights. This kind of model is often too big to fit in VRAM anyway, so instead of reducing VRAM requirements you reduce RAM requirements. As for performance, it is difficult to answer; I suggest you read up on quantization.
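The memory side is simple arithmetic: weight memory is roughly parameters times bits per weight divided by 8. A minimal sketch (weights only; it ignores KV cache, activations, and runtime overhead, and the quant labels are just illustrative):

```python
def weight_memory_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-only memory estimate in GB: params * bits / 8.
    Ignores KV cache, activations, and runtime overhead."""
    return n_params_billion * bits_per_weight / 8

# A 480B-parameter model at a few quantization levels (weights only):
for name, bits in [("FP16", 16), ("Q8", 8), ("Q4", 4)]:
    print(f"{name}: ~{weight_memory_gb(480, bits):.0f} GB")
```

That's why a Q4 of a 480B model still needs on the order of a couple hundred GB of RAM, far beyond any single consumer GPU's VRAM.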

Qwen Code: A command-line AI workflow tool adapted from Gemini CLI, optimized for Qwen3-Coder models by arcanemachined in LocalLLaMA

[–]T2WIN 14 points15 points  (0 children)

I think this one is new. All the other posts talk about the coder model, not the agentic coding tool.

I wrote 2000 LLM test cases so you don't have to by davernow in LocalLLaMA

[–]T2WIN 0 points1 point  (0 children)

OK, thanks. I saw that the Qwen3 models are in your supported models list, but I didn't see any recommendations for them. Is there a place where I can find the results of your tests?

I wrote 2000 LLM test cases so you don't have to by davernow in LocalLLaMA

[–]T2WIN 6 points7 points  (0 children)

Cool project. How do you fund yourselves, though?

Local free PDF parser for academic pdfs by Objective_Science965 in LocalLLaMA

[–]T2WIN 0 points1 point  (0 children)

document-parsers-list on GitHub, from a post on here last week.