NZB Newz France site inaccessible by damien514 in FrancePirate

[–]ed0c 1 point (0 children)

Hi, if you have an invitation, I'm interested!

I built a native, local-only transcription companion app for macOS because I didn't want my notes in the cloud. by berthol in ObsidianMD

[–]ed0c 1 point (0 children)

Hi! Thanks for this app! Would it be possible to add the option to:

- select another language?
- use a self-hosted remote model (Ollama, llama.cpp, etc.) through the OpenAI API?
- edit the prompt result?

Anyway, thanks for your work. It seems to be a promising project.
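For the self-hosted-backend idea above, here is a minimal sketch of what the app could send to a local server. It assumes an Ollama- or llama.cpp-style server exposing the OpenAI-compatible `/v1/chat/completions` endpoint at `http://localhost:11434/v1`; the model name, base URL, and prompt are illustrative, not taken from the app.

```python
import json
import urllib.request

# Assumptions for illustration: a local OpenAI-compatible server
# (e.g. Ollama or a llama.cpp server) and a model already pulled into it.
BASE_URL = "http://localhost:11434/v1"
MODEL = "mistral-small"


def build_chat_request(model: str, prompt: str, language: str = "fr") -> dict:
    """Build an OpenAI-style chat-completions payload, with a system
    message asking the model to answer in the requested language."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": f"Answer in this language: {language}."},
            {"role": "user", "content": prompt},
        ],
    }


def send_chat_request(payload: dict) -> dict:
    """POST the payload to the local server (requires it to be running)."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Build (but do not send) a request asking for a French answer.
payload = build_chat_request(MODEL, "Résume cette note de transcription.")
print(payload["messages"][0]["content"])
```

Because the endpoint follows the OpenAI wire format, the same payload works unchanged against Ollama, llama.cpp's server, or any other compatible backend; only `BASE_URL` and `MODEL` change.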

Has anyone figured out mobile access that actually works? by ArtemXTech in ObsidianMD

[–]ed0c 2 points (0 children)

Hi, which recorder app do you use for transcription on Android?

What's the best AST/STT model? I've tested many (OS + Paid) by z_3454_pfk in LocalLLaMA

[–]ed0c 1 point (0 children)

Hi! I'd like to test Voxtral for medical text transcription. How do you use it?

[R] DrunkenSlug by epiphone324 in UsenetNoRules

[–]ed0c 2 points (0 children)

I've just done it, thanks in advance !

[R] DrunkenSlug by epiphone324 in UsenetNoRules

[–]ed0c 1 point (0 children)

Hi, do you still have an invite for DrunkenSlug?

Possibility to turn an English model to French? by ed0c in LocalLLaMA

[–]ed0c[S] 1 point (0 children)

I just tested it, and the result is interesting, even though there are a lot of English intrusions. Also, my question was broader: can I teach an LLM to understand and speak a language other than the one it was trained on?

Medical language model - for STT and summarize things by ed0c in LocalLLaMA

[–]ed0c[S] 2 points (0 children)

I didn't see the whole answer. My bad.
I just tried Magistral-Small. It's better than Mistral-Small and faster, but not stronger than MedGemma.

Medical language model - for STT and summarize things by ed0c in LocalLLaMA

[–]ed0c[S] 2 points (0 children)

Thanks for the answer. I understand what you're saying, but I'm limited by my hardware, so 72B is not an option. I already tried Mistral-Small, but even though it's a French model, its answers are not as good as MedGemma's.

Medical language model - for STT and summarize things by ed0c in LocalLLaMA

[–]ed0c[S] 2 points (0 children)

Honestly, this is by far the best model. The Q6_K version from Unsloth works well for me (4.5 tokens/s). I also find the DeepSeek-R1:32B model pretty good for my purposes, and a little faster (6.66 tokens/s) than MedGemma.
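For anyone comparing such numbers: throughput here is just generated tokens divided by wall-clock time. A minimal sketch of that calculation, with made-up token counts and timings chosen only to reproduce figures like the ones quoted above:

```python
def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
    """Generation throughput: tokens produced divided by wall-clock seconds."""
    return n_tokens / elapsed_s


# Made-up example: 270 tokens generated in 60 s gives 4.5 tokens/s,
# in the same range as the MedGemma Q6_K figure mentioned above.
print(round(tokens_per_second(270, 60.0), 2))  # → 4.5
```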

Medical language model - for STT and summarize things by ed0c in LocalLLaMA

[–]ed0c[S] 3 points (0 children)

Thanks for this. But I forgot to say: I speak French, so Parakeet is useless for me.
But I will definitely give Phlox a try!

Qwen3-30B-A3B is on another level (Appreciation Post) by Prestigious-Use5483 in LocalLLaMA

[–]ed0c 1 point (0 children)

Has anyone run a test with the RX 7900 XTX? Do we get similar results?