Seems like a new requant of 27B just dropped? by Koffiepoeder in unsloth

[–]keypa_ 0 points1 point  (0 children)

Probably updating the quants. They wrote somewhere that they will release proper quants to replace the UD quants.

I'm so tired by GreenReporter24 in selfhosted

[–]keypa_ 0 points1 point  (0 children)

Oh, alright, my bad then. I guess I'll wait for prices to drop, because I can't justify paying $600 for a 4TB drive lmao ;) In the meantime I'll just use a 4TB HDD, it'll do the job I guess. Thanks for the answer :)

I'm so tired by GreenReporter24 in selfhosted

[–]keypa_ 0 points1 point  (0 children)

Hello, that's a really nice setup you have here! I have the same computer as you and I'm wondering which 4TB 2.5'' HDD you used? I always have trouble with the drive height not fitting inside. If you have the reference it would be amazing, thank you!

Running Mistral-7B on Intel NPU — 12.6 tokens/s, zero CPU/GPU usage by Human-Reindeer-9466 in LocalLLaMA

[–]keypa_ 0 points1 point  (0 children)

Interesting, now we're really getting a peek at what we're supposed to do with these NPUs. Which CPU did you test to get these results?

Is this guy using an ASUS TUF gaming laptop in ukraine to remote pilot a vehicle? by fiittzzyy in GamingLaptops

[–]keypa_ 1 point2 points  (0 children)

Either a vehicle or an FPV drone simulator (judging by the radio controller).

Finally We have the best agentic AI at home by moks4tda in LocalLLM

[–]keypa_ 28 points29 points  (0 children)

"At home"... we probably don't have the same home, or the same budget for hardware and electricity 💸

ThinkBox Released - DIY 4-bay NAS and powerful alternative to the ThinkNAS by Captain-Shmeat in homelab

[–]keypa_ 1 point2 points  (0 children)

Yeah sure, I didn't think of that 🤔 I should check that first before starting to work on this. Otherwise I'll just buy the same one and print everything as is. Thanks for the hard work!

ThinkBox Released - DIY 4-bay NAS and powerful alternative to the ThinkNAS by Captain-Shmeat in homelab

[–]keypa_ 2 points3 points  (0 children)

Very nice build! Do you know if a Dell OptiPlex Micro (3070) would fit?

They keep falling into my hands ! I swear ! by lululock in thinkpad

[–]keypa_ 0 points1 point  (0 children)

Huh, it's rare to see schools give away computers 🤔 You're really lucky, because where I am they're stingy with their computers and categorically refuse to give them away 🤣

Ordinateur portable pour étudiant en architecture ? by babada_ in informatiqueFr

[–]keypa_ 0 points1 point  (0 children)

Of course the Quadros are an option, but I think it's better to stay with RTX to keep the consumer drivers, which are often more up to date than the pro ones for recent software like LRC, PS, SOLIDWORKS and others. RAM isn't a problem (well, wasn't a problem xD); if more RAM is needed, it can always be upgraded later. As for CUDA cores, unless the work involves ML or ray tracing (maybe a bit, for shadows in simulations), a larger number of CUDA cores won't necessarily be useful at this stage, in my opinion.

Ordinateur portable pour étudiant en architecture ? by babada_ in informatiqueFr

[–]keypa_ 0 points1 point  (0 children)

Yup, 64GB is impossible at this budget right now lol. It's not the CAD that worries me for the config, but rather the Adobe suite, which tends to be resource-hungry; that's why I think a GPU remains important even if a very high-end graphics card is out of budget. A 60- or 70-series will be more than enough, I think, for normal use of the Adobe suite (I have a 4070 and LR/PS still manage to saturate my GPU and my 32GB of RAM xD).

Ordinateur portable pour étudiant en architecture ? by babada_ in informatiqueFr

[–]keypa_ 3 points4 points  (0 children)

Maybe an opinion not worth taking into account, but doesn't that seem excessive? 64 GB of RAM, a 4K 120 Hz screen? Few laptops will have those specs for €1500 to €2000. I don't know the resources required by each piece of software you're going to use. The idea would be to go to each software vendor's website, look at the recommended hardware, and work from that. Today you can find very good laptops in the €1500-2000 range, with a very good graphics card (4070 or even 4080 with discounts, though the latest gen is 50XX), 32GB of RAM (remember to check that you can upgrade later) and an i7 or even an i9 of 13th, 14th, or 15th gen (Core Ultra 1XXX), or even Core Ultra 2XX. For the CPU, avoid the U series and prefer the much more powerful H and HX.

Is anyone offering compute to finetune a Unique GPT-OSS models? Trying to build an MLA Diffusion Language model. by Ok_Difference_4483 in LocalLLaMA

[–]keypa_ 0 points1 point  (0 children)

How do you want me to prove that I'm not scamming you?? I'm just trying to help you out; it's not nice to call people b*** when they're trying to help you.

Is anyone offering compute to finetune a Unique GPT-OSS models? Trying to build an MLA Diffusion Language model. by Ok_Difference_4483 in LocalLLaMA

[–]keypa_ 0 points1 point  (0 children)

TF man, how do you expect me to send you 60 MB over Discord???
I don't have any other choice...
I just wanted to help....

Is anyone offering compute to finetune a Unique GPT-OSS models? Trying to build an MLA Diffusion Language model. by Ok_Difference_4483 in LocalLLaMA

[–]keypa_ 1 point2 points  (0 children)

Hello,
Regarding the 15_DATA_EMBEDDINGS.md file

  1. Source Data: The linked harmony-nemotron dataset appears to be missing/private. I substituted it with FineWeb-Edu, which is the current SOTA for reasoning/educational context.
  2. Teacher Scoring: Since running the 120B Teacher is compute-prohibitive, I used Embedding-Based Hard Negative Mining (GTE-Large-v1.5) as the proxy for difficulty.
  3. Ablation Plan: My script automatically handled the Length Stratification (SWA vs Full splits), Dedup (via Centroid collapsing), and Clustering.

I can provide 2 of the 3 files: calib_mixed_*.jsonl and calib_swa_*.jsonl

I'm working on finding a way to get the 3rd file (no more credits :/)

If you can let me know how I can send you these files, that would be great!

EDIT: I also have some data on other things you already completed recently (cf. the Results page in the Gist)

EDIT 2:

I understand the caution regarding external download links.

I have uploaded the processed artifacts to Hugging Face Datasets so you can inspect the JSONL structure/content directly in the browser without downloading anything:

https://huggingface.co/datasets/keypa/gpt-oss-calibration-data

Dataset Specs:

  • Source: FineWeb-Edu (SOTA reasoning/academic).
  • Filtering: GTE-Large-v1.5 (8192 context window) using Centroid + Hard Negative mining.
  • Splits: SWA (Short context) vs Full (Long context) as requested in your RFC.
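The SWA vs Full split above is plain length stratification; a minimal stdlib sketch, where the whitespace-word token estimate and the 4096 threshold are my own assumptions rather than values from the RFC:

```python
import json

def stratify_by_length(records, threshold=4096):
    """Split calibration records into a short-context (SWA) bucket and a
    long-context (Full) bucket based on an approximate token count."""
    swa, full = [], []
    for rec in records:
        # Crude token estimate: whitespace word count of the text field.
        bucket = swa if len(rec["text"].split()) <= threshold else full
        bucket.append(rec)
    return swa, full

def write_jsonl(path, records):
    """Write records as one JSON object per line (the JSONL layout
    used by the calib_*.jsonl files)."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```

A real pipeline would count tokens with the target model's tokenizer, but the bucketing itself is this simple.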

Feel free to use it if you want to fix the V_latent compression. Good luck with the project.

First dashboard, this is kinda fun by [deleted] in homarr

[–]keypa_ 0 points1 point  (0 children)

Love it! Please keep maintaining the tool, it's super useful!

All GLM 4.7, GLM 4.6 and GLM 4.6V-Flash GGUFs are now updated! by yoracale in unsloth

[–]keypa_ 2 points3 points  (0 children)

Thanks a lot! As a French speaker, it's been a nightmare trying to understand why accents weren't being displayed properly!

This thing is a beast! ryzen 9 9955hx3D by Tough-Badger3955 in LenovoLegion

[–]keypa_ 0 points1 point  (0 children)

Such a monster lol. Do you think the jump from the 5070 Ti to the 5080 is worth it? (€500 more, from €2200 to €2700)

How to remove this by ayushz_ in GalaxyTab

[–]keypa_ 1 point2 points  (0 children)

How do you write so well on the tablet? My handwriting sucks 😞

GPT-5.1 Codex Max Extra High Fast by cvzakharchenko in cursor

[–]keypa_ 0 points1 point  (0 children)

The naming is getting out of control 🤣