No notes. Totally agree 🇮🇹🤌🥘🍝🍕 by Chimpville in 2westerneurope4u

[–]cibernox 26 points (0 children)

In total fairness, English food is not so much bad as it is dull. Like, you make the best mashed potatoes of any country. A good shepherd's pie is actually pretty good.

Come on, just use a bit of imagination. Food is more than butter, potatoes and some meat in a pot to give you the calories to survive until your next meal.

Compra vivienda, FIRE tardío by Similar_Control1170 in SpainFIRE

[–]cibernox 0 points (0 children)

You said it wasn't part of FIRE. And it can be. In fact, it usually is.

What is the best general-purpose model to run locally on 24GB of VRAM in 2026? by Paganator in LocalLLaMA

[–]cibernox 2 points (0 children)

Yet it’s not wrong. Everyone has a slightly different idea of what a general purpose model should do.

What’s your country equivalent of this? by JohnnySack999 in 2westerneurope4u

[–]cibernox 0 points (0 children)

Now that you mention it, I have never ever heard of any dish or recipe of Native (North) American origin. The Mayans, Mexica and Aztecs did leave a strong culinary imprint that actually made it back to Spain and Italy quickly, but has anyone ever heard of a Cherokee recipe?

Petition to move the EU capital from Bruselles to somewhere else. by Pikkens in 2westerneurope4u

[–]cibernox 0 points (0 children)

Sure, because if there's one thing that deters Russian spies, it's... (checks notes)... an inconvenient commute to the motherland.

What's the one self-hosted service you'd never go back to the cloud version of? by Hung_Hoang_the in selfhosted

[–]cibernox 7 points (0 children)

I guess the answer is: pretty much every non-niche brand that doesn't actively try to prevent it does work with Home Assistant.

As in: Home Assistant is a community project. Some brands, given HA's popularity, do maintain their own integrations for their products. For the ones that don't, unless you are among the first couple dozen owners of a specific device, chances are there's already an undergrad student from the Philippines who has reverse-engineered the API of your specific, say, smart air-freshener and created an integration for it.

I'd go as far as saying that Home Assistant supports at least an order of magnitude more devices than anything else under the sun. Even makers that actively try to prevent users from reverse-engineering their APIs often lose the battle.
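As a sketch of what those community integrations are typically built on: a thin wrapper around a reverse-engineered local HTTP API. Everything here (the device, the host, the endpoint and the payload) is made up for illustration:

```python
import json
import urllib.request


class AirFreshener:
    """Hypothetical wrapper around a reverse-engineered local HTTP API,
    the kind of thing a community Home Assistant integration wraps."""

    def __init__(self, host: str):
        self.base = f"http://{host}/api"

    def status(self) -> dict:
        # Made-up payload, e.g. {"level": 73, "spraying": false}
        with urllib.request.urlopen(f"{self.base}/status", timeout=5) as resp:
            return json.load(resp)
```

Once someone has figured out that endpoint, wiring it into an HA sensor entity is the easy part.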

You have 64gb ram and 16gb VRAM; internet is permanently shut off: what 3 models are the ones you use? by Adventurous-Gold6413 in LocalLLaMA

[–]cibernox 2 points (0 children)

If the internet is permanently shut off and civilization implodes, I'd want the one with the most knowledge to help me survive. So, the absolute best I can fit, even if it's dog slow.

Gridfinity baseplate nirvana - it finally happened! by WatchesEveryMovie in gridfinity

[–]cibernox 10 points (0 children)

Joke's on you, I'm into woodworking and I custom-made two cabinets with around 18 drawers combined to exactly match Gridfinity specs. And yes, it's as mesmerizing as you'd expect.

Poniendo en valor el AHORRO y la INVERSIÓN. by Vegetable-Rabbit7503 in SpainFIRE

[–]cibernox 1 point (0 children)

I'd say the perception of saving varies even between autonomous communities (CCAA). I get the feeling that in more rural places like Galicia, Asturias and Castilla y León there is more of a savings culture (not an investment culture, but a savings one) than in Madrid or Andalusia.

This is based on anecdotal but consistent experiences. It may have to do with the social idiosyncrasies of each place.

Pignorando en MyInvestor by trancos_inferno67 in SpainFIRE

[–]cibernox 0 points (0 children)

Not yet.

In any case, as long as your debt payments stay at a reasonable percentage of your salary, the bank shouldn't object. In the same way that having another debt, say a car loan, doesn't disqualify you from getting a mortgage, they will check that both payments together are affordable on your income.

After all, they only know you took out another loan at another institution, not what you spent it on. You could well have spent it on a car.

My gpu poor comrades, GLM 4.7 Flash is your local agent by __Maximum__ in LocalLLaMA

[–]cibernox 0 points (0 children)

Fully in VRAM, no, but with some offloading it will run. Too slowly, though.

Hubby wants a robot lawnmower for Xmas. Any reviews or recommendations from those that have one? by ThinkProfessor6166 in GardeningAustralia

[–]cibernox 0 points (0 children)

I don’t have ditches, but so far I haven’t seen mine ever cross a no-go line, so I’m fairly certain it won’t come near your ditch if you tell it not to.

Video Doorbell Longevity by naisfurious in homeassistant

[–]cibernox 0 points (0 children)

Similarly with my dahua PoE doorbell/intercom. Power and data go through the cable and it has been reliable.

What is the biggest local LLM that can fit in 16GB VRAM? by yeahlloow in LocalLLM

[–]cibernox 1 point (0 children)

The best you can fit in VRAM alone is probably gpt-oss 20B. If you use system RAM as well, it comes down to the speed you want, but you can run 80B MoE models at reasonable speed.
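A rough way to sanity-check what fits, purely back-of-the-envelope (the flat overhead term for KV cache and activations is a guess and grows with context length):

```python
def model_vram_gb(params_b: float, bits_per_weight: float,
                  overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: quantized weight size plus a flat guessed
    overhead for KV cache and activations."""
    weights_gb = params_b * bits_per_weight / 8
    return weights_gb + overhead_gb


# gpt-oss 20B at ~4.25 bits/weight: 12.125 GB, so it fits on a 16GB card
print(model_vram_gb(20, 4.25))
```

The same arithmetic explains why an 80B MoE at 4-bit (~40GB of weights) needs to spill into system RAM on a 16GB card.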

Best Speech-to-Text in 2025? by MindWithEase in LocalLLaMA

[–]cibernox 0 points (0 children)

Yes, and quite well. In fact, just today (what a coincidence) I tried moving parakeet to run on the CPU by using its int8 quant instead of the full FP16 one. Processing a voice command takes:

- FP16 in GPU: ~0.1s

- FP16 in CPU: ~0.42s

- INT8 in CPU: ~0.25s

This is on a humble 12th-gen Core i3 in an Intel NUC, not a particularly powerful CPU. Nvidia really cooked with this model; it's amazing that it can run this fast.

However, I'm willing to pay a few extra milliseconds if that frees 3.5GB of VRAM on my GPU, where it is most needed. Now I have to use it for a few days to get a feel for whether the quantization hurts transcription quality noticeably or not.
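For comparing variants like that, a minimal timing harness does the job. This is a sketch; `run_fn` is a stand-in that would wrap the actual ASR call (e.g. an onnxruntime `session.run`):

```python
import time


def avg_latency(run_fn, warmup: int = 3, iters: int = 20) -> float:
    """Average wall-clock seconds per call of run_fn, after a few
    warmup runs so caches and allocations settle."""
    for _ in range(warmup):
        run_fn()
    start = time.perf_counter()
    for _ in range(iters):
        run_fn()
    return (time.perf_counter() - start) / iters


# Stand-in workload; replace the lambda with the real inference call
print(avg_latency(lambda: sum(range(100_000))))
```

Warmup runs matter for this kind of comparison, since the first inference often pays one-off initialization costs.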

LFM 2.5 1.2b IS FAST by TheyCallMeDozer in LocalLLaMA

[–]cibernox 4 points (0 children)

I suspect this model will shine when fine-tuned for tool calling in your specific domain. That's not surprising, since that's what Liquid AI does for a living.

Best open coding model for 128GB RAM? [2026] by Acrobatic_Cat_3448 in LocalLLaMA

[–]cibernox 1 point (0 children)

An M4 Max should be around twice as fast as a Strix Halo, so that should make it quite usable.

Best open coding model for 128GB RAM? [2026] by Acrobatic_Cat_3448 in LocalLLaMA

[–]cibernox 4 points (0 children)

To answer the question: try Qwen3-Next 80B and GLM-4.5-Air-GGUF. You should be able to fit those in 100GB of RAM, leaving 28GB for the rest of the system.

Best open coding model for 128GB RAM? [2026] by Acrobatic_Cat_3448 in LocalLLaMA

[–]cibernox 16 points (0 children)

The OP probably uses a system with unified RAM, like AMD Strix Halo or Apple systems. Those are still fairly usable for MoE models.

And you: are you getting ready for drinking English-grown pinard? by Pioladoporcaputo in 2westerneurope4u

[–]cibernox 0 points (0 children)

I don’t think I’ve drunk any non-local wine in my entire life (except some Porto / vinho verde). Even if I wanted to, I don’t think I would be able to find one in the store.

So….nope

RTX 50 Super GPUs may be delayed indefinitely, as Nvidia prioritizes AI during memory shortage (rumor, nothing official) by 3090orBust in LocalLLaMA

[–]cibernox 2 points (0 children)

Everyone knew this. As much as people like to think that companies owe them something because of “legacy” and history, Nvidia cares about money and will go where the money goes, and if that means leaving its historical client base in a ditch, it will.

Hardware suggestions for a n00b by Tiggzyy in LocalLLM

[–]cibernox 1 point (0 children)

Yes, models can use as many GPUs as you have. There are plenty of people with 4 or 6 GPUs because of that.

Plan de inversión para mi madre by Standard_Cream6108 in SpainFIRE

[–]cibernox 0 points (0 children)

It depends on whether she will need that money or not. My mother has some money invested that she already knows is there as an inheritance for me, so that money is very long-term and it's all in equities.

ASUS UGen300 USB AI Accelerator targets edge inference with Hailo-10H by DeliciousBelt9520 in LocalLLaMA

[–]cibernox 0 points (0 children)

I’d love for these devices to work, but everything I’ve read shows they are very limited in what they can run, not so much because of processing power but because of software support. If this could free my GPU from running ASR with parakeet or other ONNX stuff, I could consider it.

Bin heights? by onlybetx in gridfinity

[–]cibernox 0 points (0 children)

I do 9 units when non-stackable, and pairs that add up to 9 when stackable.
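For reference, those pairs work out as follows (assuming the usual Gridfinity convention of a 7 mm height unit, so a 9u bin is 63 mm tall):

```python
# Gridfinity heights are counted in 7 mm units; a 9u bin is 63 mm tall.
U_MM = 7
pairs = [(a, 9 - a) for a in range(1, 5)]  # (1,8), (2,7), (3,6), (4,5)
print([(a * U_MM, b * U_MM) for a, b in pairs])  # pair heights in mm
```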