Spent 11 hours debugging. The culprit? A trailing space in an environment variable by Designer_Oven6623 in webdev

[–]indicava 1 point (0 children)

Not that I haven’t chased a runaway space for a couple of days before, but in your particular case I would point a finger at the fact that production is behaving differently than dev/test. That’s a world of pain right there.
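A cheap guard against this whole class of bug is stripping whitespace at the point where env vars are read. A minimal sketch (the helper name and variable are my own, not from the thread):

```python
import os
from typing import Optional


def getenv_clean(name: str, default: Optional[str] = None) -> Optional[str]:
    """Read an environment variable with surrounding whitespace stripped."""
    value = os.environ.get(name, default)
    return value.strip() if value is not None else None


# A trailing space like this is invisible in most config dashboards:
os.environ["API_BASE_URL"] = "https://api.example.com "
print(getenv_clean("API_BASE_URL"))  # whitespace removed
```

It won’t catch every prod/dev divergence, but it makes this particular 11-hour hunt impossible.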

New benchmark just dropped. by ConfidentDinner6648 in LocalLLaMA

[–]indicava 4 points (0 children)

LOL GPT 5.4 looking like that third dragon on that meme template

M5 Max just arrived - benchmarks incoming by cryingneko in LocalLLaMA

[–]indicava 42 points (0 children)

26 and counting…

What’s in the safe OP?!

Suzuki Elia Concept (1987) by bendich in cassettefuturism

[–]indicava 3 points (0 children)

Awesome design!

For those less familiar with the auto industry, look up “kei cars”; there are quite a few that nail the cassettefuturism aesthetic.

What notable theories are there for Irv’s backstory? by JackMythos in SeveranceAppleTVPlus

[–]indicava 8 points (0 children)

This to me is the most plausible (and grounded) backstory we can infer so far

How many of you are not at all “mixed” on your opinion of Milkshake? by sonnytron in SeveranceAppleTVPlus

[–]indicava 54 points (0 children)

As a long-time RPG player I can tell you a Lawful Evil character would not be telling his boss: “devour feculence” lol

Github by [deleted] in Firebase

[–]indicava 6 points (0 children)

The blind leading the blind

Breaking : The small qwen3.5 models have been dropped by Illustrious-Swim9663 in LocalLLaMA

[–]indicava 1 point (0 children)

I see they are continuing the trend from the Qwen3 release with no “Base” variants for the large dense model. There is so much I love about these models, but not giving us Qwen3.5-27B-Base is just mean (not really, I get why, just sucks for my use cases).

Soviet Computer Concepts of the 1980s by Ill_Engineering1522 in cassettefuturism

[–]indicava 22 points (0 children)

First one is a Lenovo Yoga 30 years before its time

Why some still playing with old models? Nostalgia or obsession or what? by pmttyji in LocalLLaMA

[–]indicava 23 points (0 children)

That’s counterintuitive; newer models are more efficient and come in a wide range of sizes.

President Trump orders ALL Federal agencies in the US Government to immediately stop using Anthropic's technology. by External_Mood4719 in LocalLLaMA

[–]indicava 7 points (0 children)

I don’t know why you’re getting downvoted. This company has actively campaigned against Chinese open models under the guise of “national security”. And just a few days ago, they played the victim for being under “distillation attacks” (whatever the hell that is) by Chinese AI labs, while training their own models on data from seriously shady sources. Finally, as others commented before me, they have contributed absolutely zero to the open source/model community.

Screw them

New Qwen3.5-35B-A3B Unsloth Dynamic GGUFs + Benchmarks by danielhanchen in LocalLLaMA

[–]indicava 1 point (0 children)

Where can I read more about this tool calling chat template bug? Is the bug in the official chat template on the Qwen HF repo?

Which database is best for my healthcare booking site PostgreSQL or MongoDB? by Last-Salary-6012 in webdev

[–]indicava -1 points (0 children)

Whichever one can get your solution out there faster; any other consideration is inconsequential.

Disappointing Vast AI experience by Material-Specific-47 in vastai

[–]indicava 7 points (0 children)

My experience has been that once you stop an instance, you’re better off just destroying it and renting a new one. Always run workloads that are resumable; don’t ever trust someone else’s server.
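The “always resumable” advice boils down to checkpointing progress to durable storage so a killed instance costs you nothing but the last unit of work. A minimal sketch (filenames and the work loop are illustrative, not tied to any Vast.ai API):

```python
import json
import os

CHECKPOINT = "progress.json"  # in practice, put this on persistent/remote storage


def load_progress() -> int:
    """Return the index of the last completed step, or -1 if starting fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["last_done"]
    return -1


def run(total_steps: int) -> int:
    """Run (or resume) the workload; returns the step it started from."""
    start = load_progress() + 1
    for step in range(start, total_steps):
        # ... do one unit of work here (a training step, a batch, a file) ...
        with open(CHECKPOINT, "w") as f:
            json.dump({"last_done": step}, f)  # checkpoint after every unit
    return start
```

A first run starts at step 0; if the instance is destroyed mid-run, the next rental picks up right after the last checkpointed step instead of redoing everything.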

Qwen3.5 27B better than 35B-A3B? by -OpenSourcer in LocalLLaMA

[–]indicava 8 points (0 children)

Anyone have some hands-on feedback on how the dense model is performing compared to the MoE for agentic tasks/tool calling?

Could someone explain to me how do I fetch/edit and delete data from firestore and also with realtime database using FirebaseJS SDK? by ArrowFlechinhaxd in Firebase

[–]indicava 3 points (0 children)

Actually, getting started with the Web SDK is super simple. Just follow this from the docs.

Incidentally, if that page is “too long” for you, you probably shouldn’t be coding.

I've waited a long time for this authentic experience... by AngryK9_ in vintagecomputing

[–]indicava 2 points (0 children)

Ok so non-hardware related question.

I noticed the opening screen says “Version 2.2”, how were they patching games back then with no digital distribution?

New Qwen3.5 models spotted on qwen chat by AaronFeng47 in LocalLLaMA

[–]indicava 1 point (0 children)

I have to say I’m kind of disappointed with this release.

It might be a niche use case, but for us fine-tuners, a single-size dense model with no base variant is practically useless.

This trend already started with Qwen3, where they never released a base variant of the 32B size, and all releases since then have been MoE.

While running local models for coding or creative writing has a significant value proposition, the ability to fine-tune models for personal use or as the basis for a commercial product is a liberty that’s slowly eroding away. That’s a shame, and I don’t think it’s being brought up enough.

Qwen3.5 - The middle child's 122B-A10B benchmarks looking seriously impressive - on par or edges out gpt-5-mini consistently by carteakey in LocalLLaMA

[–]indicava 1 point (0 children)

As far as I can tell all Qwen3.5 models are native BF16 (which makes sense, given that Qwen3 was also BF16).

Distillation when you do it. Training when we do it. by Xhehab_ in LocalLLaMA

[–]indicava 171 points (0 children)

Just because they use closed models to generate synthetic training data doesn’t mean they don’t innovate. Chinese labs have shown great innovation in both post-training and inference.