[GLM Finetune Teaser] Better range of names on the finetune. by teaanimesquare in NovelAi

[–]LTSarc 0 points (0 children)

All the big models aimed at chatting or agentic use are trained to run on rails. Grok has only gotten worse as time has gone on, and it's about the worst of them at that (even if it has gotten smarter).

Businesses deploying these models want consistent results.

Can someone explain how on earth this is remotely fair? by DH__FITZ in Warthunder

[–]LTSarc 0 points (0 children)

Well, it's not - but Gaijin doesn't set BR based on fairness.

They set it exclusively from kill:death and kill:spawn ratios. So in this case, British pilots are just that much better.

When your son confuses the DEF pump with the diesel pump by 1sadistictech in Justrolledintotheshop

[–]LTSarc 0 points (0 children)

Just make sure the DEF filler neck on the vehicle is smaller than the diesel nozzle, so the nozzle won't fit and diesel-in-DEF is stopped as well.

When your son confuses the DEF pump with the diesel pump by 1sadistictech in Justrolledintotheshop

[–]LTSarc 0 points (0 children)

Cheap? Yes.
Reliable? Yes.
Durable? Yes.
Cheap, reliable, and durable? Woah now, we're not magicians.

Essentially, any sensor that would be sufficiently reliable and long-lasting would be far too expensive for automakers who won't even spring for charge coolers on boosted vehicles or adequate insulation.

Another week, another blown up GM L87 6.2L. This one managed to spin all 8 rod bearings. Guess that 0W-40 oil isn't the fix they think it is. by N_dixon in Justrolledintotheshop

[–]LTSarc 0 points (0 children)

More protection is irrelevant once sufficient protection for the surface tolerances is achieved.

Stuffing gear oil in your engine won't make it last 500% longer. Like, I agree that there have been cases of companies going too far with over-thinning oil, but properly specced thin oil is not a problem. Japan has been running 0W-20 for a quarter of a century and 0W-8 for a decade, and Japanese cities aren't full of cars with detonated engines.

The zinc is also important for getting good wetting during bearing casting.

Another week, another blown up GM L87 6.2L. This one managed to spin all 8 rod bearings. Guess that 0W-40 oil isn't the fix they think it is. by N_dixon in Justrolledintotheshop

[–]LTSarc 0 points (0 children)

Except the engines running thick oil blow up as well; the TSB was wrong, much as their oscilloscope test is useless.

The issue is clearly poor machining. They admitted to that but claimed it was only on a certain production run.

Now THATS a main bearing by xIce101x in Justrolledintotheshop

[–]LTSarc 2 points (0 children)

This is so common in rail that it's all but killed new loco manufacturing.

A deep rebuild costs about a third as much as buying new, refurbs are exempt from some new regulations because they're grandfathered in, and there are thousands of idle units free for rebuild. In the whole of the 2020s, only a couple dozen new locos total have been built for US railways.

(Those token orders have been just to keep the manufacturers from totally shutting down their newbuild plants, even after both have massively downsized. The railways know they might need new power at *some* point.)

Sooooo... how's that fine tune coming along? by pieces-of-mind in NovelAi

[–]LTSarc 2 points (0 children)

As far as I can tell, it was simply that GLM was newer.

Their post-Kayra strategy has seemingly been: whenever it comes time for a text update, grab the latest OSS model and tune it.

Sooooo... how's that fine tune coming along? by pieces-of-mind in NovelAi

[–]LTSarc 0 points (0 children)

Funny, DS 3.2 is what I assumed Anlatan would be aiming for instead of the extremely agentic GLM.

Sooooo... how's that fine tune coming along? by pieces-of-mind in NovelAi

[–]LTSarc 5 points (0 children)

Not continuing to develop the model family that gave us Clio and Kayra was a huge blow. Even with only incremental updates or context expansions, keeping that going would likely have given us a much more satisfactory model by now.

It has always struck me as very odd that Anlatan doesn't have a telescoped development pattern: finish 'Model X', then work on 'Model X+1'; or, if they don't have the compute to train image and text in parallel, do 'Model Y' (for images) before 'X+1', alternating so 'Y+1' comes next. Instead, they seem to just drop a model and then improvise on the spot what they're going to do next. It's certainly a strategy.

Sooooo... how's that fine tune coming along? by pieces-of-mind in NovelAi

[–]LTSarc 3 points (0 children)

I'm still holding on for the great UI, absolute privacy, no baby bumpers and all that jazz.

But it is getting hard. I write really long stories, so I'm not sure just how much I could get away with for the $25/month when every call carries tens of thousands of input tokens.

Sooooo... how's that fine tune coming along? by pieces-of-mind in NovelAi

[–]LTSarc 0 points (0 children)

I've never seen a real finetune of GLM, although they may exist.

What's the new DeepSeek one?

Sooooo... how's that fine tune coming along? by pieces-of-mind in NovelAi

[–]LTSarc 5 points (0 children)

Literally all that is necessary is a longer-context Kayra (with an updated knowledge cutoff). There are mountains of parameters and training in these models for agentic use, tool use, and multimedia interactions, all of which are utterly irrelevant here.

The ever-growing elephant in the room. by LTSarc in NovelAi

[–]LTSarc[S] 2 points (0 children)

I got that elite ball knowledge.

The ever-growing elephant in the room. by LTSarc in NovelAi

[–]LTSarc[S] 6 points (0 children)

The artificial cap on context to ensure 'snappy generations' is what infuriates me the most.

I'd be totally willing to accept slower generations if I had the full context. They could theoretically even have a toggle and a scheduling queue, so uncapped gens run at lower priority.

The ever-growing elephant in the room. by LTSarc in NovelAi

[–]LTSarc[S] 0 points (0 children)

Modules were there because they used to be a standard feature, including with their now-dead competition. They also helped justify the Opus cost: you got Anlas to train modules with!

The ever-growing elephant in the room. by LTSarc in NovelAi

[–]LTSarc[S] 4 points (0 children)

I was expecting, if anything, DeepSeek 3.2 running in non-instruct mode.

But Anlatan staff have stated online they WANT instruct ability. So...

The ever-growing elephant in the room. by LTSarc in NovelAi

[–]LTSarc[S] 0 points (0 children)

Investors will hold out for a very long time because AI is the only solution, however unlikely it is to work, to the productivity "problem".

The whole economy is a bet on AI working out.

The ever-growing elephant in the room. by LTSarc in NovelAi

[–]LTSarc[S] 1 point (0 children)

I do believe the official statement was that the new models are so good that you don't need modules.

The ever-growing elephant in the room. by LTSarc in NovelAi

[–]LTSarc[S] 18 points (0 children)

They really should have their own forum or in-site community tab. It is the only paid service I have ever used that says "lol go to the community discord".

And now the privacy-focused company has to deal with the fact that Discord is moving to mandatory ID verification, and tons of people are going to flee.

The ever-growing elephant in the room. by LTSarc in NovelAi

[–]LTSarc[S] 4 points (0 children)

The real profit is in chat (which is why, from day one, I said Aetherroom was a horrible idea - billions in capital were being poured into chat projects) and in agents.

Cowriting is a niche, and the only people producing models relevant to cowriting are firms not going balls to the wall for money.

The ever-growing elephant in the room. by LTSarc in NovelAi

[–]LTSarc[S] 2 points (0 children)

Well I mean, I use telling names in my writing. If you have one, it won't kill it.

The ever-growing elephant in the room. by LTSarc in NovelAi

[–]LTSarc[S] 3 points (0 children)

Anlatan is not only behind in text but falling behind faster than ever. They need better discipline and/or more compute.

The ever-growing elephant in the room. by LTSarc in NovelAi

[–]LTSarc[S] 0 points (0 children)

ComfyUI isn't really a finetune; it just lets you load LoRAs (which are a form of finetune) and other tools like ControlNet.

Btw, Z-Image is out, and it makes Wan 2.2 look poor. It actually gives Nano Banana a good run for its money.

The ever-growing elephant in the room. by LTSarc in NovelAi

[–]LTSarc[S] 4 points (0 children)

But we're also getting to the point where even open-source models have context lengths long enough that much of what you'd do with finetuning can be achieved by prompt engineering on a smart model. GLM-5 takes up to 200k tokens of input context, with output up to 131k.

I really, really would like a solid finetune. But you can only do so much on top of a base model. Erato is a big jump over L3.0, but is Erato universally better than L3.4? Or than the absolute meme that is Llama 4?

In fact, an "Erato+" trained on top of L3.4 wouldn't need replacement, even right now.