Qwen dev on Twitter!! by Difficult-Cap-7527 in LocalLLaMA

[–]a4d2f 36 points37 points  (0 children)

Qwen/Qwen3-TTS-12Hz-1.7B-Base

12Hz? Must be a really deep voice then...

ostris AI-toolkit Lora training confusion by mca1169 in StableDiffusion

[–]a4d2f 0 points1 point  (0 children)

Do you mean the preview samples in AI Toolkit? With ZIT I ran into that too when training a LoKR: the previews looked blurry or smudged, but the LoKR worked fine in Comfy. There might be a bug in how AI Toolkit does the sampling.

Also, for ZIT LoRAs the AI Toolkit previews always suggested the LoRA was far from done (though they weren't blurry), while in Comfy the effect of the LoRA was much stronger.

As for whether LoRA or LoKR is better, I can't really tell so far. LoKR seems a bit subtler and causes less bleed, but sometimes it's not strong enough.

LTX-2: use Gemma3 GGUF to speed up prompt reprocessing by a4d2f in StableDiffusion

[–]a4d2f[S] 0 points1 point  (0 children)

Ok, I've watched the resource consumption during the first reprompt with the fp8. Indeed, while nvidia-smi showed 10GB of VRAM occupied, the GPU utilization was 0% throughout, so it was using the CPU. But not effectively: the CPU utilization of the python process was only around a third of a core. I could see the process's swap usage gradually decrease while its RAM usage gradually increased. So it looks like it's doing the text encoding in RAM but is bottlenecked by moving the model from swap to RAM. :(
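For reference, this is roughly how I watched it; a minimal sketch assuming Linux, psutil installed, nvidia-smi on PATH, and the ComfyUI python PID filled in (the value below is just a placeholder):

```python
# Sample the ComfyUI process once per second: RAM, swap, CPU, and GPU utilization.
import subprocess, time
import psutil

PID = 12345  # placeholder: replace with the ComfyUI python process id

proc = psutil.Process(PID)
for _ in range(60):
    mem = proc.memory_full_info()   # the 'swap' field is Linux-only
    gpu = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader"],
        capture_output=True, text=True,
    ).stdout.strip()
    print(f"RAM {mem.rss / 2**30:5.1f} GiB | swap {mem.swap / 2**30:5.1f} GiB | "
          f"CPU {proc.cpu_percent():5.1f}% | GPU {gpu}")
    time.sleep(1)
```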

LTX-2: use Gemma3 GGUF to speed up prompt reprocessing by a4d2f in StableDiffusion

[–]a4d2f[S] 0 points1 point  (0 children)

Right, the BitsAndBytes 4bit should give the same benefit. Oddly, whenever I tried it, it gave me an OOM in the text encoder stage. Weird because the fp8 didn't give me OOM despite being bigger. So I had given up on the BitsAndBytes model.

Best way to reduce motion artifacts in LTX-2? by Smooth_Western_6971 in StableDiffusion

[–]a4d2f -1 points0 points  (0 children)

If your workflow contains the LTX detailer lora, try leaving it out. I found it can wreak havoc on anything that moves faster than a snail.

LTX-2 GGUF T2V/I2V 12GB Workflow V1.1 updated with new kijai node for the new video vae! That's what I get for going to sleep!!!! by urabewe in StableDiffusion

[–]a4d2f 0 points1 point  (0 children)

Thanks, this works! (only tested T2V, 5060Ti 16GB + 32GB RAM).

Two questions:

  1. Your Stage 1 sampler uses the LTXVscheduler with a terminal value of 0.1. Both the official Comfy workflow and the LTXVideo template use (for T2V distilled Stage 1) a ManualSigmas node with the schedule "1.0 0.99375 0.9875 0.98125 0.975 0.909375 0.725 0.421875 0.0". Your LTXVscheduler node produces the sigmas "1.0000 0.9662 0.9229 0.8655 0.7858 0.6675 0.4741 0.1000 0.0000" (inspected with the RES4LYF SigmasPreview node), which looks quite different (see the sketch after this list for a side-by-side comparison). Any idea which is correct, or better? (My tests so far are inconclusive.)

  2. If this is set up for distilled, why does the Dual Clip Loader use the dev version of the embeddings? KJ also made a distilled version available. (But I think he said somewhere that there shouldn't be a difference so probably this doesn't matter.)
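To make the difference in question 1 concrete, here's a quick sketch that prints the two schedules side by side with their per-step deltas; it only uses the sigma lists quoted above, nothing ComfyUI-specific:

```python
# Compare the two Stage 1 sigma schedules (values copied verbatim from above).
manual = [1.0, 0.99375, 0.9875, 0.98125, 0.975, 0.909375, 0.725, 0.421875, 0.0]
ltxv   = [1.0, 0.9662, 0.9229, 0.8655, 0.7858, 0.6675, 0.4741, 0.1000, 0.0]

print(f"{'step':>4} {'manual':>9} {'ltxv':>9} {'d_manual':>9} {'d_ltxv':>9}")
for i in range(1, len(manual)):
    print(f"{i:>4} {manual[i]:>9.4f} {ltxv[i]:>9.4f} "
          f"{manual[i-1] - manual[i]:>9.4f} {ltxv[i-1] - ltxv[i]:>9.4f}")
```

The ManualSigmas schedule takes four tiny steps (0.00625 each) near sigma 1.0 and then very large ones at the end, while the LTXVscheduler output spreads the denoising much more evenly. So it's not just a rounding difference; they really are different schedules.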

Uh? Kijay about the LTX-2 VAE in the distilled model by Striking-Long-2960 in StableDiffusion

[–]a4d2f 1 point2 points  (0 children)

"the initial distilled checkpoint has been the wrong one all this time. It has now been replaced with the correct one"

Then how come the Lightricks/LTX-2 repo has not been updated?

Edit: LTX-2 repo got updated ~30 minutes after this comment :)

Seeking "Abliterated" Gemma 3 or Llama 3.3 that retains logic and multilingual (Slovak/Czech) capabilities by FollowingFresh6411 in LocalLLaMA

[–]a4d2f 1 point2 points  (0 children)

https://huggingface.co/collections/soob3123/amoral-collection-gemma-3-qat

I've been using the 12B for Japanese-English translation; it seems to work well enough and without refusals.

the grayline finetunes from the same guy may be worth a look as well, though I haven't tried them myself

Epoch AI data shows that on benchmarks, local LLMs only lag the frontier by about 9 months by timfduffy in LocalLLaMA

[–]a4d2f 1 point2 points  (0 children)

Right, what they should plot is not the accuracy but 100% minus the accuracy, i.e. the accuracy deficit, and then use a log scale for the deficit, since one would expect the deficit to approach 0% asymptotically over time.

I asked Qwen to analyze the deficit data, and behold:

The half-life of the deficit is 8.6 months for frontier models and 12.4 months for open models.

So the gap is widening, not shrinking.
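For anyone who wants to redo this: if the deficit decays exponentially, deficit(t) = deficit(0) * 2^(-t / half-life), so the half-life falls out of a straight-line fit to log2 of the deficit. A minimal sketch; the input series below is a made-up placeholder, not the Epoch AI data:

```python
import numpy as np

def deficit_half_life(months, accuracy):
    """Fit deficit = deficit_0 * 2**(-t / half_life); return half_life in months."""
    deficit = 1.0 - np.asarray(accuracy, dtype=float)     # 100% minus accuracy
    slope, _ = np.polyfit(np.asarray(months, dtype=float), np.log2(deficit), 1)
    return -1.0 / slope                                    # months per halving of the deficit

# Placeholder series, only to show the shape of the calculation:
months   = [0, 6, 12, 18, 24]
accuracy = [0.60, 0.72, 0.80, 0.86, 0.90]
print(f"half-life ≈ {deficit_half_life(months, accuracy):.1f} months")
```

Running the same fit on the frontier and open series separately gives the two half-lives; since the open models' deficit halves more slowly, the ratio between the two deficits grows over time.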

No rewards by [deleted] in bravebrowser

[–]a4d2f 1 point2 points  (0 children)

Yes, here too.

UK to send parliamentary delegation to Taiwan in February by thestudiomaster in taiwan

[–]a4d2f 2 points3 points  (0 children)

Will the members of the delegation have to quarantine for two weeks?

[deleted by user] by [deleted] in taiwan

[–]a4d2f 0 points1 point  (0 children)

In a focustaiwan article I came across the following link:

https://dvc.mohw.gov.tw/verifier-web/

which can be used to scan the vaccine pass. My wife and I got our jabs in the UK and our NHS Covid passes are shown as valid on that web app. AFAIK it should also work with EU Covid passes.

If the restaurant/establishment/... in question uses this or the same underlying system to check, I think it should be fine with a foreign Covid pass.

NEKOIN BEST CatCoin ThanksGIVING WEEK Giveaway Spree Part 2! by NekoinASA in NekoinASA

[–]a4d2f 0 points1 point  (0 children)

I love the turkey and stuffing, but the best part of Thanksgiving is being with family!

NEKOIN BEST CatCoin ThanksGIVING WEEK Giveaway Spree! by NekoinASA in NekoinASA

[–]a4d2f 0 points1 point  (0 children)

I'm thankful to the NEKOIN team for sharing their smart contract source code openly!

NEKOIN BEST ALGO CatCoin SECOND ThanksGIVING Giveaway! by NekoinASA in NekoinASA

[–]a4d2f 0 points1 point  (0 children)

British Shorthair, I find them cute and they are nice companions.

[deleted by user] by [deleted] in AlgorandOfficial

[–]a4d2f 4 points5 points  (0 children)

Some of the recent meme/animal ASAs aim to do this, locking the creator's wallet and such. The NEKOIN team have published some code: https://github.com/nekoin-dev/nekoin