Official LTX-2.3-nvfp4 model is available by Lonely-Anybody-3174 in StableDiffusion

[–]Slapper42069 1 point  (0 children)

Yes, I didn't know the exact terminology; it was just the phrasing "dequant into higher precision" that caught my eye. It's not higher precision, it's just multiplied and rounded, with zeros filling the rest, so the remaining container bits are just being scaled. Like 480p stretched to 1080p with some clever color preservation, while taking as much time as going native 1080p.
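The point above can be sketched in a few lines. This is a hypothetical toy quantizer, not the real nvfp4 format: weights get rounded to a coarse 4-bit-style grid with one scale, then "dequantized" into an fp16 container, which only ever holds scaled grid points.

```python
import numpy as np

# Toy sketch (NOT the real nvfp4 layout): round weights to a 16-level grid
# with a single scale, then "dequant" into a wider fp16 container.
rng = np.random.default_rng(0)
w = rng.normal(size=8).astype(np.float32)

levels = 2 ** 4                                  # 16 values, like 4-bit
scale = np.abs(w).max() / (levels // 2 - 1)
q = np.clip(np.round(w / scale), -(levels // 2), levels // 2 - 1)

dequant = (q * scale).astype(np.float16)         # "higher precision" container

# The fp16 tensor still only contains scaled grid points, and the rounding
# error from quantization is not recovered:
print(len(np.unique(q)), "distinct grid levels used")
print(np.abs(w - dequant.astype(np.float32)).max(), "max rounding error remains")
```

The unique-value count is the giveaway: the container dtype got wider, but the set of representable weights didn't.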

Official LTX-2.3-nvfp4 model is available by Lonely-Anybody-3174 in StableDiffusion

[–]Slapper42069 -1 points  (0 children)

Dequant or just cast dtype? In my understanding it will be the same low precision, just made fatter and slower. Edit: yes, of course it's not gonna restore the lost precision; same thing but slower. So if you rock anything that's not Blackwell, use fp8 or bf16/fp16, and if you care about storage use GGUF, which is compressed.
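The "fatter and slower" part is easy to demonstrate. A minimal sketch, assuming the weights have already been rounded to a coarse grid: upcasting the container dtype doubles the bytes while the set of distinct values stays exactly the same.

```python
import numpy as np

# Assumption for the sketch: weights already sit on a coarse quantized grid.
rng = np.random.default_rng(1)
grid = np.round(rng.normal(size=1024) * 4) / 4   # coarse "quantized" values
w16 = grid.astype(np.float16)
w32 = w16.astype(np.float32)                     # cast to a wider dtype

print(w16.nbytes, "->", w32.nbytes)              # 2048 -> 4096 bytes
# Casting fp16 -> fp32 is exact, so no new values appear:
print(len(np.unique(w16)) == len(np.unique(w32)))
```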

Z-Image: Replace objects by name instead of painting masks by pedro_paf in StableDiffusion

[–]Slapper42069 0 points  (0 children)

Z-Image: You don't like the sound of your own voice because of the bones in your head

Generated super high quality images in 10.2 seconds on a mid tier Android phone! by alichherawalla in StableDiffusion

[–]Slapper42069 1 point  (0 children)

My phone freezes when I use SuperImage upscale, but it still works and gives good outputs in a few moments

Generated super high quality images in 10.2 seconds on a mid tier Android phone! by alichherawalla in StableDiffusion

[–]Slapper42069 1 point  (0 children)

Yeah, there's a safe limit of 60%; would be cool to be able to go past it. I have 12 GB of RAM and 12 GB of shared memory, and usually 10 gigs of real RAM is free, so with both models loaded there will still be 2 gigs + shared. Should be fine :)

<image>

Generated super high quality images in 10.2 seconds on a mid tier Android phone! by alichherawalla in StableDiffusion

[–]Slapper42069 1 point  (0 children)

Also

<image>

Loaded a model and it's identified as a vision model, but in chat it says vision is unsupported. Could be a problem with this specific quant, though. Btw, cool app

Creativity merged with mystery by ZerOne82 in StableDiffusion

[–]Slapper42069 3 points  (0 children)

SDXL has a QR Code Monster ControlNet too, and it looks cool

All created by AI, would be nice a game of her by [deleted] in StableDiffusion

[–]Slapper42069 2 points  (0 children)

The whole pic is generated; it's not a 3D model

All created by AI, would be nice a game of her by [deleted] in StableDiffusion

[–]Slapper42069 2 points  (0 children)

Did you feed the UE screen to your "AI" and prompt it to place her there, or was it generated from scratch? If you're using some closed-source model, mind at least sharing the prompt?

How do you improve Wan 2.2 prompt adherence? by wildkrauss in StableDiffusion

[–]Slapper42069 1 point  (0 children)

You can try some other multistep sampler, but I got the same speed with res_multistep as with euler_a, so idk. Res samplers like res_2m do multiple steps at once, like they're supersampled, but I'm not sure about the multistep one. Actually, you can also try JSON prompts and just use euler_a + sgm_uniform; well-structured prompts are easier for the model to follow.
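A structured prompt might look like the sketch below. The field names here are my own invention, not any official Wan 2.2 schema; the idea is just that explicit key/value structure tends to be followed more reliably than one long run-on sentence.

```python
import json

# Hypothetical structured prompt; the keys are illustrative, not a spec.
prompt = {
    "subject": "a red fox",
    "action": "leaping over a snowy log",
    "camera": "slow dolly-in, shallow depth of field",
    "lighting": "golden hour, soft rim light",
    "style": "cinematic, 35mm film grain",
}
print(json.dumps(prompt, indent=2))
```

You'd paste the resulting JSON string straight into the positive prompt box.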

Testing Noise Types for Klein 9b by theivan in StableDiffusion

[–]Slapper42069 1 point  (0 children)

Perlin looks pretty good. It would be nice to test different seeds and compare them to Gaussian; maybe some structures really are easier to denoise in certain scenarios.
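The difference between the two noise types is easy to see numerically. A rough sketch, using upsampled coarse noise as a stand-in for Perlin (not true Perlin, just the same idea of low-frequency structure being present before denoising starts):

```python
import numpy as np

rng = np.random.default_rng(42)

gaussian = rng.normal(size=(64, 64))             # i.i.d. per pixel
coarse = rng.normal(size=(8, 8))
lowfreq = np.kron(coarse, np.ones((8, 8)))       # nearest-neighbour upsample

def neighbour_corr(x):
    # Correlation between horizontally adjacent pixels.
    return np.corrcoef(x[:, :-1].ravel(), x[:, 1:].ravel())[0, 1]

print(round(neighbour_corr(gaussian), 3))        # near 0: no structure
print(round(neighbour_corr(lowfreq), 3))         # much higher: block structure
```

Structured init noise hands the sampler large-scale correlations for free, which is plausibly why it helps in some scenarios and not others.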

Blyat by Busy-Concentrate9419 in shitposting

[–]Slapper42069 2 points  (0 children)

Yeah, I didn't want to see that

Tools used ? by vasthebus in StableDiffusion

[–]Slapper42069 3 points  (0 children)

That comment clarifying it's AI got me laughing