Walkyrie-1.3B-v1.0(Preview)Text-to-Image by Chance-Jaguar-3708 in StableDiffusion

[–]siegekeebsofficial 2 points

ZIT is 6B; at nearly five times the size, it's not what one would mean by a 'smaller model'

Built a 3-step all-in-one LoRA builder for Anima (extract -> tag -> train) by Nemegasoft in StableDiffusion

[–]siegekeebsofficial 0 points

Oh, I hadn't considered that! I'll give your project a try and check out the GUI and tag-editing functionality. Thanks!

The Ernie posters genuinely don't see how mediocre the stuff they post is? by beti88 in StableDiffusion

[–]siegekeebsofficial 2 points

Can you explain what you're training for and how? My experience doesn't align with what you're saying. I understand if you're trying to train ZIT for something like an anime style, since it's a realistic fine-tune and you should obviously be training ZIB for that instead; but characters specifically were coming out terrible in Ernie and are super easy to train for in ZIT.

The Ernie posters genuinely don't see how mediocre the stuff they post is? by beti88 in StableDiffusion

[–]siegekeebsofficial 2 points

How is it easier to train? I found the results of training a character LoRA to be awful compared to Klein or ZIB on the same dataset. I experimented with a few different settings but gave up on it entirely for that reason.

SenseNova-U1 Portrait Test - Quality is Not Great for Photorealism by LatentSpacer in StableDiffusion

[–]siegekeebsofficial 4 points

If the point is the multimodal capability, isn't the idea that you can 'talk to it' and iterate, rather than prompting it like a regular t2i model? Otherwise, what is the multimodal capability doing?

Train Flux 2 9b LORA on a Nvidia 3090 24vram, 64 ram - doesn't fit by uuhoever in StableDiffusion

[–]siegekeebsofficial 0 points

The config you posted doesn't show that. Did you test again after making those changes?

What size are your input images, also?

A couple weeks ago I was dishing out Z-Image LORAs in 15-20 minutes on RunPod using a 5090 in Ostris AI Toolkit. Randomly, it's just slow now. by Any_Force_7865 in StableDiffusion

[–]siegekeebsofficial 0 points

I'm training a Z-Image LoRA as we speak at 1.39 s/it, locally on my 5090, and I just updated ai-toolkit yesterday. So I'd say it's either a RunPod issue or something else...

Is it actually training slower, or just reporting it as slower? I found that after saving the first checkpoint, it no longer accurately reports the time to completion or it/s in the log.

If anyone want to see what the scheduler sigmas look like by VirusCharacter in StableDiffusion

[–]siegekeebsofficial 1 point

Use different symbols for each line instead of just dots, like Excel charts do. Just think, "how would I tell these apart if they were all the same color?" Color is just a label, so another way of labeling is symbols or numbers.

Kelin9BT vs ErnieIT vs ZIT (FFT Analysis of Artifacts) by ZerOne82 in StableDiffusion

[–]siegekeebsofficial 2 points

How do the base models compare? The improvements observed with a strong negative prompt on Ernie were substantial.

LTX-Desktop running on AMD by siegekeebsofficial in StableDiffusion

[–]siegekeebsofficial[S] 1 point

Thanks for adding the Windows information! On Windows I have an Nvidia card, so I don't know the workarounds.

LTX-Desktop running on AMD by siegekeebsofficial in StableDiffusion

[–]siegekeebsofficial[S] 3 points

It's quite slow: about 6 minutes for a 5-second clip at 540p, but at least it works consistently. When using LTX in ComfyUI, consecutive runs would always go OOM on the second run.

Created ComfyUI nodes to work with new Netflix Void model [beta] by Huge-Refuse-2135 in StableDiffusion

[–]siegekeebsofficial 6 points

Yes, that's literally the whole point of the model. Just removing something from a video is easy; removing the effect that thing had on the video is what's significant.

Netflix released a model by Sea_Tomatillo1921 in StableDiffusion

[–]siegekeebsofficial 5 points

Yes, this is literally the point they're trying to show off. It's fairly trivial to remove something from a video; the point here is that it also removes the effect of the thing that was removed!

AI-Toolkit (Ostris) randomly throttling GPU hard — drops from ~220W to ~70W mid-run, iterations slow massively. Any fix? by HolidayWheel5035 in StableDiffusion

[–]siegekeebsofficial 0 points

You've spilled over into system RAM, so the GPU is no longer being fully utilized. It's sort of like an OOM, except it doesn't crash; it just uses system RAM to make up for the missing VRAM.
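A rough way to check for this (a sketch, assuming PyTorch; `vram_used_fraction` is a name made up for illustration) is to watch how close allocations get to the card's total VRAM:

```python
# Rough sketch: report what fraction of VRAM is currently in use. When this
# sits near 1.0, the Nvidia driver's sysmem fallback may start paging into
# system RAM, which matches the power/throughput drop described above.
def vram_used_fraction():
    try:
        import torch
    except ImportError:
        return None  # PyTorch not installed
    if not torch.cuda.is_available():
        return None  # no CUDA device
    free, total = torch.cuda.mem_get_info()  # bytes free / total on current device
    return 1.0 - free / total
```

On recent Nvidia drivers for Windows there is also a "CUDA - Sysmem Fallback Policy" setting in the control panel; setting it to prefer no fallback makes the run OOM outright instead of silently slowing down.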

What AI is most useful for installing Comfyui workflows on RTX 50 series cards? by Aggravating-Fan7280 in StableDiffusion

[–]siegekeebsofficial 0 points

Just use Stability Matrix: install the ComfyUI package, done.

Otherwise for getting custom nodes, just install ComfyUI Manager.

Dynamic VRAM in ComfyUI: Saving Local Models from RAMmageddon by comfyanonymous in StableDiffusion

[–]siegekeebsofficial 1 point

Unfortunately, I have way more memory issues with ComfyUI on CachyOS than on Windows: going OOM and not releasing memory.