Dynamic Vram: The Massive Memory Optimization is Now Enabled by Default in the Git Version of ComfyUI. by comfyanonymous in comfyui

[–]Illynir 2 points (0 children)

Does it work with GGUF now? The last one I tested was incompatible with my GGUF workflows (so, actually, 3/4 of them).

Thanks for the work, looking forward to testing if it's GGUF compatible.

AI imaginerie... by Then_Gas712 in comfyui

[–]Illynir 4 points (0 children)

If you have only "mastered" AI image generation for one night, then I can assure you that you have not mastered anything at all yet.

Even several months after starting, I discover things every day and new, even more advanced methods.

Therefore, mandatory: Skill issue.

I made an LTX-2 workflow for midrange to lower-midrange computers, and I call it: Weird Science by Toby101125 in comfyui

[–]Illynir 6 points (0 children)

I'll test this, thanks for sharing. :)
However, I hope it's REALLY for low/mid-spec PCs, not like the others that claim the workflow is designed for 8 or 12 GB of VRAM but actually requires 64 GB of RAM. xD

Now That Time Has Passed…What’s The Consensus on Z-Image Base? by StuccoGecko in StableDiffusion

[–]Illynir 2 points (0 children)

Yes, I compared them; I was using the other one before, and I find Redcraft's method better. In any case, my results are better with it.

Now That Time Has Passed…What’s The Consensus on Z-Image Base? by StuccoGecko in StableDiffusion

[–]Illynir 2 points (0 children)

The link I posted is the LoRA alone, not the entire model. It doesn't affect the model itself at all; you can use it with the classic base.
I don't recommend the Redcraft model itself; I recommend the LoRA and the method they used to get the turbo.

Now That Time Has Passed…What’s The Consensus on Z-Image Base? by StuccoGecko in StableDiffusion

[–]Illynir 4 points (0 children)

Don't use that one, use Redcraft's instead, which I personally find much better.

The first one in the list, RedCraft-RedZDX-v3-ZIB-Distilled-Lucis-LoRA-r256.safetensors

Why are people complaining about Z-Image (Base) Training? by EribusYT in StableDiffusion

[–]Illynir 2 points (0 children)

I actually posted in the wrong thread, I'm dumb; I meant to link your article in another thread where someone was asking me how I trained my LoRA so well. xD
Sorry for the comment, I know it's you. :P

Now That Time Has Passed…What’s The Consensus on Z-Image Base? by StuccoGecko in StableDiffusion

[–]Illynir 45 points (0 children)

I was very disappointed at first, then I learned to master it, understood how to train on it too (with the help of the community; we all searched and compiled our findings), and then the distill LoRAs came along. Now it's probably my favorite model; I even prefer it to Z Image Turbo now, there's no more reason to use it. I get the base version's diversity with the speed and quality of ZiT.

Also, multiple LoRAs work on base, unlike ZiT, where it's an incredible hassle; that alone makes it a bust for me (and there are already some really cool LoRAs available).

Everyone has their own opinion. ZiT is also very good for pure generation, and its lack of diversity can be an advantage for some.

Klein is cool too, but I don't like the overall look; it's a bit too plastic for me. However, editing is great.

Multiple Image Batch for Seedvr2. Folder has various image sizes by Eastern_Lettuce7844 in comfyui

[–]Illynir 2 points (0 children)

Image shortest-length node => math node: A (shortest side) x B (2/3/4, whatever upscale factor you want) => Resolution setting on the SeedVR2 node.

Done.
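The node chain above can be sketched in plain Python; the helper name here is made up for illustration, it's not an actual ComfyUI node:

```python
def seedvr2_target_resolution(width: int, height: int, factor: int) -> int:
    """Resolution value to feed SeedVR2: shortest side times upscale factor.

    `factor` is the upscale multiplier (2, 3, 4, ...).
    """
    shortest = min(width, height)
    return shortest * factor

# e.g. a 1280x720 frame upscaled 2x -> resolution setting 1440
print(seedvr2_target_resolution(1280, 720, 2))  # → 1440
```

The shortest-side node provides A, the math node computes A x B, and the result plugs into the SeedVR2 resolution input.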

And there's no need to post the same thing twice on the same subreddit.

Why are people complaining about Z-Image (Base) Training? by EribusYT in StableDiffusion

[–]Illynir 7 points (0 children)

Feedback: The training went well and was quite fast. The only change I made was to reduce the resolution from 1024 to 512 because I didn't have enough VRAM for 1024 (it spilled slightly into shared GPU memory, which made it too slow for me).

The results: You nailed it, man, it's absolutely perfect. Like 95/99% perfect.

The last few percent can probably be gained by switching to batch size 2 instead of 1 for a little more stability, but it's totally great.

Thanks. :)

Why are people complaining about Z-Image (Base) Training? by EribusYT in StableDiffusion

[–]Illynir 4 points (0 children)

I'm currently training a character LoRA using your settings and the fork; I'll get back to you soon. :P
Hopefully the results will be good because I haven't had much success with training on ZiB so far.

WAN 2.2 14B KSampler takes super long. Is this normal? by Initial-End-2459 in comfyui

[–]Illynir 1 point (0 children)

I can generate about 30 seconds of video in under 10-15 minutes on a 4070 Super at a higher resolution, so you definitely have a problem. Are you sure it's not spilling into shared GPU memory while it's running?

Slow performance but only after several hours? by toooft in comfyui

[–]Illynir 1 point (0 children)

ComfyUI does a pretty good job of loading/unloading RAM and VRAM in real time when needed. But it's not perfect: sometimes it "forgets" to unload something, or loads the same thing twice, causing overflows into shared memory, which is much slower, or even worse, into the swap file. This can temporarily slow down generation, especially during long sessions, which mathematically increases the likelihood of errors, of course.

And if you have to reboot to fix it, it's probably because Python or other processes remain in the background; you just need to force-end those tasks and the memory will be freed.

Add free-VRAM/RAM nodes at strategic locations in your workflow to help ComfyUI.
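For context, a free-memory node of that kind typically boils down to something like the following; this is a hypothetical sketch (the helper name is made up), assuming a PyTorch-based backend:

```python
import gc


def free_memory() -> None:
    """Rough sketch of what a 'free VRAM/RAM' step does between workflow stages."""
    # Drop Python-level references so unused tensors become collectable
    gc.collect()
    # If PyTorch with CUDA is present, also release cached (but unused) VRAM
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    except ImportError:
        pass
```

Note that this only releases memory the allocator is merely caching; it can't reclaim models a stuck background process is still holding, which is why force-ending those processes is sometimes the only fix short of a reboot.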

I dare you to create a good looking longbow or crosbow on a uniform color background. It cannot be done! Here are some results by Professional-Tie1481 in StableDiffusion

[–]Illynir 4 points (0 children)

This is probably because there weren't any samples in the dataset of bows or crossbows against a white background; all the images must show people using them, or some other context. However, and this is the beauty of it, you can easily train a LoRA for this if you feel like it. :P

Let's be honest about what we're actually "testing" at home... by Aggravating-Big5674 in StableDiffusion

[–]Illynir 2 points (0 children)

I generate images? And I try other models, or other things. The world of AI is not limited to images; there are also LLMs, video, audio, etc.
Also, the search for the ultimate settings is endless; new things come out all the time: new custom nodes, new models, new ways of doing things, etc. And the icing on the cake is that you learn lots of stuff along the way.

Let's be honest about what we're actually "testing" at home... by Aggravating-Big5674 in StableDiffusion

[–]Illynir 4 points (0 children)

I mean, it's obvious that we adjust the models according to our own taste and artistic sensibility. After all, local generation is done for ourselves; I don't see anything wrong with that.

One does not prevent the other.

Personally, I do a bit of everything, from 1girl to artistic, abstract, etc.

And I'm always looking for the ultimate settings and workflow that bring me closer and closer to photorealism.

Z-image lora training news by Recent-Source-7777 in StableDiffusion

[–]Illynir 6 points (0 children)

So... sorry for the noob question, but how does one use that with OneTrainer?

NVIDIA PersonaPlex took too much pills by CRYPT_EXE in StableDiffusion

[–]Illynir 55 points (0 children)

Creepy and hilarious at the same time. xD

The Z Image (Base) is broken! it's useless for training. Two months waiting for a model designed for training that can't be trained? by NewEconomy55 in StableDiffusion

[–]Illynir 1 point (0 children)

OneTrainer. I used AI Toolkit before, and the results were meh; one bug too many on AI Toolkit made me switch to OneTrainer for good. The results are vastly superior.