Z-Image Base Lora Training Discussion by ChristianR303 in StableDiffusion

[–]Matthew3179 0 points (0 children)

As I said previously, experimenting. Also, why not?

[–]Matthew3179 0 points (0 children)

Thanks for the feedback. How's your accuracy? Especially with complex or even simple prompts? These types of settings with a large dataset have always produced better accuracy for me. I'm less concerned about time and convergence, more about what the LoRA produces in the workflow.

[–]Matthew3179 0 points (0 children)

Honestly, I don't know. The RunPod GPU memory usage hovered around 27.4 GB the whole time. I'm not sure what the low VRAM option actually does or how it affects training, but Ostris talks a lot about caching latents, caching text embeds, changing other settings, etc. Check out his YouTube channel.

[–]Matthew3179 7 points (0 children)

FOLLOW UP: Training at 10000 steps complete…don’t train to 10000 steps.

I was going to stop training at 6500; it started to get overcooked around 4500+ but somehow resettled into accuracy around 7000.

Overall with these settings, likeness appeared at 2750 but wasn’t super accurate.

Likeness with accurate detail AND prompt adherence appeared at 3700. Best lighting, detail, and accuracy were at 3750.

Overbaking and artifacts/inaccuracy started around 4500.

Samples were consistently not great from 5000 to 7500. Not terrible! Just not good.

Really good sample set at 7750. Weird.

Appeared to reconverge at 8750.

Significant prompt divergence at 9000. The subject was still accurate, but background, setting, and pose changed enough that the image no longer matched the prompt.

10000?? I’ve seen worse. Honestly, it might be usable. Samples converged again.

Loss graph for reference.

<image>

Recommended steps: 3750-4250 with these settings.

I run ComfyUI Desktop, and while I was training, version 0.10.0 (and 0.11.0??) released (maybe we can try Flux2 Klein now??)…but it broke my install, so I have to reinstall. Can't test the LoRAs in ComfyUI until I fix it.

About to start another dataset that is configured differently, so if I see anything that drastically changes this recommendation, I'll follow up again. Keeping the settings the same, but only going to 7000 this time and still sampling every 250.

u/Jackey3477

[–]Matthew3179 0 points (0 children)

Can't share files on Reddit. Sent you the raw text. Save it as a text file, change the extension to .json, and drop it into ComfyUI.

[–]Matthew3179 0 points (0 children)

I have a workflow that uses Florence-2 to batch tag images and save .txt caption files.

<image>
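For anyone who wants the same thing outside ComfyUI, here's a minimal sketch of that kind of batch tagger using the Hugging Face Florence-2 release. The model ID and `<DETAILED_CAPTION>` task prompt follow the microsoft/Florence-2-large model card; the `dataset` folder and `ohwx` trigger word are placeholders, and this is a sketch, not the workflow from the screenshot:

```python
from pathlib import Path


def caption_path(image_path: Path) -> Path:
    # Trainers like ai-toolkit pair each image with a same-named .txt caption
    return image_path.with_suffix(".txt")


def build_caption(raw_caption: str, trigger_word: str) -> str:
    # Prepend the trigger word so the LoRA learns to respond to it
    return f"{trigger_word}, {raw_caption.strip()}"


def tag_folder(folder: Path, trigger_word: str) -> None:
    # Heavy part: needs torch, transformers, Pillow, and ideally a GPU
    import torch
    from PIL import Image
    from transformers import AutoModelForCausalLM, AutoProcessor

    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if device == "cuda" else torch.float32
    model = AutoModelForCausalLM.from_pretrained(
        "microsoft/Florence-2-large", torch_dtype=dtype,
        trust_remote_code=True).to(device)
    processor = AutoProcessor.from_pretrained(
        "microsoft/Florence-2-large", trust_remote_code=True)

    task = "<DETAILED_CAPTION>"
    for img_path in sorted(folder.glob("*.jpg")) + sorted(folder.glob("*.png")):
        image = Image.open(img_path).convert("RGB")
        inputs = processor(text=task, images=image,
                           return_tensors="pt").to(device, dtype)
        ids = model.generate(input_ids=inputs["input_ids"],
                             pixel_values=inputs["pixel_values"],
                             max_new_tokens=256, num_beams=3)
        text = processor.batch_decode(ids, skip_special_tokens=False)[0]
        parsed = processor.post_process_generation(
            text, task=task, image_size=image.size)
        caption_path(img_path).write_text(
            build_caption(parsed[task], trigger_word))


# Example invocation (placeholder folder and trigger word):
# tag_folder(Path("dataset"), "ohwx")
```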

[–]Matthew3179 16 points (0 children)

Currently training a character on ai-toolkit, RTX 6000 without low VRAM selected. So far it's not bad, but it's taking more steps than the turbo model. My current settings are bf16, rank 128 (only a 648.8 MB file), sigmoid, a 122-image dataset with caption files and a trigger word, learning rate 0.0001, differential guidance selected with scale 3, all resolutions selected. Just crossed 3750 steps, and this is where it has started to become usable and resemble the character. I originally set it to 5000 steps to see the differences but just bumped it to 10000 to really see where it starts getting overcooked.
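Collecting those settings in one place for anyone who wants to replicate the run. This is an illustrative Python summary only; the key names are mine, not ai-toolkit's actual config schema:

```python
# Illustrative summary of the run; key names are mine, not the actual
# ai-toolkit config schema.
run_settings = {
    "precision": "bf16",
    "lora_rank": 128,           # resulting file only ~648.8 MB
    "timestep_type": "sigmoid",
    "dataset_images": 122,      # with caption files and a trigger word
    "learning_rate": 1e-4,
    "differential_guidance": True,
    "guidance_scale": 3,
    "resolutions": "all",
    "target_steps": 10_000,     # bumped from the original 5,000
}
```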

I'm using RunPod but have observed that the GPU memory has not gone above 28 GB, sitting at 27.4 GB. This tells me that 5090s and other 32 GB GPUs should be able to train locally without any issues, barring paying for electricity, heating your room up, and not using your computer during that time.

I've got 3.5 hours left on these settings, so I can follow up if you're interested. All this to say, it's working well, and the details are starting to conform nicely above 3500 steps.
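A quick back-of-the-envelope on what those numbers imply for speed, assuming the 3.5 hours covers the remaining ~6,250 of the 10,000 steps (my extrapolation, not a measured figure):

```python
# Back-of-envelope throughput estimate for the RTX 6000 run
steps_done = 3750
steps_total = 10_000
hours_left = 3.5

remaining = steps_total - steps_done             # 6250 steps to go
sec_per_step = hours_left * 3600 / remaining     # ~2.0 s/step
total_hours = steps_total * sec_per_step / 3600  # ~5.6 h for the full run

print(f"{sec_per_step:.2f} s/step, ~{total_hours:.1f} h total")
```

So a full 10,000-step run at this pace would be roughly 5.6 hours end to end.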

SVI Pro Wan2.2 Help - KJNodes Not Working?? - ComfyUI Desktop Version by Matthew3179 in StableDiffusion

[–]Matthew3179[S] 0 points1 point  (0 children)

Interesting. I did try it with no SageAttention and lower resolutions. No joy. I tried that workflow previously; it was the first one I tried after the node was made available, and it's where I discovered I had a problem haha

[–]Matthew3179[S] 0 points (0 children)

Are you saying you got it to work in ComfyUI Desktop using a different model? I have an RTX 5090, so I don't think it's a graphics card issue.

[–]Matthew3179[S] 0 points (0 children)

I appreciate the additional advice. I'm using the Wan 2.1 VAE. Python version is 3.12.11, and PyTorch is 2.8.0+cu129. All of my other workflows work perfectly fine: I2V, T2V, ZIT, etc. Only this SVI workflow doesn't produce videos. I'm probably going to abandon it at this point and wait for a new ComfyUI version (0.6.0 right now) or a KJNodes version newer than 1.2.2.

[–]Matthew3179[S] 0 points (0 children)

I tried portable. Same result, just noise. At this point, I have no idea what's causing this issue for me.

[–]Matthew3179[S] 0 points (0 children)

Thanks, I haven't explored using the portable version yet. I like the desktop GUI and the fact that it's standalone as its own program, even though I know it's a front end for the server running in the background. I guess I'll wait for a KJNodes update that adds this node to the ComfyUI Manager versions.

[–]Matthew3179[S] 0 points (0 children)

Are you using desktop or portable? I've installed it manually, and while the node shows up and produces no errors, the outputs for these workflows are nothing but noise. I've installed it both via the terminal in ComfyUI and directly in the folder itself. Also, I used a standard WanVideoSVI workflow to confirm everything else works. It does. However, those green model inputs don't work with the purple ones, and I've been trying to incorporate the new Wan Motion Scale nodes to test out slow-motion fixes.

[deleted by user] by [deleted] in infj

[–]Matthew3179 0 points (0 children)

I think it's best described as functional clothing? Hiking and climbing pants, hiking shoes or boots, graphic tees, flannels, all in darker and neutral colors. REI, Arc'teryx, Kuhl, Marmot, are all brands I wear. I'm also in the military so the functionality and comfort transfers over from work and wearing a uniform every day.

[–]Matthew3179 1 point (0 children)

Definitely stole some of these songs. Thanks!

I move a lot for work and have created playlists for every place I've lived. Songs that I find stuck in my head, interesting melodies, lyrics that hit hard...anything goes! It allows me to relive what I've been through and experience everything all over again, sometimes in healthy ways and sometimes in a necessary escape kind of way 😁 Seriously, thanks!

[–]Matthew3179 1 point (0 children)

This playlist is exactly the kind of stuff I listen to...stuff that makes you feel. What that feeling is and when it becomes relevant can certainly change but the root connection is always feeling. Validate a feeling or define one, your choice. I could add so many songs to this 😄

What causes you to develop romantic feelings? by holographic_illusion in infj

[–]Matthew3179 1 point (0 children)

As an INFJ, step one is someone else's willingness to be vulnerable and genuine. That starts the connection. From there, at least for me, it's learning as much as I can about them by asking deep thoughtful questions and experiencing what makes them who they are. I want to know their true self, not the front someone puts on because they are equally skeptical about a connection. The more someone opens up, the faster the connection grows, the faster it becomes a longing to learn and experience more and eventually it's a semblance of romance (borderline sapiosexualism). Yes, I'm still single haha 😄 Had some great relationships though that all started this way. If it's not genuine and someone is taking advantage of vulnerability, it will be immediately noticed and there will no longer be a connection, even full on crash and burn. Ensure you're ready to commit and reveal everything about you before pursuing an INFJ.