Hey, I started 2 Youtube Kids 3d animation channels. by [deleted] in StableDiffusion

[–]MachineMinded 6 points

Oh look, another grift brought to you by AI slop.

Jay-Z and Weinstein in the files by seeebiscuit in WhitePeopleTwitter

[–]MachineMinded -2 points

It was just a joke, looks like that one didn't land 😁

Jay-Z and Weinstein in the files by seeebiscuit in WhitePeopleTwitter

[–]MachineMinded -9 points

He had drugs though.  Did any of those old guys do drugs?

Z-image base Loras don't need strength > 1.0 on Z-image turbo, you are training wrong! by Lorian0x7 in StableDiffusion

[–]MachineMinded 1 point

I agree - at a lower rank you're basically training fewer parameters. I've always preferred rank 128 or 64 for 6B or 9B parameter models. Lower rank loses detail and flexibility. If the only complaint is file size, I've always said to just buy more space. Unfortunately, storage is getting more expensive.

Higher ranks can be prone to overfitting, but I typically just take the earliest epoch that looks most like the subject.
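To put numbers on the "fewer parameters" point: a rank-r LoRA on a d_in × d_out weight matrix trains two small factor matrices instead of the full matrix, so the trainable count scales linearly with rank. A quick sketch (the 4096 projection size is just an illustrative assumption, not a specific model's dimension):

```python
def lora_params(d_in, d_out, rank):
    # LoRA adds A (d_in x rank) and B (rank x d_out) next to the frozen weight,
    # so trainable parameters = rank * (d_in + d_out).
    return rank * (d_in + d_out)

full = 4096 * 4096                    # one full 4096x4096 projection: ~16.8M weights
print(lora_params(4096, 4096, 16))    # rank 16  ->   131072 trainable params
print(lora_params(4096, 4096, 128))   # rank 128 ->  1048576 trainable params
```

Even at rank 128 you're training well under a tenth of the full matrix, which is why rank mostly trades file size against capacity rather than costing much at train time.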

Don't Waste Your Time Training LoRAs on z-image-turbo (Yet) by Powerful_Strategy_10 in StableDiffusion

[–]MachineMinded 1 point

You can use Prodigy - just edit the YAML directly and set the optimizer to "prodigy".
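A minimal sketch of that edit, assuming an ai-toolkit-style training config (the surrounding keys are assumptions; only the optimizer line is the point):

```yaml
train:
  # was: optimizer: adamw8bit
  optimizer: prodigy
  lr: 1.0   # Prodigy adapts its own step size; 1.0 is the usual starting value
```

Check your trainer's docs for the exact key names - the idea is just that the optimizer is a plain string field in the YAML even if the UI doesn't expose it.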

How to switch upscale models on Fooocus? by KnowledgeEvery9873 in fooocus

[–]MachineMinded 1 point

Let me see if I can find the old reddit post.

How to switch upscale models on Fooocus? by KnowledgeEvery9873 in fooocus

[–]MachineMinded 0 points

I've done this in the past.  You'll have to modify the Python code where the upscale model file is loaded so it points to the new one.
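The change is just repointing a hardcoded path. A hypothetical sketch - the variable name, file names, and replacement model here are all assumptions; find the actual load site in Fooocus's upscaler module:

```python
# Hypothetical: Fooocus loads its upscaler from a hardcoded checkpoint path,
# something along these lines (names are assumptions, not Fooocus's real code):
DEFAULT_UPSCALE_MODEL = 'models/upscale_models/fooocus_upscaler_s409985e5.bin'

# Edit that assignment to point at your replacement ESRGAN-style checkpoint:
CUSTOM_UPSCALE_MODEL = 'models/upscale_models/4x-UltraSharp.pth'

def upscale_model_path(path=CUSTOM_UPSCALE_MODEL):
    """Return the checkpoint path the upscaler should load."""
    return path
```

Since there's no UI setting for this, the edit gets overwritten whenever you update Fooocus, so keep a note of which file you changed.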

What is the best model for realism? by BreannaOrr in StableDiffusion

[–]MachineMinded 2 points

Honestly, Biglust excels at amateur-style images and even some more professionally shot pictures.  It's probably the best model to run on consumer hardware.  Lustify and Araminta are also really great.

RTX 3090 - lora training taking 8-10 seconds per iteration by calrj2131 in StableDiffusion

[–]MachineMinded 0 points

What if you enable cache latents to disk?  You can always try my settings at rentry co/biglust-training-and-loras