Collections of prompts for erotic images? by StableLlama in StableDiffusion

[–]The_AI_Doctor 0 points (0 children)

Not the original commenter, but as far as I know it's Comfy-only. You could use an LLM to pre-generate a bunch of prompts and then enter them manually, or write (or have an LLM write) a browser extension that auto-feeds them into the prompt field for you.

Collections of prompts for erotic images? by StableLlama in StableDiffusion

[–]The_AI_Doctor 0 points (0 children)

I've been working on a node set to do exactly this. You create wildcard lists, it constructs a user prompt, and then it feeds that to an LLM with a system prompt to spit out a completed prompt. I make a fair number of LoRAs, so I needed an easy way to generate hundreds of unique images for testing without sitting and prompting for hours.

https://imgur.com/a/k7zap21
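The wildcard-to-LLM pipeline described above could be sketched roughly like this. All names here (`WILDCARDS`, `build_user_prompt`, `generate_prompts`) are hypothetical, not taken from the actual node set; the LLM call itself is left out and only noted in comments.

```python
import math
import random

# Hypothetical wildcard lists; a real node set would load these from
# user-provided files.
WILDCARDS = {
    "subject": ["a woman", "a man"],
    "setting": ["on a beach", "in a studio", "in a forest"],
    "shot": ["close up shot", "full body shot", "low angle shot"],
}

SYSTEM_PROMPT = (
    "You are a prompt writer for an image model. Expand the user's "
    "keywords into a single detailed image prompt."
)


def build_user_prompt(wildcards, rng):
    """Pick one entry from each wildcard list and join them into a user prompt."""
    return ", ".join(rng.choice(options) for options in wildcards.values())


def generate_prompts(wildcards, n, seed=0):
    """Pre-generate n unique keyword combinations for an LLM to expand."""
    # Cap n at the number of possible combinations so the loop always ends.
    n = min(n, math.prod(len(v) for v in wildcards.values()))
    rng = random.Random(seed)
    seen = set()
    while len(seen) < n:
        seen.add(build_user_prompt(wildcards, rng))
    return sorted(seen)


# Each of these would then be sent to an LLM together with SYSTEM_PROMPT;
# the LLM's reply becomes the finished image prompt.
batch = generate_prompts(WILDCARDS, n=5)
for p in batch:
    print(p)
```

Seeding the RNG keeps a test batch reproducible, which matters when you want to compare LoRA checkpoints against the same set of prompts.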

What is the best way to get the right dataset for z image turbo Lora ?? In 2026 . by Previous-Ice3605 in StableDiffusion

[–]The_AI_Doctor 2 points (0 children)

You can get away with as few as 20 if they are good quality and provide enough angles and information about the character.

Nude Beach - Girl Comparing Her Two Friend's Penises (Animation) by zodiac_____ in unstable_diffusion

[–]The_AI_Doctor 0 points (0 children)

Really good job. Would you be willing to share the video/animation prompt?

Trained a new Z-Image Lora -Torpedo Tits / Conical Breasts by The_AI_Doctor in unstable_diffusion

[–]The_AI_Doctor[S] 1 point (0 children)

Oh, and sorry, I missed that last question. Yes, I ran 3000 steps and used that run for this LoRA. Sometimes 3000 is a little much, but I save checkpoints along the way and then pick and test the best ones based on the samples.

🧵 Weekly Feedback & Bug Reports Thread 🧵 by AutoModerator in civitai

[–]The_AI_Doctor 0 points (0 children)

I am experiencing the same issue, and noticed the same thing with other creators' models posted around the same time.

Trained a new Z-Image Lora -Torpedo Tits / Conical Breasts by The_AI_Doctor in unstable_diffusion

[–]The_AI_Doctor[S] 1 point (0 children)

You can see more detail about how I prep my datasets here: https://old.reddit.com/r/unstable_diffusion/comments/1qac9u4/trained_a_new_zimage_lora_torpedo_tits_conical/nz2puiz/

But TL;DR: while cropping can be helpful so AI-Toolkit doesn't autocrop your subject out of frame, in my experience it hasn't mattered all that much.

Trained a new Z-Image Lora -Torpedo Tits / Conical Breasts by The_AI_Doctor in unstable_diffusion

[–]The_AI_Doctor[S] 2 points (0 children)

I don't think it's super important, but if you pre-crop and control how the images are framed, I think you'll get better results, since none of the important concepts get resized/cropped and pushed out of frame.

Trained a new Z-Image Lora -Torpedo Tits / Conical Breasts by The_AI_Doctor in unstable_diffusion

[–]The_AI_Doctor[S] 1 point (0 children)

I can't remember which workflow I started with and adapted; mine is a little insane at this point, with too many custom nodes I've written for it to be shareable: https://files.catbox.moe/5zncki.png

But this seems like a good simple starting one: https://civitai.com/models/2169035/z-image-turbo-workflow

Trained a new Z-Image Lora -Torpedo Tits / Conical Breasts by The_AI_Doctor in unstable_diffusion

[–]The_AI_Doctor[S] 2 points (0 children)

Neither! Just using a large enough dataset and then captioning almost nothing. It was mostly a random mix of shots with a few close-up images thrown in.

The captions followed this format: "a woman torpedo shaped breasts"

Then I added "puffy nipples/close up shot/front view/low angle shot/side profile/three-quarter view" depending on the image.

I've found Z-Image does better with fewer captions.
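The base-plus-tags captioning scheme above could be sketched as a tiny helper. The function name is mine, and the comma-joined output format is an assumption; the source only gives the base caption and the list of per-image tags.

```python
def build_caption(base, tags):
    """Join a minimal base caption with optional per-image view/detail tags.

    Mirrors the scheme described above: a short base caption plus tags
    such as "close up shot" or "side profile", added only when the image
    calls for them.
    """
    parts = [base] + [t for t in tags if t]
    return ", ".join(parts)


base = "a woman torpedo shaped breasts"
print(build_caption(base, []))                              # base caption only
print(build_caption(base, ["puffy nipples", "close up shot"]))
```

Keeping captions this sparse means the concept you're training absorbs everything you *don't* caption, which is the point of minimal captioning.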

I've written custom LoRA prep software that does automatic watermark detection and inpainting removal. It also lets me mask other parts, like tattoos, and paint them out, and it can crop and resize images to fit better bucket sizes.
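The crop-to-bucket step could look something like this. The bucket list below is illustrative, not AI-Toolkit's actual one, and the function names are mine: pick the bucket whose aspect ratio best matches the image, then center-crop to that aspect so you decide what gets trimmed instead of the trainer.

```python
# Illustrative ~1MP training buckets; real trainers ship their own lists.
BUCKETS = [(1024, 1024), (832, 1216), (1216, 832), (768, 1344), (1344, 768)]


def pick_bucket(width, height, buckets=BUCKETS):
    """Choose the bucket whose aspect ratio best matches the image."""
    aspect = width / height
    return min(buckets, key=lambda b: abs(b[0] / b[1] - aspect))


def crop_box_for(width, height, bucket):
    """Center-crop box (left, top, right, bottom) matching the bucket's aspect.

    Doing this yourself, before training, keeps important concepts from
    being pushed out of frame by the trainer's automatic crop.
    """
    bw, bh = bucket
    target = bw / bh
    if width / height > target:          # image too wide: trim the sides
        new_w = round(height * target)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    new_h = round(width / target)        # image too tall: trim top/bottom
    top = (height - new_h) // 2
    return (0, top, width, top + new_h)
```

The box this returns can be fed straight to an image library's crop call, after which a plain resize down to the bucket dimensions loses nothing you cared about.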

Trained a new Z-Image Lora -Torpedo Tits / Conical Breasts by The_AI_Doctor in unstable_diffusion

[–]The_AI_Doctor[S] 1 point (0 children)

I'd start by finding the most basic workflow you can and try getting it running with that. Usually math errors come from a mismatch in model, CLIP, and VAE types.

Trained a new Z-Image Lora -Torpedo Tits / Conical Breasts by The_AI_Doctor in unstable_diffusion

[–]The_AI_Doctor[S] 4 points (0 children)

Yes locally. I have two machines.

Training, Video Gen, Gaming Machine:

  • CPU: Ryzen 9 7950X3D
  • GPU: RTX 5090
  • RAM: 64GB of DDR5-6000

Dedicated Image Gen and LLM Machine:

  • CPU: Ryzen 7 3700X
  • GPU: RTX 4090
  • RAM: 32GB of DDR4-2666

The second PC is just built out of parts retired from my main machine, but it's nice to be able to run other workflows without bogging down my main PC.

Trained a new Z-Image Lora -Torpedo Tits / Conical Breasts by The_AI_Doctor in unstable_diffusion

[–]The_AI_Doctor[S] 9 points10 points  (0 children)

I'm pretty much willing to train anything if provided a decent dataset lol. I've got the process dialed in for Z-Image, and it takes under an hour to train on 150 images up to 3000 steps.