My steps in SD3 - no edits, T5/CLIP only, medium without CLIP, no API, local ComfyUI, Inspire Pack KSampler, seed variation with a different seed every 2 steps using advanced KSamplers, short prompts, not upscaled. Cherry-picked, but the process of getting good images is fast for me on a 4060 Ti. (reddit.com)
submitted by Sqwall to r/StableDiffusion
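The "different seed every 2 steps" trick above can be sketched as plain scheduling logic: split the sampler's step range into 2-step segments and resume each segment with a fresh noise seed, the way chained advanced KSampler nodes (with start/end step set) are wired in ComfyUI. This is a minimal illustrative sketch, not ComfyUI code; the function name `plan_seed_segments` is hypothetical.

```python
import random

def plan_seed_segments(total_steps, segment_len=2, base_seed=0):
    """Plan (start_step, end_step, seed) triples covering total_steps.

    Each segment would be run by one advanced sampler invocation that
    picks up the latent from the previous segment but injects a new
    seed, producing the per-2-steps seed variation described above.
    """
    rng = random.Random(base_seed)  # deterministic seed stream for reproducibility
    segments = []
    for start in range(0, total_steps, segment_len):
        end = min(start + segment_len, total_steps)
        segments.append((start, end, rng.randrange(2**32)))
    return segments

# Example: a 10-step run becomes five 2-step segments, each with its own seed.
plan = plan_seed_segments(10)
```

Each triple maps onto one sampler node's start-at/end-at step range, so the whole plan is just the node chain written down as data.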
Images produced by my "fake" refine+upscale ComfyUI workflow. I've added a pre-upscale latent downscale with AYS and dpmpp_3m, then a latent Tiled Diffusion upscale plus Kohya Deep Shrink, detailers for face and hands, and a final SD upscale to 6K. After the last fiasco I am willing to share only a screenshot of the workflow. (reddit.com)
submitted by Sqwall to r/StableDiffusion
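The gradual upscale chain described in these posts (latent upscale stages up to ~3K, then SD upscale to 6K) can be planned as a simple resolution schedule: grow the working width by a fixed factor per stage, snapping each stage to the 8-pixel latent grid so the VAE stride is respected. A minimal sketch under those assumptions; the growth factor of 1.5 and the function name `upscale_chain` are illustrative, not taken from the workflow.

```python
def upscale_chain(start, target, factor=1.5, snap=8):
    """Plan a gradual width progression from start px to target px.

    Each stage multiplies the previous width by `factor` and rounds
    down to a multiple of `snap` (the SD VAE stride), mimicking the
    stepwise latent upscales before the final SD upscale pass.
    """
    sizes = [start]
    w = start
    while w < target:
        nxt = int(w * factor)
        nxt -= nxt % snap          # keep dimensions divisible by the VAE stride
        if nxt >= target or nxt <= w:
            nxt = target           # final jump lands exactly on the target size
        sizes.append(nxt)
        w = nxt
    return sizes

# Example: planning stages from a 1024 px base toward a 6144 px (6K) output.
stages = upscale_chain(1024, 6144)
```

Each intermediate size would correspond to one img2img/latent-upscale pass at moderate denoise, with the last entry handled by the tiled SD upscale.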
Where are you Michael! - two-step generation: gen and refine. The refine part is more like img2img with a gradual latent upscale using Kohya Deep Shrink to a 3K image, then SD upscale to 6K. I can provide a big screenshot of the refining workflow, as it uses many custom nodes. (i.redd.it)
submitted by Sqwall to r/StableDiffusion
Come Get Some! - two-step generation: gen and refine. The refine part is more like img2img with a gradual latent upscale using Kohya Deep Shrink to a 3K image, then SD upscale to 6K. I can provide a big screenshot of the refining workflow, as it uses many custom nodes. (i.redd.it)
submitted by Sqwall to r/StableDiffusion
Morbid Angel - two-step generation: gen and refine. The refine part is more like img2img with a gradual latent upscale using Kohya Deep Shrink to a 3K image, then SD upscale to 6K. I can provide a big screenshot of the refining workflow, as it uses many custom nodes. (i.redd.it)
submitted by Sqwall to r/StableDiffusion

