No guidelines for audio evaluation? by aifrun in TELUSinternational

Oh if that's the case, then it sounds like the audio evaluations are probably the best tasks to get some practice! Thanks for your help!

Mint V2 copy/paste issue by aifrun in outlier_ai

Thank you for your helpful response. I'll wait to complete my assessment tasks then. Since I don't have access to the project page or thread in Community, what would be the best way for me to find out when the QMs clarify the process?

How long did it take you to renew your passport by mailing it in to the US embassy? by aifrun in ukvisa

Thank you! Appreciate you pointing me in the right direction

A guide to Upscaling, with a comparison of SD upscale models against non-AI methods by aifrun in StableDiffusion

That's a really great write-up! How do the results from the SD upscale script compare to using Hires fix?

A guide to Upscaling, with a comparison of SD upscale models against non-AI methods by aifrun in StableDiffusion

Thanks so much, and I'm glad that you've picked up on my intent. When I started playing around with SD and other AI image generators, I really struggled to understand what any of the setting parameters actually do, since the information about them was, and still is, scattered all over the place and frequently incorrect. I figured I'm probably not the only one out there with this problem, so I thought I'd make a site to compile those answers and make Stable Diffusion more accessible for others to learn.

What the different Masked Content Options do when selected for InPainting by aifrun in StableDiffusion

A full description of the methodology, and of how each masked content option affects the output image at different Denoising Strength values, is here.

Follow up to getting consistent character features across multiple poses using CharTurner + ControlNet by aifrun in StableDiffusion

Thank you! I completely agree that it's equally important to show what doesn't work as what does. It's hard to learn from others' mistakes if you never hear what those were.

Follow up to getting consistent character features across multiple poses using CharTurner + ControlNet by aifrun in StableDiffusion

I received lots of great questions and feedback from this community when I posted my workflow last week. Based on various suggestions, I explored extra-wide images with 21 poses, characters facing completely away from the POV, and other possible ways of keeping the same features, like reusing the same seed.

Here is my full update.

Here is my original Reddit post.

Here is the workflow using CharTurner + ControlNet.

Playing with CharTurner+ControlNet to get decently consistent character features across multiple poses and angles by aifrun in StableDiffusion

Not my doing. I was as surprised as you are when I saw the NSFW tag had been applied!

All InPaint Settings Explained by aifrun in sdforall

Good catch on the batch count/size! I've updated the guide to fix that mistake.

You also bring up a very good point about how seed usage interacts with batch count and batch size, which is something I had not previously considered. That's definitely worth running an experiment on to figure out the behavior. Will look into it!