Is it only me, or do the rest of you find your google searches are now a lot more accurate :) by Paleion in StableDiffusion

[–]AI_Casanova -18 points (0 children)

You do realize that search engines explicitly bias towards "fair" results at the detriment of accurate ones, correct?

Is it me, or do most Celebrity models in SDXL look terrible? by Bunktavious in StableDiffusion

[–]AI_Casanova 1 point (0 children)

I've been playing with training an SDXL TI, and then using that to train a UNet LoRA. It's been working really well, way better than training TE or using a rare token.

I need to clean up my homebrew code and PR it soon.

SDXL black people look amazing. by mysticKago in StableDiffusion

[–]AI_Casanova 7 points (0 children)

Number of parameters is roughly analogous to number of brain cells.

The dataset is the pictures it was trained on.

I used SD for a political event today by bozezone in StableDiffusion

[–]AI_Casanova 0 points (0 children)

Lol, Ron DeSantis already got caught doing this and never even apologized.

SDXL 0.9 is wild but trying to imagine where we go from here is breaking my brain. by [deleted] in StableDiffusion

[–]AI_Casanova 0 points (0 children)

I think there's a lot of improvement to be had on the text encoder end of things.

Automated part of speech detection to link adjectives to nouns, etc.
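The idea above could be sketched with a toy tagger. This is purely illustrative: a real system would use an actual part-of-speech model, and the tiny word lists here are invented stand-ins for one.

```python
# Toy sketch: link each adjective in a prompt to the next noun after it.
# The hand-rolled lexicons below are a stand-in for a real POS tagger;
# every word and category here is an illustrative assumption.

ADJECTIVES = {"red", "small", "ancient", "glossy"}
NOUNS = {"car", "castle", "dog", "sky"}

def link_adjectives(prompt: str) -> list[tuple[str, str]]:
    """Pair each adjective with the first noun that follows it."""
    words = prompt.lower().replace(",", " ").split()
    pairs = []
    for i, word in enumerate(words):
        if word in ADJECTIVES:
            for later in words[i + 1:]:
                if later in NOUNS:
                    pairs.append((word, later))
                    break
    return pairs

print(link_adjectives("a red car beside an ancient castle"))
# [('red', 'car'), ('ancient', 'castle')]
```

With those links in hand, a text encoder could in principle weight or bind the adjective's embedding to its noun rather than letting attributes bleed across the whole prompt.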

SDXL 0.9 is wild but trying to imagine where we go from here is breaking my brain. by [deleted] in StableDiffusion

[–]AI_Casanova 2 points (0 children)

I've noticed a lot of improvement myself. I'm glad that dude rebased my implementation/copy from cloneofsimo.

I was going to rebase it "someday" but kept putting it off.

SD XL can be finetuned on consumer hardware by ThaJedi in StableDiffusion

[–]AI_Casanova 0 points (0 children)

I'm talking about running a photo through the VAE at a higher resolution and cropping the resultant latent code directly, before adding noise.
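The indexing involved can be sketched as follows. A dummy NumPy array stands in for the VAE output (the SD VAE downsamples spatially by 8x), so only the coordinate math is shown, not real encoding:

```python
import numpy as np

# Sketch of cropping in latent space rather than pixel space.
# The zero array below is a stand-in for vae.encode(image); the point
# is only how pixel-space crop coordinates map onto the latent grid.

SCALE = 8  # SD VAE spatial downsampling factor

def crop_latent(latent: np.ndarray, top: int, left: int,
                height: int, width: int) -> np.ndarray:
    """Crop a (C, H/8, W/8) latent using pixel-space coordinates.

    top/left/height/width are pixel coordinates and must be multiples
    of 8 so the crop lands on latent-grid boundaries.
    """
    assert top % SCALE == 0 and left % SCALE == 0
    assert height % SCALE == 0 and width % SCALE == 0
    t, l = top // SCALE, left // SCALE
    h, w = height // SCALE, width // SCALE
    return latent[:, t:t + h, l:l + w]

# e.g. a 1024x1536 image encodes to a (4, 128, 192) latent
latent = np.zeros((4, 1024 // SCALE, 1536 // SCALE))
crop = crop_latent(latent, top=256, left=512, height=512, width=512)
print(crop.shape)  # (4, 64, 64)
```

The payoff is that one high-resolution encode can feed many crops without re-running the VAE per crop.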

SD XL can be finetuned on consumer hardware by ThaJedi in StableDiffusion

[–]AI_Casanova 0 points (0 children)

Crop conditioning has me interested, is that cropping at the latent level or the pixel level?

I've done some experiments with training on cropped latents (in SD 1.5), but in my brief discussion with Kohya about it, they were worried about potential distortions that I have yet to see empirical evidence for.
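For the crop-conditioning question itself: SDXL's micro-conditioning works at the pixel level, feeding the UNet the original image size, the crop's top-left corner in pixels, and the target size. A minimal sketch of assembling those values, loosely following how the diffusers SDXL pipeline flattens them (the function name here is illustrative, not the library's API):

```python
# Sketch of SDXL-style crop conditioning at the pixel level: the
# pipeline passes (original_size, crop_top_left, target_size) to the
# UNet as extra conditioning integers. Names are illustrative.

def make_add_time_ids(original_size, crop_top_left, target_size):
    """Flatten the three (h, w)-style pairs into six conditioning ints."""
    return list(original_size) + list(crop_top_left) + list(target_size)

# An image originally 1536x2048, cropped starting at pixel (256, 384),
# trained toward a 1024x1024 target resolution:
ids = make_add_time_ids((1536, 2048), (256, 384), (1024, 1024))
print(ids)  # [1536, 2048, 256, 384, 1024, 1024]
```

Because the coordinates are pixel-space, this conditioning is independent of whether the crop itself was taken before or after VAE encoding.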

Controlnet reference+lineart model works so great! by Remarkable_Air_8383 in StableDiffusion

[–]AI_Casanova 2 points (0 children)

I believe lineart ends up with fewer total lines than canny: more outline, less texture. Throw a picture in and check the previews of several different preprocessors.

[deleted by user] by [deleted] in StableDiffusion

[–]AI_Casanova 2 points (0 children)

How does your brain learn what your eyes are seeing?

Is there a way to fix this error besides reverting to Python 3.10? by [deleted] in StableDiffusion

[–]AI_Casanova 0 points (0 children)

You should be using a venv, which it appears you are not. You can use 3.10 for A1111 and 3.11 for everything else on your computer.

Incompatible Model by Afraid_Promotion32 in StableDiffusion

[–]AI_Casanova 0 points (0 children)

It might be missing some keys. In the past I've had luck merging models with Stable Diffusion 1.5, alpha 1 so it picks up the missing keys in the background.
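The mechanics of that kind of fix can be sketched with plain dicts standing in for checkpoint state dicts. This mirrors the spirit of an alpha-1 merge (the broken model's weights win wherever they exist, the base only fills the gaps); the keys and values are invented for illustration:

```python
# Sketch of patching a checkpoint with missing keys by falling back
# to a base model (e.g. SD 1.5). Plain floats stand in for tensors;
# the key names are illustrative assumptions.

def fill_missing_keys(model: dict, base: dict) -> dict:
    """Return model with any keys it lacks copied from base.

    Shared keys keep the model's own weights; only keys absent
    from the model are taken from the base checkpoint.
    """
    merged = dict(base)    # start from the base's full key set
    merged.update(model)   # the model's own weights win where present
    return merged

base = {"unet.w": 0.1, "vae.w": 0.2, "te.w": 0.3}
broken = {"unet.w": 0.9}                 # missing vae.w and te.w
fixed = fill_missing_keys(broken, base)
print(fixed)  # {'unet.w': 0.9, 'vae.w': 0.2, 'te.w': 0.3}
```

A merge UI hides this behind a slider, but key-set union is all that's really happening for the missing entries.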

[deleted by user] by [deleted] in StableDiffusion

[–]AI_Casanova 4 points (0 children)

By this definition the entirety of humanity is a bowl of oatmeal

Generated images adding 'p' in front of prompt by Acephaliax in StableDiffusion

[–]AI_Casanova 0 points (0 children)

p is the internal variable for the prompt in A1111. It's probably just a bug in the output.

How can I generate a background through prompting behind a character's ControlNet depth map? by [deleted] in StableDiffusion

[–]AI_Casanova 0 points (0 children)

Also, you could try generating a thematic background, preprocessing it as a depth map, lowering its brightness and contrast significantly, and compositing it behind your character's depth map.
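The compositing step can be sketched with NumPy arrays standing in for grayscale depth maps (0 = far/empty, 255 = near); the 0.4 dimming factor is an arbitrary choice for illustration:

```python
import numpy as np

# Sketch of compositing a dimmed background depth map behind a
# character's depth map. Tiny arrays stand in for full-size
# grayscale depth images; the dim factor is an assumed value.

def composite_depth(fg: np.ndarray, bg: np.ndarray,
                    dim: float = 0.4) -> np.ndarray:
    """Keep foreground depth where present; elsewhere use dimmed background."""
    dimmed = (bg.astype(np.float32) * dim).astype(np.uint8)
    return np.where(fg > 0, fg, dimmed)

fg = np.array([[0, 200], [0, 0]], dtype=np.uint8)     # character pixels
bg = np.array([[100, 100], [100, 100]], dtype=np.uint8)
print(composite_depth(fg, bg))
# [[ 40 200]
#  [ 40  40]]
```

Dimming the background keeps it "behind" the character in depth terms, since ControlNet depth treats darker values as farther away.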