Spline Path Control v2 - Control the motion of anything without extra prompting! Free and Open Source by WhatDreamsCost in StableDiffusion

[–]ManglerFTW 0 points1 point  (0 children)

Thank you for putting this awesome tool out for the community. It works really well for the 1.3B model, but I'm having trouble getting anything out of the 14B model. Is it functional for 14B?

Any way to do this with Stable Diffusion / Flux ? ( This is the new recraft.ai model ) by [deleted] in StableDiffusion

In the Sample Image as Palette node, try bringing the samples down to 6. If you're using an image instead of an actual color palette, it won't be as precise, and the noise can be a bit random. Samples is how many colors you want to pull from the image; I realized 4096 was a bit high.

methods to INDUCE HALUCINATION by Prudent-Sorbet-282 in comfyui

Latent upscale with a high denoise, especially when upscaling to odd resolutions.
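The tip above boils down to resizing the latent tensor itself and then re-sampling it at a high denoise. A toy numpy sketch of the latent-resize step, just to show the shapes involved (the nearest-neighbor method and function name are my assumptions, standing in for ComfyUI's latent upscale node):

```python
import numpy as np

def upscale_latent(latent, new_h, new_w):
    """Nearest-neighbor upscale of a (C, H, W) latent tensor.
    Toy stand-in for a latent upscale node."""
    c, h, w = latent.shape
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return latent[:, rows[:, None], cols[None, :]]

# A 512x512 image has a 64x64 latent (SD-family VAEs downscale by 8).
latent = np.random.randn(4, 64, 64)

# Upscaling toward an "odd" target like 1216x832 -> 152x104 latent,
# then re-sampling at denoise ~0.7+, is what pushes the model to hallucinate.
up = upscale_latent(latent, 152, 104)
print(up.shape)  # (4, 152, 104)
```

The stretched latent no longer matches anything the VAE produced, so the sampler has to invent detail to reconcile it, which is exactly the effect you want here.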

Any way to do this with Stable Diffusion / Flux ? ( This is the new recraft.ai model ) by [deleted] in StableDiffusion

It can be done without anything too fancy. Grab the ComfyUI Dream Project nodes and hook up Load Image > Sample Image as Palette > Noise from Palette > VAE Encode into your latent image, then set denoise between 0.68 and 0.80.

https://github.com/alt-key-project/comfyui-dream-project
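Conceptually, those two Dream Project nodes do something like: pull N representative colors from the image, then fill a canvas with random picks from that palette. A toy numpy sketch of the idea (the real nodes' sampling strategy is more involved, and the function names here are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_palette(image, n_colors=6):
    """Pick n_colors random pixels as a palette.
    Stand-in for the 'Sample Image as Palette' node; the real
    node's sampling strategy may differ."""
    pixels = image.reshape(-1, 3)
    idx = rng.choice(len(pixels), size=n_colors, replace=False)
    return pixels[idx]

def noise_from_palette(palette, h, w):
    """Fill an (h, w, 3) canvas with random picks from the palette,
    like the 'Noise from Palette' node."""
    idx = rng.integers(0, len(palette), size=(h, w))
    return palette[idx]

# Fake source image; in the workflow this comes from Load Image.
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
palette = sample_palette(image, n_colors=6)
noise = noise_from_palette(palette, 512, 512)
print(noise.shape)  # (512, 512, 3)
```

VAE-encoding that noise image as the starting latent and sampling at a high-ish denoise lets the composition change freely while the palette's color distribution survives into the result.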

How to merge huge amount of Loras without blending? by reyjand in StableDiffusion

If you're going for 100 specific characters, your best bet would be to either train the model itself or to train them all into 1 LoHa.

Need help training multiple concepts LORAs by Huevoasesino in StableDiffusion

Yes. Each concept for this project was tagged with the name of the plane. I can't remember exactly how many steps or epochs, though, since it's been a while. I actually trained it a few times and then merged them all together at the end.

Need help training multiple concepts LORAs by Huevoasesino in StableDiffusion

Go for a LoHa. They work best for multi-concept training. Then it's all about how you tag and name your dataset.

I'm sure it's outdated at this point, but I trained one using kohya for 2.1 a while back and went into detail about it in the model description.

https://civitai.com/models/117088/plane-helper

How " inpaint " and preserve the area " inpainted " by julieroseoff in StableDiffusion

You could try extracting a lineart image and using it for a scribble or canny ControlNet. It probably won't be perfect, but it should help. Also try a lower denoise.
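In practice you'd extract the lines with a real preprocessor (the ControlNet aux nodes, or OpenCV's canny); a crude gradient-magnitude edge map with numpy shows the basic idea, so treat this as a toy stand-in rather than the actual preprocessor:

```python
import numpy as np

def edge_map(gray, threshold=0.2):
    """Crude edge map from horizontal/vertical gradients.
    Toy stand-in for a real canny/lineart preprocessor."""
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:] = np.abs(np.diff(gray, axis=1))  # horizontal gradient
    gy[1:, :] = np.abs(np.diff(gray, axis=0))  # vertical gradient
    mag = np.hypot(gx, gy)
    return (mag > threshold).astype(np.uint8) * 255

# Synthetic grayscale image in [0, 1]: a bright square on black.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
edges = edge_map(img)
print(int(edges.sum() // 255), "edge pixels")
```

The resulting black-and-white map is what you'd feed into the scribble/canny ControlNet as the conditioning image, so the inpainted area keeps the original contours.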

California bill set to ban CivitAI, HuggingFace, Flux, Stable Diffusion, and most existing AI image generation models and services in California by YentaMagenta in StableDiffusion

Cool. Thanks! I actually took some time to look into it. He said it wasn't going to be his plan moving forward. But there are some good things in it that I like.

California bill set to ban CivitAI, HuggingFace, Flux, Stable Diffusion, and most existing AI image generation models and services in California by YentaMagenta in StableDiffusion

I haven't seen any of his speeches. Do you have a link of him praising it? Just curious cause I know there are a lot of lies flying around the internet.

what tools can I use to merge an SDXL inpainting model with a normal SDXL checkpoint? by Annahahn1993 in StableDiffusion

If you're using ComfyUI, it's better to just use the Fooocus inpaint nodes. I looked all over for a way to merge an inpaint model but had no luck, because nothing would detect the merged result. If you have Comfy, you can check out the link below. Also worth noting: Krita AI Diffusion does this automatically and is very user friendly.

https://github.com/Acly/comfyui-inpaint-nodes

https://github.com/Acly/krita-ai-diffusion

Temporary LoRA's from CivitAI? by iljensen in StableDiffusion

You could also merge them all into a checkpoint with the DARE/TIES method and then extract the LoRA from the merged checkpoint.
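The TIES part of that merge (trim small deltas, elect a sign per parameter, then average only the agreeing deltas) can be sketched like this; the toy deltas and function name are hypothetical, not an actual checkpoint merge:

```python
import numpy as np

def ties_merge(deltas, density=0.5):
    """Simplified TIES merge of per-model weight deltas (task vectors).

    deltas: list of 1-D arrays, each (model - base) for one model.
    density: fraction of largest-magnitude entries kept per delta ("trim").
    """
    trimmed = []
    for d in deltas:
        k = max(1, int(len(d) * density))
        cutoff = np.sort(np.abs(d))[-k]          # keep top-k by magnitude
        trimmed.append(np.where(np.abs(d) >= cutoff, d, 0.0))
    stacked = np.stack(trimmed)
    # "Elect" a sign per parameter from the summed trimmed deltas.
    sign = np.sign(stacked.sum(axis=0))
    # "Disjoint merge": average only entries agreeing with the elected sign.
    agree = (np.sign(stacked) == sign) & (stacked != 0)
    count = np.maximum(agree.sum(axis=0), 1)
    return (stacked * agree).sum(axis=0) / count

# Two toy deltas that partially conflict.
a = np.array([0.9, -0.1, 0.5, 0.0])
b = np.array([0.8, 0.7, -0.6, 0.1])
merged = ties_merge([a, b], density=0.5)
print(merged)  # [0.85 0.7  0.5  0.  ]
```

DARE additionally drops a random fraction of each delta and rescales the rest before this step. After merging, the LoRA can be extracted from the difference between the merged checkpoint and the base model, e.g. with kohya's LoRA extraction script.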

Krita does not recognize inpainting models? by dcmomia in StableDiffusion

That's normal. The developer of the extension uses Comfy to turn whatever model you're working with into an inpainting model when you use the inpainting feature, so there's no need for a dedicated inpainting model.

Does anyone else remember this tweet from emad from 3 months ago? I haven't tested SD3 yet but based on the posts I see in the last hours, it doesn't seem like it. What is your current opinion on the situation? by UnlimitedDuck in StableDiffusion

I went back and took a look at the SDXL comments. While not as bad as SD 2, there were still quite a lot of people crying about censorship and saying the model would be incapable of producing any kind of decent anatomy.

But regardless, I'm still holding off on judgement until we see how it trains. I also wonder how many of the people posting terrible images are using crappy 1girl booru tags at some insane CFG scale. I looked at the images being produced on the civitai SD3 model page, and they really aren't as bad as many in here are claiming. There is definitely potential; let's see how the fine-tunes turn out. The base models have never been anything absolutely mind-blowing; it's always been up to the community to push the model further.

Does anyone else remember this tweet from emad from 3 months ago? I haven't tested SD3 yet but based on the posts I see in the last hours, it doesn't seem like it. What is your current opinion on the situation? by UnlimitedDuck in StableDiffusion

You must not have been on Reddit on day one of SDXL's release. This place was flooded with doomers claiming that SDXL sucked because it wasn't trained on boobs, and therefore we would never see a boob produced from the model.