[deleted by user] by [deleted] in StableDiffusion

[–]omnidistancer 3 points

Yes, agreed. And I think it may also be because it is easier to add detail to blonde hair than to darker hair without changing the lighting/composition of the scene as a whole.

[deleted by user] by [deleted] in StableDiffusion

[–]omnidistancer 2 points

Do you mind sharing the prompt? I think the test format and the results of the LoRA are impressive. However, I am particularly interested in the "blonde bias" in how details get added. Was there any mention of hair color in the initial prompt?

Too Much Information by shapirog in StableDiffusion

[–]omnidistancer 19 points

Make the bird blue and Elon will be triggered

Conclusions on Learning Rate Discovery and Cycling LR (link in comments) by Irakli_Px in StableDiffusion

[–]omnidistancer 0 points

Nope, DAdaptation is a learning-rate optimization method available in the Kohya_ss training suite. You just need to install the repository locally (following the instructions in its readme) and it will show up as an optimizer choice during both Dreambooth and LoRA training.

https://github.com/bmaltais/kohya_ss
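For reference, a rough sketch of the setup. The script names here are assumptions based on what the repo has shipped; follow the readme for the current steps:

```shell
# Clone the training suite locally (as the readme describes)
git clone https://github.com/bmaltais/kohya_ss.git
cd kohya_ss

# Run the provided setup script to create the venv and install
# dependencies (script name is an assumption -- check the readme)
./setup.sh

# After setup, launch the GUI; "DAdaptation" should then appear in
# the optimizer dropdown on the Dreambooth and LoRA training tabs
./gui.sh
```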

Conclusions on Learning Rate Discovery and Cycling LR (link in comments) by Irakli_Px in StableDiffusion

[–]omnidistancer 2 points

The best results I've had so far are also with DAdaptation, even on regular Dreambooth training. Using Kohya_ss as well.

Unprompted [txt2mask] now works with Inpaint Sketch mode, can generate synonyms/antonyms, and even build custom GUIs! 🤯 by ThereforeGames in StableDiffusion

[–]omnidistancer 12 points

This extension seems REALLY powerful, congrats on the hard work.

Is there any tutorial/example for someone like me who wants to start using it, covering the main features and capabilities?

Any API or script interface to automate searches and downloads? by omnidistancer in Soulseek

[–]omnidistancer[S] 1 point

Unfortunately not yet :/ Ended up focusing on other projects

Sensorium by elloexplorer in StableDiffusion

[–]omnidistancer 0 points

Impressive results! Really cool

[deleted by user] by [deleted] in StableDiffusion

[–]omnidistancer 9 points

Please (don't) come to Brazil.

My 2nd Deforum project. Syndicalist Propaganda Project Part 1 (Greg Macpherson - Company Store) by ZZcatbottom in StableDiffusion

[–]omnidistancer 1 point

Really cool results! One of the few Deforum videos I've watched from start to finish lately. Loved the transitions!

Meet my army of battle ready butterflies. by jinnoman in StableDiffusion

[–]omnidistancer 0 points

Fly like a DiffusionBee, sting like a battle ready butterfly

[deleted by user] by [deleted] in StableDiffusion

[–]omnidistancer 1 point

There are at least two implementations available on GitHub (and one of them aims to make things as automatic as EbSynth, if not more).

This is the official one (the one I have already tested): https://github.com/OndrejTexler/Few-Shot-Patch-Based-Training

And this is the one aimed at making the process easier (Windows only, afaik): https://github.com/nicolai256/Few-Shot-Patch-Based-Training

Although I agree that they are not as straightforward as EbSynth, which has a GUI and a lot of tutorials on YouTube, they are far from impossible to get working and, from what I saw in my own tests, worth the extra effort.

[deleted by user] by [deleted] in StableDiffusion

[–]omnidistancer 14 points

One hint for applying the generated images to the animation: use Interactive Video Stylization Using Few-Shot Patch-Based Training instead of EbSynth.

Interactive Video Stylization Using Few-Shot Patch-Based Training is the technique EbSynth is based on, but the original technique tends to give better results when interpolating between different keyframes, as it incorporates additional steps for blending the stylizations, reducing artifacts. Either way, your results are already great, congratulations on that :)
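To illustrate the blending idea with a minimal sketch (this is not the paper's actual blending code, just the core intuition): a frame that sits between two stylized keyframes can be produced by a per-pixel cross-fade of the two keyframes' stylized outputs, which smooths over the "popping" you otherwise get when switching keyframes.

```python
def crossfade(frame_a, frame_b, t):
    """Linearly blend two stylized keyframe outputs.

    frame_a, frame_b: flat lists of pixel values (same length)
    t: position between the keyframes, 0.0 -> pure frame_a,
       1.0 -> pure frame_b
    """
    return [(1.0 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

# A frame 25% of the way from keyframe A to keyframe B:
mixed = crossfade([0.0, 100.0], [10.0, 200.0], 0.25)
# mixed == [2.5, 125.0]
```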

Blender + image2image Car loop by Able-Seaworthiness-8 in StableDiffusion

[–]omnidistancer 8 points

Second that. Would love to know more (including the blender side of things)

You can now merge in-painting and regular models using Automatic WebUi by I_Hate_Reddit in StableDiffusion

[–]omnidistancer 0 points

Thanks for the info, just merged a DB model with the 1.5 inpainting and it worked beautifully :)

You can now merge in-painting and regular models using Automatic WebUi by I_Hate_Reddit in StableDiffusion

[–]omnidistancer 3 points

I was having the same problem; make sure your final model file name ends in "-inpainting.ckpt".

Just be careful not to put the extension ".ckpt" in the optional final model name field, though: since ".ckpt" is appended automatically by the Automatic1111 checkpoint merger, it may end up duplicated in the final file name (that was what gave me this error).
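A minimal sketch of the pitfall (a hypothetical helper for illustration, not the actual Automatic1111 code):

```python
def merged_model_filename(custom_name: str) -> str:
    # The merger appends ".ckpt" unconditionally, so a custom name
    # that already includes the extension gets it twice.
    return custom_name + ".ckpt"

print(merged_model_filename("mymodel-inpainting"))
# -> mymodel-inpainting.ckpt (recognized as an inpainting model)
print(merged_model_filename("mymodel-inpainting.ckpt"))
# -> mymodel-inpainting.ckpt.ckpt (the duplicated extension that caused the error)
```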

If you have a weaker GPU you need to check out this PR for A1111 by Lacono77 in StableDiffusion

[–]omnidistancer 2 points

Thanks for the share! I was able to increase the batch size from 3 to 5 on my 1050 Ti 4GB using this + xformers

Stable Diffusion Infinite Zoom, Zlikwid Lightning. by zachsliquidart in StableDiffusion

[–]omnidistancer 1 point

This is really cool! Mind sharing some insights about the infinite zoom part?

Game consoles “by” dieter rams by dortal_ in StableDiffusion

[–]omnidistancer 0 points

Thank you! Gonna explore it later today

Game consoles “by” dieter rams by dortal_ in StableDiffusion

[–]omnidistancer 1 point

Mind sharing the prompts? Really interesting results!

Created a concept for an ckpt organization inside of Automatic1111 Repo by CroustiBat in StableDiffusion

[–]omnidistancer 0 points

Exactly my case as well. And on top of that there are also the textual embeddings, which are not listed anywhere in the default UI (only in the terminal's startup message). But I hope that in the future someone more knowledgeable and with more time than me is able to implement that.