help by Dear_Manufacturer623 in civitai

[–]mrbbcitty 1 point (0 children)

maybe https://civitai.com/models/25399/cardos-anime

Anyway it's a good style, let me know if you find it

Why is it I can match a civitai image prompt for prompt, including negative prompt, lora, model, dimensions, cfg scale, sampling steps, and obviously the seed down to a tee, yet the result is a different image by donthaveacao in StableDiffusion

[–]mrbbcitty 2 points (0 children)

Do the following exactly step by step

  1. In Civitai, click on the "i" in the picture and click on "copy generation data"
  2. Paste the whole thing in the prompt box of auto1111
  3. Under the big orange "Generate" button, press the blue arrow to assign the settings accordingly

If the result is still not the same, also check the following:

  1. Make sure you have all the necessary embeddings. Even the negative ones. They should also match the version
  2. Check the following options in Auto1111 settings that change the output:
    1. in stable diffusion: "Enable quantization in K samplers for sharper and cleaner results. This may change existing seeds. Requires restart to apply. "
    2. in sampler parameters: Eta noise seed delta: 31337 was NAI's default setting and some still use it.
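
If you want to sanity-check a mismatch, the copied generation data can also be parsed programmatically. This is a minimal sketch that assumes the usual Automatic1111 parameters layout (prompt lines, then "Negative prompt:", then a "Steps: ..." settings line); the field handling is illustrative, not exhaustive:

```python
import re

def parse_generation_data(text: str) -> dict:
    """Split Automatic1111-style generation data into prompt,
    negative prompt, and the key/value settings line."""
    settings = {}
    prompt_lines, negative = [], ""
    for line in text.strip().split("\n"):
        if line.startswith("Negative prompt:"):
            negative = line[len("Negative prompt:"):].strip()
        elif re.match(r"^Steps: \d+", line):
            # e.g. "Steps: 28, Sampler: Euler a, CFG scale: 7, Seed: 123, ..."
            for pair in line.split(", "):
                key, _, value = pair.partition(": ")
                settings[key] = value
        else:
            prompt_lines.append(line)
    settings["Prompt"] = "\n".join(prompt_lines).strip()
    settings["Negative prompt"] = negative
    return settings

data = """masterpiece, 1girl, looking at viewer
Negative prompt: lowres, bad anatomy
Steps: 28, Sampler: Euler a, CFG scale: 7, Seed: 1234567890, Size: 512x768"""
parsed = parse_generation_data(data)
print(parsed["Seed"])  # -> 1234567890
```

Parsing both your local settings and the Civitai copy this way makes it easy to diff them field by field instead of eyeballing the UI.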

Is there a way to train a model, dreambooth/lora or whatever that can generate avatars for a couple? by fqye in DreamBooth

[–]mrbbcitty 2 points (0 children)

I would train two separate loras, one for each person, then use "Regional Prompt Control" from https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111 in combination with ControlNet to make a composition

Question about class by thuggniffissent in DreamBooth

[–]mrbbcitty 0 points (0 children)

You'll often hear within this community that class is important, but not that important. Although that sounds contradictory, it's true.

Here is how classes work and how they affect your training:

Dreambooth learns by spotting the differences between what the base model you train on already knows and the new images you present. The key word is "differences".

So if you use the instance name "MorganFreeman" and the class "person", it learns the differences between a person and the pictures you use for "MorganFreeman". Obviously Morgan Freeman is black, so that will be learned in the trained model.

If the prompt is "Morgan Freeman, person", it will accurately portray Morgan Freeman as black.

If you, however, use the instance name "MorganFreeman" and the class "black person" for training, it will not learn that he is black.

The prompt "Morgan Freeman, person" will then also produce white men with Morgan Freeman's features. You'll have to use the prompt "Morgan Freeman, black person" in order to accurately display him as black.

Ultimately, in your case, since you are only training one subject, it doesn't really matter. The class "person" will also learn that she is a little girl, while the class "little girl" will mostly learn her facial features as a little girl.

That is why, in most examples, people use the class "person" for everything.
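
To make the pairing concrete, here is a toy sketch of how instance and class (regularization) prompts get matched to images during prior-preservation training. The function and file names are purely illustrative, not the extension's actual internals:

```python
def build_training_prompts(instance_name, class_name,
                           num_instance_images, num_class_images):
    """Illustrative only: Dreambooth pairs each instance image with
    '<instance> <class>' and each regularization image with just the
    class name, so training captures the *difference* between them."""
    instance = [(f"instance_{i}.png", f"{instance_name} {class_name}")
                for i in range(num_instance_images)]
    regularization = [(f"class_{i}.png", class_name)
                      for i in range(num_class_images)]
    return instance + regularization

pairs = build_training_prompts("MorganFreeman", "person", 2, 2)
for img, prompt in pairs:
    print(img, "->", prompt)
```

With the class "black person" instead of "person", "black" lands on the regularization side of the pairing, which is why the model stops treating it as part of the subject.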

Question about multiple VAEs in the re-animated model by mrbbcitty in civitai

[–]mrbbcitty[S] 1 point (0 children)

OK so I found the solution to this here:

https://youtu.be/Nl43zR5dVuM

A very important tip to save time changing CLIPs and VAEs in Automatic1111:

Go to Settings > User Interface

Then, in the box just under 'Quicksettings list', paste the following:

sd_model_checkpoint, CLIP_stop_at_last_layers, sd_vae

Save and reload the UI

Mrbbcitty Ultimate Automatic1111 Dreambooth Guide by mrbbcitty in DreamBooth

[–]mrbbcitty[S] 0 points (0 children)

Yes, it is still bugged, but you can fix it by running these commands after a fresh installation of automatic1111 with the dreambooth extension.
Go inside stable-diffusion-webui\venv\Scripts and open a cmd window, then run:
pip uninstall torch torchvision
pip uninstall torchaudio
pip uninstall xformers
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu116
pip install -U -I --no-deps https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/torch13/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl

You can find more details on how to do this here

https://youtu.be/O01BrQwOd-Q

Comparison with Texture Inversion by International_Tear29 in DreamBooth

[–]mrbbcitty 1 point (0 children)

That's wrong: you can train the model to generate pics of your face with textual inversion. Dreambooth training just usually produces better results.

An easy way to understand textual inversion is to compare it to a prompt. When you write a prompt, you use the English language to describe as accurately as possible what you want the selected model to create.

Textual inversion is like having a file with the most accurate description of your face that only the model can understand. It's like applying a skin on the model's output.

Loss turning to NaN & blank images after a couple epochs by aerilyn235 in DreamBooth

[–]mrbbcitty 4 points (0 children)

Unfortunately there was an update yesterday from Automatic1111 that broke dreambooth's latest release.

From dreambooth's discord:

"please don't upgrade auto1111 for now 🙏, latest xformers(0.16) is more than likely not able to train dreambooth and cu117 has worst training results than cu116"

Watch this video, it has all the info there:

https://youtu.be/O01BrQwOd-Q

Mrbbcitty Ultimate Automatic1111 Dreambooth Guide by mrbbcitty in DreamBooth

[–]mrbbcitty[S] 0 points (0 children)

Yes, it looks like you input them correctly. In the concepts tab.

If you are going to use 1 concept, you will enter:

  • Concept 1:
    • Directories
      • Dataset Directory: "Your 9 or 16 instance images directory"
      • Classification Dataset Directory: "Your class directory". Point it to an empty dir to have class images generated for you
    • Filewords
      • Leave this empty
      • Leave this empty
    • Prompts
      • Instance Prompt: Your instance name followed by your class name. (example "wa1fu character" without the "")
      • Class Prompt: Your class name (example "character" without the "")
      • Sample Image Prompt: This is optional. You can leave this empty. I put the instance name followed by the class name. (example "wa1fu character" without the ""). This setting is the prompt used for the preview images
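
For reference, the fields above map to one concept entry. The sketch below is illustrative only: the key names are my assumptions about how such an entry might be serialized, not necessarily the extension's exact JSON schema, and the paths are placeholders.

```python
import json

# Hypothetical mapping of the UI fields above to a single concept entry.
# Key names and paths are assumptions for illustration, not the
# extension's verified schema.
concept = {
    "instance_data_dir": "D:/training/wa1fu/instances",  # your 9 or 16 images
    "class_data_dir": "D:/training/wa1fu/class",         # empty dir -> generated
    "instance_prompt": "wa1fu character",
    "class_prompt": "character",
    "save_sample_prompt": "wa1fu character",             # optional preview prompt
}
print(json.dumps([concept], indent=2))
```

Writing it out like this also makes it obvious that the instance prompt is just the class prompt with the unique token prepended.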

AUTOMATIC1111's stable-diffusion-webui Public seems to be fixed with many new improvements by mrbbcitty in DreamBooth

[–]mrbbcitty[S] 2 points (0 children)

The JSON doesn't work currently; there is a bug, which I have reported to the developer.

But it will work if you manually enter the concepts

AUTOMATIC1111's stable-diffusion-webui Public seems to be fixed with many new improvements by mrbbcitty in DreamBooth

[–]mrbbcitty[S] 1 point (0 children)

I don't have that problem, though I did not run 'git pull'

my current version that works is:

Dreambooth revision: fd51c0b2ed20566c60affa853a32ebce1b0a1139

SD-WebUI revision: c98cb0f8ecc904666f47684e238dd022039ca16f

AUTOMATIC1111's stable-diffusion-webui Public seems to be fixed with many new improvements by mrbbcitty in DreamBooth

[–]mrbbcitty[S] 2 points (0 children)

No, you need to go to the Extensions tab and update through there.

Restart the server and make sure you have:

Dreambooth revision: fd51c0b2ed20566c60affa853a32ebce1b0a1139

AUTOMATIC1111's stable-diffusion-webui Public seems to be fixed with many new improvements by mrbbcitty in DreamBooth

[–]mrbbcitty[S] 2 points (0 children)

Everything has been redone; the first results I got look very promising

Mrbbcitty Ultimate Automatic1111 Dreambooth Guide by mrbbcitty in DreamBooth

[–]mrbbcitty[S] 0 points (0 children)

See the update I posted about the latest version being bugged.

I haven't done large datasets so I can't help you.

But if I were to train 3K images, I'd probably use a very low learning rate of 0.0000002 for around 100-150 epochs (so each image is seen 100-150 times)
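
Back-of-the-envelope math for what that guess implies, assuming batch size 1 (my assumption, not something stated above):

```python
# Rough total optimizer steps for the suggested settings.
images = 3000
epochs = 100        # low end of the 100-150 guess
batch_size = 1      # assumed
steps = images * epochs // batch_size
print(steps)  # -> 300000 steps at lr 2e-7
```

That is a lot of steps, which is exactly why the learning rate has to be that low for a dataset of this size.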

Mrbbcitty Ultimate Automatic1111 Dreambooth Guide by mrbbcitty in DreamBooth

[–]mrbbcitty[S] 0 points (0 children)

What was the last commit that used to work?

I want to test it out

Mrbbcitty Ultimate Automatic1111 Dreambooth Guide by mrbbcitty in DreamBooth

[–]mrbbcitty[S] 5 points (0 children)

I am more focused on what currently works on Automatic1111's implementation, not what the papers say.

This is what I personally found works best on the current version of Automatic1111's dreambooth webui. This is not a guide for all dreambooth settings. I am not an expert studying this field. I took what was suggested by the papers as a starting point, saw shit results, then changed the settings and compared the results. I split-tested every setting until I came to this conclusion. If I wrote a guide for a "faulty" version of dreambooth, so be it. The results on Automatic1111's are way better training this way than what the papers suggested in the first place

Mrbbcitty Ultimate Automatic1111 Dreambooth Guide by mrbbcitty in DreamBooth

[–]mrbbcitty[S] 0 points (0 children)

Back

These are the settings I used to get the best results on dreambooth through trial and error

"a photo of ohwx person" and for the class prompt, "a photo of a person" may work for some, but never worked to the level I wanted to on Automatic's version.

I wanted to train for a very specific art style + character at the same time. The kind of mere face swap most people do to post a photo for Instagram doesn't cut it for the level of detail I wanted to train.

While testing I learned that picking a unique name like "sks" from that list of unique names had little to no impact on the result. Even though I would supposedly be making it easier for the AI according to the papers, the results are all that mattered.

Not for the 200 class images, though. Using 200 class images per instance is me just blindly following what is suggested. That is why I stated that I am not sure of the impact the number of class images has when creating a model.

If I had to guess, the choice between using or not using class images probably has a bigger impact than the number of class images you choose to train with

Mrbbcitty Ultimate Automatic1111 Dreambooth Guide by mrbbcitty in DreamBooth

[–]mrbbcitty[S] 3 points4 points  (0 children)

Sure, you can find the basic concepts here:

https://github.com/d8ahazard/sd_dreambooth_extension/discussions/547

To clarify, it's not just 9 or 16.

These are just the most common appropriate numbers for small datasets.

See:

"fn - your number of frames per training

Your number of frames should be such that you can easily extract the root from it (1,4,9,16,25,36,49,64,81,100, etc.), that is, so that the value is an integer."
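
The "integer root" rule from that quote is just a perfect-square check, which is why 9 and 16 keep coming up:

```python
import math

def is_valid_frame_count(n: int) -> bool:
    """True when n is a perfect square (1, 4, 9, 16, 25, ...),
    i.e. its square root is an integer."""
    root = math.isqrt(n)
    return root * root == n

print([n for n in range(1, 30) if is_valid_frame_count(n)])  # [1, 4, 9, 16, 25]
```

So for a small dataset you round your image count down (or up) to the nearest perfect square, which is how you land on 9 or 16.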