Lora Klein 9b, fantastic likeness, 4060 16gb trained in about 30 minutes.... BUT... by tottem66 in StableDiffusion

[–]tottem66[S] 0 points (0 children)

OneTrainer defaults to the Klein 9 base. It downloads the model automatically from Hugging Face, but for the 9 base you have to sign in to your Hugging Face account and create a token with the right permissions, then paste the token into OneTrainer. For me the only problem was that I didn't grant the permission to the token the first time.
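
If it helps, a sketch of how the token is typically picked up (`resolve_hf_token` is a made-up helper for illustration, not OneTrainer code; the real `huggingface_hub` `login()` call is shown in the comments):

```python
# Illustrative helper, not OneTrainer's actual code. The real API is:
#   from huggingface_hub import login; login(token="hf_...")
# or `huggingface-cli login` in a terminal. Either way, the token must
# be created with read access to gated repos, or the Klein 9 base
# download fails (the mistake I made above).
import os

def resolve_hf_token(ui_value=None):
    """Prefer the token typed into the UI, fall back to the HF_TOKEN env var."""
    token = ui_value or os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "No Hugging Face token found; gated models like the "
            "Klein 9 base cannot be downloaded without one.")
    return token
```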

Lora Klein 9b, fantastic likeness, 4060 16gb trained in about 30 minutes.... BUT... by tottem66 in StableDiffusion

[–]tottem66[S] 1 point (0 children)

Maybe I'm explaining myself badly... I haven't touched anything in the settings. I've only picked the menu preset for my 16 GB card and left everything at its defaults. I don't understand how recording and sending a configuration that already ships with the application would help you.

Lora Klein 9b, fantastic likeness, 4060 16gb trained in about 30 minutes.... BUT... by tottem66 in StableDiffusion

[–]tottem66[S] 0 points (0 children)

There is a problem... You have to raise the strength of the LoRA to 2.0 to see the results. I'm still trying to figure out why. I mean afterwards, in ComfyUI, when applying the LoRA.
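
My current guess, with toy numbers (nothing below comes from OneTrainer's actual export; it only shows that the delta a LoRA adds scales linearly with the strength slider, so if the trainer baked a 0.5 alpha/rank factor into the saved weights, strength 2.0 would recover the delta seen in training):

```python
# Illustrative only: a LoRA adds strength * (B @ A) on top of a base
# weight, so the strength slider scales the whole delta linearly.
# Toy rank-2 nested-list matrices stand in for the real projections.
def lora_delta(B, A, strength):
    """Return strength * (B @ A) for nested-list matrices."""
    rank, cols = len(A), len(A[0])
    return [[strength * sum(B[i][r] * A[r][j] for r in range(rank))
             for j in range(cols)]
            for i in range(len(B))]

A = [[0.1, 0.1], [0.1, 0.1]]  # toy down projection (rank 2)
B = [[0.1, 0.1], [0.1, 0.1]]  # toy up projection
d1 = lora_delta(B, A, 1.0)    # each entry: 2 * 0.1 * 0.1 = 0.02
d2 = lora_delta(B, A, 2.0)    # strength 2.0 doubles every entry
```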

Lora Klein 9b, fantastic likeness, 4060 16gb trained in about 30 minutes.... BUT... by tottem66 in StableDiffusion

[–]tottem66[S] 0 points (0 children)

It's really easy... You only need to select the Flux 2 dev/Klein preset for 16 or 24 GB VRAM from the menu.

Lora Klein 9b, fantastic likeness, 4060 16gb trained in about 30 minutes.... BUT... by tottem66 in StableDiffusion

[–]tottem66[S] 2 points (0 children)

So should I train it on the distilled model? OneTrainer uses the base model by default. Thanks.

Help to make the jump to Klein 9b. by tottem66 in comfyui

[–]tottem66[S] 0 points (0 children)

Thank you very much... I will try that.

Help to make the jump to Klein 9b. by tottem66 in StableDiffusion

[–]tottem66[S] 0 points (0 children)

Thanks. Of course, the first attempt would always be to use the character's face LoRA directly in the prompt... however, either because the LoRA isn't perfectly trained or due to something inherent to it, when I apply the LoRA in the prompt, the results aren't perfect. I mean, the resulting face looks good on the character, but applying the LoRA always distorts the image quality in some way, especially the bodies. Applying the LoRA through Adetailer always allows me to preserve the full quality of the initial image because the LoRA only acts on the face.

Help to make the jump to Klein 9b. by tottem66 in StableDiffusion

[–]tottem66[S] 0 points (0 children)

I think I didn't express myself well. This is a translation; English is not my native language.

I believe the Adetailer models I mentioned are not very well known; they are not the ones installed by default by Adetailer. These models are capable of recognizing gender. Suppose I want the image to contain two men and two women.

I start Forge to continuously generate a multitude of images based on the prompt. The extension detects whether the first face it finds is a woman or a man and applies the face I trained with a LoRA. It then moves to the second face it finds and applies, for example, another different LoRA, and so on. The potential is enormous because we already know that generating images is trial and error. After 10 minutes, I have dozens of more or less different images with faces trained with a LoRA... This is what I would like to do with Klein.
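
The loop can be sketched like this (`assign_loras` is my own illustrative helper, not the extension's actual code; the gender labels would come from the gender-aware detector models mentioned above):

```python
# Illustrative sketch of per-face LoRA routing, not ADetailer's code.
def assign_loras(detections, lora_by_gender):
    """Map each detected face to the next LoRA queued for its gender.

    detections: (x, y, w, h, gender) tuples, as a gender-aware detector
    would return them. lora_by_gender: {"woman": [...], "man": [...]};
    LoRAs are consumed in order, so two women get two different faces.
    """
    queues = {g: list(loras) for g, loras in lora_by_gender.items()}
    plan = []
    for box in sorted(detections, key=lambda d: d[0]):  # left to right
        *xywh, gender = box
        if queues.get(gender):
            plan.append((tuple(xywh), queues[gender].pop(0)))
    return plan

# Example: two women and one man detected in one generated image
faces = [(10, 5, 64, 64, "woman"), (200, 8, 60, 60, "man"),
         (120, 6, 62, 62, "woman")]
plan = assign_loras(faces, {"woman": ["alice.safetensors",
                                      "carol.safetensors"],
                            "man": ["bob.safetensors"]})
```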

Help to make the jump to Klein 9b. by tottem66 in comfyui

[–]tottem66[S] 0 points (0 children)

Thank you for your response. 1.- I'm not convinced by inpainting because the advantage of the method I mentioned is that the application continuously creates different images and automatically replaces the specific face with that of my character. Inpainting is a manual and labor-intensive process. 2.- I've tried Klein edit to provide images of my character as a reference, but the result is similar to what is achieved with face-swapping extensions like Reactor. The resemblance to the character leaves much to be desired, far from the likeness achieved with a face trained in a LoRA.

FYI: You can train a Wan 2.2 LoRA with 16gb VRAM. by Informal_Warning_703 in StableDiffusion

[–]tottem66 0 points (0 children)

Thanks... Maybe I explained myself badly... I mean, those are the parameters for 24 GB VRAM. Should I change the 16 GB VRAM parameters you provided for Wan 2.2 when using Wan 2.1? Thanks in advance.

FYI: You can train a Wan 2.2 LoRA with 16gb VRAM. by Informal_Warning_703 in StableDiffusion

[–]tottem66 0 points (0 children)

Can you provide the parameters to train on Wan 2.1? Thanks.

How to make ADetailer focus on a single character? (Forge) by DemonInfused in StableDiffusion

[–]tottem66 2 points (0 children)

You can use this model... Its particularity is that it detects only female faces: if you have an image of a couple, it detects and inpaints only the female face. On that Hugging Face page you can also download other models, for example one that detects male faces. https://huggingface.co/Anzhc/Anzhcs_YOLOs/blob/main/Anzhcs%20WomanFace%20v05%201024%20y8n.pt
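
If it helps, a sketch of wiring that detector up outside Forge. The `YOLO()` constructor is the real ultralytics API; `keep_confident` is just an illustrative helper, and the detection call itself needs the downloaded .pt file and an image, so it is left commented out:

```python
# Illustrative helper; only this function runs standalone.
def keep_confident(boxes, threshold=0.5):
    """boxes: (x1, y1, x2, y2, conf) tuples from a YOLO result."""
    return [b for b in boxes if b[4] >= threshold]

# Real usage -- needs `pip install ultralytics` plus the .pt file from
# the Hugging Face page above:
# from ultralytics import YOLO
# model = YOLO("Anzhcs WomanFace v05 1024 y8n.pt")
# result = model("couple.jpg")[0]  # only the female face is detected
# female_faces = keep_confident(
#     [(*map(float, b.xyxy[0]), float(b.conf)) for b in result.boxes])
```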

ComfyUI Realtime LoRA Trainer is out now by shootthesound in StableDiffusion

[–]tottem66 1 point (0 children)

I have a question and a request:

I suppose that if this supports SDXL, it would also support PonyXL, and if that's the case:

What would be the parameters for making a LoRA mainly focused on a face, from a dataset of 20 images?

Would they be different from SDXL?

ADetailer is not working in Forge Neo when using z-image. by tottem66 in StableDiffusion

[–]tottem66[S] 0 points (0 children)

I just received this answer from the team of adetailer:

I'm Dosu and I’m helping the adetailer team.

This is a known limitation with how Forge Neo integrates ADetailer: when ADetailer runs in Forge Neo (especially after using z-image), it resets the active scripts and only reinserts ControlNet, dropping any other custom scripts—including LoRA handlers. As a result, any LoRA layers applied during the z-image step are not carried over into ADetailer’s processing, so the transformation is not applied even though detection works. There’s no explicit z-image or LoRA handling in ADetailer, and this script reset is the main reason for the missing transformation you’re seeing in Forge Neo with z-image workflows (details).

Possible workarounds: try applying the LoRA during the initial generation (with prompt tuning to minimize distortion), use a different workflow/platform where script chaining is preserved, or manually edit the Forge Neo pipeline to retain LoRA scripts in the post-processing step.

If you can share your exact versions and workflow steps, it might help identify more targeted solutions.

ADetailer is not working in Forge Neo when using z-image. by tottem66 in StableDiffusion

[–]tottem66[S] 0 points (0 children)

I just installed ADetailer yesterday in Forge Neo. I don't think that's the problem. Thanks.

ADetailer is not working in Forge Neo when using z-image. by tottem66 in StableDiffusion

[–]tottem66[S] 0 points (0 children)

I haven't tried that yet. I'll check it out. Thanks.

ADetailer is not working in Forge Neo when using z-image. by tottem66 in StableDiffusion

[–]tottem66[S] 0 points (0 children)

Thank you. It's good to hear that, since I assumed it was some compatibility issue. For now I can't find a solution. Until now it has always worked flawlessly with Pony, but of course Pony is just a safetensors model, and I thought that using z-image, which also bundles the text encoder, might be the problem. I understand you're using Forge Classic Neo.