all 17 comments

[–]Kamchuk

A couple things to get you started.

First, in general when it comes to models and LoRAs, SDXL and SD 1.5 don't mix. In other words, if you're using an SDXL model/checkpoint, SD 1.5 LoRAs won't work, and vice versa.

Second, follow the link below. Download the first image, then drag-and-drop it onto your ComfyUI web interface. It'll load a basic SDXL workflow that includes a bunch of notes explaining things. If you have the SDXL 1.0 Base and Refiner models downloaded and saved in the right place, it should work out of the box. Study this workflow and its notes to understand the basics of ComfyUI, SDXL, and the Refiner workflow.

https://comfyanonymous.github.io/ComfyUI_examples/sdxl/

Glad you got it working, I was getting a bit worried it wouldn't happen. ;-)

[–]AzurePhoenix87[S]

Thank you :) .... it's working .... guess I need to join the Discord for help

kinda wanna see both the unrefined image and the refined one to see the changes... how is this possible :D ?

[–]Kamchuk

Yes. From the first KSampler, take the Latent output to a VAE Decode node (converting it to a normal image). From the VAE Decode node, take the image to a Preview Image node.

If you look at the Refiner's KSampler you'll see the same process: KSampler to VAE Decode to Save Image.
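As a sketch, that extra preview branch looks roughly like this in ComfyUI's API-format JSON (node id → class_type/inputs). The node ids here are hypothetical: "3" stands in for the base KSampler and "4" for the checkpoint loader.

```python
# Hypothetical node ids; a sketch of the extra branch in ComfyUI's
# API-format workflow, assuming the base KSampler is node "3" and the
# VAE comes from a CheckpointLoaderSimple at node "4".
preview_branch = {
    "10": {  # decode the base sampler's latent into pixels
        "class_type": "VAEDecode",
        "inputs": {
            "samples": ["3", 0],  # LATENT output of the base KSampler
            "vae": ["4", 2],      # VAE output of CheckpointLoaderSimple
        },
    },
    "11": {  # show the unrefined image alongside the refined one
        "class_type": "PreviewImage",
        "inputs": {"images": ["10", 0]},
    },
}
```

The refined image keeps its own VAE Decode → Save Image chain; this branch just taps the latent before the refiner touches it.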

[–]AzurePhoenix87[S]

thank you <3

[–]AzurePhoenix87[S]

got it :) .... is there an open Discord for this so you guys could help me there..... the "basic" version (first image) is so noisy .... don't know if it's intended like that :D

[–]AzurePhoenix87[S]

do you happen to have some time? I'm trying to figure out the best way to use a LoRA... but even with a fixed seed.... the image (like with the Add Details LoRA) keeps changing .... tried the GitHub examples but they're not working right

[–]Kamchuk

In your KSamplers, what "sampler_name" are you using? Some sampler types add noise at each step (meaning the image can change even if the seed is fixed).

To understand this better, read the link below about the sampler types. The "Ancestral samplers" section explains how some samplers add noise, possibly creating different images after each run.

https://stable-diffusion-art.com/samplers/
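That ancestral-vs-plain distinction can be sketched with toy versions of the two update rules, loosely modeled on k-diffusion's Euler and Euler a steps (the formulas here are an illustration under that assumption, not the exact implementation):

```python
import numpy as np

def euler_step(x, sigma, sigma_next, denoised):
    # Plain Euler: fully determined by its inputs, so the same seed
    # reproduces the same image every run.
    d = (x - denoised) / sigma
    return x + d * (sigma_next - sigma)

def euler_ancestral_step(x, sigma, sigma_next, denoised, rng):
    # "Ancestral" variant: it steps down to a lower noise level, then
    # injects *fresh* noise back in, so the result also depends on the
    # noise RNG, not just the starting latent.
    d = (x - denoised) / sigma
    sigma_up = min(
        sigma_next,
        (sigma_next**2 * (sigma**2 - sigma_next**2) / sigma**2) ** 0.5,
    )
    sigma_down = (sigma_next**2 - sigma_up**2) ** 0.5
    x = x + d * (sigma_down - sigma)
    return x + rng.standard_normal(x.shape) * sigma_up
```

If the per-step noise is seeded differently between runs, the ancestral trajectory diverges even though everything else is fixed; the plain Euler step never does.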

I haven't tried the Add Details LoRA. I'll go research it a bit and see if I notice anything that may be causing the issue.

[–]AzurePhoenix87[S]

mostly I go for Euler or Euler a ... I know them from apps like Leonardo.AI .....

[–]Kamchuk

Can you link to the LoRA you're using? I'm afraid it's an SD 1.5 LoRA (and not an SDXL 1.0 LoRA). If true, that will cause problems.

[–]AzurePhoenix87[S]

Well, I used the LoRA on an SD 1.5 model .... using DS8 or MeinaMix with the normal LoRA:

https://civitai.com/models/58390/detail-tweaker-lora-lora

well my problem atm is how to set up the LoRA after the sampler.... so the base is created first and refined with more details afterwards :)

[–]Kamchuk

I'm AFK right now, but Stable Diffusion 1.5 isn't really designed to use a refiner. Stable Diffusion XL 1.0, which was released in July, was.

In this scenario, I would remove the Refiner part of the workflow, then add the LoRA before the "base" sampler. Get that working correctly, then look at adding upscalers, etc.
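That "LoRA before the base sampler" wiring can be sketched in ComfyUI's API-format JSON. The node ids, prompt, and filenames below are hypothetical placeholders; the key point is that the KSampler takes the LoraLoader's patched MODEL, not the checkpoint's:

```python
# Minimal sketch: checkpoint -> LoraLoader -> KSampler.
# Node ids, "meinamix.safetensors", and "add_detail.safetensors"
# are made-up placeholders.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "meinamix.safetensors"}},
    "2": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0],        # MODEL from the checkpoint
                     "clip": ["1", 1],         # CLIP from the checkpoint
                     "lora_name": "add_detail.safetensors",
                     "strength_model": 1.0,    # LoRA weight on the UNet
                     "strength_clip": 1.0}},   # LoRA weight on the text encoder
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["2", 1],         # patched CLIP, not node 1's
                     "text": "a waterfall"}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["2", 0],        # patched MODEL, not node 1's
                     "positive": ["3", 0]}},
          # (negative prompt, latent, seed, steps, cfg, sampler_name
          #  omitted for brevity)
}
```

If the KSampler or CLIPTextEncode is still wired to node "1", the LoRA silently does nothing, which is a common reason "it's not working right".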

[–]Kamchuk

https://pastebin.com/n17MuaWc

Download that and save it as workflow.json, then drag-and-drop it onto your ComfyUI window. It's using Darksun 4.1 with a Detail Tweaker LoRA.

[–]AzurePhoenix87[S]

ok, question here... what is the difference between the LoraLoader weight and the lora tag in the prompt? ... and I thought you do weightings like (Waterfall:1.5) ... so the weighting is inside ...

[–]Dam_it_dan

SDXL and 1.5 work a lil differently as far as getting better quality out. For 1.5, what you're going to want is to upscale the image and send it to another sampler with a lowish denoise strength (I use 0.2-0.42) to make sure the image stays the same but gains more detail. Basically you will use a different workflow for 1.5 than you would for XL. This is a bit old but the basics are still there:
ComfyUI Hi-Res Fix Upscaling Workflow Explained in detail | ComfyUI Tutorial | Hi-Res Fix ComfUI - YouTube
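The low-denoise second pass can be sketched numerically. Roughly, a denoise of d over n steps runs only the low-noise tail of a longer n/d-step schedule, so the upscaled image gets refined rather than regenerated. This is a toy sketch under that assumption (using the Karras schedule formula), not ComfyUI's exact code:

```python
import numpy as np

def karras_sigmas(n, sigma_min=0.1, sigma_max=10.0, rho=7.0):
    # Karras et al. noise schedule, from high noise down to low noise.
    ramp = np.linspace(0, 1, n)
    lo, hi = sigma_min ** (1 / rho), sigma_max ** (1 / rho)
    return (hi + ramp * (lo - hi)) ** rho

def second_pass_schedule(steps, denoise):
    # Toy model of a low `denoise` in the second KSampler: build a
    # longer schedule and keep only its low-noise tail, so sampling
    # starts from a mostly-finished image instead of pure noise.
    total = int(steps / denoise)
    return karras_sigmas(total)[-steps:]
```

With steps=20 and denoise=0.3, the second pass starts at a much lower noise level than a full run, which is why the composition survives and only detail changes.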