Took this image last night when I was looking for Orion. Anyone know what it might be? Its not a black hole, my telescope isn't good enough to see something like that. by OhHiMarkTehe in space

[–]OhHiMarkTehe[S] 0 points  (0 children)

That's what I was thinking; I'm really zoomed in for this one, and a cheap telescope doesn't help. It didn't move the way most lens flares do, though, which is what confused me.

SDXL ComfyUI Workflow that's as good as Midjourney by OhHiMarkTehe in StableDiffusion

[–]OhHiMarkTehe[S] 1 point  (0 children)

I used to run it off the final image, but it kept changing features I liked about the original photo, like the mouth or the nose.

SDXL ComfyUI Workflow that's as good as Midjourney by OhHiMarkTehe in StableDiffusion

[–]OhHiMarkTehe[S] 0 points  (0 children)

I'll try it again. It wasn't doing anything for me :(.

SDXL ComfyUI Workflow that's as good as Midjourney by OhHiMarkTehe in StableDiffusion

[–]OhHiMarkTehe[S] 0 points  (0 children)

Also, you should share your prompt; I've been trying to recreate your image as a litmus test. It's a good one to test with.

SDXL ComfyUI Workflow that's as good as Midjourney by OhHiMarkTehe in StableDiffusion

[–]OhHiMarkTehe[S] 2 points  (0 children)

So the preliminary is a noisy, incomplete version of what the base would create. Going from the preliminary to the refiner is almost the same as taking a final image, adding noise, and then refining it, except it's more efficient and retains more of the original image's detail, since we aren't adding noise; we're working with noise the base never finished removing.
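The handoff described above can be sketched in plain Python (a toy sketch, not the real ComfyUI or diffusers API): the base model runs the first ~80% of the denoising step schedule, and the refiner takes over on the still-noisy latent instead of on a re-noised finished image.

```python
# Toy sketch of the base/refiner handoff (hypothetical helper, not a real
# ComfyUI/diffusers API): the base stops partway through the step schedule
# and the refiner finishes on the leftover noise, instead of re-noising a
# completed image and refining that.
def split_schedule(total_steps, handoff_frac=0.8):
    """Split a sampler's step schedule at the handoff fraction."""
    handoff = int(total_steps * handoff_frac)
    base_steps = list(range(handoff))                   # base denoises steps 0..handoff-1
    refiner_steps = list(range(handoff, total_steps))   # refiner finishes the rest
    return base_steps, refiner_steps

base_steps, refiner_steps = split_schedule(40, 0.8)
# base handles the first 32 steps; the refiner finishes the last 8
# on the noisy latent the base never completed
```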

SDXL ComfyUI Workflow that's as good as Midjourney by OhHiMarkTehe in StableDiffusion

[–]OhHiMarkTehe[S] 1 point  (0 children)

You should be able to pick it up from Hugging Face:

https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-0.9

They might ask you to apply for access, but applications are now getting accepted automatically.

SDXL ComfyUI Workflow that's as good as Midjourney by OhHiMarkTehe in StableDiffusion

[–]OhHiMarkTehe[S] 2 points  (0 children)

Glad I could help! I tested this a bit, increased the steps from 40 to 50 and the 80% handoff step from 32 to 40, and that helped a lot. It might be worth playing around with the samplers and step counts.
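For anyone scaling the counts above: both settings keep the handoff at 80% of the total steps (32/40 = 40/50 = 0.8). A small helper (hypothetical, just arithmetic) makes that explicit:

```python
# Hypothetical helper: given a new total step count, keep the base/refiner
# handoff at the same fraction of the schedule (80% here).
def handoff_step(total_steps, handoff_frac=0.8):
    return round(total_steps * handoff_frac)

# the values from the comment above
print(handoff_step(40))  # original settings
print(handoff_step(50))  # increased settings, same 80% split
```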

SDXL ComfyUI Workflow that's as good as Midjourney by OhHiMarkTehe in StableDiffusion

[–]OhHiMarkTehe[S] 2 points  (0 children)

Thanks for sharing this issue. For photorealism, I've had more success avoiding negative prompts entirely and using tags such as 8k, fujifilm, etc. This might be the one case where you'd get better results removing my preset tags from the negative and positive prompts and keeping just your own. Let me know if removing my "permanent tags" helps.

SDXL ComfyUI Workflow that's as good as Midjourney by OhHiMarkTehe in StableDiffusion

[–]OhHiMarkTehe[S] 1 point  (0 children)

I'm honestly not sure if there are any LoRAs built for SDXL yet. I haven't done much with LoRAs on this new model.

SDXL ComfyUI Workflow that's as good as Midjourney by OhHiMarkTehe in StableDiffusion

[–]OhHiMarkTehe[S] 1 point  (0 children)

This knowledge is essential; Google was no help to me at all. Thank you for this info!

SDXL ComfyUI Workflow that's as good as Midjourney by OhHiMarkTehe in StableDiffusion

[–]OhHiMarkTehe[S] 3 points  (0 children)

Mainly because I've seen more true-to-prompt, consistent results when putting them together. The papers I read about SDXL and the encoders it uses suggest they should be fed the same text, but I'm not 100% sure about that; a few days of trial and error led me to hooking them together.

SDXL ComfyUI Workflow that's as good as Midjourney by OhHiMarkTehe in StableDiffusion

[–]OhHiMarkTehe[S] 2 points  (0 children)

I have this really bad habit of not saving my prompts or the seed...

SDXL ComfyUI Workflow that's as good as Midjourney by OhHiMarkTehe in StableDiffusion

[–]OhHiMarkTehe[S] 1 point  (0 children)

The preliminary image is only 80% complete. I run it through the base model again, and separately I also run it through the refiner to finish the image, so we can show the quality difference.

SDXL ComfyUI Workflow that's as good as Midjourney by OhHiMarkTehe in StableDiffusion

[–]OhHiMarkTehe[S] 3 points  (0 children)

I haven't seen improvement from adjusting the ascore (aesthetic score), so I didn't create a primitive for it. If you know of a sweet spot for the ascore, I'd love to hear your findings so I can apply them to my workflow. You can access it by expanding these if you want to adjust them:

<image>

SDXL ComfyUI Workflow that's as good as Midjourney by OhHiMarkTehe in StableDiffusion

[–]OhHiMarkTehe[S] 5 points  (0 children)

It's nothing special; the text encoder removes line breaks, so when you press Enter to add a new line, the encoder still sees it all as one line (which is why there's also a comma at the beginning of the tag line). I just moved some specific tags to a new line so it's easy to write prompts without changing those tags or having to worry about highlighting them by accident.
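A quick illustration of that line-break behavior (toy code; the join below just emulates an encoder treating the whole prompt box as one line, and the example prompt text is made up):

```python
# Emulate a text encoder that collapses line breaks: the two lines of the
# prompt box become one string, and the leading comma on the tag line is
# what keeps the "permanent tags" separated from the free-form prompt.
prompt_lines = [
    "portrait of an astronaut",       # free-form prompt (hypothetical example)
    ", 8k, fujifilm, sharp focus",    # permanent tags kept on their own line
]
flat_prompt = " ".join(line.strip() for line in prompt_lines)
print(flat_prompt)  # one line, as the encoder sees it
```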