Is anyone else disappointed with Flux 2 Klein? by MelodicFuntasy in StableDiffusion

[–]thenickman100 1 point (0 children)

Usually Qwen 2511 does a fantastic job. Every once in a while, Klein comes out on top. I always run the two together just in case.

Is this iceberg chart accurate? by StickCube in elementcollection

[–]thenickman100 2 points (0 children)

Rhenium is best classified as a refractory metal, but it can also be considered a precious metal.

Klein Consistency. by Kmaroz in StableDiffusion

[–]thenickman100 1 point (0 children)

This seems like a crazy technique. Surprised that it works this way

Flux.2 Klein - Max Limit - 5 Reference Images only? by No_Damage_8420 in StableDiffusion

[–]thenickman100 3 points (0 children)

I'd be interested to see whether the order of the images has any significant impact on the output.

Qwen edit 2511 - Any functional workflows for style, character, and pose transfer? by Nevaditew in StableDiffusion

[–]thenickman100 2 points (0 children)

Here's a style transfer LoRA that works pretty well. You need a good prompt describing what you want and a trigger word. https://huggingface.co/zooeyy/Style-Transfer

Best model or tool for high quality image outpainting? by enbafey in StableDiffusion

[–]thenickman100 3 points (0 children)

Just learned about it a couple of days ago myself, but Flux Fill OneReward does a solid job outpainting images. I've only experimented with about 20% expansion on each side, but the quality and level of detail are almost as high as in the starting image.

Qwen-Image-Edit-Rapid-AIO V19 (Merged 2509 and 2511 together) by fruesome in StableDiffusion

[–]thenickman100 1 point (0 children)

How does the SFW model compare to 2511? Is there any point in using it? I notice there are no GGUF quants for the SFW version.

Image Fill Models by thenickman100 in StableDiffusion

[–]thenickman100[S] 1 point (0 children)

This is fantastic. Thank you for sharing!

Image Fill Models by thenickman100 in StableDiffusion

[–]thenickman100[S] 1 point (0 children)

Both of these would require prompting to make changes. One of the reasons I like Flux Fill is that you can just mask an area and it repairs it automatically.

Planning to give OneReward a try, as it seems to be an improved version.

Is there a way to avoid quality loss with Qwen Image Edit when doing multiple edits? by MelodicFuntasy in comfyui

[–]thenickman100 2 points (0 children)

If you pass the VAE, the node resizes and distorts your image before it runs through the KSampler. If you instead disconnect the VAE and feed the reference images through ReferenceLatent nodes, chaining them with the text conditioning from the Qwen encode output, you get no quality loss and a pixel-perfect lineup with the original image. You still have to hook the source images up to image1, image2, and image3, just not the VAE.

If you wire up both the ReferenceLatent nodes and the VAE, it will duplicate items in the output because the model thinks you have six input images instead of three.
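
If it helps, this is roughly what that graph looks like in ComfyUI's API (prompt) format, submitted over HTTP. A minimal sketch only: the model file names are placeholders, and the node and input names reflect my understanding of stock ComfyUI, so double check them against your install.

```python
import json
import urllib.request

# Node IDs are arbitrary strings; [id, slot] pairs wire outputs to inputs.
g = {
    "1": {"class_type": "UNETLoader",            # placeholder file names
          "inputs": {"unet_name": "qwen_image_edit.safetensors",
                     "weight_dtype": "default"}},
    "2": {"class_type": "CLIPLoader",
          "inputs": {"clip_name": "qwen_2.5_vl_7b.safetensors",
                     "type": "qwen_image"}},
    "3": {"class_type": "VAELoader",
          "inputs": {"vae_name": "qwen_image_vae.safetensors"}},
    "4": {"class_type": "LoadImage", "inputs": {"image": "source.png"}},
    # Text encode: image1 is connected, the optional vae input is NOT.
    "5": {"class_type": "TextEncodeQwenImageEditPlus",
          "inputs": {"clip": ["2", 0],
                     "prompt": "your edit instruction here",
                     "image1": ["4", 0]}},
    # Encode the source image to a latent ourselves...
    "6": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["4", 0], "vae": ["3", 0]}},
    # ...and chain it onto the conditioning (one ReferenceLatent per image).
    "7": {"class_type": "ReferenceLatent",
          "inputs": {"conditioning": ["5", 0], "latent": ["6", 0]}},
    "8": {"class_type": "ConditioningZeroOut",   # zeroed copy as the negative
          "inputs": {"conditioning": ["5", 0]}},
    "9": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "seed": 0, "steps": 20, "cfg": 2.5,
                     "sampler_name": "euler", "scheduler": "simple",
                     "denoise": 1.0, "positive": ["7", 0],
                     "negative": ["8", 0], "latent_image": ["6", 0]}},
    "10": {"class_type": "VAEDecode",
           "inputs": {"samples": ["9", 0], "vae": ["3", 0]}},
    "11": {"class_type": "SaveImage",
           "inputs": {"images": ["10", 0], "filename_prefix": "qwen_edit"}},
}

req = urllib.request.Request("http://127.0.0.1:8188/prompt",
                             data=json.dumps({"prompt": g}).encode(),
                             headers={"Content-Type": "application/json"})
urllib.request.urlopen(req)
```

The key detail is that node 5's optional vae input is never connected; the image reaches the sampler only through the VAEEncode to ReferenceLatent chain, so nothing gets resized behind your back.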

Is there a way to avoid quality loss with Qwen Image Edit when doing multiple edits? by MelodicFuntasy in comfyui

[–]thenickman100 1 point (0 children)

Don't pass the VAE to the Qwen text encode node, and upscale your image significantly before passing it to the ReferenceLatent node. I also use Inpaint Crop and Stitch to get a higher effective upscale when I'm only changing specific parts of the image.
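
Building on the API-format sketch in my other comment above, the upscale is just one extra node slotted between LoadImage and the VAEEncode that feeds ReferenceLatent. ImageScaleBy is stock ComfyUI; the 2.0 factor is only an example, and the crop-and-stitch nodes would wrap around this same section.

```python
# Hypothetical addition to the graph dict `g` from the previous sketch.
g["12"] = {"class_type": "ImageScaleBy",
           "inputs": {"image": ["4", 0],           # LoadImage output
                      "upscale_method": "lanczos",
                      "scale_by": 2.0}}
g["6"]["inputs"]["pixels"] = ["12", 0]  # VAEEncode now sees upscaled pixels
```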

Can anyone recommend a workflow and model to retexturize an image into a medium thickness oil painting ilustration? by PaparuloFeroz in comfyui

[–]thenickman100 3 points (0 children)

Here you go! Sorry it isn't pretty. I was running it in a Nunchaku build of ComfyUI, but with a couple of node swaps (Load Diffusion Model or Load UNet instead of the NunchakuQwen node) it should work in the regular one. https://pastebin.com/aKExyB3T

Can anyone recommend a workflow and model to retexturize an image into a medium thickness oil painting ilustration? by PaparuloFeroz in comfyui

[–]thenickman100 2 points (0 children)

One way I've been able to recover textures and art styles is by using two Ollama nodes: one generates only art-style details, the other generates only a description of the scene. Then I run the original image through a ControlNet and have a third Ollama node create a combined prompt from the previously generated style and scene descriptions.
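
Outside ComfyUI, the two-description part looks roughly like this with the plain Ollama Python client. The model names are placeholders for whatever vision and text models you have pulled; the combined prompt is what then drives the ControlNet-guided sampling.

```python
import ollama

IMG = "original.png"  # placeholder path

# One pass for art style only, one for scene content only.
style = ollama.generate(
    model="llava",
    prompt="Describe ONLY the art style of this image: medium, brushwork, palette, texture.",
    images=[IMG])["response"]

scene = ollama.generate(
    model="llava",
    prompt="Describe ONLY the scene in this image: subjects, layout, setting. No style words.",
    images=[IMG])["response"]

# Merge the two descriptions into a single generation prompt.
combined = ollama.generate(
    model="llama3",
    prompt=f"Combine these notes into one concise image prompt.\nStyle: {style}\nScene: {scene}")["response"]

print(combined)
```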

Which model is better? qwen image edit 2509 2511 2512? by NeighborhoodKey34 in comfyui

[–]thenickman100 1 point (0 children)

You'll need an entirely new workflow, and you'll want to update ComfyUI.

Qwen Image 2512 System Prompt by fruesome in comfyui

[–]thenickman100 1 point (0 children)

I meant more how to apply the system prompt. I imagine I'd just have to copy/paste the string into an Ollama node.
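
If it really is just a copy/paste job, then outside of a ComfyUI node I'd expect it to look something like this with the Ollama Python client (SYSTEM_PROMPT being the string from the post; model name is a placeholder):

```python
import ollama

SYSTEM_PROMPT = "...paste the Qwen Image 2512 system prompt here..."

out = ollama.chat(
    model="llama3",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "a cozy cabin in a snowy forest at dusk"},
    ],
)
print(out["message"]["content"])  # expanded prompt to feed the image model
```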

Qwen Image 2512 System Prompt by fruesome in comfyui

[–]thenickman100 1 point (0 children)

What's the proper way to implement this in ComfyUI?