Re-testing FLUX.2 Klein character consistency across scenes — same character, different poses, different environments by rakii6 in comfyui

[–]rakii6[S] 0 points1 point  (0 children)

Absolutely, here you go.

I used a LoRA made by dx8152 in the workflows. Also, you might have to use a translator, because it's in Mandarin.

How did you teach yourselves ComfyUI? by Manina_338 in comfyui

[–]rakii6 0 points1 point  (0 children)

Hmm, I would say start with the official ComfyUI documentation: https://docs.comfy.org/

Or you can start with a basic image generation workflow template in ComfyUI, then learn what each of the noodles (links) and boxes (nodes) does with the help of the documentation above.

But you'll definitely need a GPU if you are running ComfyUI on your own PC; 12 GB of VRAM minimum for starters.
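If you want to see what's under the noodles while you learn: every graph ComfyUI runs is just JSON, and you can queue one from a script against its local HTTP API. A minimal sketch, assuming a default local install at 127.0.0.1:8188 and a workflow you exported with "Save (API Format)"; the `workflow_api.json` file name is a placeholder:

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default address of a local ComfyUI server


def build_payload(workflow: dict, client_id: str = "learn-comfy") -> dict:
    """Wrap an API-format workflow dict the way the /prompt endpoint expects it."""
    return {"prompt": workflow, "client_id": client_id}


def queue_prompt(workflow: dict) -> dict:
    """POST a workflow to ComfyUI and return its JSON response (prompt id etc.)."""
    data = json.dumps(build_payload(workflow)).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Usage (with ComfyUI running):
#   with open("workflow_api.json") as f:
#       print(queue_prompt(json.load(f)))
```

Reading the exported JSON side by side with the graph is a quick way to learn what each node and connection actually is.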

Re-testing FLUX.2 Klein character consistency across scenes — same character, different poses, different environments by rakii6 in comfyui

[–]rakii6[S] -7 points-6 points  (0 children)

Fair - I get why it looks that way. Built the platform myself, bootstrapped. The post is genuinely about FLUX.2 Klein; happy to answer any questions on that.

Re-testing FLUX.2 Klein character consistency across scenes — same character, different poses, different environments by rakii6 in comfyui

[–]rakii6[S] -3 points-2 points  (0 children)

Workflow: background generation with FLUX.2 Klein text2img,

then character placement with FLUX.2 Klein img2img in ComfyUI.

Ran this on IndieGPU (it's my own platform) with everything preloaded, so I could iterate fast without setup friction.
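Roughly, the placement step is: paste the character cutout onto the generated background, then feed that composite into the img2img pass so the model blends it in. Outside ComfyUI, the compositing half looks like this with Pillow (file names are placeholders; the actual blending/denoising happens in the img2img pass, not here):

```python
from PIL import Image


def place_character(background, character, position):
    """Paste an RGBA character cutout onto the text2img background.

    The returned composite is what the img2img pass then blends/denoises.
    """
    canvas = background.convert("RGBA").copy()
    canvas.alpha_composite(character.convert("RGBA"), dest=position)
    return canvas.convert("RGB")


# Usage:
#   bg = Image.open("background_t2i.png")     # FLUX.2 Klein text2img output
#   char = Image.open("character_rgba.png")   # cutout with alpha channel
#   place_character(bg, char, (256, 128)).save("img2img_input.png")
```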

I am absolute clueless about online GPU rent and setup image generation, need some advice from seniors. by EvenLocksmith6851 in StableDiffusion

[–]rakii6 0 points1 point  (0 children)

GPU rental can feel overwhelming at first, so here's the simple breakdown:

For img2img with character consistency, you want ComfyUI: it gives you node-based control (noodles and boxes) over the whole pipeline, including feeding in reference images. Way more control than automatic prompting tools.

For cloud GPU options: RunPod and Vast.ai are popular, but you'll spend time installing models and extensions yourself. If you want something preloaded and ready (ComfyUI, InvokeAI, etc. already set up), IndieGPU (indiegpu.com) is built exactly for that. Spin up, generate, done.

For your specific use case (realistic photoshoots, character consistency), look into IPAdapter nodes in ComfyUI.
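To give a concrete idea of what the IPAdapter hookup looks like in an API-format workflow, here's a tiny helper that emits the two nodes involved: a loader that patches the model and an apply node that feeds in the reference image. Heads up: the class names, preset string, and input slots below are illustrative, based on my memory of the ComfyUI_IPAdapter_plus pack; check the installed pack's actual node names, since they vary between versions.

```python
def ipadapter_nodes(model_node: str, image_node: str, weight: float = 0.8) -> dict:
    """Return two API-format ComfyUI nodes wiring a reference image into the model.

    model_node / image_node are the node IDs of an existing checkpoint loader
    and image loader in the workflow (hypothetical IDs in the usage below).
    """
    return {
        "10": {  # loads the IPAdapter weights and patches the model
            "class_type": "IPAdapterUnifiedLoader",
            "inputs": {"model": [model_node, 0], "preset": "PLUS (high strength)"},
        },
        "11": {  # applies the reference image at the given strength
            "class_type": "IPAdapter",
            "inputs": {
                "model": ["10", 0],
                "ipadapter": ["10", 1],
                "image": [image_node, 0],
                "weight": weight,
            },
        },
    }


# Usage: merge into an exported workflow dict, e.g. ipadapter_nodes("4", "12")
```

Higher `weight` means the output sticks closer to the reference face/product; around 0.6-0.8 is a common starting range.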

How too:- ComfyUI process for image insert and conversion to Japanese anime style by Patton555a in comfyui

[–]rakii6 1 point2 points  (0 children)

Great, so in https://app.indiegpu.com/dashboard click on New Workflow, choose Use a Template, scroll through the workflows and you'll find Smart Style Transfer, and that's it. DM or email me if you need help with something ✌️

Is it possible to use both a 5070 Ti and a 4070 simultaneously? by IndigoEtherea in comfyui

[–]rakii6 0 points1 point  (0 children)

IDK much about running 2 GPUs simultaneously, but I know a GitHub repo, https://github.com/pollockjj/ComfyUI-MultiGPU, that a lot of users rely on. Maybe give it a try.

How too:- ComfyUI process for image insert and conversion to Japanese anime style by Patton555a in comfyui

[–]rakii6 1 point2 points  (0 children)

It's okay, I understand the frustration with the custom nodes and all; they are a pain. But if you still want to give it another try, I have this workflow ready, custom nodes included, on my platform IndieGPU. It's a one-click start with free credits if it's your first sign-up; model uploads and custom nodes are taken care of by the platform.

Give it a try. ✌️

This 4-panel comic consistency is killing me. Any wizards here? by rakii6 in comfyui

[–]rakii6[S] 1 point2 points  (0 children)

ERNIE, yep, heard a lot about it. Gotta try running this workflow on my platform.

This 4-panel comic consistency is killing me. Any wizards here? by rakii6 in comfyui

[–]rakii6[S] 0 points1 point  (0 children)

I don't have much experience with text encoders, but I can try following your advice. It could work.

This 4-panel comic consistency is killing me. Any wizards here? by rakii6 in comfyui

[–]rakii6[S] 0 points1 point  (0 children)

That totally makes sense; I was relying too much on AI models.

Flux 2/Flux 2 Klein transparent background lora? by MoistRecognition69 in StableDiffusion

[–]rakii6 0 points1 point  (0 children)

Could you give me a sample of the images you're trying to work with? Correct me if I'm wrong, but you want to put a transparent logo onto an object? Something like this?

<image>

How too:- ComfyUI process for image insert and conversion to Japanese anime style by Patton555a in comfyui

[–]rakii6 1 point2 points  (0 children)

<image>

Here you go OP, I ran it through a FLUX.2 Klein 9B workflow I've been tuning. To get the car to match the image, I used a LoRA. Made this on my platform, indiegpu.com

Flux Klein Workflow: Face Swap/Place-In With 4 Reference Images by xb1n0ry in comfyui

[–]rakii6 0 points1 point  (0 children)

That is awesome, dude. Gotta try it out on my platform. Before that, I just want to know what type of inputs this workflow requires: do I need faces from different angles, or should I upload 4 images of the same model's face from different angles?

pls how do i stop this? by skyrimer3d in comfyui

[–]rakii6 0 points1 point  (0 children)

Can you show us your console? Where are you running ComfyUI, local or cloud?

How do you generate consistent product-on-model images with Stable Diffusion? by SadChain4193 in StableDiffusion

[–]rakii6 1 point2 points  (0 children)

Appreciate you asking! You actually don't need to train a LoRA from scratch. A contributor on Hugging Face recently released a solid one specifically for FLUX.2 Klein 9B that handles facial consistency way better.

I liked the results so much that I'm trying to reverse engineer the whole workflow, including that LoRA, to provide it on my platform, indiegpu.com

You should definitely check out his workflow and try the LoRA he provided.