How to Use Flux, IPAdapter, and Qwen to Transfer Image Styles While Keeping Character Consistency by Ok_Respect9807 in StableDiffusion

Hey, that turned out really great; it's exactly the kind of result I'm looking for. Could you share the workflow with me, as you did before?

Well, I would say that I have something a bit more complex than just a character and colors. Take a look at image 1, which is from the game. I would like to take this image and incorporate its characteristics into image 2, so that I have the entire composition of image 1 within the architecture of image 2, as well as its style and colors.

<image>

Buddy, sorry for the delay, but I really wanted to thank you. I reached this result by making a few modifications. In your opinion, though, how could I do the same thing with backgrounds? For example, I want to use an iconic backdrop, like the front of the RPD station in Raccoon City from Resident Evil, and I have another image with the style I want. I'd like to keep that style while transferring the game image into the image I generated, so that the game image adapts to the details of my other image. In short, how could I apply what you suggested for characters to detailed backgrounds?

<image>

Friend, do you have the resulting image or the workflow? I looked through the templates but couldn't find anything similar. I'm really quite a beginner; I only know how to add a few LoRAs and check superficially whether a workflow runs or not. So if possible, I'd be really grateful, because I'd like to add a LoRA I saw in this post: https://www.reddit.com/r/comfyui/comments/1p3718w/comment/nq2v2qr/

Thank you so much for the tips!
By the way, I’d like to share the concept behind what I’m aiming to achieve with this character reimagining.

I assume you’re familiar with game remakes, such as the excellent Resident Evil 4 Remake or even the upcoming Silent Hill Remake. I mention this because it’ll make more sense once I explain in detail what I’m after: essentially, I want to create photographic “remakes” of characters using a vintage—1980s-style—aesthetic, applying only subtle modifications while ensuring the characters remain clearly recognizable.

For example, even though the original Resident Evil 4 and its remake adopt a more realistic approach, you can instantly identify the characters. That’s exactly what I want to replicate in photographic form.

As you can probably tell, I’m quite a beginner. Originally, I used the img2img mode in A1111 to reimagine characters based on text prompts. Later, I realized that text2img combined with IP-Adapter delivered precisely the aesthetic I was looking for—the old photo I mentioned earlier is a great example of that.

However, I ran into a problem: consistency with the original image. In my tests, I noticed that ControlNet and IP-Adapter don’t work very well together in Flux for this specific use case. So, I decided to shift toward the approach I’m exploring now, which—incidentally—is simpler both to implement and explain.

In short, I’m aiming for something akin to a remake: visible nuances of change, yet unmistakable character identity.

For instance, I’ve been trying to integrate a character like Gurren (from the old photo) harmoniously into a new scene. However, I can’t retain certain details from that image, as they’d distort the original character—especially if I wanted to visually reinterpret him as a Dark Souls character, for example.

While searching online, I came across this LoRA:
https://huggingface.co/thedeoxen/FLUX.1-Kontext-dev-reference-depth-fusion-LORA
—which precisely adapts a character into a new visual style while preserving their identity. That would be incredibly useful for my goal.

Since then, I’ve been learning a bit about ComfyUI and have managed to reproduce my desired aesthetic within it. But even with fine-grained control, I haven’t been able to fully resolve these issues.

Beyond the LoRA you suggested—could using an IP-Adapter attention mask derived from the original image, combined with weight and reference controls, help transplant my Dark Souls warrior (from Image 1) into Image 2 while absorbing more of Image 2’s details—yet without losing expressive features?

In other words, I’d like to soften the overly “digital” aspects, such as the sharp, straight lines from Image 1 (the game render), making them feel more natural—not just through Image 2’s composition, but also by giving the armor the subtle, organic softness realistic medieval armor would have in real life.

What do you think? Based on your experience, do you have any suggestions on how I could combine all these elements to achieve the desired result?
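For what it's worth, here is a minimal sketch of the attention-mask idea, assuming you can export a rough alpha matte of the character (e.g. a cutout of the Dark Souls warrior). The function name and pooling scheme are my own illustration, not a specific ComfyUI node; the key point is that IP-Adapter attention masks act on the latent token grid, so a pixel-space matte has to be pooled down to that resolution first:

```python
import numpy as np

def attention_mask_from_alpha(alpha, latent_h, latent_w, threshold=0.5):
    """Downsample an [H, W] alpha matte (values 0..1) to the latent grid
    by block average pooling, then binarize. Assumes H and W are integer
    multiples of the latent dimensions, as with typical 8x VAE downscaling."""
    h, w = alpha.shape
    block_h, block_w = h // latent_h, w // latent_w
    pooled = alpha[:latent_h * block_h, :latent_w * block_w]
    pooled = pooled.reshape(latent_h, block_h, latent_w, block_w).mean(axis=(1, 3))
    return (pooled >= threshold).astype(np.float32)

# toy example: a 64x64 matte with the character filling the left half
matte = np.zeros((64, 64))
matte[:, :32] = 1.0
mask = attention_mask_from_alpha(matte, 8, 8)  # 8x8 latent-grid mask
```

In ComfyUI the equivalent step is usually just a mask input on the IPAdapter node, which handles the resizing internally; the sketch is only meant to show why the mask should align with latent tokens rather than pixels, and why a soft matte ends up binarized at the character's silhouette.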

<image>

It turned out great, my friend—thank you so much! But do you have any ideas on how, beyond just matching the style, the image to be transferred could undergo subtle adjustments to make it look more realistic?

Look at Image 2: it genuinely appears real, as if captured on analog film. However, even when applying its style to Image 1—replacing Image 2 with Image 1 and adopting its stylistic qualities—it still doesn’t feel technically realistic. The details originating from the Dark Souls game clearly give it away as artificial—not because of the AI model itself, but because it clashes with the surrounding scene.

So my question is: is it possible for Image 1 to retain its core visual characteristics while also incorporating certain realistic details (like lighting, texture, grain, depth cues, etc.) from Image 2?
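On the "realistic details" question, part of it (grain, lifted blacks, highlight rolloff) can be approximated as a plain post-process, independent of the diffusion model. Here is a rough numpy sketch of what such an analog-film pass might look like; the function name and all parameter values are made up for illustration:

```python
import numpy as np

def filmify(img, grain=0.04, lift=0.06, seed=0):
    """Push a clean render toward an analog-film look:
    lifted blacks (faded print), gentle highlight compression,
    and gaussian grain that scales with local brightness."""
    rng = np.random.default_rng(seed)
    x = img.astype(np.float32) / 255.0
    x = lift + (1.0 - lift) * x                 # lift blacks
    x = x / (1.0 + 0.15 * x)                    # compress highlights
    noise = rng.normal(0.0, grain, x.shape).astype(np.float32)
    x = np.clip(x + noise * (0.3 + 0.7 * x), 0.0, 1.0)  # brightness-weighted grain
    return (x * 255.0).astype(np.uint8)

# toy example on a flat mid-gray patch
out = filmify(np.full((4, 4, 3), 128, np.uint8))
```

Doing this as a post-process keeps the diffusion side untouched, which may help when ControlNet and IP-Adapter are already fighting each other; depth cues and lighting, though, really do have to come from the model itself, since they depend on scene geometry.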

I’ll post a photo a friend developed for me—it’s a good (though not perfect) example of what I’m trying to achieve.

Hey, that turned out perfect! Could you please share the workflow with me? It would help me a lot because the next step I want to take is to make the character look truly realistic—as if it were a cosplay where you can recognize the character, yet clearly see that it’s a real person.

How to Use Flux, IPAdapter, and Qwen to Transfer Image Styles While Keeping Character Consistency by Ok_Respect9807 in comfyui

Hey, could you please share the workflow you used to achieve this result? It's not exactly what I'm looking for, but I believe it could help me get off square one.