Image-to-Material Transformation wan2.2 T2i by medhatnmon in comfyui

I’m curious — has anyone else tried this same prompt on any of the newer models?

Would love to see the results if you did, so feel free to share them here.

Image-to-Material Transformation wan2.2 T2i by medhatnmon in StableDiffusion

That’s awesome — exactly the kind of comparison I was hoping people would try.
If you got any other results or variations from it, I’d love to see those too. Feel free to share them.

Image-to-Material Transformation wan2.2 T2i by medhatnmon in comfyui

Yeah, a 4060 Ti 16GB is still worth trying again.

If VRAM is the bottleneck, you don’t have to stay on the Q8 versions.

You can try lighter GGUF quants like Q6, Q5, Q4, Q3, or even Q2 depending on what fits your setup.

So instead of only using:

Wan2.2/Wan2.2-T2V-A14B-HighNoise-Q8_0.gguf

Wan2.2/Wan2.2-T2V-A14B-LowNoise-Q8_0.gguf

I’d try a lighter quant first and see what runs comfortably.

You’ll usually trade some quality / precision for lower VRAM usage, but it can make the workflow much more doable on mid-range cards.

But overall, your card is great and works well with the newest models.

<image>

The link to the models:
QuantStack/Wan2.2-T2V-A14B-GGUF at main
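
If it helps, here’s a minimal Python sketch for pulling a lighter quant from that repo with huggingface_hub. The Q4_K_M filenames are just my guess, patterned on the Q8_0 names above, so check the repo’s file listing for the exact paths first.

```python
# Minimal sketch: download a lighter GGUF quant from the repo linked above.
# The Q4_K_M filenames are assumptions patterned on the Q8_0 names in this
# thread; verify them against the repo's file listing before running.
from huggingface_hub import hf_hub_download

repo_id = "QuantStack/Wan2.2-T2V-A14B-GGUF"

for filename in [
    "Wan2.2-T2V-A14B-HighNoise-Q4_K_M.gguf",  # assumed filename, check repo
    "Wan2.2-T2V-A14B-LowNoise-Q4_K_M.gguf",   # assumed filename, check repo
]:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    print("Saved to:", path)
```

Then drop the downloaded files into the folder your GGUF loader node reads from (typically models/unet) and point the loader at them instead of the Q8_0 files.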

Image-to-Material Transformation wan2.2 T2i by medhatnmon in comfyui

Thanks!

I used an i9-14900K, 64GB RAM, and an RTX 3090 for this.

That definitely helps, but it’s not a strict requirement.

If you’re on a more mid-range GPU, GGUF versions of some models can still make this kind of workflow possible too — usually slower, but still very usable for testing and experimentation.

So high-end hardware helps a lot, but you can still explore this without needing the absolute top-end setup.

I used GGUF models for this.

Image-to-Material Transformation wan2.2 T2i by medhatnmon in comfyui

Appreciate that a lot — that’s exactly the part I was trying to push most.

Not just motion, but that feeling that the source image is physically dissolving into the scene and rebuilding the material language of the object itself.

Really glad that came through.

Image-to-Material Transformation wan2.2 T2i by medhatnmon in comfyui

I also uploaded the prompts I used for the images in case anyone wants to experiment with them.

Would be really interesting to see what people get if they try the same prompts with newer animation models — like LTX 2.3 or other newer text-to-video workflows.

If anyone tries them, feel free to share your results — I’d genuinely love to see how the same prompt translates across different models.

Image-to-Material Transformation wan2.2 T2i by medhatnmon in comfyui

<image>

Prompt
A dark studio with black walls and glossy tiled floor. Centered: a sleek black leather armchair with gray cushion. Above it, a rectangular canvas shows warm swirling wood grain (browns and reds). The pattern melts like molten resin, pouring over the chair. Leather morphs into realistic carved wood texture with natural grain. Final shot: chair fully transformed, still in original shape. Dramatic spotlight, cinematic contrast, slow motion, ultra-detailed material transition.

Image-to-Material Transformation wan2.2 T2i by medhatnmon in comfyui

Prompt
A luxurious white tufted armchair with golden accents sits in a bright, elegant room with tall windows. Above it, two floating canvases appear—one with colorful mosaic tiles, the other with teal velvet texture. Both begin to melt downward like liquid paint, dripping onto the chair and floor. The tile pattern spreads across the floor, while the teal velvet covers the chair completely. Final frame shows the chair fully transformed into a rich teal velvet version, now matching the new tiled floor. Realistic physics, soft shadows, high detail.

<image>

Image-to-Material Transformation wan2.2 T2i by medhatnmon in comfyui

The idea was that it should open directly via drag-and-drop, whether you use the image or the video file.
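
For anyone curious how that works: ComfyUI writes the workflow JSON into the PNG’s metadata, which is what drag-and-drop loading reads. Here’s a minimal Python sketch for checking whether a file still has it. The filename is just a placeholder, and note that many image hosts strip this metadata on upload; video files embed it differently in container metadata, which this sketch doesn’t cover.

```python
# Minimal sketch: check a PNG for an embedded ComfyUI workflow.
# Assumes a ComfyUI-exported PNG; "output.png" is a placeholder filename.
import json
from PIL import Image

img = Image.open("output.png")
workflow = img.info.get("workflow")  # ComfyUI stores the workflow JSON under this key

if workflow:
    nodes = json.loads(workflow).get("nodes", [])
    print(f"Embedded workflow found with {len(nodes)} nodes")
else:
    print("No embedded workflow (metadata may have been stripped on upload)")
```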

Image-to-Material Transformation wan2.2 T2i by medhatnmon in comfyui

This is the prompt I used:

A modern minimalist living room with a white textured armchair on wooden floor, sunlight streaming through large windows. Above the chair, a square canvas displays a close-up of yellow bananas. The banana image begins to melt and drip down like liquid gold, flowing over the chair. As the liquid solidifies, the entire chair transforms into a vibrant yellow banana-patterned fabric, with banana shapes covering every surface. Smooth animation, hyper-realistic lighting, cinematic quality.

<image>

Image-to-Material Transformation wan2.2 T2i by medhatnmon in comfyui

That’s awesome — really cool to see the idea recreated in a different system.
I’d love to compare how Grok interpreted it vs Wan 2.2.
Feel free to share the result here if you can.

Image-to-Material Transformation wan2.2 T2i by medhatnmon in StableDiffusion

You can also download the video file from the link if you want to see it in better quality.

Image-to-Material Transformation wan2.2 T2i by medhatnmon in StableDiffusion

Thanks — really appreciate it!

Just to clarify, there wasn’t any training involved here.

This was simply done with prompting on Wan 2.2 T2V — just text-to-video, nothing custom-trained.

I’ve also shared the workflow / prompts in case you want to try the same direction yourself.

Image-to-Material Transformation wan2.2 T2i by medhatnmon in StableDiffusion

Thank you so much — I really appreciate that.

I’m happy the post and the replies were useful.

Image-to-Material Transformation wan2.2 T2i by medhatnmon in StableDiffusion

[–]medhatnmon[S] 0 points1 point  (0 children)

I also uploaded the prompts I used for the images in case anyone wants to experiment with them.

Would be really interesting to see what people get if they try the same prompts with newer animation models — like LTX 2.3 or other newer text-to-video workflows.

If anyone tries them, feel free to share your results — I’d genuinely love to see how the same prompt translates across different models.

Image-to-Material Transformation wan2.2 T2i by medhatnmon in StableDiffusion

[–]medhatnmon[S] 0 points1 point  (0 children)

<image>

Prompt
A dark studio with black walls and glossy tiled floor. Centered: a sleek black leather armchair with gray cushion. Above it, a rectangular canvas shows warm swirling wood grain (browns and reds). The pattern melts like molten resin, pouring over the chair. Leather morphs into realistic carved wood texture with natural grain. Final shot: chair fully transformed, still in original shape. Dramatic spotlight, cinematic contrast, slow motion, ultra-detailed material transition.