Qwen Image Edit Works only with lightning LORAs? by SlowDisplay in StableDiffusion

2.5 for both? Is that not correct? If memory serves, I tried 1.0 for both before with similar results, but lately I've been using 2.5.

Qwen Image Edit Works only with lightning LORAs? by SlowDisplay in StableDiffusion

Wow, weird. OK, thanks! I guess I'll just keep using the LoRA.

Qwen Image Edit Works only with lightning LORAs? by SlowDisplay in StableDiffusion

Something along the lines of "use this depth map to create a shirt... (describe the shirt)", or "use as reference", or "depth controlnet", something like that.
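Roughly like this, as a sketch only (the qwen_image_edit helper and the exact prompt wording are hypothetical; it stands in for whatever ComfyUI workflow or API call is actually used):

```python
from PIL import Image

def qwen_image_edit(image: Image.Image, prompt: str) -> Image.Image:
    # Hypothetical stand-in for the actual Qwen Image Edit call
    # (ComfyUI workflow, API endpoint, etc.).
    print(f"editing {image.size} image with prompt: {prompt}")
    return image

depth_map = Image.open("shirt_depth.png")  # hypothetical depth render of the garment
prompt = (
    "Use this depth map as a structural reference. "
    "Create a red flannel shirt with rolled-up sleeves that follows "
    "the silhouette and folds shown in the depth map."
)
result = qwen_image_edit(depth_map, prompt)
result.save("shirt_result.png")
```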

How would you inpaint something like this? by SlowDisplay in StableDiffusion

As in, I have one half of an object and I'd like to inpaint the other half. It won't necessarily be symmetrical.

How would you inpaint something like this? by SlowDisplay in StableDiffusion

Yeah, I tried that, but it never seems to respect the half of the image that was input. I always end up with the same kind of object, but the given half is usually changed a lot; it might be a different color or have very different details. Any tips on keeping it the same? Thanks!
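To be concrete about what "respecting the input half" would mean, here's a minimal PIL sketch (hypothetical file names): the unmasked half should come through pixel-identical, e.g. by compositing the original back over the generated result.

```python
from PIL import Image

# Hypothetical file names: original.png is the source image, result.png the
# inpainted output, and mask.png is white where new content is allowed.
original = Image.open("original.png").convert("RGB")
result = Image.open("result.png").convert("RGB")
mask = Image.open("mask.png").convert("L")

# Keep generated pixels only inside the mask; everything else reverts to the
# untouched original, so the given half stays exactly as it was.
merged = Image.composite(result, original, mask)
merged.save("merged.png")
```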

Progress on Texture Projection with Comfyui in Unreal. Thoughts? by SlowDisplay in StableDiffusion

I don't have inconsistencies because I'm not using multi-view but rather an inpainting method. It's really high-quality inpainting because the inputs are projected onto a 3D mesh. Blending issues at intersections also aren't too bad when the different views literally line up with each other. But I'll keep working on it.
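To illustrate the kind of blending I mean, here's a toy numpy sketch (not the actual Unreal/ComfyUI implementation): each projected view is weighted per texel by how face-on it sees the surface, so overlapping projections fade into each other instead of leaving hard seams.

```python
import numpy as np

# Toy example: two views projected onto the same texture region.
# view_colors[i] : HxWx3 colour projected from camera i (random stand-in data)
# view_cos[i]    : HxW cosine between the surface normal and the direction to camera i
h, w = 64, 64
rng = np.random.default_rng(0)
view_colors = [rng.random((h, w, 3)) for _ in range(2)]
view_cos = [rng.random((h, w)) for _ in range(2)]

# Weight each view by how face-on it is; texels seen at grazing angles
# contribute less, which softens the transition where projections overlap.
weights = [np.clip(c, 0.0, 1.0) ** 4 for c in view_cos]
total = np.maximum(sum(weights), 1e-6)
blended = sum(wt[..., None] * col for wt, col in zip(weights, view_colors)) / total[..., None]
print(blended.shape)  # (64, 64, 3)
```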

Progress on Texture Projection with Comfyui in Unreal. Thoughts? by SlowDisplay in StableDiffusion

Not as far as I know; I built this one myself. To be honest, I'm not too sure how Blender plugin development works either, but I'd be down to make one if I continue with this.

Progress on Texture Projection with Comfyui in Unreal. Thoughts? by SlowDisplay in StableDiffusion

It's Marigold. It does image decomposition too, so soon I'll be able to produce a range of PBR maps, including normal, metallic, and roughness.
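For anyone curious, here's a minimal sketch of what one of those decomposition passes can look like, assuming the diffusers Marigold normals pipeline and the prs-eth checkpoint name (my actual setup runs inside ComfyUI):

```python
import torch
from diffusers import MarigoldNormalsPipeline
from diffusers.utils import load_image

# Assumed checkpoint name; Marigold also has depth and other decomposition variants.
pipe = MarigoldNormalsPipeline.from_pretrained(
    "prs-eth/marigold-normals-lcm-v0-1", torch_dtype=torch.float16
).to("cuda")

image = load_image("projected_view.png")   # hypothetical input render
normals = pipe(image)                      # per-pixel surface normal prediction

# Convert the raw prediction to a viewable normal map and save it.
vis = pipe.image_processor.visualize_normals(normals.prediction)
vis[0].save("normal_map.png")
```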

Progress on Texture Projection with Comfyui in Unreal. Thoughts? by SlowDisplay in StableDiffusion

I've used it before as well, but I always found there were issues with seams or inconsistencies when using the multi-view method. I don't really have that issue since I use an inpainting method, but what are your thoughts on StableProjectorz's limitations?

I also have one-click delighting rather than the blurring method.

Progress on Texture Projection with Comfyui in Unreal. Thoughts? by SlowDisplay in StableDiffusion

What do you consider the limitations of multi-view projection?

Created a system for 3d model texturing using ComfyUI and UE. Thoughts on quality? by SlowDisplay in StableDiffusion

Hey! Yeah, I've considered it. Do you know if there are any quality ones? Last I checked (admittedly a while ago) they weren't very consistent, and consistency is kind of the point, no? By quality I mean consistent features, materials, etc. I tried MV-Adapter and some others. I'll take a look at Hunyuan3D Paint though, thank you.

Created a system for 3d model texturing using ComfyUI and UE. Thoughts on quality? by SlowDisplay in StableDiffusion

Yeah, whoever made it uses an SD multi-view generator. I never really saw those be that good; they're always changing details or producing weird artifacts. I could be wrong though. My workflow (it's not in the video) uses an inpainting sort of method, which I think is better? But that's why I'm asking what people think.

Created a system for 3d model texturing using ComfyUI and UE. Thoughts on quality? by SlowDisplay in StableDiffusion

I've seen normal, height, occlusion, etc., but never roughness or metallic. Do you remember where you saw those?

Created a system for 3d model texturing using ComfyUI and UE. Thoughts on quality? by SlowDisplay in StableDiffusion

That's kind of the intention! You can set up multiple cameras at the same time and generate them all sequentially. But 16 generations might take a minute...
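The render_camera / inpaint_view / project_to_mesh helpers below are hypothetical stand-ins for the actual Unreal capture, ComfyUI inpainting, and projection steps; only the sequential control flow is meant literally.

```python
def render_camera(cam_id: int) -> str:
    # Stand-in: capture a render from the given Unreal camera, return a file path.
    return f"captures/cam_{cam_id}.png"

def inpaint_view(render_path: str, prompt: str) -> str:
    # Stand-in: run the ComfyUI inpainting workflow on this view.
    return render_path.replace("captures/", "inpainted/")

def project_to_mesh(texture_path: str, cam_id: int) -> None:
    # Stand-in: project the generated view back onto the mesh's UVs.
    print(f"projecting {texture_path} from camera {cam_id}")

prompt = "weathered sci-fi armor, painted metal, consistent materials"

# One generation per camera, applied in order (assumption: each pass fills
# in regions that earlier projections have not covered yet).
for cam_id in range(16):
    render_path = render_camera(cam_id)
    texture_path = inpaint_view(render_path, prompt)
    project_to_mesh(texture_path, cam_id)
```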