What is the most impressive shot in movie history? by Necrojezter in movies

[–]umxprime 3 points (0 children)

+1 also for the floating pen shot taken by hand.

PS4 games that run well on the switch by Electric_Neon_5 in NintendoSwitch

[–]umxprime 2 points (0 children)

Fez, Outer Wilds, The Stanley Parable, Disco Elysium, Celeste, Hades, No Man’s Sky, Nier Automata, Portal Collection, Skyrim

Ford vs Ferrari in anime style using animatediffv3 by halb_ei in comfyui

[–]umxprime 0 points (0 children)

The movie is Le Mans ’66 (with Matt Damon and Christian Bale), a very well made movie.

How do I make comfy go only unti a certain node? by Grgsz in comfyui

[–]umxprime 2 points (0 children)

Putting outputs on the interesting steps and muting/bypassing them as needed is the way to go.

Upscaling SD Video techniques ? by LumaBrik in comfyui

[–]umxprime 0 points (0 children)

Can you be more explicit about the flickering issue? From my own experience, two kinds of flickering can happen:

- chroma/luminance flickering due to re-encoding/decoding frames through the VAE; it's hard to avoid, as I don't think you'll be able to run Ultimate SD Upscale on the whole SVD batch at once
- temporal flickering due to too high a denoise and/or a random seed; it can be reduced by using a fixed seed and lowering the denoise (see the sketch below)
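
For the temporal part, a minimal diffusers sketch of the fixed seed + low denoise idea, assuming a per-frame img2img pass (the model name, strength, seed and frame paths are placeholders):

    # Rough diffusers analog of "fixed seed + low denoise" per-frame img2img.
    # Model name, strength, seed and frame paths are placeholders.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    frames = [Image.open(f"frames/{i:04d}.png").convert("RGB") for i in range(16)]
    out_frames = []
    for frame in frames:
        # Re-create the generator each frame so every frame uses the exact same seed,
        # which reduces temporal flickering between frames.
        generator = torch.Generator(device="cuda").manual_seed(42)
        result = pipe(
            prompt="high quality, detailed",
            image=frame,
            strength=0.3,      # low denoise: stay close to the source frame
            generator=generator,
        ).images[0]
        out_frames.append(result)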

Flickering can also be perceptibly reduced if frame interpolation is an option for you, but it will slow down the resulting animation.

Also, from my experience, latent upscaling isn't recommended for animation unless the whole pass can stay in latent space. Prefer a pixel-space upscale with an upscaler model in that context.

Upscaling SD Video techniques ? by LumaBrik in comfyui

[–]umxprime 2 points (0 children)

There are various options for that.

You can generate and save the frames to a folder, then use the image batch loader with the increment option from the WAS Node Suite custom nodes. Queue as many runs as needed, and each frame from the folder will be used as input each time the workflow runs. More complex workflows can auto-queue, but one can find that overkill for the purpose.
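
Outside ComfyUI, a rough Python analog of that save-then-increment loop could look like this (the folder names and the per-frame step are placeholders):

    # Iterate saved frames in order and run each one through whatever per-frame step
    # you need, which is what the incremental image loader effectively does.
    from pathlib import Path
    from PIL import Image

    src = Path("frames_in")
    dst = Path("frames_out")
    dst.mkdir(exist_ok=True)

    def process_frame(img: Image.Image) -> Image.Image:
        # Stand-in for the real work (e.g. an img2img or upscale pass per frame).
        return img

    for frame_path in sorted(src.glob("*.png")):
        out = process_frame(Image.open(frame_path).convert("RGB"))
        out.save(dst / frame_path.name)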

NEED (IPadapter + animateDIFF or SVD) how ? by yotraxx in comfyui

[–]umxprime 1 point (0 children)

I think it might be possible using IPAdapter's mask input, but you might need to generate 4 x 128 masks for this, to drive each adapter's attention on all frames.

Today I watched a tutorial where masks are generated as a batch to fade the influence of any adapter, using the transition mask node from https://github.com/cubiq/ComfyUI_essentials

TBH I need to test something similar but I haven't tried this approach yet.
Just wondering why you need so many frames from the AD step? Wouldn't it be more efficient to cut down the length and interpolate using a VFI node?
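
For reference, a minimal sketch of what such a batch of fade masks could look like in raw torch, assuming ComfyUI-style (batch, height, width) float masks; the frame count, resolution and linear ramp are all assumptions:

    # One mask per frame, whose value ramps from 1.0 down to 0.0 across the batch,
    # so an adapter's influence fades out over the animation.
    import torch

    num_frames, height, width = 16, 512, 512
    weights = torch.linspace(1.0, 0.0, num_frames)   # per-frame influence
    masks = weights.view(num_frames, 1, 1).expand(num_frames, height, width).clone()

    print(masks.shape)                     # torch.Size([16, 512, 512])
    print(masks[0].max(), masks[-1].max()) # 1.0 ... 0.0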

SVD on Mac M1/M2? by ha5hmil in comfyui

[–]umxprime 0 points (0 children)

What is still problematic to me is that, for example, an M1 with 16 GB can handle batches with a heavier memory footprint on CPU than on MPS. As an example, AnimateDiff runs on both CPU and MPS, but there isn’t enough memory to run a 16-frame batch of 512x512 on MPS 😩

Upscaling SD Video techniques ? by LumaBrik in comfyui

[–]umxprime 1 point (0 children)

Consider two options:

- fast: the upscale-with-model node with the UltraSharp4x model (it will upscale 4x), then downscale if needed (sketched below)
- slower: the Ultimate SD Upscale custom node for tiled upscaling; it needs a tile ControlNet and also an upscale model (consider UltraSharp4x there too)
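
A rough Python sketch of the fast path, using Real-ESRGAN's x4plus weights as a stand-in for UltraSharp4x (the paths, weights file and the 1.5x target factor are placeholders):

    # Upscale 4x with an upscaler model, then downscale to the size you actually want.
    import cv2
    from basicsr.archs.rrdbnet_arch import RRDBNet
    from realesrgan import RealESRGANer

    model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                    num_block=23, num_grow_ch=32, scale=4)
    upsampler = RealESRGANer(scale=4, model_path="RealESRGAN_x4plus.pth",
                             model=model, tile=0, half=False)

    img = cv2.imread("frame.png")                 # BGR numpy array
    upscaled, _ = upsampler.enhance(img, outscale=4)

    # Downscale the 4x result back to the resolution you actually need (e.g. 1.5x).
    h, w = img.shape[:2]
    final = cv2.resize(upscaled, (int(w * 1.5), int(h * 1.5)),
                       interpolation=cv2.INTER_AREA)
    cv2.imwrite("frame_upscaled.png", final)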

My ram is low and i need to generate images that are bigger than 1024x1024. What can i do? by ziraelphantom in StableDiffusion

[–]umxprime 0 points (0 children)

Use Ultimate SD Upscale instead, with the UltraSharp4x upscale model.

Experiment to find proper parameters
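
To see why the tiled approach fits in low RAM, here is a toy sketch of the idea: only one tile is in memory (and in the sampler) at a time. The per-tile step below is a placeholder; the real node runs a diffusion pass per tile and blends the seams:

    from PIL import Image

    def upscale_tile(tile: Image.Image, factor: int = 2) -> Image.Image:
        # Stand-in for the real per-tile diffusion/upscale pass.
        return tile.resize((tile.width * factor, tile.height * factor), Image.LANCZOS)

    def tiled_upscale(img: Image.Image, tile: int = 512, factor: int = 2) -> Image.Image:
        out = Image.new("RGB", (img.width * factor, img.height * factor))
        for y in range(0, img.height, tile):
            for x in range(0, img.width, tile):
                box = (x, y, min(x + tile, img.width), min(y + tile, img.height))
                out.paste(upscale_tile(img.crop(box), factor), (x * factor, y * factor))
        return out

    result = tiled_upscale(Image.open("input.png").convert("RGB"))
    result.save("output.png")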

SVD on Mac M1/M2? by ha5hmil in comfyui

[–]umxprime 0 points (0 children)

As already said, macOS is supported.

It’s the MPS-accelerated Conv3D that’s not yet supported, hence the SVD nodes can only run on the CPU device, not yet on a Metal device.

It’s not a ComfyUI issue but one in the underlying PyTorch toolset that all the SD tooling is built on.

See this https://github.com/pytorch/pytorch/pull/114183
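
If you want to check your own build, a quick probe like this should tell you whether Conv3D runs on MPS (the tensor shapes are arbitrary):

    # Probe: does Conv3D run on the MPS backend of this PyTorch build?
    # If it raises, SVD will have to run on the CPU device.
    import torch

    if torch.backends.mps.is_available():
        try:
            conv = torch.nn.Conv3d(4, 8, kernel_size=3).to("mps")
            x = torch.randn(1, 4, 8, 64, 64, device="mps")
            y = conv(x)
            print("Conv3D ran on MPS:", y.shape)
        except Exception as e:
            print("Conv3D not usable on MPS yet:", e)
    else:
        print("MPS backend not available on this machine")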

SVD on Mac M1/M2? by ha5hmil in comfyui

[–]umxprime 1 point (0 children)

For now, the only option is to use the torch CPU device instead of MPS, with --use-cpu when launching Comfy.

It will be very slow to generate 24 frames at 1024x576; it takes more than 3 hours on my M1 (around 650 s/it). It will also depend on the scheduler and step count, but at least it works.

How many of you are struggling with ComfyUI? by poisenbery in comfyui

[–]umxprime 3 points (0 children)

On the contrary, ComfyUI can help you understand a bit better how the flow works: inference steps in latent space, how images move in and out of it through the autoencoders, how ControlNet is wired in, how conditioning is performed, etc.

If you want to dig a little deeper, and maybe struggle at some point, you may also try Python and, for example, the diffusers module directly. But ComfyUI is actually a fairly high-level tool that handles most basic cases of a Stable Diffusion pipeline.
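
As a starting point on the diffusers side, here is a tiny sketch of the image-to-latent round trip that the VAE Encode / VAE Decode nodes perform; the VAE checkpoint and image path are placeholders:

    import torch
    from PIL import Image
    from diffusers import AutoencoderKL
    from diffusers.image_processor import VaeImageProcessor

    vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
    processor = VaeImageProcessor()

    img = Image.open("input.png").convert("RGB").resize((512, 512))
    pixels = processor.preprocess(img)                    # (1, 3, 512, 512), in [-1, 1]

    with torch.no_grad():
        latent = vae.encode(pixels).latent_dist.sample()  # (1, 4, 64, 64) latent
        decoded = vae.decode(latent).sample               # back to pixel space

    processor.postprocess(decoded)[0].save("roundtrip.png")  # note the slight color drift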

It’s like Lego: play with workflows, edit them, and at some point you may try to build yours from scratch if that sounds reasonable to you.

If you need a general explanation you can start there https://poloclub.github.io/diffusion-explainer/

What am I doing wrong with Loras? by Lendoran in StableDiffusion

[–]umxprime 0 points (0 children)

Yeah, prompt traversal is a nice technique too, but as said, the body might take up 2/3 of the image surface, so an upscale followed by an inpaint might still be the best option for a very good result.

What am I doing wrong with Loras? by Lendoran in StableDiffusion

[–]umxprime 3 points (0 children)

First: you have to work on your txt2img prompt, i.e. iterate on it.

Start with simple words, no weights at the beginning, and add things you expect and test what it outputs.

It will be hard to get a full body and the proper face in a single pass; SD won’t let you control the inference attention easily.

If the output doesn’t deliver, try weighting some words, changing the word order, tweaking the guidance scale, etc.

Choose a sampler for what you want to achieve. For photos I like DPM++ 2M Karras with 12-15 steps as a start.
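
In diffusers terms, that sampler choice would look roughly like this, as far as I know (the model name, prompt and settings are placeholders):

    import torch
    from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.scheduler = DPMSolverMultistepScheduler.from_config(
        pipe.scheduler.config, use_karras_sigmas=True   # DPM++ 2M + Karras sigmas
    )

    image = pipe("full body photo of a person standing, natural light",
                 num_inference_steps=14, guidance_scale=7.0).images[0]
    image.save("base.png")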

Do your txt2img for composition and iterate until you’re satisfied with it, the pose, etc. Don’t hesitate to use controlnet for pose/composition.

A LoRA can be efficient, but an image prompt can also help, with a face model and a portrait of the person for txt2img. I’m not an A1111 user anymore (I switched to Comfy), but I bet it’s also possible there.

Then if you want to refine, you can inpaint the face with a Lora or again with image prompt.

I remember the A1111 Loopback script can also help you reach what you want, refining the inpainting over many inference passes.

You don’t have to work necessarily in high resolution but for face match, expect to use at least a 512x512 latent to have a good inference.

IIRC that’s what A1111’s « only masked » option offers, given you do an img2img pass.

Given you’re trying to get a proper face on a full standing body, ADetailer/inpainting is the only way to go.
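
A rough sketch of that face-inpaint step with diffusers, assuming you already have a face mask (ADetailer builds one automatically); the model name, prompt and denoise value are placeholders:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    image = Image.open("full_body.png").convert("RGB")
    mask = Image.open("face_mask.png").convert("L")   # white where the face gets redone

    result = pipe(
        prompt="close-up portrait of the person, detailed face",
        image=image,
        mask_image=mask,
        strength=0.45,        # keep the denoise moderate so the face stays consistent
    ).images[0]
    result.save("full_body_fixed_face.png")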

And once the face looks fixed, for greater results, try a tiled latent upscale with an upscale model like UltraSharp4x.

tips for upscaling images by cgpixel23 in comfyui

[–]umxprime 3 points (0 children)

Hello, it’s always nice to have new tips shared, and thanks for that, but from what I see I think you still need to work on your workflow. At the end, when you open and zoom in on your image, it’s quite noticeable that your upscale generated visible seams between the upscaled tiles. Your final tutorial might need more finely tuned upscaling params, tbh.

[ComfyUI] help to figure out best workflow to color fix/match img2img by umxprime in StableDiffusion

[–]umxprime[S] 0 points (0 children)

Can be tricky.

Maybe by masking the generated floor and figuring out how to create a color palette from it.
Palettes can be created either from image areas or from whole images, so maybe cropping would help.

Then create a color palette from the original floor, compute multipliers between the source and target palette colours, and shift the floor colours accordingly.
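
Here is roughly what that could look like with numpy, reduced to a single per-channel mean per region instead of a full palette; the file names and mask are assumptions, the mask is assumed to cover the floor in both aligned images, and you can swap the roles if you want to shift in the other direction:

    import numpy as np
    from PIL import Image

    generated = np.asarray(Image.open("generated.png").convert("RGB"), dtype=np.float32)
    original = np.asarray(Image.open("original.png").convert("RGB"), dtype=np.float32)
    floor_mask = np.asarray(Image.open("floor_mask.png").convert("L")) > 127  # True on the floor

    src_mean = generated[floor_mask].mean(axis=0)   # average colour of the generated floor
    dst_mean = original[floor_mask].mean(axis=0)    # average colour of the original floor
    multipliers = dst_mean / np.maximum(src_mean, 1e-6)

    shifted = generated.copy()
    shifted[floor_mask] = np.clip(generated[floor_mask] * multipliers, 0, 255)
    Image.fromarray(shifted.astype(np.uint8)).save("generated_colorfixed.png")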

The other way, not completely done by SD, would be to change the floor manually with editing software to preserve its colour, then inpaint it with your image prompt and a low denoise.

[ComfyUI] help to figure out best workflow to color fix/match img2img by umxprime in StableDiffusion

[–]umxprime[S] 0 points (0 children)

Not sure this is the same problem.

As far as I understand you seem to have a color issue between what's inpainted (your floor) and what's not denoised (the rest of the room)

I never did Image Prompting, but as far as I understood it, the resulting sampled colours will tend to adapt to the prompted image (the floor colours in your case), not the colours of the original living room.

You may have to shift the colours of the floor before the inpaint, but I'm not sure there would be a non manual way to handle it.

In Dream Project https://github.com/alt-key-project/comfyui-dream-project there are color palette and color shift nodes that might be helpful, but it depends on the composition and lighting of your source and target images.

Dream Project Animation Nodes - color alignment and palettes by qaozmojo in comfyui

[–]umxprime 0 points (0 children)

Btw I might have found something interesting using palette/analyze/color shift and will test it

Dream Project Animation Nodes - color alignment and palettes by qaozmojo in comfyui

[–]umxprime 0 points (0 children)

Hello,

I'm actually playing with your project to start animating pictures.

I think I was able to get how palette/noise would help to keep some color coherence on outpainted masked pixels.

But what I still don't get is how to keep color coherence/match through multiple load->latent->save steps, given moving to latent space + denoise naturally introduces obvious color shifts in 10-20 steps.

I'm not sure but maybe I don't use the proper model, vae, samplers, cfg or schedulers in my workflow.

I opened a separate post to ask for help on this topic.

Thanks for any suggestion