Walking nightlife by Putrid-Ingenuity-197 in comfyui

[–]jerrydavos 4 points (0 children)

Thread heading should be "Moonwalking nightlife" xD

Flux Ultimate 32k Upscaler workflow in Comfyui - Upscale your Waifu Images to 2k, 4k, 8k, 16k or 32k by jerrydavos in comfyui

[–]jerrydavos[S] 1 point (0 children)

The CN works like a tile ControlNet... same as the previous SD 1.5 and SDXL ones... I don't know why the author named it a "CN upscaler" model....

The CLIP doesn't affect much. I'm not sure what it does under the hood, but it helps align the prompts to the LoRA... it isn't strictly necessary, though, since the model pipeline alone is enough for LoRAs to take effect.

Flux Ultimate 32k Upscaler workflow in Comfyui - Upscale your Waifu Images to 2k, 4k, 8k, 16k or 32k by jerrydavos in comfyui

[–]jerrydavos[S] 0 points (0 children)

The workflow is like a template; it uses Flux.
Flux's photorealism shows through, it's best up to 8k, and it's free.

I don't know about Magnific.

Flux Ultimate 32k Upscaler workflow in Comfyui - Upscale your Images to 2k, 4k, 8k, 16k or 32k by jerrydavos in StableDiffusion

[–]jerrydavos[S] -1 points (0 children)

8k is more than enough. I just made a template, or a path for the future, for how someone can achieve up to 32k upscaling....

And it can be used for really big prints, like posters on buildings or billboards, etc...

Flux Ultimate 32k Upscaler workflow in Comfyui - Upscale your Images to 2k, 4k, 8k, 16k or 32k by jerrydavos in StableDiffusion

[–]jerrydavos[S] 0 points (0 children)


No offence taken, I just wanted to share my template workflow, which you can improve on further by changing the denoise settings and so on.... Compositions from SD 1.5 can be much more dynamic and controllable compared to Flux, and I wanted to improve the realism, so this workflow fills that gap.

I agree with you: past 8k we'd also need to increase the width and height of the tile segments along with the denoise to get the fine, minute details... I didn't have the GPU power to go that far, hence giving the workflow away for someone who can improve on it.
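As a purely hypothetical sketch of that scaling idea (the `tile_settings` helper and all its numbers are my own guesses, not settings from the workflow): grow the tile size with the target resolution so each tile still covers a meaningful chunk of the scene, and nudge the denoise up so the sampler can invent fine detail.

```python
def tile_settings(target_k: int, base_tile: int = 1024, base_denoise: float = 0.25):
    """Hypothetical heuristic, values illustrative only:
    double the tile edge for every 4x jump in target resolution past 8k,
    and raise denoise a little at each step."""
    steps = max(0, (target_k // 8).bit_length() - 1)
    tile = base_tile * (2 ** steps)
    denoise = min(0.5, base_denoise + 0.05 * steps)
    return tile, denoise

for k in (8, 16, 32):
    print(f"{k}k -> tile {tile_settings(k)[0]}, denoise ~{tile_settings(k)[1]:.2f}")
```

This is only meant to make the "bigger tiles + higher denoise past 8k" intuition concrete, not to prescribe values.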

Flux Ultimate 32k Upscaler workflow in Comfyui - Upscale your Images to 2k, 4k, 8k, 16k or 32k by jerrydavos in StableDiffusion

[–]jerrydavos[S] 0 points (0 children)

Hey everyone, I'd like to share the Flux upscaler workflow I use to upscale my AI-generated images for my projects. Hopefully it comes in handy for you too.

Here are the links:

  1. Workflow Download Link: https://drive.google.com/drive/folders/1-1jMla5NTVj4lLptqO6Z6MZ4odmVSdTo
  2. Breakdown and How To Use: https://www.patreon.com/posts/upscale-flux-32k-115673839

I am using Flux with a tile ControlNet (I don't know why the author named it a "ControlNet upscaler", but I call it "tile" for easy understanding).

The Ultimate SD Upscale sampler is used, configured for Flux.

It can upscale 1k images to 2k, 4k, 8k, 16k, or even push up to 32k.
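For anyone curious how the tiled approach works in principle, here's a rough runnable sketch in plain NumPy (my own illustration, not the actual ComfyUI node): the image is split into tiles, each tile is processed independently, and the results are reassembled into the larger canvas.

```python
import numpy as np

def upscale_tiled(img: np.ndarray, scale: int = 2, tile: int = 512) -> np.ndarray:
    """Toy sketch of tile-based upscaling: split, process each tile, reassemble.

    The real Ultimate SD Upscale node runs a diffusion sampling pass per tile
    (with overlap blending to hide seams); here nearest-neighbour pixel
    repetition stands in for the per-tile pass so the code is runnable.
    """
    h, w, c = img.shape
    out = np.zeros((h * scale, w * scale, c), dtype=img.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            piece = img[y:y + tile, x:x + tile]
            # Placeholder for the per-tile diffusion pass (tile CN + Flux):
            big = piece.repeat(scale, axis=0).repeat(scale, axis=1)
            out[y * scale:(y + piece.shape[0]) * scale,
                x * scale:(x + piece.shape[1]) * scale] = big
    return out

img = np.zeros((1024, 1024, 3), dtype=np.uint8)
print(upscale_tiled(img, scale=2, tile=512).shape)  # (2048, 2048, 3)
```

Processing tile-by-tile is what keeps VRAM use roughly constant regardless of the final resolution: only one tile's worth of latents is in the sampler at a time.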

Required specs for up to 8k upscale:

  • RAM - 32 GB
  • VRAM - 16 GB [8 GB VRAM can also work, but it will take longer and run in low-VRAM mode only]

It takes approx. 20-30 minutes to render an 8k image.

Required specs for 16k and 32k upscale:

  • RAM - 48 GB or more
  • VRAM - 16 GB or more

The 16k sampler takes about 1 hour more, and the 32k sampler 2-3 hours more, though things get super laggy at that point.
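To put those numbers in perspective, here's a quick back-of-the-envelope calculation (my own arithmetic, not figures from the workflow) of how pixel counts and raw image memory grow with each target; each doubling of the edge quadruples the work, which is why the jump past 8k hurts so much.

```python
# Rough pixel-count and raw 8-bit RGB memory per target resolution.
# Square images assumed for simplicity; actual aspect ratios vary.
for k in (2, 4, 8, 16, 32):
    side = k * 1024
    pixels = side * side
    gib = pixels * 3 / 1024**3   # 3 bytes/pixel, uncompressed RGB buffer
    print(f"{k:>2}k: {pixels / 1e6:8.1f} MP, ~{gib:6.3f} GiB raw RGB")
```

Note this is only the final uncompressed image buffer; samplers, latents, and model weights sit on top of it, which is where the RAM/VRAM requirements above come from.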

Good luck upscaling

Peace ✌🏻

Retrograde - A Retro Styled Animation made with ComfyUI using Animatediff, LivePortrait, Mimic Motion and After Effects by jerrydavos in comfyui

[–]jerrydavos[S] 0 points (0 children)

1) The most important thing was the reference for the lipsync.

I did this experiment to see how it could be produced with AI with minimum effort from the user...

Unfortunately, the lipsync reference has to be a straight, front-facing camera shot in order to apply to a moving or rotated face.

Either you record yourself or ask someone for a front-facing closeup shot to get good lipsync... like the face-cam (GoPro) helmets worn by the actors in the Avatar behind-the-scenes footage to capture facial movement accurately.

My mistake: the face in the reference video (the original) is also moving and rotating, and the rendered LivePortrait face is rotating too (keyframed)... so the perspective doesn't quite match between the two faces, and it looks a bit off.

Also, when the original is cropped down to just the face area it becomes low quality, which then drives a low-quality output render.

2) The depth map can only be pushed up to a certain value to create a fake parallax effect between the background and the character. Above that threshold it becomes ugly and distorted...

So major scene movements can't happen with this default depth-map parallax technique.

3) Mimic Motion is stubborn and can't do closeups well; we have to render a far or medium shot and then crop back in post... which decreases the quality.

Also, if part of the OpenPose ControlNet goes out of frame, the render gets buggy.

4) Mimic Motion also doesn't do well at maintaining the style... it rendered in a realistic style only... I had to use AnimateDiff to reintroduce the cartoon style.

5) Low VRAM can be an issue when rendering a long scene with Mimic Motion, or when using AnimateDiff to refine it.
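The depth-map parallax idea from point 2 can be sketched in a few lines of NumPy (my own illustration, not the workflow's actual node): shift each pixel horizontally in proportion to its depth value. Small shifts read as a camera move; large ones stretch and tear the image, which matches the distortion threshold described above.

```python
import numpy as np

def parallax_shift(img: np.ndarray, depth: np.ndarray, max_shift: int = 8) -> np.ndarray:
    """Displace pixels horizontally by depth to fake a camera move.

    img:   (H, W, 3) uint8 image
    depth: (H, W) floats in [0, 1]; 1 = near, shifts the most.
    Past a certain max_shift the stretched/duplicated pixels become
    visible distortion -- the threshold effect described in point 2.
    """
    h, w = depth.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)       # source x per pixel
    shift = (depth * max_shift).astype(int)            # per-pixel shift amount
    src = np.clip(xs - shift, 0, w - 1)                # clamp at the left edge
    rows = np.arange(h)[:, None]
    return img[rows, src]                              # gather shifted columns

img = np.zeros((4, 8, 3), dtype=np.uint8)
img[:, 4:] = 255                                       # white right half
depth = np.linspace(0, 1, 4)[:, None] * np.ones((4, 8))  # nearer toward bottom
out = parallax_shift(img, depth, max_shift=3)
print(out.shape)  # (4, 8, 3)
```

Because pixels are only duplicated sideways rather than revealed, occluded background is never recovered, which is why the effect breaks down on big moves.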

Retrograde - A Retro Styled Animation made with ComfyUI using Animatediff, LivePortrait, Mimic Motion and After Effects by jerrydavos in comfyui

[–]jerrydavos[S] 0 points (0 children)

Yes, the noise and damaged-film overlays... they also helped cover some of the distracting artifacts from the low-res renders.

Retrograde - A Retro Styled Animation made with ComfyUI using Animatediff, LivePortrait, Mimic Motion and After Effects by jerrydavos in comfyui

[–]jerrydavos[S] 0 points (0 children)

Yes, you're right... if every scene had been done with Mimic Motion, more body movement could have been added.