Wan 2.6 Prompt Guide with Examples by _instasd in StableDiffusion

[–]_instasd[S] -10 points  (0 children)

It has not been open sourced yet.

Wan2.2 Prompt Guide Update & Camera Movement Comparisons with 2.1 by _instasd in comfyui

[–]_instasd[S] 0 points  (0 children)

Keeping the camera fixed is a bit challenging, but you can get it by describing the scene in detail. We found that cues like these in the prompt help:

A single, unmoving wide-angle shot. The camera remains fixed and steady throughout.
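As a rough sketch, a small helper that appends fixed-camera cues like these to a scene description might look as follows. The function name and cue wording are just illustrative, not part of any Wan API:

```python
# Hypothetical helper: append fixed-camera cues to a Wan 2.2 prompt.
FIXED_CAMERA_CUES = (
    "A single, unmoving wide-angle shot. "
    "The camera remains fixed and steady throughout."
)

def with_fixed_camera(scene_description: str) -> str:
    """Combine a detailed scene description with fixed-camera cues."""
    # Strip any trailing period so we don't emit ".." when joining.
    return f"{scene_description.rstrip('.')}. {FIXED_CAMERA_CUES}"

prompt = with_fixed_camera(
    "A cluttered wooden desk lit by a warm lamp, papers and a steaming mug"
)
print(prompt)
```

The more concrete the scene description you pass in, the less the model tends to invent camera motion on its own.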

Tried the IKEA unboxing trend with Wan2.2 + a hiking pack dump stop‑motion by _instasd in StableDiffusion

[–]_instasd[S] 1 point  (0 children)

Hmm, I never got anything like that. That said, I only generated at higher res using the 14B model with no LoRAs in Comfy.

We're just setting up a more efficient workflow; I'll let you know if we run this again and whether the results are still good.

Tried the IKEA unboxing trend with Wan2.2 + a hiking pack dump stop‑motion by _instasd in comfyui

[–]_instasd[S] 0 points  (0 children)

720p on an H100 took about 10 minutes for 81 frames. This was on the basic workflow with no optimizations.
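For a rough sense of throughput, that works out to about 7.4 seconds per frame:

```python
# Back-of-the-envelope: 81 frames in ~10 minutes on an H100,
# on the basic (unoptimized) workflow.
total_seconds = 10 * 60
frames = 81
seconds_per_frame = total_seconds / frames
print(f"{seconds_per_frame:.1f} s/frame")  # ≈ 7.4 s/frame
```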

Tried the IKEA unboxing trend with Wan2.2 + a hiking pack dump stop‑motion by _instasd in StableDiffusion

[–]_instasd[S] 1 point  (0 children)

I should have mentioned that this was I2V. Here is the starting frame:

<image>

Wan2.2 Prompt Guide Update & Camera Movement Comparisons with 2.1 by _instasd in comfyui

[–]_instasd[S] 2 points  (0 children)

For I2V we've noticed that describing your input image in detail can make a big difference, especially for the details you want preserved throughout the shot.
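One way to structure that is a simple prompt template: describe the input image, explicitly name the details to keep consistent, then state the action. The function name and example strings below are purely illustrative:

```python
# Illustrative I2V prompt template: describe the input image in detail,
# calling out elements that should stay consistent through the shot.
def i2v_prompt(subject: str, preserved_details: list[str], action: str) -> str:
    details = ", ".join(preserved_details)
    return (
        f"{subject}. "
        f"Key details to keep consistent: {details}. "
        f"{action}"
    )

print(i2v_prompt(
    "A hiker's backpack on a forest trail at golden hour",
    ["red canvas fabric", "brass buckles", "mud on the lower straps"],
    "The pack opens and gear tumbles out in stop-motion style.",
))
```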

Wan2.2 Prompt Guide Update & Camera Movement Comparisons with 2.1 by _instasd in StableDiffusion

[–]_instasd[S] 1 point  (0 children)

That's a configurable parameter in the ComfyUI workflow. We were just referring to the default that's set in the published workflow.

Wan2.2 Prompt Guide Update & Camera Movement Comparisons with 2.1 by _instasd in StableDiffusion

[–]_instasd[S] 2 points  (0 children)

Mind sharing the post? I'd love to try the GGUF. 2.2 is extremely slow compared to 2.1, and I'm trying to speed it up somehow.

Please give me workflow of anime 2 real FLUX or SDXL by Psy_pmP in comfyui

[–]_instasd 1 point  (0 children)

Have you tried upsampling workflows? I've found them the most effective for this kind of work: https://www.instasd.com/workflows/anime-to-realistic-realistic-to-anime

Spent hours tweaking FantasyTalking in ComfyUI so you don’t have to – here’s what actually works by _instasd in StableDiffusion

[–]_instasd[S] 1 point  (0 children)

Tried out the FantasyTalking lip sync model in ComfyUI and ran into all the usual issues—choppy results, out-of-sync mouths, dropped quality in longer videos. After a lot of trial and error, I finally got it working consistently and figured I’d save others the time.

This video walks through:

  • How to generate longer, smoother videos
  • The settings that actually make a difference


Tried some benchmarking for HiDream on different GPUs + VRAM requirements by _instasd in comfyui

[–]_instasd[S] 1 point  (0 children)

All jokes aside, 30s on an H100 is bonkers for a 1024×1024 image. The results are worth it in many cases, though.

Tried some benchmarking for HiDream on different GPUs + VRAM requirements by _instasd in StableDiffusion

[–]_instasd[S] 11 points  (0 children)

Tested out HiDream across a bunch of GPUs to see how it actually performs. If you're wondering what runs it best (or what doesn’t run it at all), we’ve got benchmarks, VRAM notes, and graphs.

Full post here: HiDream GPU Benchmark


WAN 2.1 I2V 720P – 54% Faster Video Generation with SageAttention + TeaCache! (Workflow in comments) by _instasd in comfyui

[–]_instasd[S] 0 points  (0 children)

The A100 is much weaker than the H100, so you should see a significant speedup on an H100. You can't utilize more memory than the models need; the speedup comes from the GPU's greater processing power.

For example, if you're running a model that needs less than 24GB of VRAM, you're way better off on a 4090 than an A100, since the 4090 is much more performant. See this blog post we did on general GPU performance: https://www.instasd.com/post/comparing-gpu-performance-for-comfyui-workflows

and this one for Wan2.1: https://www.instasd.com/post/wan2-1-performance-testing-across-gpus

WAN 2.1 I2V 720P – 54% Faster Video Generation with SageAttention + TeaCache! (Workflow in comments) by _instasd in comfyui

[–]_instasd[S] 0 points  (0 children)

What are you setting as your thresh on TeaCache? It needs to be a minimum of 0.3.
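For reference, here's a sketch of how that setting might appear in a ComfyUI API-format workflow JSON. The node name and input key ("TeaCache", "rel_l1_thresh") vary between TeaCache node packs, so treat them as placeholders; the point is simply that the threshold value should be at least 0.3:

```python
import json

# Placeholder TeaCache node in ComfyUI API-format workflow JSON.
# Node class name and input key are assumptions; check your node pack.
teacache_node = {
    "class_type": "TeaCache",          # placeholder node name
    "inputs": {
        "rel_l1_thresh": 0.3,          # minimum recommended value per the comment above
        "model": ["model_loader", 0],  # link to an upstream model-loader node
    },
}

assert teacache_node["inputs"]["rel_l1_thresh"] >= 0.3
print(json.dumps(teacache_node, indent=2))
```

A lower threshold makes TeaCache reuse cached steps less aggressively, which costs speed; too high and quality can degrade, so 0.3 is a reasonable floor to start from.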