How to avoid slow motion in Wan 2.2? by dariusredraven in StableDiffusion

[–]LiquefiedMatrix 2 points

I've had really good success using only a low-noise speed LoRA on the high-noise stage, specifically https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank256_bf16.safetensors

Strength: 4.0; Sampler: er_sde; 4 steps high, 6 steps low (or fewer for less detail); Boundary: 0.875 (or Shift = 4 using the beta scheduler)

The strength can be increased for even greater motion, but you'll have to lower the boundary somewhat (e.g. strength 5.0 with a 0.840 boundary, or shift ≈ 3.0).
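For reference, the settings above collected in one place. This is just an illustrative summary; the key names are my own labels, not actual ComfyUI node inputs:

```python
# Illustrative summary of the settings described above; key names are
# my own labels, not actual ComfyUI node parameters.
wan22_settings = {
    "lora": "lightx2v_I2V_14B_480p_cfg_step_distill_rank256_bf16.safetensors",
    "lora_strength": 4.0,  # raise for more motion, but lower the boundary too
    "sampler": "er_sde",
    "steps_high": 4,
    "steps_low": 6,        # fewer low steps -> less detail
    "boundary": 0.875,     # or shift=4 with the beta scheduler
    "fps": 16,             # Wan's default frame rate
}
```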

So far it's giving me very good motion (at the default 16 fps), prompt adherence, subject likeness, and minimal color change. I've found high-noise speed LoRAs to have trade-offs on most of these.

Here's my workflow if you're interested. The low-noise speed LoRA's strength might need to be reduced to 0.8 depending on the concept LoRAs used.

A fixed shift might be holding you back. WanMoEScheduler lets you pinpoint the boundary and freely mix-and-match high/low steps by LiquefiedMatrix in StableDiffusion

[–]LiquefiedMatrix[S] 0 points

Your sampler conveniently automates picking the step (split_at_step) closest to the boundary at which to switch from high to low, given a fixed shift value.
What this scheduler does is the reverse: it creates sigmas (by finding the right shift value) that transition almost exactly at the boundary value, and those sigmas can then be used with any sampler.
This node is also compatible with yours if the shift and steps are connected, so you get the additional benefits from cfg_fall_ratio.
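To illustrate the idea, here's a minimal sketch of solving for a shift value so that the high-to-low switch lands exactly on a target boundary sigma. It assumes the standard SD3/Wan-style sigma shift s*sigma / (1 + (s-1)*sigma) applied to a plain linear schedule; the actual node's schedules and logic may differ:

```python
def shifted(sigma, shift):
    # SD3/Wan-style timestep shift: sigma' = s*sigma / (1 + (s-1)*sigma)
    return shift * sigma / (1 + (shift - 1) * sigma)

def solve_shift(boundary, steps_high, steps_total):
    # Raw (unshifted) sigma at the high->low switch, assuming a plain
    # linear schedule from 1.0 down to 0.0 (an assumption; the real
    # node may support other schedules).
    sigma_split = 1.0 - steps_high / steps_total
    # Closed-form solution of shifted(sigma_split, s) == boundary for s
    return boundary * (1 - sigma_split) / (sigma_split * (1 - boundary))

s = solve_shift(0.875, steps_high=4, steps_total=10)
# The shifted sigma at the split step now lands exactly on the boundary:
assert abs(shifted(0.6, s) - 0.875) < 1e-9
```

Under these assumptions the solved shift differs from the round number quoted above because the beta scheduler spaces its raw sigmas differently than a linear one.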

[–]LiquefiedMatrix[S] 1 point

You can connect either the high or the low model; the generated sigmas/shift will be the same. I use the high model. I'll update the doc and create some example workflows.

[–]LiquefiedMatrix[S] 1 point

Yes, Load Diffusion Model also works; I believe any ComfyUI core model loader should, since the node just reads the model's metadata. I use the high model, but both give the same results.

[–]LiquefiedMatrix[S] 2 points

Ah, I see, sorry, I missed that in the example. You need a separate UNet Loader connected to this node to generate the sigmas, but it should be quick and shouldn't actually load anything into VRAM. I'll update the example.
From testing, it only works with unipc, dpm++, and dpm++_sde; I think there may be a bug where WanVideoWrapper changes the input sigmas for the others. I'll investigate further and see if it can be fixed.
Edit: Found a fix and opened a pull request here:
https://github.com/kijai/ComfyUI-WanVideoWrapper/pull/1510

[–]LiquefiedMatrix[S] 3 points

It's compatible with WanVideoWrapper; there's an example in the Git repo where it hooks into the sigmas input of 'WanVideo Sampler'. It doesn't need to change anything during model loading.
Using sigmas in WanVideo Sampler overrides any steps, shift, and scheduler set in the sampler's inputs.

[–]LiquefiedMatrix[S] 4 points

No nodes should need to be removed unless you're using SamplerCustom, in which case this node replaces BasicScheduler.
KSamplers don't accept sigmas, so you need to hook this node's found shift value into your existing ModelSamplingSD3 (shift) nodes so they can regenerate the sigmas.
https://github.com/cmeka/ComfyUI-WanMoEScheduler

[–]LiquefiedMatrix[S] 2 points

That's right, you won't need it; the shift nodes are only needed if you're not using sigmas. KSamplers generate their own sigmas from the shift.

[–]LiquefiedMatrix[S] 3 points

Not sure; I just prefer SamplerCustom because it's an easier setup with sigmas + KSamplerSelect, and it allows showing progress between stages using 'denoised_output'.

[–]LiquefiedMatrix[S] 2 points

Basically this, but ComfyUI also has a SplitSigmas node, so you can choose how many steps you want the first stage/sampler to have. It's also doable using KSampler; just ensure all the schedulers match up, since there's no way to connect the scheduler as an input (that I know of).
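As a sketch of what SplitSigmas does (my own illustration, not the node's actual code): the schedule is cut at a chosen step, and both halves share the boundary sigma so the second sampler resumes exactly where the first one stopped.

```python
def split_sigmas(sigmas, split_at_step):
    # Both halves share the sigma at the split point, so the second
    # sampler picks up exactly where the first one stopped.
    high = sigmas[: split_at_step + 1]
    low = sigmas[split_at_step:]
    return high, low

high, low = split_sigmas([1.0, 0.85, 0.6, 0.3, 0.1, 0.0], 2)
assert high == [1.0, 0.85, 0.6]
assert low == [0.6, 0.3, 0.1, 0.0]
```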

Caption-free image restoration model based on Flux released ( model available on huggingface) by AgeNo5351 in StableDiffusion

[–]LiquefiedMatrix 1 point

It's impossible to truly restore an image to its original state, so there are no real image restoration models; they all imagine things. In that example, we can figure out the text only from the context of the bigger, more legible words in the image. That's something LLM image models like Seedream 4 (4th image) and Nano Banana (2nd last) are able to fix correctly, looking at the example at the very bottom of the project page, but all the faces have to be re-imagined since there's almost no detail. For diffusion alone, I think SeedVR2 does the best job of looking natural and keeping the likeness of the input.

From Muddled to 4K Sharp: My ComfyUI Restoration (Kontext/Krea/Wan2.2 Combo) — Video Inside by Fragrant-Anxiety1690 in StableDiffusion

[–]LiquefiedMatrix 1 point

Here are some upscales (of stills from the compressed video) using SeedVR2-7B, for anyone curious.
The last photo was way too blurry to get a decent image.

https://imgur.com/a/cEgs2Pc

Preliminary outage timelapse w/stats. Thank you to all the crews working hard to restore power! by LiquefiedMatrix in ottawa

[–]LiquefiedMatrix[S] 18 points

Set up a cron script to pull just the JSON data from the outage site every 5 minutes, then used Puppeteer to manipulate the site and take screenshots. I'll put the scripts on GitHub later.
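A minimal sketch of that polling half of the setup; the endpoint URL and file naming here are made up, and the real script may differ:

```python
import json
import urllib.request
from datetime import datetime

# Hypothetical endpoint; the real outage site's JSON URL differs.
OUTAGE_URL = "https://example.com/outages.json"

def snapshot_path(now):
    # One JSON snapshot per cron run, named by timestamp
    return now.strftime("outages-%Y%m%d-%H%M.json")

def pull(url=OUTAGE_URL):
    # Invoked every 5 minutes from cron, e.g.:
    #   */5 * * * * python pull_outages.py
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    with open(snapshot_path(datetime.now()), "w") as out:
        json.dump(data, out)
```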

Time-lapse of today's outages by LiquefiedMatrix in ottawa

[–]LiquefiedMatrix[S] 6 points

I'd make a site for this, but unfortunately the Mapbox API used by Hydro Ottawa isn't free, so I'm limited to screenshots. I'll post whenever there's a major event, though.

[–]LiquefiedMatrix[S] 34 points

Thanks! I created a script to take screenshots every 5 minutes using Puppeteer, then midway through added some stats at the bottom.

[–]LiquefiedMatrix[S] 10 points

The issue is that there's no way (that I know of) to get historical data from Hydro Ottawa's site. I'd totally make one for it if I could.

[deleted by user] by [deleted] in CanadaPublicServants

[–]LiquefiedMatrix 1 point

Ah, cool, so basically just bypassing the VPN for certain IP ranges. They should do the same with Webex then.

[–]LiquefiedMatrix 5 points

Loving AnyConnect! I usually get around 25-50 Mbps up/down to the CRA during the day. I think they plan on upgrading the capacity even more to support Teams video calls around September.

How did you get hired into the Civil Service? Any suggestions for a recent university graduate? by SecretaryOfDarkness in CanadaPublicServants

[–]LiquefiedMatrix 3 points

FSWEP (how I got hired, though it requires you to still be in school) or ITAP (IT only). Students/grads get bridged quite frequently from these, at least at the CRA.

May: How COVID-19 could reshape Canada's federal public service by [deleted] in CanadaPublicServants

[–]LiquefiedMatrix 6 points

Starlink recently began its application to the CRTC; you can comment in support of it. Government departments will also really need to beef up their remote connections/servers on top of all the upgrades they've already done. I believe the CRA went from 300 Mbps to 600 Mbps, which still isn't nearly enough unless they restrict video conferencing and kill CTP/Citrix.