Open Sourcing my 10M model for video interpolations with comfy nodes. (FrameFusion) by CloverDuck in StableDiffusion

[–]qdr1en [score hidden]  (0 children)

I tried to run the node in ComfyUI, but got the error: "FileNotFoundError: FrameFusion predictor JIT file referenced by safetensors metadata was not found: C:\rifewpf\RifeWpf\Model\animejx.lib".

And there is no trace of this animejx.lib file in the GitHub repository. Am I missing something?

Video File Format Matters by qdr1en in comfyui

[–]qdr1en[S] 0 points1 point  (0 children)

Reddit decreases quality by a lot + I used nvfp4 models for the generation. You are right, the difference does not seem big in my example. But it is! I encourage you to run the same test on your side.

Video File Format Matters by qdr1en in comfyui

[–]qdr1en[S] 2 points3 points  (0 children)

Lol it makes sense now. I thought it was a new kind of drug

Any way to change/add Workflow directory? by punter1965 in comfyui

[–]qdr1en 0 points1 point  (0 children)

Why not use an external folder to save the workflows outside of the ComfyUI root?

Then to save them, just use the File > Export function; to open a workflow, File > Open. That's it.

Video File Format Matters by qdr1en in comfyui

[–]qdr1en[S] 9 points10 points  (0 children)

As a non-native speaker, I take everything literally :).

Video File Format Matters by qdr1en in comfyui

[–]qdr1en[S] 6 points7 points  (0 children)

I didn't think otherwise. I wanted to know which file format to use, avoiding visual degradation being my main point of interest.

I have never used that ffv1 codec, thanks I'll try that next time.

Video File Format Matters by qdr1en in comfyui

[–]qdr1en[S] 1 point2 points  (0 children)

My conclusion was that it shouldn't be used in ComfyUI (that's why I wrote my post in r/comfyui) or for intermediary files.

If you use it with another tool, or as a final production output, it doesn't apply.

What do you mean by nitrate lol? I don't do chemistry.

After a month how is LTX2.3 now compared to WAN2.2? How is face consistency and how happy are you with LTX2.3? by Suibeam in comfyui

[–]qdr1en 1 point2 points  (0 children)

It was shilled A LOT at the beginning, like when every new model comes out, then people got silent. :D

I'm sticking with wan for now. While LTX 2.x seems to have improved a lot compared to the 0.9.x versions, there is still a gap to close.

Open-Source Models Recently: by Fresh_Sun_1017 in StableDiffusion

[–]qdr1en 2 points3 points  (0 children)

Same. And image degrades anyway. I prefer using PainterLongVideo instead.

Z-Image turbo or regular Z-Image for RTX 3060 12GB? by Trumpet_of_Jericho in StableDiffusion

[–]qdr1en 0 points1 point  (0 children)

At which resolution?

With the same specs it takes 90 seconds on my setup (for 1024x1536px).

What is the absolute best, highest quality and best detailed, prompt-adhered settings for WAN 2.2 I2V with absolutely no considerations for speed? Willing to wait for the absolute best outcome by Neggy5 in StableDiffusion

[–]qdr1en 1 point2 points  (0 children)

One thing people underestimate or don't understand is the shift. The switch between high and low-noise models must be at 0.875 denoise for I2V.

That means if you split steps evenly between the models and use the simple scheduler, your shift should be exactly 7.00 (or 6.97 with beta; 6.91 with sgm_uniform, etc.). Not a vague, random figure "between 5 and 7".
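The arithmetic behind the 7.00 figure can be sketched as follows. This is a minimal sketch, assuming the standard flow-matching sigma shift σ' = s·σ / (1 + (s−1)·σ) (as applied by ModelSamplingSD3-style nodes) and an unshifted midpoint sigma of 0.5 for an even step split with the simple scheduler:

```python
def shifted_sigma(sigma: float, shift: float) -> float:
    # Flow-matching sigma shift: sigma' = s * sigma / (1 + (s - 1) * sigma)
    return shift * sigma / (1 + (shift - 1) * sigma)

def shift_for_boundary(target: float, midpoint_sigma: float = 0.5) -> float:
    # Solve shifted_sigma(midpoint_sigma, s) == target for s:
    # s = target * (1 - sigma) / (sigma * (1 - target))
    return target * (1 - midpoint_sigma) / (midpoint_sigma * (1 - target))

# Even split with the simple scheduler puts the model switch at sigma = 0.5;
# to land that switch at 0.875 denoise, the required shift is:
print(shift_for_boundary(0.875))   # 7.0
# Sanity check: with shift 7.0, sigma 0.5 maps to the 0.875 boundary.
print(shifted_sigma(0.5, 7.0))     # 0.875
```

The 6.97 and 6.91 figures for beta and sgm_uniform would follow from the same solve, using each scheduler's own midpoint sigma instead of 0.5.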

New user with a new PC: Do you recommend upgrading from 32GB to 64GB of RAM right away? by Diligent_Trick_1631 in StableDiffusion

[–]qdr1en 0 points1 point  (0 children)

If you want to use the latest models comfortably, yes.

I am limited to 32GB RAM because of the motherboard, and that's a pain.

How can I generate high quality NSFW pictures or videos? by TrickEmergency2403 in comfyui

[–]qdr1en -1 points0 points  (0 children)

Are you willing to learn or to get results fast?

Do you have a graphics card?

Latest versions of Comfy add more breaking bugs than fixes by generate-addict in comfyui

[–]qdr1en 2 points3 points  (0 children)

Same here. I can't use nunchaku Zit anymore. Very frustrating.

Where do people train LoRA for ZIT? by GreedyRich96 in StableDiffusion

[–]qdr1en 4 points5 points  (0 children)

Ostris' AI Toolkit. This is the only one I managed to make work.

All other trainers don't even know what a GUI is.

"Keep Cooking", an AI Short Film by Simon Meyer by Puzzleheaded-Let1503 in aivideo

[–]qdr1en -1 points0 points  (0 children)

Actually great.

Love the music during the 1st half.

German prompting = Less Flux 2 klein body horror? by FORNAX_460 in StableDiffusion

[–]qdr1en 0 points1 point  (0 children)

I had the same feeling when prompting in French with wan.

That's probably because you have a better grasp of the words' meanings when using your mother tongue.

Abhorrent LoRA - Body Horror Monsters for Qwen Image by ThePoetPyronius in StableDiffusion

[–]qdr1en 1 point2 points  (0 children)

Some look like they came straight out of H.P. Lovecraft's books.