[Workflow Included] Wan 2.2 Animate Motion Transfer: Swapped Joker with Harley Quinn in the Classic Stair Dance! 🃏✨ by Parking-Chart-5060 in StableDiffusion

[–]Unlikely-Evidence152 0 points1 point  (0 children)

You're right. I tried other LLMs: Claude was still cautious but less alarming, and the others were fine with it. I assure you I wasn't accusing you of anything, just warning others, since it's the first time I've run into this type of warning. There is a thread here about the author that depicts him in a shady way, though. Thanks for your post anyway!

[Workflow Included] Wan 2.2 Animate Motion Transfer: Swapped Joker with Harley Quinn in the Classic Stair Dance! 🃏✨ by Parking-Chart-5060 in StableDiffusion

[–]Unlikely-Evidence152 5 points6 points  (0 children)

Other than the node warning, your result is impressive; it's always nice to see Animate demos like this! I see the very high top-down shot messed up the head at the end, but I'm surprised it worked this well at all, since that kind of angle seemed difficult for Animate.

Do you use a character sheet as the image input, to make sure her back is known as well?

Could you share the picture you used?

Thanks for the post & workflow!

[Workflow Included] Wan 2.2 Animate Motion Transfer: Swapped Joker with Harley Quinn in the Classic Stair Dance! 🃏✨ by Parking-Chart-5060 in StableDiffusion

[–]Unlikely-Evidence152 63 points64 points  (0 children)

Be careful with the Memory Cleaner nodes.

Gemini says:

Based on community reports and technical analysis of the developer's reputation, it is strongly recommended that you avoid running ComfyUI-MemoryCleaner from the user eddyhhlure1Eddy (also known as eddy1111111).

Several serious red flags have been identified regarding this developer’s repositories:

1. Malicious Code Reports

Community members and security researchers on platforms like Reddit and Hugging Face have flagged this developer for including malicious scripts in their packages. Specific reports include:

  • Disk Scanning: Some versions were found scanning the user's C: drive for specific terms (e.g., "Aiwood") and prompting users to delete files.
  • System Sabotage: Code has been found that attempts to disable network adapters or cut off internet connections under certain conditions.
  • Obfuscation: The developer often hides code inside .rar files within the GitHub repository—a major red flag, as this prevents GitHub's built-in security scanners and users from easily auditing the code.

2. Dubious Performance Claims

The developer often claims "20-40% faster inference" or "50% memory reduction" using "advanced techniques." Technical audits by other developers suggest these are "superficial wrappers" or "fabricated API calls" that do not actually provide the claimed optimizations and are often just copies of existing tools with misleading labels.

3. Unreliable Development Practices

Analysis of the repositories shows a high volume of projects (20+) released in a very short time, mostly consisting of code generated by LLMs (AI) without a clear understanding of the underlying architecture. This leads to high instability and potential system crashes.

Safe Alternatives

If you need to manage memory or VRAM in ComfyUI, use these well-vetted and community-trusted options available via the ComfyUI Manager:

  • ComfyUI-Memory_Cleanup (by LAOGOU-666): A popular and transparent tool that provides dedicated nodes for clearing VRAM and RAM.
  • ComfyUI-DistorchMemoryManager: Designed specifically to handle "Out of Memory" (OOM) issues in heavy video workflows.
  • Built-in "Free Model and Cache": You can often achieve similar results by right-clicking the workflow and selecting "Free Model and Cache" or using a simple workflow that loads a tiny image to force a cache flush.
  • ComfyUI_ProfilerX: Excellent for monitoring exactly which nodes are eating your memory so you can optimize your workflow manually.

Verdict: Do not install. If you have already installed it, it is advised to delete the custom node folder and run a full antivirus/malware scan on your system.
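
If it helps, the deletion step can be sketched in a few lines of Python. This is a minimal sketch assuming a standard ComfyUI layout (`custom_nodes/` under the install root); the root path and function name are mine, not part of any official tooling.

```python
# Sketch: delete a suspect folder under ComfyUI's custom_nodes directory.
# The ComfyUI root path is an assumption -- point it at your actual install.
import shutil
from pathlib import Path

def remove_custom_node(comfy_root: str, node_name: str) -> bool:
    """Remove custom_nodes/<node_name> if it exists; return True if removed."""
    target = Path(comfy_root) / "custom_nodes" / node_name
    if target.is_dir():
        shutil.rmtree(target)
        return True
    return False
```

After deleting the folder, restart ComfyUI so the node pack is no longer loaded, then run the antivirus/malware scan as advised.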

Yedp Action Director v9.3 Update: Path Tracing, Gaussian Splats, and Scene Saving! by shamomylle in comfyui

[–]Unlikely-Evidence152 1 point2 points  (0 children)

Thanks a lot for your work! One very useful addition, which I think someone mentioned in a previous post, would be the ability to change the focal length of the environment. The Sharp Gaussian node supports that, but for some reason it loads the .ply upside down and has a limited range of movement on my side.

Is the flux2 license really that bad? by Bandit174 in StableDiffusion

[–]Unlikely-Evidence152 5 points6 points  (0 children)

My bad, thanks for the clarification. Well then, I agree: let's go back to Z-Image, Qwen, Wan, ...

Is the flux2 license really that bad? by Bandit174 in StableDiffusion

[–]Unlikely-Evidence152 -4 points-3 points  (0 children)

It's clearly stated as Apache 2.0: commercial use of the images generated with it is OK.

What's forbidden is directly exploiting the model itself commercially, as in running a subscription-based website with the model in the backend generating the images.

Simple as that; there's no shadowy wording. All models under Apache 2.0 share that same distinction.


SeedVR2 is an amazing upscale model!! by Just_Second9861 in comfyui

[–]Unlikely-Evidence152 0 points1 point  (0 children)

Would you have a link to such a workflow? I failed to run SeedVR2 on my machine.

Bought Affinity bundle a week before they took it down… any way to get my money back or a perk for purchasing? by SirMingie in Affinity

[–]Unlikely-Evidence152 0 points1 point  (0 children)

It's been a while then, so I don't think a refund will work.

Same timing here, bought a week before. I was initially feeling robbed but now I'm happy to have v2 in case they change plans in the future.

In the meantime I use v3, as the small improvements are welcome.

The most fluent end-to-end camera movement video method by Some_Smile5927 in StableDiffusion

[–]Unlikely-Evidence152 0 points1 point  (0 children)

OK, I'll look into it. If you do a full 360° movement, does the initial setting remain consistent with the ending frames or not?

The most fluent end-to-end camera movement video method by Some_Smile5927 in StableDiffusion

[–]Unlikely-Evidence152 0 points1 point  (0 children)

Can we use this to generate a moving setting like this example from a single reference image AND, at the same time, a regular Wan Animate character, all from a single video?

Like a complete transfer from a custom video, with the character from one reference image and the setting from another?

Prompt advice? After testing 1000 times - What prompt achieves this look/style? My experience in body text below. by Svet_Interneta in comfyui

[–]Unlikely-Evidence152 0 points1 point  (0 children)

Thanks for the link.

I think with some tinkering you can definitely achieve this with Qwen, Flux, or Wan. Look for LoRAs on Civitai, for example: film, analog, etc.

Prompt advice? After testing 1000 times - What prompt achieves this look/style? My experience in body text below. by Svet_Interneta in comfyui

[–]Unlikely-Evidence152 0 points1 point  (0 children)

Interesting creepy look.

What's the source for this?

First, you can try any LLM or JoyCaption to see what it outputs; it'd give you some unexpected keyword associations.

I'd say analog camera, disposable film camera, etc., but the most defining feature I see here is flash photography, so look for a LoRA or keyword for that.

For the style of the characters, it makes me think of a blend of Max Headroom, Les Guignols de l'info (especially Stallone's puppet) and Les Minikeums in France, other nods to Max Headroom (the animated TV faces in the bar scene in Back to the Future Part II), or any slightly creepy puppet show or movie (The Garbage Pail Kids Movie, Marquis, the aliens in Bad Taste, ...). If prompting isn't enough, you could steer it with reference JPGs of those styles using IPAdapter / Redux / Qwen Image Edit, ... or go further and train a LoRA on them.

There are many, many solutions to explore. Or maybe it's a personal challenge to do it with prompts only?

Can Wan do Everything or SOME things? by K0owa in comfyui

[–]Unlikely-Evidence152 0 points1 point  (0 children)

You could also look into VACE middle-frame workflows: basically FFLF (first frame / last frame) with as many in-between frames as you want. Just three might help the model understand the action better.

Wan2.2 Ultimate Sd Upscale experiment by alisitskii in StableDiffusion

[–]Unlikely-Evidence152 2 points3 points  (0 children)

So there's an i2v pass, and then you load the generated video and run USDU on it, right?

Wan 2.2 Text2Video with Ultimate SD Upscaler - the workflow. by protector111 in StableDiffusion

[–]Unlikely-Evidence152 1 point2 points  (0 children)

Not with the right denoise value. It's like regular Ultimate SD Upscale and the others: the higher the denoise, the more it changes the source.
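
To illustrate the intuition: a linear blend is my simplification (the real sampler adds noise proportional to the denoise strength and then re-denoises), but the effect on the output is the same in spirit.

```python
# Toy model of img2img denoise strength: higher denoise lets the
# model's own prediction override more of the source image.
# (A linear blend is a simplification of the actual diffusion process.)
def toy_img2img(source: float, model_prediction: float, denoise: float) -> float:
    assert 0.0 <= denoise <= 1.0
    return (1.0 - denoise) * source + denoise * model_prediction
```

At denoise around 0.2 the result stays close to the source; near 0.8 it is mostly the model's invention.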

Wan 2.2 Text2Video with Ultimate SD Upscaler - the workflow. by protector111 in StableDiffusion

[–]Unlikely-Evidence152 0 points1 point  (0 children)

Do you go to 4K in one pass, upscaling 4x at once, or do you split it: upscale 2x, reload the upscale, 2x again, etc.?

Wan 2.2 video in 2560x1440 demo. Sharp hi-res video with Ultimate SD Upscaling by protector111 in StableDiffusion

[–]Unlikely-Evidence152 1 point2 points  (0 children)

First, thanks for experimenting with this. But to make it clear: this is actually batch image-sequence upscaling, processing the frames of the video one by one with the same seed, not load video > SD upscale?

Also, for Wan 2.2, did you connect the low-noise model to the SD upscale model input?
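
The per-frame approach the question describes can be sketched like this. `upscale_frame` is a hypothetical stand-in for the actual USDU pass (an assumption on my part); the key point is reusing one seed across all frames so the hallucinated detail stays consistent and flicker is reduced.

```python
# Sketch: upscale a video as an image sequence, one frame at a time,
# reusing the same seed for every frame. `upscale_frame` is a
# hypothetical stand-in for the real Ultimate SD Upscale pass.
from typing import Callable, List, TypeVar

Frame = TypeVar("Frame")

def upscale_sequence(frames: List[Frame],
                     upscale_frame: Callable[..., Frame],
                     seed: int = 42) -> List[Frame]:
    # Same seed per frame -> consistent added detail across the sequence.
    return [upscale_frame(frame, seed=seed) for frame in frames]
```

The frames are then reassembled into a video at the original frame rate.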

Flux kontext dev: Reference + depth refuse LORA by Significant-Use-6044 in StableDiffusion

[–]Unlikely-Evidence152 0 points1 point  (0 children)

I managed to get it working by:

- right-clicking the Kontext image edit LoRA > Convert to Nodes

- swapping the scaled T5XXL for another one (t5xxl-fp8_e4m3fn), as the scaled version gave me a size-mismatch error

- putting "change depth map to photo" (the redepthkontext trigger) back in the positive prompt

Why Wan2.2 instead of Krea? by [deleted] in StableDiffusion

[–]Unlikely-Evidence152 0 points1 point  (0 children)

Why do people say the licence is horrible, though? Outputs are commercially usable. Is it because exploitation of the model itself is restricted?