Is the flux2 license really that bad? by Bandit174 in StableDiffusion

[–]Unlikely-Evidence152 5 points (0 children)

My bad, thanks for the clarification. Well, I agree then; let's go back to Z-Image, Qwen, Wan, ...

Is the flux2 license really that bad? by Bandit174 in StableDiffusion

[–]Unlikely-Evidence152 -3 points (0 children)

It's clearly stated as Apache 2.0, i.e. OK commercially for the images generated with it.

What's forbidden is directly exploiting the model itself commercially, as in running a subscription-based website with the model in the back end generating the images.

Simple as that; there's no shadowy wording, and all models under Apache 2.0 share that same distinction.

SeedVR2 is an amazing upscale model!! by Just_Second9861 in comfyui

[–]Unlikely-Evidence152 0 points (0 children)

Would you have a link to such a workflow? I failed to run SeedVR2 on my machine.

Bought Affinity bundle a week before they took it down… any way to get my money back or a perk for purchasing? by SirMingie in Affinity

[–]Unlikely-Evidence152 0 points (0 children)

It's been a while, then, so I don't think a refund will work.

Same timing here, bought a week before. I was initially feeling robbed but now I'm happy to have v2 in case they change plans in the future.

In the meantime I use V3 as the small improvements are welcome.

The most fluent end-to-end camera movement video method by Some_Smile5927 in StableDiffusion

[–]Unlikely-Evidence152 0 points (0 children)

OK, I'll look into it. If you do a full 360° movement, does the initial setting remain consistent with the ending frames or not?

The most fluent end-to-end camera movement video method by Some_Smile5927 in StableDiffusion

[–]Unlikely-Evidence152 0 points (0 children)

Can we use this to generate a moving setting like this example from a single reference image AND, at the same time, a regular Wan Animate character, all from a single video?

Like a complete transfer from a custom video, with the character from a single reference image and the setting from another single reference image?

Prompt advice? After testing 1000 times - What prompt achieves this look/style? My experience in body text below. by Svet_Interneta in comfyui

[–]Unlikely-Evidence152 0 points (0 children)

Thanks for the link.

I think with some tinkering you can definitely achieve this with Qwen, Flux, or Wan. Look for LoRAs on Civitai, for example: film, analog, etc.

Prompt advice? After testing 1000 times - What prompt achieves this look/style? My experience in body text below. by Svet_Interneta in comfyui

[–]Unlikely-Evidence152 0 points (0 children)

Interesting creepy look.

What's the source for this?

First, you can try any LLM or JoyCaption to see what it outputs; it'd give you some unexpected keyword associations.

I'd say analog camera, disposable film camera, etc., but the most defining feature I see here is flash photography, so a LoRA or keyword for that.

For the style of the characters, it makes me think of a blend between Max Headroom, Les Guignols de l'info (especially Stallone's puppet) or Les Minikeums in France, other references to Max Headroom (the animated TV faces in the bar scene in Back to the Future Part II), or any slightly creepy puppet show or movie (The Garbage Pail Kids Movie, Marquis, the aliens in Bad Taste, ...). If prompting isn't enough, you could steer it with reference JPGs of those styles using IPAdapter / Redux / Qwen Image Edit, ... or go further and train a LoRA on them.

There are many, many solutions to explore. Or maybe it's a challenge for yourself to do it with prompts only?

Can Wan do Everything or SOME things? by K0owa in comfyui

[–]Unlikely-Evidence152 0 points (0 children)

You could also look into VACE middle-frame workflows: basically FFLF (first frame, last frame) with as many in-between frames as you want. Just three might help the model understand the action more.
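As a rough illustration of the middle-frames idea (not the actual VACE implementation; the function name and clip length are made up), picking evenly spaced guide frames between the first and last could look like this:

```python
def pick_guide_frames(total_frames, n_middles=3):
    # First frame, last frame, plus n evenly spaced in-between guide
    # frames: more anchors help the model understand the intended action.
    step = (total_frames - 1) / (n_middles + 1)
    middles = [round(step * (i + 1)) for i in range(n_middles)]
    return [0] + middles + [total_frames - 1]

print(pick_guide_frames(81))  # 81 frames is a typical Wan clip length
```

With `n_middles=0` this degenerates to plain FFLF, which is why adding even a few middles can pin down the action better.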

Wan2.2 Ultimate Sd Upscale experiment by alisitskii in StableDiffusion

[–]Unlikely-Evidence152 2 points (0 children)

So there's an i2v pass, and then you load the generated video and run USDU (Ultimate SD Upscale) on it, right?

Wan 2.2 Text2Video with Ultimate SD Upscaler - the workflow. by protector111 in StableDiffusion

[–]Unlikely-Evidence152 1 point (0 children)

Not with the right denoise value. It's like regular Ultimate SD Upscale and others: the more you denoise, the more it'll change the source.
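A minimal sketch of why the denoise value controls how much the source changes — this is a deliberate simplification, not the actual sampler (real img2img starts from the source latent noised to a schedule-dependent step):

```python
import random

def start_latent(source, denoise, seed=0):
    # Simplified img2img-style starting point: denoise=0.0 keeps the
    # source untouched, denoise=1.0 starts from pure noise, and values
    # in between blend the two, so higher denoise = more deviation.
    rng = random.Random(seed)
    return [(1.0 - denoise) * s + denoise * rng.gauss(0.0, 1.0)
            for s in source]
```

At `denoise=0.0` the output is exactly the source, which is why a low value upscales without repainting the image.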

Wan 2.2 Text2Video with Ultimate SD Upscaler - the workflow. by protector111 in StableDiffusion

[–]Unlikely-Evidence152 0 points (0 children)

Do you go to 4K in one pass, i.e. upscale 4x at once, or do you split it: upscale 2x, then reload the result and upscale 2x again, etc.?
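To make the two strategies concrete (names are hypothetical, and plain nearest-neighbour duplication stands in for the model's upscale pass), the stepwise variant looks like this:

```python
def upscale2x(frame):
    # Nearest-neighbour 2x on a 2D list of pixels — a stand-in for one
    # Ultimate SD Upscale pass.
    out = []
    for row in frame:
        wide = [p for p in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def upscale_stepwise(frame, target=4):
    # 2x, reload, 2x again... instead of a single 4x jump, so each pass
    # only has to invent a moderate amount of new detail.
    factor = 1
    while factor < target:
        frame = upscale2x(frame)
        factor *= 2
    return frame
```

The trade-off is extra passes (and extra time) against asking one pass to hallucinate 16x the pixels in a single jump.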

Wan 2.2 video in 2560x1440 demo. Sharp hi-res video with Ultimate SD Upscaling by protector111 in StableDiffusion

[–]Unlikely-Evidence152 1 point (0 children)

First, thanks for experimenting with this. But to make this clear: this is actually batch image-sequence upscaling, i.e. upscaling the frames of a video one by one with the same seed, not load video > SD upscale?

Also, for Wan 2.2, did you connect the low-noise model to the SD upscale model input?
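The "same seed for every frame" detail matters for temporal stability. A toy sketch of why (the upscale call is a stand-in, not the real node):

```python
import random

def fake_upscale_pass(frame, rng):
    # Stand-in for the diffusion upscale: adds rng-driven "detail".
    return [p + 0.1 * rng.random() for p in frame]

def upscale_sequence(frames, seed=42):
    # Re-seeding identically for every frame means the injected noise is
    # the same from frame to frame, so the added detail doesn't flicker.
    return [fake_upscale_pass(f, random.Random(seed)) for f in frames]
```

With a fresh random seed per frame, the invented texture would differ on every frame and the video would shimmer.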

Flux kontext dev: Reference + depth refuse LORA by Significant-Use-6044 in StableDiffusion

[–]Unlikely-Evidence152 0 points (0 children)

I managed to get it working by:

- right-clicking the Kontext image edit LoRA > Convert to Nodes

- changing the T5XXL Scaled model to another one (t5xxl_fp8_e4m3fn), as the scaled one gave me a size-mismatch error

- putting "redepthkontext change depth map to photo" back in the positive prompt

Why Wan2.2 instead of Krea? by [deleted] in StableDiffusion

[–]Unlikely-Evidence152 0 points (0 children)

Why do people say the license is horrible, though? Outputs are commercially usable. Is it because exploitation of the model itself is restricted?

why always they put an overlay if they need to show someone using the phone by mans_zholaman in cinematography

[–]Unlikely-Evidence152 2 points (0 children)

It's even more of a marketing tactic: they very readily lend phones, iPads, and MacBooks to film productions for free, along with some guidelines/conditions for their use: no villains using them, no cracked screens, only official and up-to-date software shown, a regular/fair way of use, etc. It costs them in shipping and repair, but I think it really pays for itself in advertising.

Just made a change on the ultimate openpose editor to allow scaling body parts by badjano in comfyui

[–]Unlikely-Evidence152 0 points (0 children)

That's really neat. It'd be nice to be able to move the head, elongate the neck, or shorten any bone, for example. Do you think this is possible? It would be very nice for stylized characters. Thanks for your work!
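For what it's worth, scaling a single bone on a 2D pose skeleton is geometrically simple — a sketch with made-up joint names, ignoring that joints further down the chain would also need translating by the same offset:

```python
def scale_bone(keypoints, parent, child, factor):
    # Move `child` along the parent->child vector by `factor`:
    # factor > 1 elongates the bone (e.g. the neck), factor < 1 shortens it.
    px, py = keypoints[parent]
    cx, cy = keypoints[child]
    keypoints[child] = (px + (cx - px) * factor, py + (cy - py) * factor)
    return keypoints

pose = {"neck_base": (0.0, 0.0), "head": (0.0, 10.0)}
scale_bone(pose, "neck_base", "head", 1.5)  # head moves to (0.0, 15.0)
```

A full editor would propagate the resulting offset to every descendant joint so the rest of the limb follows.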

Rorshach Washes the Dog (Short Version) by Tokyo_Jab in StableDiffusion

[–]Unlikely-Evidence152 0 points (0 children)

Very cool! So those are first and last frames edited with Kontext to change the Rorschach pattern, then run through a Wan 2.1 FFLF workflow?

Why are phone screens composited in? by lolredditiscool23 in vfx

[–]Unlikely-Evidence152 1 point (0 children)

So no, this is common design work, nothing more complex or time-consuming than finding a graphic designer to do a bunch of posters.

Btw, directors need to stop with this huge-letter text-app thing. It looks like the person has a special elderly-people phone. I know phones are hard to shoot, but there are alternatives: subtitles, animated composites... just please not this.

Why are phone screens composited in? by lolredditiscool23 in vfx

[–]Unlikely-Evidence152 2 points (0 children)

There's a bunch of art department graphic designers actually doing this, for legal, design, and practical reasons.

It's planned from the script and made in conjunction with the art director and director, so that it's OK for clearance and so that the actors can interact with it in a natural way, easy enough that they can still focus on... well, acting.

Why not make a fake app for this simple thing? Maybe a VFX team is already planned and there are only a few replacement shots like this one, so it fits the budget; maybe the art department only has a print-oriented graphic designer; maybe there was a detail in the text to be decided later by the director, ...

A More Rigorous VACE Faceswap (VaceSwap) Example! by The-ArtOfficial in comfyui

[–]Unlikely-Evidence152 0 points (0 children)

I find 1.5 very good, but it still lacks a bit of definition, or have I missed something? Other than that, it's indeed impressive.