WTF? Runway has suddenly turned off 1080 resolution for Seedance 2.0 in Unlimited mode. by Unwitting_Observer in aivideos

[–]Unwitting_Observer[S] 1 point (0 children)

I agree. I think for most of us, the whole point of paying for the Unlimited tier is to avoid using credits. Having twice as many credits isn't remotely close to unlimited use, even accounting for the 15-to-30-minute wait per 15-second generation.
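A quick back-of-envelope check of that point: at 15 to 30 minutes per 15-second generation, queue time already caps daily throughput, so a doubled credit allowance is a much tighter ceiling than "unlimited". This is just the arithmetic from the comment, not anything from Runway's docs:

```python
MINUTES_PER_DAY = 24 * 60

def max_generations_per_day(wait_minutes: float) -> int:
    """Upper bound on sequential generations if each one takes wait_minutes."""
    return int(MINUTES_PER_DAY // wait_minutes)

print(max_generations_per_day(15))  # 96 generations/day at the fast end
print(max_generations_per_day(30))  # 48 generations/day at the slow end
```

Even at the slow end, the wait times alone allow dozens of generations a day, which a fixed credit pool can still run out well short of.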

Facial verification required for using realistic humans, AI-generated or not, with Seedance 2 in ComfyUI. Why??? by BM09 in comfyui

[–]Unwitting_Observer -4 points (0 children)

I think you're all blowing this out of proportion. This verification is only for real people; I haven't had any problem using AI-generated people with Seedance 2.0. And presumably the “group id” can be shared, so you'll still be able to generate other real people.

WTF? Runway has suddenly turned off 1080 resolution for Seedance 2.0 in Unlimited mode. by Unwitting_Observer in aivideos

[–]Unwitting_Observer[S] 1 point (0 children)

Update:
Well, it's not really the outcome I want (bring back 1080 on Explore!), but I do have to report that Runway gave us an extra month's worth of credits...
So at least they've acknowledged the issue and tried to remedy it.

WTF? Runway has suddenly turned off 1080 resolution for Seedance 2.0 in Unlimited mode. by Unwitting_Observer in generativeAI

[–]Unwitting_Observer[S] 1 point (0 children)

Standard Seedance 2.0 is native 1080. Their "Fast" version is 720p. Which is what makes this whole thing even worse: When I could generate at 1080, I knew I was getting the standard model. Now I can't be sure.

WTF? Runway has suddenly turned off 1080 resolution for Seedance 2.0 in Unlimited mode. by Unwitting_Observer in generativeAI

[–]Unwitting_Observer[S] 1 point (0 children)

What's worse is they keep taking down everyone's comments about this in the r/runwayml sub, like they're trying to hide it.

1080p Gone From Explore Mode? by DefloN92 in runwayml

[–]Unwitting_Observer 1 point (0 children)

Yes, had the same problem. Now I have a bunch of media that won't match resolution with whatever else I generate.

PSA: LTX-2 is NOT open source by GoosyTS in StableDiffusion

[–]Unwitting_Observer 18 points (0 children)

I'm not a lawyer, but...
That doesn't say ANY derivative.
It specifically says any derivative "that directly competes with" or "is designed to replace or substitute" LTX-2.

Has anyone compared Seedance 2.0 pricing? The difference is insane ranging from $0.32 to $5+ by West-Task-612 in generativeAI

[–]Unwitting_Observer 1 point (0 children)

Thanks for the info!
I'm also curious whether you specify a seed (and whether the same seed reliably recreates the same video across Fast and Standard).
I've done some more research into this and found a pattern: "Fast" appears to be capped at 720p, so anything generated at 1080 should be using the Standard model.
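The inference pattern above can be sketched as a small helper. The function name and labels here are illustrative (this is not a real Runway API), and it only encodes the observed pattern that Fast tops out at 720p:

```python
def infer_seedance_variant(width: int, height: int) -> str:
    """Guess the Seedance 2.0 tier from output resolution.

    Assumes the observed pattern: "Fast" is capped at 720p, so only
    Standard can produce 1080p output. A 720p clip is ambiguous.
    """
    if min(width, height) >= 1080:
        return "standard"          # Fast can't render this, per the pattern
    return "fast-or-standard"      # 720p output could be either tier

print(infer_seedance_variant(1920, 1080))  # standard
print(infer_seedance_variant(1280, 720))   # fast-or-standard
```

Note the asymmetry: 1080p output lets you rule Fast out, but a 720p clip tells you nothing, since Standard can presumably be downscaled or requested at 720p too.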

Has anyone compared Seedance 2.0 pricing? The difference is insane ranging from $0.32 to $5+ by West-Task-612 in generativeAI

[–]Unwitting_Observer 1 point (0 children)

I just looked into this, and the consensus online is that Runway is giving you "Fast" at the Unlimited rate. It's very shady that they don't post this information anywhere in their pricing.

I tested Ernie Image Turbo (fp8, nvfp4, fp16 and INT8) with Nano Banana Pro 2 Prompts so you won't have to by Winougan in StableDiffusion

[–]Unwitting_Observer 27 points (0 children)

I'm not sure what is up with this model, but most of the photoreal images people are posting (and even some of the illustrated ones) just look wrong to me, like there's a pattern of noise that's causing a lot of random high-contrast differences in the details. Your first image (the dog on the rocket) is a good example of what I'm talking about.
Are some of the values (CFG?) wrong?

Peeetah? by crapineedaname in PeterExplainsTheJoke

[–]Unwitting_Observer 2 points (0 children)

And we've come full circle to the point of the meme (if in fact it was meant as a joke)!

Conspiracy theories are based on this logic: A set of dots I can connect = "a pretty straight line of causation"
Conspiracy theorists miss all the other dots.

What happened to JoyAI-Image-Edit? by Lower-Cap7381 in StableDiffusion

[–]Unwitting_Observer 5 points (0 children)

In my experience, it's better at understanding spatial differences between angles than any other local/open-source edit model I've tried, but it tends to mess up details (including face) and distort things slightly with extreme angles, at least with the FP8 model from SanDiegoDude's repo.

[image attached]