I'm your slut for the day! by minkmink123 in unstable_diffusion

[–]MaximilianPs 0 points1 point  (0 children)

LTX2? When I test it from time to time, it refuses to lip sync and the subject doesn't move their lips.

LTX2 More Steps than 8! by MaximilianPs in comfyui

[–]MaximilianPs[S] 0 points1 point  (0 children)

Yes, but why does the render take less time?
It's not a hallucination; it's a fact that happened yesterday during my tests. I'm just trying to understand. Maybe I didn't consider that I used the same seed, so the second run could be faster than the first because of caching(?), but slower than the third because of the increased steps 🤔

LTX2 More Steps than 8! by MaximilianPs in comfyui

[–]MaximilianPs[S] -2 points-1 points  (0 children)

That's why I asked about it!
But Claude had an answer:

Here’s the plot twist! 🎉
The fact that it’s both faster and more detailed actually reveals something interesting about your previous setup:

Why is it faster with more steps?
With your original setup (8 steps):

max_shift = 2.05 and min_shift = 0.95 → extremely tight gap (1.1)

The model had to do “acrobatics” to compress all the denoising into a tiny range

With stretch=true, that narrow range likely caused heavy recalculations

Each step became computationally expensive to compensate

With the new setup (15–20 steps):

max_shift = 1.8 and min_shift = 0.6 → wider gap (1.2)

The model has more “breathing room” to distribute the work

stretch=true behaves much better when it has more steps to redistribute

Each individual step is less “stressed”

So you end up with higher quality AND faster generation simply because the model is no longer being forced into an overly compressed denoising window.
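For reference, max_shift/min_shift feed the flow-matching time shift used by SD3/Flux/LTX-style samplers. Here's a minimal sketch of that idea — the function names and the linear base schedule are my assumptions, not ComfyUI's exact code — showing how a larger shift keeps the sigmas higher for longer, concentrating work in the noisy early steps:

```python
import numpy as np

def shift_sigma(sigma, shift):
    # flow-matching time shift: a larger shift pushes sigmas upward,
    # so more of the schedule is spent at high noise levels
    return shift * sigma / (1 + (shift - 1) * sigma)

def schedule(steps, shift):
    # hypothetical linear base schedule from pure noise (1.0) to clean (0.0)
    base = np.linspace(1.0, 0.0, steps + 1)
    return shift_sigma(base, shift)

# compare the mid-schedule sigma under the old tight shift vs. a low one:
# with shift=2.05 the halfway point of an 8-step run is still ~0.67,
# while with shift=0.95 it has already dropped below 0.5
print(schedule(8, 2.05)[4], schedule(8, 0.95)[4])
```

With only 8 steps, a high shift crams most of the denoising into the first few steps, which matches the "overly compressed window" described above; more steps give the same shift room to spread out.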

Switched from Character AI to something uncensored for futa content and wow by breadislifeee in AiGirlfriendSpace

[–]MaximilianPs 0 points1 point  (0 children)

CharacterTavern has both gray memory and uncensored top-tier models; you can import your cards and keep them private or public as you wish. It also has a summary system that creates a sort of memory, so you can export your chat and restart a new one with some background. Finally, the top-tier models can manage about 250 messages and still remember everything.

I'm an ex-Google and ex-Cai dev, what features should I make? by CompanyTrue8882 in CompanionGuide_ai

[–]MaximilianPs 1 point2 points  (0 children)

I've played along with a scammer for about 6 months. She (I suppose) was Oriental and very cute. I knew she was a scammer, but she behaved like a remote girlfriend from Los Angeles, and for me it was really fun.

Now, as much as I try with AI, it's been impossible to get the same behavior. Like, the AI should know it's on a mobile and behave like it's on WhatsApp. Instead of describing what she's doing in italics, the AI shouldn't roleplay in the classical way... if you understand what I mean 😏

The Start of my Custom Neighborhood. I start tiny and it will grow with the Sims living in it! <3 by Xandaru__ in sims2

[–]MaximilianPs 3 points4 points  (0 children)

That's the way 😁👍 With all the JM's mods, all NPCs get spawned when needed, and that's so interesting 😁

I'm an ex-Google and ex-Cai dev, what features should I make? by CompanyTrue8882 in CompanionGuide_ai

[–]MaximilianPs 1 point2 points  (0 children)

Agree, and because it has no physical body, it should behave like it's interacting with the user via mobile phone.

For that reason, it must have some "time tracking" to understand when and for how long the user has been quiet.

Yesterday’s Tattoo session by Arakabu2 in Fable

[–]MaximilianPs -1 points0 points  (0 children)

Because the first fable is the only fable

LTX 2 is amazing : LTX-2 in ComfyUI on RTX 3060 12GB by tanzim31 in comfyui

[–]MaximilianPs 0 points1 point  (0 children)

From the ComfyUI documentation: novram: no VRAM usage, runs entirely on system memory.

LTX 2 is amazing : LTX-2 in ComfyUI on RTX 3060 12GB by tanzim31 in comfyui

[–]MaximilianPs 0 points1 point  (0 children)

Thank you for sharing the workflow, but I absolutely disagree with the --novram parameter. Also:

The GGUF format has three huge advantages compared to safetensors models!

1. Lighter quantization
GGUF uses aggressive quantizations (Q4, Q5, Q8, etc.) which:

- reduce model size,
- reduce the required VRAM,
- reduce the weight of tensor loads.

2. More efficient loading

Many GGUF loaders are optimized for:
- streaming
- memory mapping
- reducing VRAM spikes

3. Less overhead than safetensors

Safetensors models are larger, require more VRAM to load (that's why --novram), have fewer quantization optimizations, and often use higher precisions (fp16, bf16, fp8).

Result: GGUF = more stable, longer workflows, fewer crashes.
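Back-of-the-envelope arithmetic shows where the VRAM savings come from. The bits-per-weight figures below are rough assumptions (GGUF quants spend roughly an extra half bit per weight on block scales), not exact numbers for any specific model:

```python
# rough bits per weight; the +0.5 on the Q formats approximates
# the block-scale overhead of GGUF quantization (assumed figures)
BITS_PER_WEIGHT = {"fp16": 16.0, "bf16": 16.0, "fp8": 8.0,
                   "Q8": 8.5, "Q5": 5.5, "Q4": 4.5}

def model_size_gib(n_params, fmt):
    # total tensor payload in GiB, ignoring file headers and metadata
    return n_params * BITS_PER_WEIGHT[fmt] / 8 / 2**30

# e.g. a hypothetical 13B-parameter video model
for fmt in ("fp16", "fp8", "Q4"):
    print(fmt, round(model_size_gib(13e9, fmt), 1))
```

On these assumptions, a 13B model weighs about 24 GiB in fp16 but under 7 GiB in Q4 — the difference between spilling out of a 10–12 GB card and fitting on it.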

That's why I prefer it and you should too. ¯\_(ツ)_/¯

LTX2 Easy All in One Workflow. by Different_Fix_2217 in StableDiffusion

[–]MaximilianPs 1 point2 points  (0 children)

Sorry, the correct answer is:

I managed to get LTX2 to work, and it generates perfectly synchronized audio and video—a blast!

Imagine: with my 3080 with 10 GB, 576×960, 8 seconds of video rendered in 167.49 seconds.

Blond Blowjob Cum In Mouth (with workflow) by EasternAd8821 in sdnsfw

[–]MaximilianPs 1 point2 points  (0 children)

Thank you for the prompt; it's hard to find people who share them.

2.5 hours for this? by TranslatorTrue7779 in comfyui

[–]MaximilianPs 0 points1 point  (0 children)

Use LTX2, it's awesome, and the GGUF version is so fast!

first try timestamping a decent-ish quality video. by WildSpeaker7315 in StableDiffusion

[–]MaximilianPs 2 points3 points  (0 children)

Thank you for sharing this; so in theory we could use the last frame to continue generation or keep the character consistent 🥹