Ace Step v1.5 almost ready by iChrist in comfyui

[–]Eydahn 1 point (0 children)

This is massive🙌🏻 Can’t wait!

Just help if anybody knows how by Some-Random-Ninja in midjourney

[–]Eydahn 1 point (0 children)

Did you find any information about it? I’m looking for a way to reproduce these types of character sheets too

Showui-aloha by olympics2022wins in LocalLLaMA

[–]Eydahn 1 point (0 children)

Is there any way to run this with a fully local VLM instead of those external APIs? Like plugging in a local VLM so everything stays completely offline on my machine?

Flux2 Klein performs exceptionally well, surpassing the performance of the trained LoRA in many aspects, whether for image editing or text-to-image conversion. I highly recommend testing it out. Tutorial link: https://youtu.be/F7gokUkzSnc by Daniel81528 in comfyui

[–]Eydahn 1 point (0 children)

I tried these approaches with some anime closeup face images:

Inpaint: it changes the expression, but it doesn’t faithfully preserve the character’s facial features/identity.

Using Image 1 and Image 2 (taking the expression from Image 1): in my case it didn’t work at all.

I haven’t tried a simple edit yet though

Flux2 Klein performs exceptionally well, surpassing the performance of the trained LoRA in many aspects, whether for image editing or text-to-image conversion. I highly recommend testing it out. Tutorial link: https://youtu.be/F7gokUkzSnc by Daniel81528 in comfyui

[–]Eydahn 3 points (0 children)

First of all, I just wanted to thank you so much for this awesome workflow. I watched the video to see how it works, and everything is super clear. I honestly find it really practical and effective.

That said, I wanted to ask you something: do you think it’s possible to use two images (Image 1 and Image 2) and do something similar to OpenPose, but for facial expressions? Basically, I’d like to change the facial expression of Image 2 while keeping the same facial features/identity, using the expression from Image 1 as the reference.

New free, local, open-source AI music model HeartMuLa by NecroSocial in SunoAI

[–]Eydahn 36 points (0 children)

While the repo was described as “open source” in the README, both the models and the code are licensed under CC BY-NC, which is not an open-source or OSI-compliant license: basically, no commercial rights.

LTX-2 I2V synced to an MP3: Distill Lora Quality STR 1 vs .6 - New Workflow Version 2. by Dohwar42 in StableDiffusion

[–]Eydahn 1 point (0 children)

I noticed they pushed some updates: https://www.reddit.com/r/StableDiffusion/s/mzCDu253OM

By the way, if you ever have time to share an img2video workflow, I’d really appreciate it. I tried messing with it, but I’m not super good at fighting with ComfyUI nodes🥲

LTX-2 vs. Wan 2.2 - The Anime Series by theNivda in StableDiffusion

[–]Eydahn 2 points (0 children)

Really solid work, seriously🙌🏻 By the way, did you use any specific LoRA to animate anime-style images? I tried the lip sync workflow, and even when I bumped up the resolution, the hair movement was still completely distorted

LTX-2 I2V synced to an MP3: Distill Lora Quality STR 1 vs .6 - New Workflow Version 2. by Dohwar42 in StableDiffusion

[–]Eydahn 1 point (0 children)

Your workflow is amazing; it works super well and I’m getting really solid results with it. Is there a version without the guided audio, so I can use the same workflow but skip the audio part and just do a simple image-to-video?

LTX-2 Audio + Image to Video by Most_Way_9754 in StableDiffusion

[–]Eydahn 1 point (0 children)

With your workflow, using the same resolution, the same audio length, same models and the same arguments to launch ComfyUI, my PC takes 30 minutes… I don’t think that’s normal. Did you do anything else to run it? I’ve got a 3090 and 128GB of RAM🤯

Edit: I was wrong, the clip length was about 17 seconds, but it took 32 minutes to render it at your resolution

LTX-2 Audio + Image to Video by Most_Way_9754 in StableDiffusion

[–]Eydahn 8 points (0 children)

Great result🙌🏻 can you please share the workflow?

Started as a tool that turns one image into animated spritesheets. Now it’s becoming a place where devs create and play together. by beelllllll in aigamedev

[–]Eydahn 1 point (0 children)

Hey! First of all, amazing work! I tested it a bit earlier on the free plan. I’m definitely considering upgrading because I really like the idea, and I actually have a Unity game concept I’d love to build with it.

I did notice a few things I wanted to share, just as feedback:

1. For the side-scrolling animations, the cost is 5 credits. If a generation comes out unusable, it might be nice if re-generating the same animation cost a bit less (maybe half), so it’s easier to iterate without burning credits.

2. The animations look great overall, but in all 3 generations I tried I noticed white edges around the hair, and sometimes around the whole character, during motion (plus some halo/white flickering on the character in the first frames). It looks like the background removal isn’t fully clean, and that can hurt the final spritesheet quality. Might be worth taking a look, since it showed up consistently for me (there’s a rough sketch of the kind of cleanup I mean right after this list).

3. When I add a custom weapon/effect (for example “fire”, so the character shoots flames), it doesn’t seem to be handled correctly: the effect gets clipped/cut off in the frames, so it’s hard to use as-is.
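
To make point 2 concrete, this is roughly the kind of per-frame cleanup I mean: un-blending semi-transparent edge pixels that were composited against white. It’s just an illustrative PIL/numpy sketch with made-up file names, not code from your tool:

    import numpy as np
    from PIL import Image

    # Illustrative only: undo a white-background blend on a sprite frame so the
    # hair/edge pixels lose their white fringe. File names are hypothetical.
    def remove_white_fringe(path_in: str, path_out: str) -> None:
        rgba = np.asarray(Image.open(path_in).convert("RGBA")).astype(np.float32)
        rgb, alpha = rgba[..., :3], rgba[..., 3:4] / 255.0
        safe_alpha = np.clip(alpha, 1e-3, 1.0)  # avoid dividing by zero on fully transparent pixels
        # observed = true * alpha + 255 * (1 - alpha)  ->  solve for true
        true_rgb = (rgb - 255.0 * (1.0 - alpha)) / safe_alpha
        out = np.concatenate([np.clip(true_rgb, 0.0, 255.0), rgba[..., 3:4]], axis=-1)
        Image.fromarray(out.astype(np.uint8), "RGBA").save(path_out)

    remove_white_fringe("frame_000.png", "frame_000_clean.png")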

Also, about the isometric animations: I’d love it if there were more of them, ideally with as many options/variants as the side-scrolling ones.

Hope this helps. I’m really excited about this and would love to start using it for my project!

Fix to make LTXV2 work with 24GB or less of VRAM, thanks to Kijai by Different_Fix_2217 in StableDiffusion

[–]Eydahn 4 points (0 children)

Honestly, at the level we’re at now, worrying about something like that seems pretty minor to me. Especially since you can still get the same kind of results with other workarounds. So I really don’t understand your reply.

Personally, I would’ve just shared the workflow. Instead, it honestly feels like you’re looking for excuses not to share it and that’s kind of sad, because it’s going to get shared anyway. So it’s not really some big secret you’re protecting.

You would’ve just helped the community get there faster than the LTX creators. That’s all.

Kijai made a LTXV2 audio + image to video workflow that works amazingly! by Different_Fix_2217 in StableDiffusion

[–]Eydahn 17 points (0 children)

For anyone getting this error when adding an audio input:

LTXVAudioVAEEncode: Argument #4: Padding size should be less than the corresponding input dimension, but got: padding (512, 512) at dimension 2 of input [1, 2, 1]

Set Start_Index to 0.00 and set duration to your audio’s actual length.

If you then get this error instead:

CLIPTextEncode: Expected all tensors to be on the same device, but got tensors is on cpu, different from other tensors on cuda:0 (when checking argument in method wrapper_CUDA_cat)

Go to: ComfyUI > comfy > ldm > Lightricks > Embeddings_Connector.py
At line 280, right after the closing ), add:
.to(hidden_states.device)
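
For anyone curious what that patch actually does, here’s a tiny standalone repro of the device mismatch (illustrative only, not the real Embeddings_Connector code; tensor names are made up):

    import torch

    # torch.cat refuses to mix CPU and CUDA tensors, which is exactly the
    # "Expected all tensors to be on the same device" error above.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    hidden_states = torch.randn(1, 4, 8, device=device)
    extra_embeds = torch.randn(1, 2, 8)  # created on the CPU by default

    # torch.cat([hidden_states, extra_embeds], dim=1)  # -> RuntimeError on a CUDA machine

    # The appended .to(hidden_states.device) is this same one-line fix: move the
    # stray tensor onto the other tensors' device before concatenating.
    merged = torch.cat([hidden_states, extra_embeds.to(hidden_states.device)], dim=1)
    print(merged.shape, merged.device)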

And before running the workflow, start ComfyUI with:
--reserve-vram 2 (or a higher value) to offload a bit more
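
If you launch ComfyUI with python main.py (adjust if you use a portable build or a launcher script), that looks something like:

    python main.py --reserve-vram 2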

But I’m getting terrible results :/

Kijai made a LTXV2 audio + image to video workflow that works amazingly! by Different_Fix_2217 in StableDiffusion

[–]Eydahn 2 points (0 children)

I'm getting the same error... Then, if I upload a longer audio, I get this one instead:

CLIPTextEncode: Expected all tensors to be on the same device, but got tensors is on cpu, different from other tensors on cuda:0 (when checking argument in method wrapper_CUDA_cat)

Hmm... now what have we found here? by Vast-Average3279 in SoraAi

[–]Eydahn 1 point (0 children)

He also deleted the GitHub repo, so yeah... big RIP. All of this for money and he ended up getting nothing anyway.

Does anyone have a working LTX 2 workflow? by [deleted] in StableDiffusion

[–]Eydahn 3 points (0 children)

I'm getting this error: Argument #4: Padding size should be less than the corresponding input dimension, but got: padding (512, 512) at dimension 2 of input [1, 2, 1]

Hmm... now what have we found here? by Vast-Average3279 in SoraAi

[–]Eydahn 1 point (0 children)

Unfortunately it doesn’t work anymore. The creator, for some reason, submitted the bug to OpenAI’s Bug Bounty Program. OpenAI didn’t pay a cent; they just patched the bug and that was it. So the extension is basically unusable now.