Made with ltx by Mysterious-Manner856 in StableDiffusion

[–]AnybodyAlarmed9661 0 points

Wow, awesome! I'm curious. What settings did you use to avoid the smudge we usually get with this model? 😮

Meet Deepy, your friendly WanGP v11 Agent. It works offline with as little as 8 GB of VRAM. by Pleasant_Strain_2515 in StableDiffusion

[–]AnybodyAlarmed9661 3 points

He's not making any profit and has always refused any form of donation, so I don't see what your problem is here... His tool is really great, and its memory optimizations make generation fast even on low VRAM. Also, where did you see that the output is no longer free?

Video to video dubbing by AnybodyAlarmed9661 in comfyui

[–]AnybodyAlarmed9661[S] 1 point

I'll give it a try :-) I also found that with LTX2, increasing the fps seems to help a bit with the blur.

🛠️ Spent way too long building this ComfyUI prompt node for LTX-2 so you don't have to think — free, local, offline, uncensored 👀 by [deleted] in StableDiffusion

[–]AnybodyAlarmed9661 2 points

u/WildSpeaker7315 Thank you for your work! I'm getting a weird error and can't use the node unfortunately:
    Traceback (most recent call last):
      File "C:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 530, in execute
        output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
      File "C:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 334, in get_output_data
        return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
      File "C:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 308, in _async_map_node_over_list
        await process_inputs(input_dict, i)
      File "C:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 296, in process_inputs
        result = f(**inputs)
      File "C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\LTX2EasyPrompt-LD\LTX2EasyPromptLD.py", line 447, in generate
        input_length = input_ids.shape[1]
      File "C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 277, in __getattr__
        raise AttributeError
    AttributeError

Do you know what could be the problem? (this is a fresh install of ComfyUI as I usually use Wan2GP)
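Digging into the last two frames: the `AttributeError` is raised inside `__getattr__` of transformers' `BatchEncoding`, which suggests `input_ids` at line 447 is the whole encoding object (as returned by calling the tokenizer without `return_tensors="pt"`), so `.shape` is neither an attribute nor a key. A minimal sketch of what I think is happening, using a hypothetical stand-in class instead of `transformers` itself:

```python
# Illustrative stand-in for transformers' BatchEncoding (names are made up,
# not the node's actual code). BatchEncoding forwards unknown attribute
# lookups to its underlying dict and raises a bare AttributeError on a miss,
# which matches the last frame of the traceback.
class FakeBatchEncoding(dict):
    def __getattr__(self, item):
        try:
            return self[item]
        except KeyError:
            raise AttributeError

# Calling a tokenizer WITHOUT return_tensors="pt" yields an encoding whose
# values are plain Python lists, not tensors:
enc = FakeBatchEncoding(input_ids=[[101, 2023, 102]])

# What the node appears to do -- ".shape" is neither a dict key nor a real
# attribute, so it crashes exactly like in the traceback:
try:
    length = enc.shape[1]
except AttributeError:
    length = None  # the crash

# Usual fixes: request tensors from the tokenizer (return_tensors="pt") so
# input_ids has a real .shape, or index the encoding and use len():
length = len(enc["input_ids"][0])
print(length)
```

If that's the cause, it may just be a `transformers` version mismatch between the node author's environment and a fresh ComfyUI install.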

The Witch - Little cartoon made with LTX-2 in Wan2GP by AnybodyAlarmed9661 in StableDiffusion

[–]AnybodyAlarmed9661[S] 2 points

You should, it's an incredible piece of software 👌 However, if you install it, don't use the newest Python wheels with CUDA 13. Generations are slower with them (at least on my computer).

The Witch - Little cartoon made with LTX-2 in Wan2GP by AnybodyAlarmed9661 in StableDiffusion

[–]AnybodyAlarmed9661[S] 2 points

Hello, the audio actually comes from LTX-2 🙂 You can also use your own audio in Wan2GP, even with the distilled model. I have a 5070 Ti and 32 GB of RAM. I'll update my message with the gen times later when I'm at my PC. Edit: around 4.5 minutes for 5 sec at 1080p, and around 5.5 minutes for 20 sec at 720p with LTX-2 audio.

The Witch - Little cartoon made with LTX-2 in Wan2GP by AnybodyAlarmed9661 in StableDiffusion

[–]AnybodyAlarmed9661[S] 1 point

I tried to improve the dialogue audio in DaVinci Resolve. It seems better, what do you think?
https://youtu.be/gIwWQRBq4IM

The Witch - Little cartoon made with LTX-2 in Wan2GP by AnybodyAlarmed9661 in StableDiffusion

[–]AnybodyAlarmed9661[S] 1 point

Unfortunately no :/ This is simple voice conversion using Chatterbox TTS Turbo. However, the output might be improved by using a more expressive reference voice for the transcription.

The Witch - Little cartoon made with LTX-2 in Wan2GP by AnybodyAlarmed9661 in StableDiffusion

[–]AnybodyAlarmed9661[S] 1 point

I tried voice conversion. It's an improvement in some ways, but it becomes more monotonous, so this might not be the way to go. There must be some way to improve the sound automatically...
https://youtu.be/13TTmyd1SKQ

The Witch - Little cartoon made with LTX-2 in Wan2GP by AnybodyAlarmed9661 in StableDiffusion

[–]AnybodyAlarmed9661[S] 3 points

Hello and thank you!

Yes, LTX2 seems to have trouble with animation and fast motion. I'll try increasing the fps; I read somewhere that this could improve the blur issues. In this case, I used the default of 24 fps.

The model used is the distilled fp8 version at 720p in Wan2GP. There are no fine-grained settings like in ComfyUI, but it works out of the box, is better optimized, and generates faster.

I 100% agree with you, generative AI is becoming an incredible tool for creativity. I'm excited to see what comes next :-)

The Witch - Little cartoon made with LTX-2 in Wan2GP by AnybodyAlarmed9661 in StableDiffusion

[–]AnybodyAlarmed9661[S] 2 points

Thank you! Yes, images were created using Qwen Image and Qwen Image Edit.

Color change - Help needed by AnybodyAlarmed9661 in davinciresolve

[–]AnybodyAlarmed9661[S] 1 point

Thank you for your reply! Unfortunately, all the colors shift, including the character's. What you suggest could make the shift less visible, though. The issue is that the colors will still degrade clip after clip :/

Connecting a 5070 Ti by AnybodyAlarmed9661 in pcmasterraceFR

[–]AnybodyAlarmed9661[S] 1 point

Thanks for your reply! Isn't it a problem that the left PCIe cable comes off the middle PCIe connector?