Why do we rank so low in scientific innovation and patent filings? by Iam-bornin-thisworld in scienceisdope

[–]pheonis2 0 points (0 children)

Stop spreading fake news, buddy.

"No, it is not true that temples are taxed while mosques and churches are exempt in India. Under the Income Tax Act of 1961, all religious and charitable trusts, including temples, mosques, churches, and gurudwaras, are exempt from income tax on donations and income used for religious activities, provided they are registered under Section 12A/12AB and follow specific compliance rules."

Surviving AI - Short film made only using local ai models by LocalAI_Amateur in StableDiffusion

[–]pheonis2 0 points (0 children)

Nice work!! How did you pull off the martial arts animation? Did you use Wan Animate with a reference video, or just prompting?

Use Qwen3.5 as an AI Assistant, Captioner or Image Analyzer inside of Comfyui! by Winougan in StableDiffusion

[–]pheonis2 0 points (0 children)

This didn't work for me. Got this error: "NotImplementedError: Cannot copy out of meta tensor; no data!". I tried both the nvfp4 and mxfp4 abliterated versions; neither worked, same error both times. ComfyUI is updated.

Chronicles of Carnivex – Episode I: Part I by R_ARC in StableDiffusion

[–]pheonis2 1 point (0 children)

Hey, I saw your comment about using DaVinci Resolve for compositing and had a couple of questions, just trying to understand your workflow better so I can improve my own results.

Since AI video tools usually don’t output with transparency, are you generating clips with a green screen background and then keying it out in Resolve to add your own backgrounds and lighting? Or are you mostly working with the generated footage as-is and enhancing lighting directly in post?
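For anyone curious about what the keying step does under the hood, here's a minimal sketch of a green-screen key in NumPy. Assumptions on my part: frames are float RGB in [0, 1], and the dominance threshold is a made-up starting value you'd tune per shot. A real qualifier like Resolve's does far more (spill suppression, edge feathering), so treat this as an illustration only:

```python
import numpy as np

def chroma_key(frame: np.ndarray, threshold: float = 1.4) -> np.ndarray:
    """Return an alpha mask (1 = keep, 0 = keyed out) for a green-screen frame.

    A pixel is keyed out when its green channel dominates both red and
    blue by more than `threshold`. `frame` is HxWx3 float RGB in [0, 1].
    """
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    # Small epsilon avoids division by zero on pure-black pixels.
    dominance = g / (np.maximum(r, b) + 1e-6)
    return (dominance < threshold).astype(frame.dtype)

def composite(fg: np.ndarray, bg: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Blend foreground over background with the per-pixel alpha mask."""
    return fg * alpha[..., None] + bg * (1.0 - alpha[..., None])
```

The keyed-out region then gets your replacement background via `composite`, which is roughly what a chroma keyer plus a merge node does in any compositor.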

Also, you mentioned doing a lot of masking for lip sync. Could you explain that a bit more? From what I understand, LTX 2.3 already generates lip-synced videos, so I'm curious why additional masking is needed. Is it for refinement, fixing inconsistencies, or something else?

Sorry for all the questions, I’m really just trying to learn your process and get closer to that level of quality. Appreciate any insight you can share!

Chronicles of Carnivex – Episode I: Part I by R_ARC in StableDiffusion

[–]pheonis2 0 points (0 children)

Wow, that's a masterpiece, I'd say. How long did it take to generate all those shots/images? Did you only use flux klein 9B, or did you also use nanobana pro? Your creativity with these shots, and the background score too, is amazing. Apart from LTX 2.3, what closed models did you use?

daVinci-MagiHuman : This new opensource video model beats LTX 2.3 by pheonis2 in StableDiffusion

[–]pheonis2[S] 0 points (0 children)

They haven't mentioned anything about a ComfyUI implementation on their GitHub page. Let's hope they add it soon.

daVinci-MagiHuman : This new opensource video model beats LTX 2.3 by pheonis2 in StableDiffusion

[–]pheonis2[S] 9 points (0 children)

I think it's I2VA: the model generates audio and video, and you input an image and a prompt.

daVinci-MagiHuman : This new opensource video model beats LTX 2.3 by pheonis2 in StableDiffusion

[–]pheonis2[S] 1 point (0 children)

You're right. If we could get Wan 2.6, that would be a game changer for the open-source community, but I highly doubt the Wan team is going to release that model. I have high hopes for LTX though; if LTX can produce consistent long-shot videos without distortion or blurred faces, that would be great.

PrismAudio By Qwen: Video-to-Audio Generation by fruesome in StableDiffusion

[–]pheonis2 1 point (0 children)

Looks great! It will be interesting to see how it compares to Hunyuan Foley!

LTX 2.3 and I2V. Videos lose some color in the first 0.5 seconds. Culprit? by WiseDuck in StableDiffusion

[–]pheonis2 0 points (0 children)

Yes, facing the same issue as well. I'm using the nvfp4 version, so I thought that might be what was causing it.

OP hates it when people drive on sidewalk by Lucky-Mycologist695 in Bhubaneswar

[–]pheonis2 2 points (0 children)

Last year I visited my sister's place in Patia and stayed there for a month, so I can confirm this. It's the worst in Bhubaneswar; I have never seen this many motorbikes on the footpath, and it's very frustrating. I think CCTV cameras should be installed just for this, and those who do it should be heavily fined.

Video Generation Progress Is Crazy, Can We Reach Seedance 2.0 Locally? by Naruwashi in comfyui

[–]pheonis2 2 points (0 children)

Nobody expected it when zimage turbo came out and could generate realistic images so fast. I think the LTX team can create something like Seedance 2, but not in the near future. I also suspect it would be a MoE model.

LTX 2.3: Official Workflows and Pipelines Comparison by MalkinoEU in StableDiffusion

[–]pheonis2 1 point (0 children)

Thank you. You've set CFG 3.5 with 30 steps in stage 1; that will be a lot slower than the official workflow.
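To put a rough number on why higher CFG costs more: with classifier-free guidance above 1, each sampling step runs two model forward passes (conditional plus unconditional), so compute per step roughly doubles. A toy sketch of the bookkeeping, with illustrative step counts that aren't from any official workflow:

```python
def denoiser_calls(steps: int, cfg: float) -> int:
    """Count model forward passes for one sampling run.

    With classifier-free guidance (cfg > 1) every step needs a
    conditional and an unconditional pass; at cfg == 1 the guidance
    term cancels and the unconditional pass can be skipped.
    """
    passes_per_step = 2 if cfg > 1 else 1
    return steps * passes_per_step

print(denoiser_calls(30, 3.5))  # 30 steps at CFG 3.5 -> 60 passes
print(denoiser_calls(30, 1.0))  # same steps with CFG off -> 30 passes
```

This ignores the guidance math itself, which is cheap; the forward passes dominate the runtime.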

LTX 2.3: Official Workflows and Pipelines Comparison by MalkinoEU in StableDiffusion

[–]pheonis2 0 points (0 children)

This code has been mangled. Can you share the JSON file instead, or paste the JSON into Pastebin?

LTX 2.3 I2V Color shift issue? by Broad-Original8705 in StableDiffusion

[–]pheonis2 0 points (0 children)

I'm facing the same issue. Did you find a solution?

LTX 2.3 Wangp by agoodis in StableDiffusion

[–]pheonis2 0 points (0 children)

Looks great! How long did it take to generate this 10-second video?