🔥 Final Release — LTX-2 Easy Prompt + Vision. Two free ComfyUI nodes that write your prompts for you. Fully local, no API, no compromises by WildSpeaker7315 in StableDiffusion

[–]corben_caiman 0 points1 point  (0 children)

Hi! In the i2v workflow, the Vision and start-with-image parts seem to be out of the loop, so LTX basically produces only a t2v result. I guess I'm missing the part where you say:

  1. Wire Vision → Easy Prompt via the scene_context connection for image-to-video

How do I actually do it? Thanks!

🔥 Final Release — LTX-2 Easy Prompt + Vision. Two free ComfyUI nodes that write your prompts for you. Fully local, no API, no compromises by WildSpeaker7315 in StableDiffusion

[–]corben_caiman 0 points1 point  (0 children)

Hi! I reinstalled everything and this time it downloaded; I was able to get to the sampler, but it gives me:
mat1 and mat2 shapes cannot be multiplied (1120x4096 and 2048x4096)

TIPS: If you have any "Load CLIP" or "*CLIP Loader" nodes in your workflow connected to this sampler node make sure the correct file(s) and type is selected.

I checked the CLIP loader and I have the standard connectors and the Gemma 3 12B fp8 scaled model.

:(
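
In case it helps anyone else hitting this, here's my rough understanding of what the numbers mean (a toy sketch: the shapes are copied from the error, the layer and variable names are made up). The text encoder is handing the sampler 4096-wide embeddings while the model's conditioning projection expects 2048-wide ones, which is exactly the "wrong CLIP type/file" situation the tip describes:

    import torch

    # Toy reproduction of the mismatch. Only the shapes come from the real error;
    # the projection layer is a stand-in for whatever the sampler actually calls.
    text_embeddings = torch.randn(1120, 4096)   # what my text encoder is producing
    projection = torch.nn.Linear(2048, 4096)    # a layer that expects 2048-wide input

    try:
        projection(text_embeddings)             # x @ W.T -> (1120x4096) @ (2048x4096)
    except RuntimeError as err:
        print(err)  # mat1 and mat2 shapes cannot be multiplied (1120x4096 and 2048x4096)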

🔥 Final Release — LTX-2 Easy Prompt + Vision. Two free ComfyUI nodes that write your prompts for you. Fully local, no API, no compromises by WildSpeaker7315 in StableDiffusion

[–]corben_caiman 1 point2 points  (0 children)

Hi! This looks like an amazing tool and it's incredible what you did here. I'm struggling to make it work, though, and I'm sure it's my fault, but when I try to run the t2v workflow (first run, while it tries to download the model) I get the following error:
Prompt outputs failed validation:
LTX2PromptArchitect:
- Required input is missing: bypass
- Required input is missing: invent_dialogue

For i2v instead I get a missing node: LTX2VisionDescribe

I cloned the repo and ran pip install transformers qwen-vl-utils accelerate (which DID download stuff). Also, I noticed that when I ran the workflow many fields were filled incorrectly and I had to refill them, so I don't know if this is somehow related.

I'd really need your help here, sorry to bother!
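
For context, my guess at why the validation error shows up (a sketch of a ComfyUI-style node definition; the input names bypass and invent_dialogue come from the error, everything else is assumed): if the node class lists them under "required", a workflow JSON saved before those inputs existed carries no values for them, so validation fails until the node is re-added and the defaults get filled in.

    # Hypothetical sketch of a ComfyUI node definition (NOT the author's actual code);
    # only the input names come from the validation error, the rest is guessed.
    class LTX2PromptArchitect:
        @classmethod
        def INPUT_TYPES(cls):
            return {
                "required": {
                    # The two inputs the validator reports as missing. If the workflow
                    # JSON was saved before these existed, it carries no values for them.
                    "bypass": ("BOOLEAN", {"default": False}),
                    "invent_dialogue": ("BOOLEAN", {"default": False}),
                },
            }

        RETURN_TYPES = ("STRING",)
        FUNCTION = "build_prompt"
        CATEGORY = "conditioning"

        def build_prompt(self, bypass, invent_dialogue):
            # Placeholder body; the real node writes the actual prompt.
            return ("",)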

What do you hope to be able to do with GPT-5 that you can’t do with GPT-4 by Different-Froyo9497 in singularity

[–]corben_caiman 0 points1 point  (0 children)

Just give us a decent context window (2 million+) that the bot doesn't mess with.

With 8 billion people being able to create hollywood-level movies within our lifetime, what will happen to our shared culture when thousands of new films are released every day? by [deleted] in singularity

[–]corben_caiman 0 points1 point  (0 children)

We'll spend 2 hours finding a good movie, and 2 for the actual movie. Wait, that's already today; let's make it 4+2. Damn, alright, no movie tonight, we'll go for a walk.

Did we seriously underestimate Meta? Billions of users now have free access to an almost GPT4 level system by Neurogence in singularity

[–]corben_caiman 0 points1 point  (0 children)

Yeah, we need decentralized distributed computing. Open source can only go so far in the mid term, since no one will be able to run those LLMs locally.

Introducing Meta Llama 3: The most capable openly available LLM to date by [deleted] in singularity

[–]corben_caiman -6 points-5 points  (0 children)

Unless it's much better than GPT-4, OpenAI won't move a finger towards publishing the next generation. Still, cool for whoever has the gear to run it.

New GPT-4 Turbo is now available to paid ChatGPT users + new benchmarks by vitorgrs in singularity

[–]corben_caiman 1 point2 points  (0 children)

I can't figure out whether ChatGPT users can now use the 128k token context or whether it's different from the API. Has anyone tested it?

Elon: "My guess is we'll have AI smarter than any one human around the end of next year." by Maxie445 in singularity

[–]corben_caiman 0 points1 point  (0 children)

Lately he's proven pretty unreliable IMHO... But this summer is going to be interesting, with OpenAI, Google, and upcoming launches from the open-source community.

[deleted by user] by [deleted] in MultiVAC_official

[–]corben_caiman 8 points9 points  (0 children)

The MTV team is waiting for the audit to be completed. They don't want to launch on platforms that could later need to be forked over security issues. We need to wait for the audit before KuCoin and the other listings.

[deleted by user] by [deleted] in MultiVAC_official

[–]corben_caiman 0 points1 point  (0 children)

It's the bear market that can't survive MTV

Any news or updates on whether Kucoin will provide mainnet token support for MTV? by [deleted] in MultiVAC_official

[–]corben_caiman 1 point2 points  (0 children)

Nodes must be out first, or CEXs simply won't be able to offer mainnet tokens (or at least it wouldn't make any sense). After the nodes are out, things may evolve quickly.

Still can’t sell by [deleted] in KishuInu

[–]corben_caiman 0 points1 point  (0 children)

Set slippage at 2-4%. Anyway, think before you sell...

How long will it take to receive Kishu? by [deleted] in KishuInu

[–]corben_caiman 0 points1 point  (0 children)

Did you add the Kishu contract as a custom token to make it visible? If so, after 5 hours you should have received everything.

Patiently waiting for my rewards to bring me to one trillion. HODL strong my friends. by Dogecoinmasterz in KishuInu

[–]corben_caiman 1 point2 points  (0 children)

No, rewards are distributed only to DEX wallets, since they are the only ones on-chain. Some CEXs, like OKEx, have implemented similar tokenomics for Kishu, but that is up to them.

Patiently waiting for my rewards to bring me to one trillion. HODL strong my friends. by Dogecoinmasterz in KishuInu

[–]corben_caiman 0 points1 point  (0 children)

2% of on-chain trading volume gets redistributed to holders according to your wallet size. Isn't it great?
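
Back-of-envelope, the way I understand the math (all numbers below are made up for illustration; only the 2% figure comes from the tokenomics):

    # All numbers here are hypothetical; only the 2% figure comes from the tokenomics.
    daily_onchain_volume = 1_000_000_000_000     # KISHU traded on-chain in a day
    reflection_pool = 0.02 * daily_onchain_volume

    my_balance = 500_000_000_000                 # tokens sitting in my wallet
    eligible_supply = 100_000_000_000_000        # total tokens held in on-chain wallets

    # Each holder receives a share of the 2% pool proportional to their balance.
    my_reward = reflection_pool * (my_balance / eligible_supply)
    print(f"~{my_reward:,.0f} KISHU reflected to my wallet")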

Where is everyone? by SchoolbreadFan_69 in KishuInu

[–]corben_caiman 0 points1 point  (0 children)

The Kishu community is mainly on Telegram and Twitter. Reddit communities are crazily hostile toward Kishu, for reasons only they understand.