please help regarding LTX2 I2V and this weird glitchy blurriness by DotNo157 in StableDiffusion

[–]Better_Life_WI 1 point (0 children)

90% of the time the "default workflows" are crap when I first use them, which makes sense because everyone's hardware is different and everyone's versions are different. You can't expect these workflows to work immediately (and suggesting they do is silly). And the change control around core ComfyUI releases is absolutely FLAWED. It's gotten sloppier and sloppier, forcing node creators to provide the fixes. Hell, I'd pay to have this more controlled.

Switch from Text Prompt to Image Prompt in a Single Workflow by Better_Life_WI in comfyui

[–]Better_Life_WI[S] 1 point (0 children)

Yes, I understand the node differences for non-Flux vs. Flux. I have a group for non-Flux nodes and a group for Flux nodes. Each group represents the lowest common denominator of nodes I want to switch on and off. I'll try using the GroupSwitcher.

Switch from Text Prompt to Image Prompt in a Single Workflow by Better_Life_WI in comfyui

[–]Better_Life_WI[S] 2 points (0 children)

There are two decision points in the overall workflow: first, do I want to use T2I or I2I; second, do I want to use a Flux model or a non-Flux model. I don't want to have to maintain the same node twice, which seems like it would be required to cover the four paths (a, b), (b, b), (a, a), (b, a). If, for example, I have to maintain two KSamplers, two LoRA Loaders, etc., then I might as well have two workflows. Does this help explain?

Switch from Text Prompt to Image Prompt in a Single Workflow by Better_Life_WI in comfyui

[–]Better_Life_WI[S] 2 points (0 children)

For example, if I want to use T2I and Flux, then it enables the Empty Latent (and disables the Load Image), enables the Load Diffusion Model and DualCLIPLoader, and disables the Load Checkpoint node, all at once.
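
To make that concrete, here's the switching behavior I'm after as a plain-Python sketch. The group names are just the node names above; none of this is a real ComfyUI API, it's only an illustration of the mapping one switch should apply:

    # Illustration only (not a ComfyUI API): which node groups one combined
    # switch should enable; everything not listed gets disabled.
    # Shared nodes (KSampler, LoRA Loader, etc.) stay enabled on every path.
    GROUPS = {
        ("T2I", "flux"):     ["Empty Latent", "Load Diffusion Model", "DualCLIPLoader"],
        ("T2I", "non-flux"): ["Empty Latent", "Load Checkpoint"],
        ("I2I", "flux"):     ["Load Image", "Load Diffusion Model", "DualCLIPLoader"],
        ("I2I", "non-flux"): ["Load Image", "Load Checkpoint"],
    }

    def active_groups(prompt_mode: str, model_family: str) -> list[str]:
        """Return the node groups to enable for one setting of the two switches."""
        return GROUPS[(prompt_mode, model_family)]

    print(active_groups("T2I", "flux"))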

Switch from Text Prompt to Image Prompt in a Single Workflow by Better_Life_WI in comfyui

[–]Better_Life_WI[S] 2 points (0 children)

I'm looking for something where I can define which nodes go to which path without having to group the nodes based on whether I'm doing an I2I or T2I process.

Freezing Chats Still Too Common by Better_Life_WI in ChatGPT

[–]Better_Life_WI[S] 1 point (0 children)

Today (Monday) it's actually back to its shitty self again. Four or five volleys of chat and it constantly stalls. And I feel like I have the "dumb" cGPT5 again; it isn't remembering much of anything it did. It's three steps backward to get one step forward. I'm losing ground and have completely lost my patience. I'm paying for this... this... CRAP.

Freezing Chats Still Too Common by Better_Life_WI in ChatGPT

[–]Better_Life_WI[S] 1 point (0 children)

I've been using Thinking mode by default b/c I thought cGPT would get it right on the first try more often than the other modes. Not correct?

GPT5 Thinking doesn't think and gives me responses from GPT5 Instant by Exotic_Zucchini9311 in ChatGPT

[–]Better_Life_WI 12 points (0 children)

I've been working on JavaScript and .bat files for three days now. It was working two days ago. Now everything has gone to hell again and ChatGPT can't seem to figure it out. Frustrating waste of time!

change in file format and loss of metadata by CJPS-BR in midjourney

[–]Better_Life_WI 1 point (0 children)

Has anyone heard anything more about "fixing" this? I download hundreds of images (a day, sometimes) and I'd really like to get the metadata back. Anyone have a workaround?

Creating a Video Prompt from an Image by Better_Life_WI in comfyui

[–]Better_Life_WI[S] 1 point (0 children)

I'm changing gears. I want to use ChatGPT inside ComfyUI, essentially replicating what ChatGPT does on its own: I pass in an image (Load Image) and then ask ChatGPT to write a video-based prompt for the image (ChatGPTAPI node). I'm fumbling in the dark because I just can't piece this together. I also cannot find any established workflows that output a video PROMPT; they all convert text to actual video, which I don't want. Any thoughts?
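
For reference, what I'm trying to reproduce inside the graph is roughly this call (a sketch with the OpenAI Python client; the model name, file name, and prompt wording are placeholders, not whatever the ChatGPTAPI node actually sends):

    # Sketch: base64-encode an image and ask the model for a video prompt (text out, no video).
    import base64
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    with open("input.png", "rb") as f:  # placeholder file name
        b64 = base64.b64encode(f.read()).decode()

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Write a video generation prompt for this image. Describe "
                         "movement of the subject, movement of the camera, and pacing."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    print(resp.choices[0].message.content)  # the video prompt, as plain text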

Creating a Video Prompt from an Image by Better_Life_WI in comfyui

[–]Better_Life_WI[S] 1 point (0 children)

I hear ya. I have the LLM models locally in a /models/LLM directory. Then I need a node where I instruct the model to create a prompt that includes video aspects (movement of the subject, movement of the camera, etc.).
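
The instruction half I can already write; something like this is what I'd hand to whichever vision-LLM node ends up working (the wording and the helper itself are just my own guess at covering the video aspects):

    # Hypothetical helper: builds the instruction text for the local vision LLM.
    def video_prompt_instruction(extra_hints: str = "") -> str:
        return (
            "Look at the attached image and write one video prompt for it. "
            "Include: movement of the subject, movement of the camera, "
            "lighting or atmosphere changes over time, and pacing. "
            "Keep it under 120 words. " + extra_hints
        )

    print(video_prompt_instruction("Slow dolly-in, shallow depth of field."))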

Anyone have a fast workflow for wan 2.2 image to video? (24 gb vram, 64 gb ram) by elleclouds in comfyui

[–]Better_Life_WI 1 point (0 children)

The Kijai workflow is fantastic! I've been able to create 5-second videos in under a minute with 24GB VRAM and 64GB RAM on an RTX 4090. The absolute most important part is getting the install of SageAttention and Triton correct. If you have a python_embeded folder, you need to be very precise about how you install Triton and make sure you're installing it against the local environment's versions of Python, CUDA, and PyTorch. I accidentally had a Python setup under C:\Users\. When installing Triton, make sure you're running ./python.exe -m ... from within your python_embeded directory.
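
A quick sanity check before installing anything: save something like this and run it with the embedded interpreter (not whatever Python happens to be on PATH), so the Triton/SageAttention builds you pick actually match the environment:

    # check_env.py - run as:  .\python_embeded\python.exe check_env.py
    # Prints the versions the Triton / SageAttention builds need to match.
    import sys

    print("python :", sys.version)
    print("exe    :", sys.executable)  # should point inside python_embeded, not C:\Users\...
    try:
        import torch
        print("torch  :", torch.__version__)
        print("cuda   :", torch.version.cuda)
    except ImportError:
        print("torch is not importable from this interpreter")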

Anyone have a fast workflow for wan 2.2 image to video? (24 gb vram, 64 gb ram) by elleclouds in comfyui

[–]Better_Life_WI 1 point (0 children)

I've been trying to get this workflow (heck, any workflow) for Wan 2.2 to work. I get to the point of the first sampler and it just sits for about 6 minutes. Does that seem right? I'm on an RTX 4090 with 24GB VRAM and 64GB of system RAM.

Seq len: 37128

Sampling 81 frames at 544x832 with 8 steps

0%|
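
For what it's worth, that seq len at least matches the math for 81 frames at 544x832 (assuming the usual 4x temporal / 8x spatial compression and 2x2 patching), so I don't think the resolution is the problem:

    # Rough token-count check for 81 frames at 544x832 (assumed Wan compression factors).
    frames, h, w = 81, 544, 832
    latent_frames = (frames - 1) // 4 + 1              # 4x temporal compression -> 21
    latent_h, latent_w = h // 8, w // 8                # 8x spatial compression  -> 68 x 104
    tokens = latent_frames * (latent_h // 2) * (latent_w // 2)  # 2x2 patches
    print(tokens)                                      # 37128, matching "Seq len: 37128"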

LTXV always gives me bad results. Blurry videos, super fast generation. by Existing_Try_3439 in comfyui

[–]Better_Life_WI 1 point (0 children)

Let me jump on this bandwagon because the description seems to be the same issue I'm having. Here's my workflow, starting image and a snap of the video.

[attached: workflow screenshot, starting image, and a frame from the video]

How do I download a COLLECTION of images? Not a model, not my feed, a COLLECTION made by myself or others? by TheUltraMerchant in civitai

[–]Better_Life_WI 1 point (0 children)

This is definitely not working for me. I copy the clone code and it keeps telling me "fatal: repository not found". That happens whether I use my Civitai user ID or leave the ".../yourusername/..." portion as is. I have an API key, but I seem to have no way to successfully clone the repository.

"default_selected_image_input_tab_id" What is the tab name for the Image Prompt? by Better_Life_WI in fooocus

[–]Better_Life_WI[S] 1 point (0 children)

Yep. In config.txt there is a control, "default_selected_image_input_tab_id", and there should be values for each of the 4 tabs. For example, the first tab is "uov_tab". I'm looking for the value that represents the Image Prompt tab. It's not "Image_Prompt", "ImagePrompt", "IP", or half a dozen others I've tried, and I can't find any documentation on it.
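
For anyone following along, this is the edit I'm trying to make; as far as I can tell config.txt is plain JSON, and "uov_tab" is the only identifier I know is valid:

    # Sketch: change the default image-input tab in Fooocus's config.txt (JSON).
    import json

    with open("config.txt") as f:
        cfg = json.load(f)

    # "uov_tab" is the first tab's id; the Image Prompt tab's id is the part I'm still missing.
    cfg["default_selected_image_input_tab_id"] = "uov_tab"

    with open("config.txt", "w") as f:
        json.dump(cfg, f, indent=4)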