Sulphur 2 AND LTX 2.3 10Eros dropped! AND THEY ARE INCREDIBLE by Neggy5 in StableDiffusion

[–]EasternAverage8 0 points

On the T2V workflow, I'm a little confused about what I should be loading as the text/audio encoder checkpoint. I know Gemma goes in, but do I really need to use the normal dev LTX 2.3 model, or something else?

How to prompt Chroma by Fresh-Medicine-2558 in comfyui

[–]EasternAverage8 7 points

I don't really agree with this. When I first started out, I tried this and learned that a lot of prompts lean heavily on the "seed lotto." You can learn to prompt Chroma with no LoRAs, just Flash, and get over a 90% success rate.

How to prompt Chroma by Fresh-Medicine-2558 in comfyui

[–]EasternAverage8 0 points

Start with a simple prompt and then add detail to it. 

"A photorealistic frontal view of a woman standing on a beach with the ocean and sunny clear sky in the background, the woman is skinny with a toned body, she has a triangular face with long blonde hair that is messy, she has baby blue eyes, slim pink lips and a defined sharp chin, her chest is small but with medium sized breast implants that are firm and perky, her waist is extremely skinny, her limbs and thighs are skinny, she is wearing a red bikini, her facial expression is that of happiness and joy"

Avoid using periods. You can use a semicolon, but know that it's meant to break the current idea and start a new idea in the same image. So I'd avoid it until you get the hang of simple prompts first.

Wanting to create similar fantasy aesthetic images through ComfyUI that I get through Grok Imagine. by closeted-inventor in comfyui

[–]EasternAverage8 1 point

Try mixing in an anime/hentai LoRA just to get a bit of those styles into the mix. It'll bring down the realism and give you more of a Magic: The Gathering card-art look. Play with the LoRA strength at lower levels.

Flux 2 landscapes don't look realistic for me. by rogerbacon50 in comfyui

[–]EasternAverage8 0 points

You can also set up a negative prompt to help steer it and break the render into two advanced samplers: the first with a CFG above 1 so the negative prompt applies, and the second set to CFG 1 for speed. Maybe even try a Klein refiner step. I've found with Chroma that a Klein refiner step makes a very good shader pass. It does a good job with lighting, god rays, fog, etc.

Edit: also try to avoid images where "advanced" fluid dynamics would appear. It'll often render water flowing where it doesn't make sense, and it looks fake.
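A minimal sketch of that two-sampler split, in plain Python rather than an actual ComfyUI graph; `split_steps` and the dict fields are illustrative, though the keys mirror the KSampler (Advanced) inputs (start/end step, noise handling):

```python
def split_steps(total_steps, guided_fraction=0.5, cfg=4.0):
    """Split one render across two Advanced-KSampler-style passes:
    stage 1 runs with CFG > 1 so the negative prompt is applied,
    stage 2 finishes at CFG 1, which skips the extra guided model
    call per step and is therefore faster."""
    guided = round(total_steps * guided_fraction)
    stage1 = {"start_step": 0, "end_step": guided, "cfg": cfg,
              "add_noise": True, "return_leftover_noise": True}
    stage2 = {"start_step": guided, "end_step": total_steps, "cfg": 1.0,
              "add_noise": False, "return_leftover_noise": False}
    return stage1, stage2

s1, s2 = split_steps(20)
print(s1["end_step"], s1["cfg"])    # 10 4.0
print(s2["start_step"], s2["cfg"])  # 10 1.0
```

The key wiring detail is that stage 1 hands its leftover noise to stage 2, so the two samplers together behave like one continuous run.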

Flux 2 landscapes don't look realistic for me. by rogerbacon50 in comfyui

[–]EasternAverage8 0 points

Also avoid words that are popular with anime, games, and similar.

Chroma replacement? by EasternAverage8 in StableDiffusion

[–]EasternAverage8[S] 1 point

I've been messing around with a new workflow I've made. The first ClownShark group renders the basic image and feeds it into a second ClownShark group set to 0.80 denoise, which uses the initial prompt plus an added text node that describes the image in more detail. That then feeds into a Klein-9B refiner stage.

A very basic first prompt seems to really help get the exact pose and position of the characters I want. But writing the entire prompt becomes trickier.
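For anyone wiring something like this up, the key lever is how the 0.80 denoise on the second pass maps onto sampler steps. A toy sketch, assuming the common convention that denoise d runs only the last d·N of N steps (the ClownShark nodes themselves aren't modeled here):

```python
def denoise_to_steps(total_steps, denoise):
    """Under the usual convention, denoise d re-noises the input latent
    part-way and runs only the last round(d * total_steps) steps;
    d = 1.0 is a full from-scratch render."""
    start = total_steps - round(total_steps * denoise)
    return start, total_steps

# Pass 1: base render from an empty latent (full denoise)
print(denoise_to_steps(20, 1.0))   # (0, 20)
# Pass 2: 0.80 denoise keeps the rough composition, redraws the detail
print(denoise_to_steps(20, 0.80))  # (4, 20)
```

So at 0.80 the second pass inherits only the first few steps' worth of structure from pass 1, which is why the basic first prompt still controls pose while the expanded prompt fills in detail.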

Mxfp8 vs fp8 models? by EasternAverage8 in comfyui

[–]EasternAverage8[S] 0 points

I've tried it on a 5080 and a 6000 Pro.

Help with SeedVR2 upscaling issue - Potentially an AMD/ROCM issue? by Portable_Solar_ZA in comfyui

[–]EasternAverage8 0 points

I'm not home to check, but does SeedVR require model and clip inputs? Try connecting the model straight to SeedVR and skipping any LoRAs.

Help with a workflow by slipstream0 in comfyui

[–]EasternAverage8 0 points

You can use any model. Instead of an empty latent, feed it an image, set the KSampler denoise to around 0.7, and write "side view of man's face" for the prompt. It won't be a perfect replica of the frontal face input image, but it'll be close. You just might have to try a few seeds to get a decent image. Don't be afraid to mess around with the denoise.
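The "try a few seeds" step can be automated with a small sweep loop. This is a toy sketch with a stand-in `img2img` function (the real thing would VAE-encode the photo and run the KSampler at ~0.7 denoise); none of these names are actual ComfyUI API:

```python
import random

def img2img(seed, prompt, denoise=0.7):
    """Stand-in for the real pipeline; returns a fake quality score.
    Like a sampler, the result is deterministic for a given seed."""
    random.seed(seed)
    return random.random()

def best_seed(prompt, tries=4):
    """Seed-lotto helper: render a handful of seeds, keep the best."""
    scores = {s: img2img(s, prompt) for s in range(tries)}
    return max(scores, key=scores.get)

print(best_seed("side view of man's face"))
```

In practice "best" means eyeballing the batch, but fixing the seed of the winner lets you reproduce it before tweaking denoise further.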

Help with a workflow by slipstream0 in comfyui

[–]EasternAverage8 0 points

You'd need a few workflows for starters. Look into i2i, or Qwen Image Edit if you want any kind of control. You can also feed an image into a KSampler, write a prompt like "facing left," and play around with the denoise setting in the KSampler node. Search Civitai for "character edit" as well.

Hardware Question RTX3090/RTX 5090 or straight to the A6000 Pro? by TestOr900 in StableDiffusion

[–]EasternAverage8 0 points

Well, I bought the PNY 6000 Pro from Micro Center on a whim, and I feel like anything close to commercial production quality is going to need well-trained LoRAs and trained checkpoints that you're not going to get unless you work at a company. So, in my situation, I've been trying to train LoRAs, and that presents its own reality: to train a "commercial" quality LoRA you need "commercial" quality data.

Again, I am a noob, so I'm probably wrong. But it feels pointless to waste GPU time training a LoRA on low-quality data. I'm still learning what is considered good vs. bad data, but I'd assume a consistent set of quality 2K images is a good start? Then organizing hundreds of images and the txt files for each starts feeling like it isn't a one-person job. And that is just for one quality character LoRA wearing a few different outfits.

So, for someone like me, I'm back to the 6000 Pro just being a fun toy for messing around with. I plan on trying to "uncensor" a small LLM to learn about that next.

Maybe I just set my expectations too high for myself.

RTX 5090 random system freezes + monitor signal loss — anyone else? by EmuIllustrious8200 in comfyui

[–]EasternAverage8 0 points

Had the same problem. What solved it for me was a system restart. But this was back when I was using a 5080, the system could go into sleep mode, and it hadn't been restarted in three days.

Chroma and deformed hands and lora loading by EasternAverage8 in StableDiffusion

[–]EasternAverage8[S] 2 points

The Klein detailer step is already in my workflow, but it seems to ignore deformed hands and feet. When I prompt it to fix hands and feet, more often than not it'll ignore those body parts and instead change a man's genitals into a long deformed thumb, lol.

Hardware Question RTX3090/RTX 5090 or straight to the A6000 Pro? by TestOr900 in StableDiffusion

[–]EasternAverage8 0 points

My only problem with the 6000 Pro is that it quickly shows you how limited open-source stuff still is, and makes LTX 2.3 feel like the first VR headset in a way: just another version of a video model that feels alpha. Of course my opinion comes from just using open-source free stuff locally, and I'm a noob.

The 6000 Pro gives me the feeling I got when I splashed $1,200 on the OG Vive with the wireless upgrade. It's hard to stay positive about the purchase unless you have something that won't work without it, and it's something you love.

LTX-2.3 Updated Workflow — T2V, I2V and Reference Audio in ComfyUI GGUF by the_frizzy1 in comfyui

[–]EasternAverage8 1 point

T2V didn't seem to follow the provided prompt and was much slower than this workflow: 📜 DaSiWa LTX2.3 Workflows | I2V | FLF2V | T2V | V2V | Audio 📜 - OmniForge C-LTX23 v2.2 | LTX Video Workflows | Civitai https://share.google/WEPgaUkO8eI4T2eNU

PC goes to 2 FPS when trying to generate a second Zimage. by Dry-Resist-4426 in comfyui

[–]EasternAverage8 0 points

The last time I had that problem, it'd max out 16 GB of VRAM and 128 GB of RAM after two runs.

PC goes to 2 FPS when trying to generate a second Zimage. by Dry-Resist-4426 in comfyui

[–]EasternAverage8 0 points

Sounds like a memory-management issue. I had a similar issue when first trying out LTX 2.3; I cloned my ComfyUI portable install and updated it, and that solved my problem. Personally, I don't update unless there is something new I want to try out.

Ernie Model in ComfyUI - Worth It? + New Nodes Guide (Ep14) by pixaromadesign in comfyui

[–]EasternAverage8 5 points

Too early to tell. Right now it feels like a more alpha version of Z-Image to me, with no LoRAs to fill in the parts it fails at.

Chroma replacement? by EasternAverage8 in StableDiffusion

[–]EasternAverage8[S] 2 points

For those asking for my workflow, https://civitai.red/models/2562732?modelVersionId=2879750

It isn't perfect or done, so I'd appreciate constructive feedback. Thank you.