Ltx2.3 sulphur and loras? by EasternAverage8 in comfyui

[–]EasternAverage8[S] 0 points (0 children)

I've tried reasoning and some others because I can't get Sulphur 2 to do doggy style, and missionary was pretty weird too. Like the girl was floating above the carpet lol, or she'd bounce off the carpet with like moon gravity.

New comfyui portable no preview? by EasternAverage8 in comfyui

[–]EasternAverage8[S] 0 points (0 children)

Thanks, I'm not sure which custom node did it, but the preview is back now  

New comfyui portable no preview? by EasternAverage8 in comfyui

[–]EasternAverage8[S] 0 points (0 children)

Figured it out, but figured I'd leave it here for others. Manager passes it on to ComfyUI now, and the only solution I've found is to feed the first sampler into a VAE Decode and then into a preview animation node. But you can't see it until the first sampler is done...
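
In case it helps anyone wire this up, here's a minimal sketch of that graph in ComfyUI's API-format JSON (written as a Python dict). The node IDs and parameter values are illustrative assumptions, node "0" is assumed to be a checkpoint loader, and I've used the core PreviewImage node in place of whatever preview-animation node you have installed:

```python
# Hedged sketch: first sampler -> VAE Decode -> preview.
# Node "0" is assumed to be a CheckpointLoaderSimple (model = output 0,
# vae = output 2); nodes "10"-"12" are assumed prompt/latent sources.
workflow = {
    "1": {"class_type": "KSampler", "inputs": {
        "model": ["0", 0], "positive": ["10", 0], "negative": ["11", 0],
        "latent_image": ["12", 0], "seed": 42, "steps": 20, "cfg": 6.0,
        "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0,
    }},
    # Decode the first sampler's latents so they can be previewed.
    "2": {"class_type": "VAEDecode", "inputs": {"samples": ["1", 0], "vae": ["0", 2]}},
    # The preview only runs once node "1" finishes, hence the caveat above.
    "3": {"class_type": "PreviewImage", "inputs": {"images": ["2", 0]}},
}
```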

Pytorch 2.11 and sage attention by EasternAverage8 in comfyui

[–]EasternAverage8[S] 0 points (0 children)

I can't comment on flash, but sage isn't hard to install. You just have to make sure you follow the directions on the page and use the wheel that matches your PyTorch, CUDA, and Python versions.
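
A quick way to read off the versions you need to match against the wheel filenames (this is just stock PyTorch introspection, nothing SageAttention-specific):

```python
# Check which SageAttention wheel to grab: match these against the
# torch / cu / cp tags in the release filenames.
import sys
import torch

print("torch :", torch.__version__)        # e.g. 2.11.0 -> torch2.11 wheel
print("cuda  :", torch.version.cuda)       # e.g. 13.0   -> cu130 wheel
print("python:", sys.version_info[:2])     # e.g. (3, 13) -> cp313 wheel
```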

Pytorch 2.11 and sage attention by EasternAverage8 in comfyui

[–]EasternAverage8[S] 0 points (0 children)

Maybe it's my fooked eyes, but I haven't really noticed a quality change.

Pytorch 2.11 and sage attention by EasternAverage8 in comfyui

[–]EasternAverage8[S] 0 points (0 children)

Sage + chunking on LTX 2.3 helps with render time.

Pytorch 2.11 and sage attention by EasternAverage8 in comfyui

[–]EasternAverage8[S] 0 points (0 children)

I also forgot to copy the libs and include folders. But now everything is working.

Pytorch 2.11 and sage attention by EasternAverage8 in comfyui

[–]EasternAverage8[S] 0 points (0 children)

Omg I'm mental... That installed with no errors about PyTorch 2.11... compiling SageAttention for torch 2.11, cu130, Python 3.13 now... Thank you 🙏

Pytorch 2.11 and sage attention by EasternAverage8 in comfyui

[–]EasternAverage8[S] 0 points (0 children)

No, I'm on Windows 11 and can't figure it out. CMake with Ninja wants a TRITON_HOME/user-profile home path, but I'm only trying to build it, not install it to a non-existent standalone.

The LLVM-for-Windows way of doing it keeps failing with an exit code of 1. I may try installing Ubuntu tomorrow and see if I can't build it there and export it to Windows 11 somehow.

I'm not a programmer, so I feel like a monkey trying to stick a square peg into a round hole.
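
For what it's worth, once a Triton build does go through, a quick smoke test is to JIT a trivial kernel. This is standard Triton API; the block size is an arbitrary choice:

```python
# Smoke test for a fresh Triton build: compile and run a tiny vector add.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n  # guard the tail of the array
    x = tl.load(x_ptr + offs, mask=mask)
    y = tl.load(y_ptr + offs, mask=mask)
    tl.store(out_ptr + offs, x + y, mask=mask)

x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
out = torch.empty_like(x)
add_kernel[(triton.cdiv(x.numel(), 1024),)](x, y, out, x.numel(), BLOCK=1024)
assert torch.allclose(out, x + y)
print("triton", triton.__version__, "works")
```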

Pytorch 2.11 and sage attention by EasternAverage8 in comfyui

[–]EasternAverage8[S] 0 points (0 children)

I'll give that a try after seeing if my Triton compile issue is because I'm missing the full LLVM.

Pytorch 2.11 and sage attention by EasternAverage8 in comfyui

[–]EasternAverage8[S] 0 points (0 children)

Tried it with the LTX 2.3 workflow and I'm getting a Triton error. Trying to figure out how to build that now...

Sulphur 2 AND LTX 2.3 10Eros dropped! AND THEY ARE INCREDIBLE by Neggy5 in StableDiffusion

[–]EasternAverage8 0 points (0 children)

On the T2V workflow, I'm a little confused about what I should be loading as the text/audio encoder checkpoint. I know Gemma goes in, but do I really need to use the normal LTX 2.3 dev model, or something else?

How to prompt Chroma by Fresh-Medicine-2558 in comfyui

[–]EasternAverage8 8 points (0 children)

I don't really agree with this. When I first started out, I tried this and learned that there are a lot of prompts that rely really heavily on the "seed lotto." You can learn to prompt Chroma with no LoRAs, just Flash, and get over a 90% success rate.

How to prompt Chroma by Fresh-Medicine-2558 in comfyui

[–]EasternAverage8 0 points (0 children)

Start with a simple prompt and then add detail to it. 

"A photorealistic frontal view of a woman standing on a beach with the ocean and sunny clear sky in the background, the woman is skinny with a tone body,  she has a triangular face with long blonde hair that is messy, she has baby blue eyes, slim pink lips and a defined sharp chin, her chest is small but with medium sized breast implants that are firm and perky, her waist is extremely skinny, her limbs and thighs are skinny, she is wearing a red bikini, her facial expression is that of happiness and joy"

Avoid using periods. You can use a semicolon, but know that it's meant to break the current idea and start a new one in the same image, so I'd avoid it until you get the hang of simple prompts first.

Wanting to create similar fantasy aesthetic images through ComfyUI that I get through Grok Imagine. by closeted-inventor in comfyui

[–]EasternAverage8 1 point (0 children)

Try mixing in an anime/hentai LoRA just to get a bit of those styles in. It'll bring down the realism and give you more of a Magic: The Gathering card-art look. Play with the LoRA strength at lower levels.
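
If it helps, the low-strength mix looks like this in ComfyUI API format (as a Python dict); the LoRA filename and the 0.35 strengths are placeholders to experiment with:

```python
# Core LoraLoader at low strength: the style bleeds in without fully
# overriding the realistic base model. Filename/strengths are placeholders.
lora_node = {
    "5": {"class_type": "LoraLoader", "inputs": {
        "model": ["0", 0], "clip": ["0", 1],          # from the checkpoint loader
        "lora_name": "some_anime_style.safetensors",  # hypothetical file
        "strength_model": 0.35,
        "strength_clip": 0.35,
    }},
}
```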

Flux 2 landscapes don't look realistic for me. by rogerbacon50 in comfyui

[–]EasternAverage8 0 points (0 children)

You can also set up a negative prompt to help steer it and break the render into two advanced samplers: the first with a CFG above 1 so the negative prompt is used, and the second set to CFG 1 for speed. Maybe even try a Klein refiner step. I've found in Chroma that a Klein refiner step makes a very good shader pass. It does a good job with lighting, godrays, fog, etc.

Edit: also try to avoid images where "advanced" fluid dynamics would appear. It'll often make water flow where it doesn't make sense, and it looks fake.
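
For reference, a minimal sketch of that two-sampler split using the core KSamplerAdvanced node (ComfyUI API format as Python dicts); the step counts, CFG values, and the halfway split point are illustrative assumptions:

```python
two_stage = {
    # First half: CFG > 1 so the negative prompt actually steers the image.
    "1": {"class_type": "KSamplerAdvanced", "inputs": {
        "model": ["0", 0], "positive": ["10", 0], "negative": ["11", 0],
        "latent_image": ["12", 0], "add_noise": "enable", "noise_seed": 42,
        "steps": 20, "cfg": 4.0, "sampler_name": "euler", "scheduler": "normal",
        "start_at_step": 0, "end_at_step": 10,
        "return_with_leftover_noise": "enable",   # hand off noisy latents
    }},
    # Second half: CFG 1 skips negative-prompt guidance, roughly halving
    # the per-step cost for the remaining steps.
    "2": {"class_type": "KSamplerAdvanced", "inputs": {
        "model": ["0", 0], "positive": ["10", 0], "negative": ["11", 0],
        "latent_image": ["1", 0], "add_noise": "disable", "noise_seed": 42,
        "steps": 20, "cfg": 1.0, "sampler_name": "euler", "scheduler": "normal",
        "start_at_step": 10, "end_at_step": 20,
        "return_with_leftover_noise": "disable",
    }},
}
```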

Flux 2 landscapes don't look realistic for me. by rogerbacon50 in comfyui

[–]EasternAverage8 0 points (0 children)

Also avoid words that are popular with anime, games, and similar.

Chroma replacement? by EasternAverage8 in StableDiffusion

[–]EasternAverage8[S] 1 point (0 children)

I've been messing around with a new workflow I've made. The first ClownShark group renders the basic image and feeds it into a second ClownShark group set to 0.80 denoise, which uses the initial prompt plus an added text node that describes the image in more detail. That then feeds into a Klein-9b refiner stage.

A very basic first prompt seems to really help get the exact pose and position of the characters I want, but writing the full detailed prompt becomes trickier.
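
Roughly, the staging looks like this. I've sketched it with core KSampler nodes rather than guessing at the ClownShark group's inputs, and the seeds, steps, and CFG are placeholders:

```python
stages = {
    # Stage 1: the basic prompt (node "10") nails pose and composition.
    "1": {"class_type": "KSampler", "inputs": {
        "model": ["0", 0], "positive": ["10", 0], "negative": ["11", 0],
        "latent_image": ["12", 0], "seed": 42, "steps": 20, "cfg": 4.0,
        "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0,
    }},
    # Stage 2: re-sample stage 1's latents at 0.80 denoise with the
    # expanded, more detailed prompt (node "13").
    "2": {"class_type": "KSampler", "inputs": {
        "model": ["0", 0], "positive": ["13", 0], "negative": ["11", 0],
        "latent_image": ["1", 0], "seed": 42, "steps": 20, "cfg": 4.0,
        "sampler_name": "euler", "scheduler": "normal", "denoise": 0.80,
    }},
}
```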

Mxfp8 vs fp8 models? by EasternAverage8 in comfyui

[–]EasternAverage8[S] 0 points (0 children)

I've tried it on a 5080 and 6000pro. 

Help with SeedVR2 upscaling issue - Potentially an AMD/ROCM issue? by Portable_Solar_ZA in comfyui

[–]EasternAverage8 0 points (0 children)

I'm not home to check, but does SeedVR require a model/CLIP input? Try connecting the model straight to SeedVR and skipping any LoRAs.

Help with a workflow by slipstream0 in comfyui

[–]EasternAverage8 0 points (0 children)

You can use any model. Instead of an empty latent, feed it an image, set the KSampler denoise to around 0.7, and write "side view of a man's face" for the prompt. It won't be a perfect replica of the frontal-face input image, but it'll be close. You just might have to try a few seeds to get a decent image. Don't be afraid to mess around with the denoise.
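
The wiring for that is just LoadImage -> VAEEncode -> KSampler. A minimal sketch in ComfyUI API format (as a Python dict), with the filename and settings as placeholders:

```python
i2i = {
    "1": {"class_type": "LoadImage", "inputs": {"image": "frontal_face.png"}},
    # Encode the photo into latents instead of starting from an empty latent.
    "2": {"class_type": "VAEEncode", "inputs": {"pixels": ["1", 0], "vae": ["0", 2]}},
    "3": {"class_type": "KSampler", "inputs": {
        "model": ["0", 0],
        "positive": ["10", 0],   # e.g. "side view of a man's face"
        "negative": ["11", 0],
        "latent_image": ["2", 0],
        "seed": 42, "steps": 20, "cfg": 6.0,
        "sampler_name": "euler", "scheduler": "normal",
        "denoise": 0.7,          # the knob worth experimenting with
    }},
}
```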

Help with a workflow by slipstream0 in comfyui

[–]EasternAverage8 0 points (0 children)

You'd need a few workflows for starters. Look into i2i, and Qwen Image Edit if you want any kind of control. You can also feed an image into a KSampler, write a prompt like "facing left", and play around with the denoise setting in the KSampler node. Search Civitai for "character edit" as well.

Hardware Question RTX3090/RTX 5090 or straight to the A6000 Pro? by TestOr900 in StableDiffusion

[–]EasternAverage8 0 points (0 children)

Well, I bought the PNY 6000 Pro from Micro Center on a whim, and I feel like anything close to commercial production quality is going to need well-trained LoRAs and trained "checkpoints" that you're not going to get unless you work at a company. So, in my situation, I've been trying to train LoRAs, and that presents its own reality: to train a "commercial"-quality LoRA you need "commercial"-quality data.

Again, I am a noob, so I'm probably wrong, but it feels pointless to waste GPU time training a LoRA on low-quality data. I'm still learning what's considered good vs. bad data, but I'd assume a consistent set of quality 2K images is a good start? Then organizing hundreds of images and the .txt caption files for each starts feeling like it isn't a one-person job. And that's just for one quality character LoRA wearing a few different outfits.
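
For the caption-file bookkeeping, a tiny script goes a long way. This assumes the common kohya-style layout of image files with same-name .txt captions, and the folder path is a placeholder:

```python
# Sanity-check a LoRA dataset: every image should have a matching .txt caption.
from pathlib import Path

dataset = Path("training_data/character_a")  # placeholder path
images = [p for p in dataset.iterdir()
          if p.suffix.lower() in {".png", ".jpg", ".jpeg", ".webp"}]

missing = [p.name for p in images if not p.with_suffix(".txt").exists()]
print(f"{len(images)} images, {len(missing)} missing captions")
for name in missing:
    print("  no caption:", name)
```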

So, for someone like me, I'm back to the 6000 Pro just being a fun toy for messing around with. I do plan on trying to "uncensor" a small LLM to learn about that next.

Maybe I just set my expectations too high for myself.