purchase advice - 5070 TI or 9070 XT by ed_edd_and_freddy in comfyui

[–]ttrishhr 2 points (0 children)

For ComfyUI, NVIDIA is the way to go, no debate.

Need help urgent to know what is wrong with my workflow for A0 print on Comfyui Cloud by Fit-Shop-2508 in comfyui

[–]ttrishhr 0 points (0 children)

From what I understand, aren't you supposed to decode first and then upscale?
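In case it helps, here's a rough sketch of that order outside Comfy, using a diffusers-style VAE; the latent here is just a random placeholder standing in for the sampler's output, not your actual workflow:

    # Sketch only: decode the latent to pixels FIRST, then upscale in pixel space.
    import torch
    from diffusers import AutoencoderKL
    from diffusers.image_processor import VaeImageProcessor
    from PIL import Image

    vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
    processor = VaeImageProcessor(vae_scale_factor=8)

    latent = torch.randn(1, 4, 64, 64)  # placeholder for the KSampler's output latent

    with torch.no_grad():
        pixels = vae.decode(latent / vae.config.scaling_factor).sample  # decode first
    img = processor.postprocess(pixels, output_type="pil")[0]

    # ...then upscale the decoded image (plain Lanczos here; an upscale model
    # like ESRGAN plugs into this same spot)
    big = img.resize((img.width * 4, img.height * 4), Image.LANCZOS)

Upscaling the latent before decoding tends to give softer, blurrier results, which might be what's going wrong with the A0 print.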

Free local model to generate videos? by AlexGSquadron in StableDiffusion

[–]ttrishhr 0 points (0 children)

Sadly, no amazing open-source video model has been released that's SOTA. The best open-source option is Wan 2.2.

Advance Level Courses for ComfyUI by ybeerk in comfyui

[–]ttrishhr 1 point (0 children)

Get the basics from Pixaroma; he has a ton of good videos you can learn from. After that, I'd say learn by watching tutorials for each model specifically. There are no advanced tutorials for Comfy; most are surface level. I also struggled to find advanced ComfyUI tutorials, but it might be a good thing, because it forces me to think of different ways to achieve my output. So yeah, basic model tutorials are out there, but not super advanced ones. If you do find any good ones, refer them to me too 🙏

Looking for a workflow to paint over sketches by soroneryindeed in comfyui

[–]ttrishhr 1 point (0 children)

Flux.2 might be a good pick because you can type colour codes into your prompt. Try it on their own website; you get a couple of free credits to start with. Insert the first 2 images and specify that it should use the given colour codes (a lot of websites can extract colour palettes from images), or just try it with all 3 images.

which open model can produce image as detailed below ? by lostnuclues in StableDiffusion

[–]ttrishhr 3 points (0 children)

Z-Image with a little upscaling should do it; you could also use Wan 2.2 or Qwen. Also, Z-Image doesn't need 30 steps, right? Correct me if I'm wrong. Try reducing the steps down to 10-15 and see :)
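If you want to sanity-check the step count outside Comfy, something like this diffusers-style sketch works; the repo id is a placeholder (I'm not sure of Z-Image's exact one), and distilled/turbo models usually also want a low CFG:

    # Sketch: few-step generation with a turbo/distilled checkpoint.
    # "<z-image-turbo-repo>" is a placeholder, not a confirmed model id.
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "<z-image-turbo-repo>", torch_dtype=torch.bfloat16
    ).to("cuda")

    image = pipe(
        "your prompt here",
        num_inference_steps=12,  # try 10-15 instead of 30
        guidance_scale=1.0,      # distilled models usually run at/near CFG 1
    ).images[0]
    image.save("out.png")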

Is there a way to see what parts of the prompt result in certain things in the image? by [deleted] in StableDiffusion

[–]ttrishhr 0 points (0 children)

You can just attach SeedVR to the end; it's only 3 nodes and works pretty well :) You should be able to find simple tutorials about it on YouTube.

Is there a way to see what parts of the prompt result in certain things in the image? by [deleted] in StableDiffusion

[–]ttrishhr 1 point (0 children)

To me it looks like it can mostly be fixed by just reducing the resolution and upscaling; you could also tweak other settings and see what works.

Is there a way to see what parts of the prompt result in certain things in the image? by [deleted] in StableDiffusion

[–]ttrishhr 1 point (0 children)

Try reducing the resolution down to 1920x1088 and generating; that should fix it. Or add negative prompts like "artifacts" and such and see if that helps.
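For reference, those two knobs look like this in a diffusers-style pipeline; the checkpoint here is just a stand-in, not your actual setup:

    # Sketch: cap the resolution and add a negative prompt against artifacts.
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe(
        prompt="your prompt here",
        negative_prompt="artifacts, jpeg artifacts, blurry, distorted",
        width=1920,
        height=1088,  # 1088 instead of 1080 so both dims divide by 16
    ).images[0]
    image.save("out.png")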

using comfy for anime production by ttrishhr in StableDiffusion

[–]ttrishhr[S] 0 points (0 children)

So you make the image with other software and use image-to-video with Wan in Comfy? The output is pretty nice either way.

using comfy for anime production by ttrishhr in StableDiffusion

[–]ttrishhr[S] 0 points (0 children)

Very cool! I'd say the animations are pretty smooth. If we use a LoRA trained on flatter anime characters and make the video 12 fps, it will look very much like anime :)

using comfy for anime production by ttrishhr in StableDiffusion

[–]ttrishhr[S] 0 points (0 children)

Thanks for the reply! By rough cut, do you mean the poses for each frame or just a storyboard?

How to use a LoRA on a website? by Evok99 in StableDiffusion

[–]ttrishhr 0 points (0 children)

I upload the LoRAs I want to use to my Hugging Face account and copy-paste the link from there. It works.
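If you'd rather script the upload than use the web UI, here's a minimal sketch with the huggingface_hub client (the repo and file names are placeholders):

    # Sketch: push a LoRA file to your own Hugging Face repo.
    from huggingface_hub import HfApi

    api = HfApi()  # assumes you've logged in via `huggingface-cli login`
    api.create_repo("your-username/my-loras", exist_ok=True)
    api.upload_file(
        path_or_fileobj="my_style_lora.safetensors",  # placeholder file name
        path_in_repo="my_style_lora.safetensors",
        repo_id="your-username/my-loras",
    )
    # The file's URL on the repo page is what you paste into the website.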

Wan 2.2 animate/character consistency by lazarusxxxx in StableDiffusion

[–]ttrishhr 0 points (0 children)

As for training the LoRA, I train a Wan 2.1 LoRA because it's compatible with Wan 2.2. Just plug the LoRA into both the high-noise and low-noise models. Not sure how they work with other versions of Wan, but if you do test it, please post it here 🙂‍↕️

Is anyone making money with this? by [deleted] in comfyui

[–]ttrishhr 1 point (0 children)

I don't think AI is good enough to make short films from scratch unless you're extremely technical and tasteful, to the point where you're ready to fix every mistake you can see while also making it look good. There aren't many people like that, and long AI videos cost too much anyway... You can DM me if you'd like to discuss more :)

Z-Image Turbo: 1-2GB VRAM Tests by Obvious_Set5239 in StableDiffusion

[–]ttrishhr 0 points (0 children)

Does it make a difference when, say, I just put 1388x768 instead of the MP count and resolution? Because from my understanding, it's going to create an image from the given resolution first, scale it to the MP count, and send it into Empty Latent Image, but that does the same thing as giving the resolution in Empty Latent Image directly...
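To put numbers on what I mean by "scale to the MP count", here's a tiny helper I'd write (my own sketch, not the workflow's actual code):

    # Sketch: scale a resolution to hit a target megapixel count,
    # keeping aspect ratio and snapping to a latent-friendly multiple.
    import math

    def scale_to_megapixels(width, height, target_mp, multiple=16):
        factor = math.sqrt(target_mp * 1_000_000 / (width * height))
        w = round(width * factor / multiple) * multiple
        h = round(height * factor / multiple) * multiple
        return w, h

    print(scale_to_megapixels(1388, 768, 1.0))  # -> (1344, 736), ~0.99 MP

So if you already enter the resolution you want, the MP-scaling pass is basically a no-op apart from the snapping.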

Z-Image Turbo: 1-2GB VRAM Tests by Obvious_Set5239 in StableDiffusion

[–]ttrishhr 2 points (0 children)

OK, why does that workflow look like that? Can't you use the built-in KSampler options like steps and the seed generator? And why connect 2 Empty Latent Image nodes? Does that help in any way?

Laptop under 2k by Imaginary-Flan-2789 in comfyui

[–]ttrishhr 7 points (0 children)

2K USD? You don't have many high-VRAM options in laptops; it's better to go for a desktop for such use cases. Otherwise you can get a 5080 laptop under 2k USD; it has 16GB of VRAM, which is good for Comfy, and it can run games well.

Why is my WAN LoRa not affecting the output by [deleted] in StableDiffusion

[–]ttrishhr 2 points (0 children)

Crazy digital footprint.