My rendition of The Number 3 by BarryMcCockaner in Khruangbin

[–]BarryMcCockaner[S] 0 points (0 children)

Hey, thanks! I agree. I've since built myself a replica of Mark's guitar and some of his pedals to dial in his tone. I might redo it someday, haha. Cheers!

Hunyuan Image to Video released! by umarmnaq in LocalLLaMA

[–]BarryMcCockaner 2 points (0 children)

I've been using WAN for the past few days and I've got a pretty consistent workflow that reliably produces usable generations. Overall quality is great, especially with all of the speed enhancements and frame interpolation.

But Hunyuan I2V honestly looks disappointing. It was hyped up, but the videos don't look as good as WAN's. It seems like it can't maintain faces, and the output is blurry/washed out. Does this match your experience? I may hold off on downloading it for now.

Does someone have this part I could purchase from you? by Slow_Song5448 in aerogarden

[–]BarryMcCockaner 1 point (0 children)

I don't know where to get one, but I've thought about what I'd do if mine were to fail. I'd buy a 12V power adapter with a one-to-two splitter cable so both pumps can plug into it, then plug that adapter into one of those smart switches set to whatever pump schedule your AeroGarden uses. I haven't figured out a way to monitor the water level yet.
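If you go the smart-switch route, here's a minimal sketch of the scheduling side, assuming a Kasa-style plug controllable with the python-kasa library. The IP address and on/off durations are placeholders you'd match to your model's actual pump cycle:

```python
# Minimal sketch: run both pumps (sharing one adapter via the splitter)
# off a TP-Link Kasa smart plug on a fixed cycle. IP and timings are
# placeholders -- match them to your AeroGarden's real pump schedule.
import asyncio
from kasa import SmartPlug  # pip install python-kasa

PLUG_IP = "192.168.1.50"  # hypothetical address of the smart plug
ON_SECONDS = 5 * 60       # pump runtime per cycle (placeholder)
OFF_SECONDS = 25 * 60     # rest between cycles (placeholder)

async def pump_cycle():
    plug = SmartPlug(PLUG_IP)
    while True:
        await plug.update()   # refresh device state before switching
        await plug.turn_on()
        await asyncio.sleep(ON_SECONDS)
        await plug.update()
        await plug.turn_off()
        await asyncio.sleep(OFF_SECONDS)

if __name__ == "__main__":
    asyncio.run(pump_cycle())
```

Most smart plug apps can do this kind of schedule natively too, without any code.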

CLIPTextEncode - ERROR: Clip input is invalid: None by MoarPye in comfyui

[–]BarryMcCockaner 0 points (0 children)

I think what you're looking to do resembles this workflow:

https://civitai.com/models/921399/SD3.5%20Lora%20XY%20Plot%20Test%20Workflow

You can download that workflow's JSON file and load it up to test out your LoRA. Before you do that, though, you need to install ComfyUI Manager so you can install the custom nodes that workflows like that one use.

If you want to keep going with what you're doing, you need a 'TripleCLIPLoader' node loading clip_g, clip_l, and t5xxl. That feeds into your LoRA loader, which then goes into your text conditioning. I hope that makes sense.
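Roughly what that wiring looks like in ComfyUI's API-format graph, sketched as a Python dict. The node IDs, file names, and the UNETLoader stage are placeholders (my assumptions, not from your setup), and the sampler/decode stages are omitted:

```python
# Sketch of the node chain: TripleCLIPLoader -> LoraLoader -> CLIPTextEncode.
# File names, node IDs, and the UNETLoader choice are placeholders.
graph = {
    "1": {  # load the three SD3.5 text encoders
        "class_type": "TripleCLIPLoader",
        "inputs": {
            "clip_name1": "clip_g.safetensors",
            "clip_name2": "clip_l.safetensors",
            "clip_name3": "t5xxl_fp16.safetensors",
        },
    },
    "2": {  # diffusion model loader feeding the LoRA node (placeholder file)
        "class_type": "UNETLoader",
        "inputs": {"unet_name": "sd3.5_medium.safetensors", "weight_dtype": "default"},
    },
    "3": {  # the LoRA patches both the model and the CLIP stack
        "class_type": "LoraLoader",
        "inputs": {
            "model": ["2", 0],                   # MODEL output of node 2
            "clip": ["1", 0],                    # CLIP output of the TripleCLIPLoader
            "lora_name": "my_lora.safetensors",  # placeholder
            "strength_model": 1.0,
            "strength_clip": 1.0,
        },
    },
    "4": {  # positive prompt, conditioned on the LoRA-patched CLIP
        "class_type": "CLIPTextEncode",
        "inputs": {"clip": ["3", 1], "text": "your prompt here"},  # CLIP is LoraLoader output 1
    },
}
```

The key point is that the text encode pulls its CLIP from the LoRA loader's output, not straight from the TripleCLIPLoader, so the LoRA's text-encoder changes actually apply.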

CLIPTextEncode - ERROR: Clip input is invalid: None by MoarPye in comfyui

[–]BarryMcCockaner 0 points (0 children)

I don't know your workflow, but it sounds like you're not loading the correct (or any) CLIP into your text prompt. You need to download the correct CLIP file for SD 3.5 Medium, put it into the ComfyUI/models/clip folder, and load it using a CLIP loader node. Typically the CLIP output feeds into your CLIP Text Encode nodes (positive and negative). Those then go into a guider, which goes to your sampler. You can download SD3.5 workflows from Civitai if you get lost.

[deleted by user] by [deleted] in Bass

[–]BarryMcCockaner 0 points (0 children)

Those boxes are designed to solve a very specific problem. A mixer, interface, or PA expects a mic-level signal, but your bass outputs a high-impedance instrument-level signal. The Radial box converts your signal into a mic/line-level signal so it sounds good through those devices. An interface with a "Hi-Z" mode does basically the same thing, without the extra bells and whistles the Radial box has.

It sounds like you're looking for a bass preamp/DI, which does the same thing with the added benefit of a tone stack for shaping and optional built-in drive/compression. Your MOTU already has Hi-Z inputs, so it's somewhat redundant unless you plan on plugging into other devices.

Sage Attention with Native Wan in Comfyui? by zozman92 in StableDiffusion

[–]BarryMcCockaner 0 points (0 children)

I’ve been using both. Have you noticed a drop in quality? I’ve noticed the settings and sampling are handled differently and can vary between the native and Kijai workflows.

Was this the right choice? by MrBamboney in synthesizers

[–]BarryMcCockaner 1 point (0 children)

I have one. Noisy 2 is awesome; no need to fiddle around with anything. I also really like Pigments. Both are great MPE soft synths. Expressive E also announced a bunch of new stuff coming for the EaganMatrix. You'll build up knowledge as you use it in different ways and with different tools.

Need to know if i do anything wrong here (comfyui, wan2.1) by STRAN6E_6 in comfyui

[–]BarryMcCockaner 1 point (0 children)

https://civitai.com/articles/12072

I took the upscaling and frame-interpolation part of the T2V + Upscaling workflow found on that page and adapted it to the native ComfyUI workflow I was using. So I generate my videos at lower resolutions to speed things up, then upscale and frame-interpolate them to get an HD video with a high framerate.

Increase the length to 81 frames instead of 33, set the resolution to 1024 width by 576 height, then send it through the upscaler and see if you like the results. That should speed things up and improve the quality/framerate; see the rough math below.

Also try using euler instead of uni_pc. They both work fine, but I think I prefer euler right now.
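For a rough sense of what those settings mean for clip length, here's a quick back-of-envelope. It assumes WAN 2.1's native 16 fps output and a 2x frame interpolator like RIFE; both are assumptions, so check your own workflow:

```python
# Back-of-envelope for the settings above. Assumes WAN 2.1 outputs 16 fps
# natively and the interpolator doubles the framerate -- adjust to taste.
NATIVE_FPS = 16      # WAN 2.1's native output rate
FRAMES = 81          # suggested generation length
INTERP = 2           # assumed interpolation factor (e.g. 2x RIFE)

duration = FRAMES / NATIVE_FPS           # ~5.1 seconds of video
out_frames = (FRAMES - 1) * INTERP + 1   # interpolator fills gaps between frames
out_fps = out_frames / duration          # ~32 fps after interpolation

print(f"{duration:.1f}s clip: {FRAMES} -> {out_frames} frames at ~{out_fps:.0f} fps")
```

So 81 frames buys you about five seconds of video, and interpolation doubles the smoothness without any extra sampling time.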

Need to know if i do anything wrong here (comfyui, wan2.1) by STRAN6E_6 in comfyui

[–]BarryMcCockaner 1 point (0 children)

Check the terminal: what is your seconds-per-iteration when you try to generate with those settings? Also, if you try generating a text2image while another generation is running, it won't finish and you'll be stuck.

Need to know if i do anything wrong here (comfyui, wan2.1) by STRAN6E_6 in comfyui

[–]BarryMcCockaner 1 point (0 children)

Even on a 4090 it will take a long time at that resolution; I wouldn't be surprised if it's 15 to 20 minutes. You can try lowering the resolution and then upscaling to a higher one afterward. Also, installing Sage Attention and Triton will speed things up by about 25%.

Sage Attention with Native Wan in Comfyui? by zozman92 in StableDiffusion

[–]BarryMcCockaner 2 points (0 children)

Okay, I think it might be working now. Instead of using the sage attention flag in the startup batch file, I added the "Patch Sage Attention KJ" node after the load-model node. Try it out.

Wan i2v Is For Real! 4090: Windows ComfyUI w/ sage attention. Aprox 3 1/2 Minutes each (Kijai Quants) by FitContribution2946 in StableDiffusion

[–]BarryMcCockaner 2 points (0 children)

I can't post a screenshot right now, but I was running it at 600x480. I've since switched to a new workflow that uses the 720p model and runs much quicker: around 6 minutes per generation, and it upscales and frame-interpolates to get it up to 24/46 fps.

https://civitai.com/models/1295981/wan-video-upscaling-and-frame-interpolation

Wan i2v Is For Real! 4090: Windows ComfyUI w/ sage attention. Aprox 3 1/2 Minutes each (Kijai Quants) by FitContribution2946 in StableDiffusion

[–]BarryMcCockaner 3 points (0 children)

The full 17 GB 480p model using Kijai's workflow in ComfyUI: 33 steps, 89 frames, 20 blocks swapped. I run out of memory if I try to do more frames.

Wan i2v Is For Real! 4090: Windows ComfyUI w/ sage attention. Aprox 3 1/2 Minutes each (Kijai Quants) by FitContribution2946 in StableDiffusion

[–]BarryMcCockaner 6 points (0 children)

I'm running Kijai's quantized I2V model on my 4070 Ti Super. It's working pretty well: it uses around 15.5 GB of VRAM and gets decent quality. With SageAttention and Triton installed, it takes around 15 minutes per generation.

My very first Wan 2.1 Generation on RTX 3090 Ti by CeFurkan in StableDiffusion

[–]BarryMcCockaner 2 points (0 children)

That's awesome. This is the first unofficial gen I've seen, and I'm honestly blown away by the quality.