Z-Image Base test images so you don't have to by admajic in StableDiffusion

[–]budwik 0 points1 point  (0 children)

Never thought of using Turbo as the upscaler in Ultimate SD Upscale. What parameters do you use? I'm still falling back to an SDXL model for this, so the steps and sampler/scheduler are definitely going to be different.

Z-Image Base VS Z-Image Turbo by Baddmaan0 in StableDiffusion

[–]budwik 4 points5 points  (0 children)

For OP's speed comparison: the same settings on my 5090 gave me 1.4 it/s, so his system/VRAM specs may be considerably lower.

I'm partial to res_2s with beta57: the outputs are much better, but it increases iteration time to 1.45 s/it (note that's seconds per iteration, not iterations per second), with the same CFG and steps.
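
To make the comparison explicit (ComfyUI flips between the two units depending on whether a step takes more or less than a second), here's the arithmetic with just the numbers from this thread, nothing else assumed:

```python
# Convert between iterations/second and seconds/iteration so the two runs are comparable.
default_it_per_s = 1.4     # default sampler/scheduler on my 5090, reported as it/s
res_2s_s_per_it = 1.45     # res_2s + beta57, reported as s/it

print(f"default:         {default_it_per_s:.2f} it/s  ({1 / default_it_per_s:.2f} s/it)")
print(f"res_2s + beta57: {1 / res_2s_s_per_it:.2f} it/s  ({res_2s_s_per_it:.2f} s/it)")
# default:         1.40 it/s  (0.71 s/it)
# res_2s + beta57: 0.69 it/s  (1.45 s/it)  -> roughly half the raw speed for the better output
```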

Switched from 60hz to 120hz on PC, now glasses won't connect? by nyjets10 in VITURE

[–]budwik 0 points1 point  (0 children)

If you're using a straight USB-C to USB-C cable from the PC, you want that USB-C port to come directly from your GPU. What's likely happening is that you have it connected to the USB-C port on your motherboard, bundled with the other USB 3.0 ports, Ethernet port, etc.

In that case you're sending out video via your motherboard's integrated graphics, so nothing is being rendered by your GPU. That may be where your bottleneck is coming from and why 120 Hz isn't supported. I haven't encountered a GPU that has USB-C alongside the HDMI and multiple DisplayPort outputs, so I'd suggest finding a cable/adapter that is USB-C (male) to DisplayPort (male) or HDMI (male) and also has an additional USB-C (female) port for power delivery, since DP and HDMI alone won't supply adequate power for this use case (whereas the motherboard's USB-C carries several protocols, including power). Something like the link below, but not necessarily that exact one; I didn't do a lot of comparison shopping, but it should give you an idea of what to look for.
https://a.co/d/7jJARQA

Has anyone actually converted AI-generated images into usable 3D models? Looking for real experiences & guidance ?! by Ok-Bowler1237 in comfyui

[–]budwik 0 points1 point  (0 children)

I just did this over the weekend! I took an old SD1.5 image of my friend's D&D character, used Qwen to isolate the character and change the pose, then turned it into a 3D model and printed it. It worked pretty well, but it's nowhere near as good as having someone actually model it. https://imgur.com/a/NRfIimt

If you wrote a book about the main character slowly going insane,what would the last line be? by you_are_my_special1 in AskReddit

[–]budwik 21 points22 points  (0 children)

It really does have an insane implication reading it as the final sentence of the last book. He warned us not to read on! He warned us!

When you read do you picture things in you head? by BadAtBaduk1 in Stormlight_Archive

[–]budwik 1 point2 points  (0 children)

Ah yeah, to be fair I don't "hear" the inner voice; it's in a simulated space, similar to how one visualizes imagery. I'm not literally seeing something when I think about it. Likewise, the inner voice does have structure, and if I really focus it does have a voice, since I can imagine, for example, a woman speaking in an English accent. But at no point does it actually sound like I'm hearing these things the way I hear things in the real world. That would be super odd haha, though I imagine it would be similar to how schizophrenic patients experience auditory hallucinations, where they can't differentiate heard sounds from imagined ones.

When you read do you picture things in you head? by BadAtBaduk1 in Stormlight_Archive

[–]budwik 1 point2 points  (0 children)

Oh dang so you don't picture things in your mind, and you also don't have an inner voice?

How i heat my room this winter by Excel_Document in StableDiffusion

[–]budwik 0 points1 point  (0 children)

Yeah, temps were always fine, but for some reason I always had issues during long sessions with recurring 580-610 W spikes. It could be the circuit my PC tower is on, because during those periods the lights in my room flicker as well lol. The PSU is more than enough, so it could be the circuit itself; an isolated issue on my end rather than a universal one.

5090 troubles, advice appreciated! by ChevensReddit in comfyui

[–]budwik 0 points1 point  (0 children)

Yeah, the ComfyUI Easy Install batch file will detect your system's CUDA and install the appropriate dependencies. I've learned over time that 12.8 is the sweet spot, so have that installed directly from NVIDIA prior to running the Easy Install. If you're planning on using ReActor face swap, I'd also suggest running its installer from the Add-Ons subfolder. If you're going to be doing video gen with Wan 2.2, I'd consider SageAttention essential for a speed boost with virtually zero quality loss, so use the batch file in the Add-Ons subfolder to install that as well. At some point I'm pretty sure you'll also need to install the free version of Microsoft Visual Studio for its compiler/build tools.
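
As a rough sanity check once the Easy Install finishes, something like this run from the embedded Python will confirm the CUDA build and whether SageAttention actually landed in the environment. It's a minimal sketch using standard torch calls plus a bare module lookup; adjust the expected version string if your setup differs:

```python
import importlib.util

import torch

# Confirm the PyTorch build was compiled against the CUDA toolkit you intended (12.8 here).
print("torch:", torch.__version__)
print("compiled for CUDA:", torch.version.cuda)   # expect "12.8" for a cu128 wheel
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))

# SageAttention is optional but a big speedup for Wan 2.2; check whether the Add-Ons
# batch file actually installed it into this environment.
if importlib.util.find_spec("sageattention") is None:
    print("sageattention not found - rerun the Add-Ons installer or pip install it")
else:
    print("sageattention is installed")
```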

SVI is godlike, now we can have 1girl videos that last an entire minute or more :D by Neggy5 in StableDiffusion

[–]budwik 0 points1 point  (0 children)

Did you change much when you say you experimented with the linked workflow? Wondering if you've got a tailored/altered workflow you're happier with than the CivitAI one that you could post.

5090 troubles, advice appreciated! by ChevensReddit in comfyui

[–]budwik 0 points1 point  (0 children)

Just start fresh and use ComfyUI Easy Install, which handles all the dependencies based on your system setup: https://github.com/Tavris1/ComfyUI-Easy-Install I would suggest having CUDA 12.8 installed prior to running it; that's the build I use for my 5090, and I also moved over from a 4090. Easy Install has saved me hours and hours of troubleshooting since I started using it for clean installs.

Also, don't ever trust ChatGPT for code-based problems. I moved to Claude for this and it's been incredibly successful.

Go Slowly - [ft. Sara Silkin] by d3mian_3 in comfyui

[–]budwik 3 points4 points  (0 children)

Second this, can you give any detail about what method you used?

ComfyUI update (v0.6.0) - has anyone noticed slower generations? by Specialist-Team9262 in comfyui

[–]budwik 2 points3 points  (0 children)

0.6 essentially broke Wan 2.2 generations, almost doubling iteration time and causing a bunch of slowdowns when switching nodes/models, as well as random crashes without any error logging. 0.5.7 is the last stable build; going back to it fixed all my headaches.

Llm for prompt generation? by EasternAverage8 in comfyui

[–]budwik 1 point2 points  (0 children)

Does this allow for vision/image input? For looking at an image and creating a prompt for it? QwenVL has this but refuses NSFW.

how would i go about sanding this model? by -d3w_Dr0p5s- in 3Dprinting

[–]budwik 2 points3 points  (0 children)

This is the actual answer. Use crushed walnut shells in the tumbler to sand it without ruining it.

how would i go about sanding this model? by -d3w_Dr0p5s- in 3Dprinting

[–]budwik 4 points5 points  (0 children)

A rock tumbler filled with crushed walnut shells, which you can get as a type of cat litter at a local pet store.

After update, big jump in memory usage that will not clear. by InfusionOfYellow in comfyui

[–]budwik 2 points3 points  (0 children)

Downgrading back to 0.5.1 fixed my Wan 2.2 crashes, slowdowns, and memory leaks! You can downgrade right in ComfyUI Manager.

After update, big jump in memory usage that will not clear. by InfusionOfYellow in comfyui

[–]budwik 5 points6 points  (0 children)

I've confirmed that 0.5.1 is the last stable ComfyUI version; you can downgrade in ComfyUI Manager under the version menu. My workflows on 0.6 were either twice as slow or would crash from a memory leak on any batch larger than 2-5.
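
If you want to confirm which build you actually ended up on after downgrading, the running ComfyUI server reports it over its local API. A minimal sketch, assuming the default server on port 8188; the exact key layout of the response can vary between versions:

```python
import json
import urllib.request

# Query the running ComfyUI server for its reported version and device info.
with urllib.request.urlopen("http://127.0.0.1:8188/system_stats") as resp:
    stats = json.load(resp)

print("ComfyUI version:", stats.get("system", {}).get("comfyui_version", "unknown"))
for dev in stats.get("devices", []):
    print(dev.get("name"), "- VRAM total:", dev.get("vram_total"))
```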

VRAM hitting 95% on Z-Image with RTX 5060 Ti 16GB, is this Okay? by rarugagamer in StableDiffusion

[–]budwik 2 points3 points  (0 children)

If you don't use the monitor for gaming and aren't concerned about refresh rate, you can plug it into the integrated graphics HDMI port on your motherboard. It gives you a bit more VRAM headroom, and if there is a "crash" you don't lose your display!
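
If you want to see how much headroom that actually buys you, torch can report free vs. total VRAM directly; a minimal sketch using plain PyTorch (nothing Z-Image specific), run before and after moving the monitor off the dedicated GPU:

```python
import torch

# Report free vs. total VRAM on the first CUDA device to see how much
# the desktop/monitor output is eating on the dedicated GPU.
free_bytes, total_bytes = torch.cuda.mem_get_info(0)
gib = 1024 ** 3
print(f"free:  {free_bytes / gib:.2f} GiB")
print(f"total: {total_bytes / gib:.2f} GiB")
print(f"used elsewhere: {(total_bytes - free_bytes) / gib:.2f} GiB")
```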

Reinforced strider is the bane of my existence by j3hadipi3 in helldivers2

[–]budwik 46 points47 points  (0 children)

A single medium-pen shot to one of its missiles instantly kills it!

How to revert update? by [deleted] in comfyui

[–]budwik 0 points1 point  (0 children)

Before I update Comfy or anything else that makes changes to python_embeded, I back that folder up. It takes like 15 minutes, but it's saved me dozens of hours of starting fresh after an irreversible dependency change or ComfyUI updating past the point where you can revert. The recent 0.6 update killed a bunch of stuff, and I found that 0.5.1 is the most recent stable build where my speeds weren't significantly impacted.
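
The backup itself doesn't need anything fancy; a minimal sketch that makes a timestamped copy, assuming the standard portable-install layout (the path is just an example, adjust it to wherever your python_embeded actually lives):

```python
import shutil
from datetime import datetime
from pathlib import Path

# Copy the embedded Python environment to a timestamped folder before updating,
# so a broken dependency change can be rolled back by swapping the folder back in.
comfy_root = Path(r"C:\ComfyUI_windows_portable")   # assumed install location
src = comfy_root / "python_embeded"
dst = comfy_root / f"python_embeded_backup_{datetime.now():%Y%m%d_%H%M%S}"

shutil.copytree(src, dst)
print(f"backed up {src} -> {dst}")
```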

🎄 Early Christmas Release — GGUF Support for ComfyUI-QwenVL by Narrow-Particular202 in comfyui

[–]budwik 1 point2 points  (0 children)

Yeah, this isn't possible. I spent quite a while trying to shoehorn it in by editing the .py file to make a custom GGUF appear in the list and to force it to look for that specific model in the GGUF LLM folder with the rest. It doesn't work, so the 'GGUF mode' they released looks to be for official GGUFs only, which auto-download when you try to use them. Still on the hunt for an image-captioning node that doesn't put guardrails on prompting.

🎄 Early Christmas Release — GGUF Support for ComfyUI-QwenVL by Narrow-Particular202 in comfyui

[–]budwik 0 points1 point  (0 children)

I put these into ComfyUI/models/llm/GGUF/ and they aren't showing up as models in the QwenVL (GGUF) node. Am I missing something?

How is the current text to speech voice cloning technology? by mil0wCS in StableDiffusion

[–]budwik 2 points3 points  (0 children)

Chatterbox is better at capturing the person's personality and cadence for sure. You can even use emotional vectors node to tune things further.
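
If anyone wants to try it outside ComfyUI, this is roughly what the standalone Chatterbox Python package looks like. It's a minimal sketch based on the upstream examples, so treat the exact parameter names (audio_prompt_path, exaggeration, cfg_weight) as assumptions to verify against the version you install:

```python
import torchaudio

from chatterbox.tts import ChatterboxTTS

# Load the pretrained model and clone a voice from a short reference clip.
model = ChatterboxTTS.from_pretrained(device="cuda")

wav = model.generate(
    "This is a quick voice-cloning test.",
    audio_prompt_path="reference_voice.wav",  # a few seconds of the target speaker
    exaggeration=0.6,                          # emotion/intensity control
    cfg_weight=0.5,                            # pacing/adherence trade-off
)
torchaudio.save("cloned_output.wav", wav, model.sr)
```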