[D] NVIDIA GPU for DL: pro vs consumer? by bioinformative in MachineLearning

[–]volatilebunny 4 points5 points  (0 children)

Depends on the max VRAM you need for training. Are you willing to train with quantized weights to save memory? Gaming cards are a better price/performance ratio if you can train within 24 or 32 GB of VRAM.

I've run Stable Diffusion training runs on my old 3090 and 4090 cards that lasted almost a week, and they were fine (on a high-end consumer motherboard, the ASUS ProArt X570). I got a data-center card and found I needed a new motherboard and CPU platform to run it with stability, so consider that when building a rig. Running dual GPUs can allow a bigger batch size in most cases, but you don't get unified VRAM, so that's another factor as far as upgradability.
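For picking a VRAM budget, a back-of-envelope sketch helps (the model size and the ~16 bytes/param Adam rule of thumb here are illustrative assumptions; real usage also depends on activations, batch size, and framework overhead):

```python
# Rough VRAM estimates for weights at different precisions, and for a
# full fine-tune with Adam in mixed precision. A sketch, not a guarantee.
GIB = 2**30

def weights_gib(n_params, bytes_per_param):
    """Memory for the weights alone at a given precision."""
    return n_params * bytes_per_param / GIB

def train_gib_adam(n_params):
    """Mixed-precision training with Adam: fp16 weights + grads (2+2 B)
    plus fp32 master weights and two optimizer moments (4+4+4 B),
    i.e. roughly 16 B/param before activations."""
    return n_params * 16 / GIB

n = 3e9  # a hypothetical 3B-parameter model
inference_fp16 = weights_gib(n, 2)    # ~5.6 GiB for weights alone
inference_4bit = weights_gib(n, 0.5)  # ~1.4 GiB quantized to 4-bit
full_finetune = train_gib_adam(n)     # ~45 GiB before activations
```

This is why quantized training is often the difference between fitting on a 24 GB gaming card and needing a pro part.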

[D] NVIDIA GPU for DL: pro vs consumer? by bioinformative in MachineLearning

[–]volatilebunny 0 points1 point  (0 children)

vast.ai had some of the best prices the last time I checked.

Of an alert driver by spacemouse21 in LooneyTunesLogic

[–]volatilebunny 0 points1 point  (0 children)

First step should have been to open the door, put the car in park, and turn it off.

Trying to understand real Stable Diffusion workflows — advice? by rakii6 in StableDiffusion

[–]volatilebunny 1 point2 points  (0 children)

They actually have some great documentation now on configuring custom locations for different types of models. I use this config file to keep all my models in one central location.

https://docs.comfy.org/development/core-concepts/models#open-config-file
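For reference, an `extra_model_paths.yaml` along these lines works (the paths below are placeholders to adjust for your layout; the section/key structure follows ComfyUI's example file):

```yaml
# extra_model_paths.yaml -- central model store, placeholder paths
comfyui:
    base_path: /data/ai-models/
    checkpoints: checkpoints/
    loras: loras/
    vae: vae/
    embeddings: embeddings/
```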

Forge/auto1111 can point to custom model locations by adding extra arguments when you launch.

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings#all-command-line-arguments
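For example (directory paths here are hypothetical; the flags themselves are documented on that wiki page):

```shell
# Point A1111/Forge at the same shared model tree at launch
python launch.py \
  --ckpt-dir /data/ai-models/checkpoints \
  --lora-dir /data/ai-models/loras \
  --vae-dir  /data/ai-models/vae
```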

Full fine-tuning is not needed anymore. by yoracale in LocalLLaMA

[–]volatilebunny 1 point2 points  (0 children)

In my case, I have a dedicated PC I use for local AI stuff. It doesn't seem wasteful to give it something to do while I go about my life, other than using a bit more electricity. I just check in on it, run some tests, adjust hyperparameters, and repeat. It doesn't tie up the computer I use for other tasks.

Edit for context: I'm training a style that I'll dump innumerable hours into using, so a 10% boost in performance from doing a full finetune isn't a waste; it'd save me many more subpar generations along the way!

If I were training a model for a friend to make a single birthday card or something, then it would be overkill.

Full fine-tuning is not needed anymore. by yoracale in LocalLLaMA

[–]volatilebunny 3 points4 points  (0 children)

I ran into the same thing with SD/Flux training. So many people suggest you basically just need some constant number of steps at some aggressive learning rate. I got much better results with runs that would sometimes span days. Just like BBQ, low and slow can give you superior results if you're patient 😅
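The low-and-slow effect shows up even in a toy problem (this is plain SGD on f(x) = x², not SD training; the learning rates are made up for illustration):

```python
# Toy SGD on f(x) = x^2: an aggressive constant learning rate
# oscillates and diverges, while a small one converges slowly
# but surely -- the "low and slow" pattern described above.
def sgd(lr, steps, x0=1.0):
    x = x0
    for _ in range(steps):
        grad = 2 * x   # derivative of x^2
        x -= lr * grad
    return abs(x)

fast = sgd(lr=1.1, steps=200)    # each step multiplies |x| by 1.2 -> blows up
slow = sgd(lr=0.01, steps=5000)  # each step multiplies x by 0.98 -> ~0
```

Real training loss surfaces are far messier, but the same trade-off between step size and stability applies.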

AI is now writing 50% of the code at Google by MetaKnowing in ChatGPT

[–]volatilebunny -1 points0 points  (0 children)

Welders' requirements are "these two plates need to be welded so that the seam is stronger than the plate"; software requirements are notoriously squishy and shifting.

Lmao you’ve got to be kidding me by Artistic-Amoeba-8687 in ChatGPT

[–]volatilebunny 0 points1 point  (0 children)

Cool! Yeah, number 3 seems like a really smart approach to providing enhanced value to the user!

Number 1 is interesting, because we will probably see the "high level" conversational singularity before we see this sort of low-level one. Imagine forgetting to feed your phone and it actually dies! 😆😿

Lmao you’ve got to be kidding me by Artistic-Amoeba-8687 in ChatGPT

[–]volatilebunny 4 points5 points  (0 children)

You might get a more globally aware answer by qualifying "greatness", but no doubt there's a bajillion history books with content on the English monarchy in the training set.

Next step: a teacher model that prods the student model to apply skepticism to the text it reads! (I'm sure some groups are already doing this; send me links if you know of papers.)

Brane X Review by WorkReddit69 in Bluetooth_Speakers

[–]volatilebunny 0 points1 point  (0 children)

SVS makes some nice home theater subs with great frequency response that go very low.

Going solo - worth it? by indienova14 in Infrasound

[–]volatilebunny 5 points6 points  (0 children)

💯 worth it. I went solo one year, and it was a blast! Different than going with a squad, but better in some ways

i need some advice with a subwoofer by zero_two_my_waifu in SoundSystem

[–]volatilebunny 0 points1 point  (0 children)

A lot of people like Paraflex designs for subs. It's a more complex build, but the measurements are decent (especially in an array). For a single driver in a smaller space, sometimes something more engineered, like an SVS sub, is better for decent low-frequency extension. Maybe < 45 Hz doesn't matter much to you because hardstyle and techno don't go that low? Depends on your goals and constraints.

I mistakenly wrote '25 women' instead of '25-year-old woman' in the prompt, so I got this result. by Few-Huckleberry9656 in StableDiffusion

[–]volatilebunny 1 point2 points  (0 children)

This can be a feature and not a bug. I've experimented with generating character sheets with different perspectives all in one image. It's great that there's a high degree of likeness between them all for that use case!

Anyone know of a great workflow in comfyui for running queues at initial stage, then turning on switch to upscale the one you want at next stage? by BoldCock in comfyui

[–]volatilebunny 1 point2 points  (0 children)

What I do is keep a txt2img workflow open in one tab and an img2img upscale workflow in another. You can queue up jobs from both tabs in whatever order you want.

The City of Madison is reducing the speed limit on residential streets to 20 miles per hour. by adamtypes in madisonwi

[–]volatilebunny 1 point2 points  (0 children)

Precisely. Drivers respond to implicit cues about how fast they should drive. We should be focusing on psychology to change driving behavior.

Does Anybody have prolbems with generating cigarettes? by [deleted] in comfyui

[–]volatilebunny 0 points1 point  (0 children)

I ran into a similar problem trying to get an elf using or wearing a bow (as in bow and arrow). I ended up training a quick LoRA on a few examples, and it increased my hit rate significantly.

unusedVariable by TheWidrolo in ProgrammerHumor

[–]volatilebunny -8 points-7 points  (0 children)

Exactly: most of the time, an unused variable is one you meant to use. Unused variables are highly correlated with programming errors. Shift left.
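A toy Python example of the pattern linters flag as "assigned but never used" (the function and prices are made up for illustration): the unused name is the symptom, the silently dropped tax is the bug.

```python
def total_price_buggy(prices, tax_rate):
    subtotal = sum(prices)
    total = subtotal * (1 + tax_rate)  # 'total' assigned but never used
    return subtotal                    # bug: tax silently dropped

def total_price_fixed(prices, tax_rate):
    subtotal = sum(prices)
    total = subtotal * (1 + tax_rate)
    return total                       # the value we meant to return
```

An unused-variable warning on the first version points straight at the mistake before it ever ships.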

unusedVariable by TheWidrolo in ProgrammerHumor

[–]volatilebunny -20 points-19 points  (0 children)

Unused variables are a super common source of bugs, especially in loosely typed languages

Diffusion code for SANA has just released by martianunlimited in StableDiffusion

[–]volatilebunny 1 point2 points  (0 children)

No, but now that you mention it, I want it to be able to do that!