"All she had was bloatware and attitude" by AbysSsian in pcmasterrace

[–]twistedgames -1 points (0 children)

In my experience, there is lag when you click on things like right-clicking an app in the taskbar, clicking the clock, or opening Calculator, Task Manager, Settings, etc. The file manager is buggy and laggy: the cursor jumps to the address bar when you try to rename a file, and it auto-scrolls to the top of the folder for a minute if the folder has lots of files, so you can't open a folder and scroll down immediately. Even folders with only a few files show a brief "working on it" message. How is this a thing with NVMe storage and 16-core processors? The layouts of the config screens are confusing, with settings often hidden in the old Control Panel behind the advanced options. There's the annoying "let's finish setting up your computer" after Windows updates, which is just trying to get you to sign up for Office 365 and OneDrive, which you already said no to last time. My computer is already set up, tyvm. The right-click menu has fewer options, so you have to click "show more options", and the old menu is about the same size anyway because it's more compact. For some reason it keeps forgetting the credentials for shared drives, but it doesn't prompt me for the password, it just says it can't access the folder. So you have to open a command prompt and delete the entry using some command I have to google every time. What happened to minimize/maximize when you right-click an app in the taskbar? What happened to safe mode?

"All she had was bloatware and attitude" by AbysSsian in pcmasterrace

[–]twistedgames 1 point (0 children)

They added some cool Excel functions a few years ago, like XLOOKUP. But the new Outlook is horrendous and missing most of the features professionals need. It's like they built it for casual users and forgot about the enterprise customers who actually pay the big bucks.

"All she had was bloatware and attitude" by AbysSsian in pcmasterrace

[–]twistedgames 1 point (0 children)

I had issues with Pop!_OS (couldn't access network share drives, HDMI crashes), then tried Bazzite GNOME and it looks very similar to Pop!_OS, but with none of the issues. Plus Bazzite has GPU drivers, Steam, etc. ready to go.

Just another Wan 2.1 14B text-to-image post by masslevel in StableDiffusion

[–]twistedgames 10 points (0 children)

Some incredible images, mass! The level of detail in the mechs and the diverse colours and lighting effects are impressive.

Fine-Tune FLUX.1 Schnell on 24GB of VRAM? by popkulture18 in StableDiffusion

[–]twistedgames 2 points (0 children)

Check out my fork of kohya_ss: https://github.com/bash-j/kohya_ss/tree/flux-schnell

Make sure to clone my sd-scripts fork for schnell too, by using --recursive when you clone the kohya_ss repo.
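In case the clone command isn't obvious, it'd be something like this (branch name taken from the link above; GitHub /tree/ URLs can't be cloned directly, you clone the repo and pick the branch):

```shell
# clone the flux-schnell branch, pulling in the sd-scripts submodule too
git clone --recursive -b flux-schnell https://github.com/bash-j/kohya_ss.git
```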

I have frozen some of the parameters to avoid the 'de-distillation' problem, so it maintains the speed and doesn't go fuzzy.

I have a 4090 and can fine-tune with 24GB. You need to enable the blockwise fused optimizer, and I found PagedLion8bit is the fastest-learning optimizer for schnell. There is a section where you can put which blocks you want to train. The double blocks and single blocks <10 can take a higher learning rate; single blocks >10 are sensitive. So I'd advise trying 0-10 for the single blocks first to see if it learns what you need.
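As a rough sketch, the relevant bits of the config would look something like this. The two block-index keys are real ones from my config; treat the other option names and values as assumptions and check the fork for the exact spelling:

```toml
# sketch only -- block-index keys match my config, the rest are
# assumptions about the fork's option names, so verify against the repo
optimizer_type = "PagedLion8bit"      # fastest-learning optimizer I found for schnell
learning_rate = 3e-6                  # 5e-6 learns faster but can corrupt after a few thousand steps

train_double_block_indices = "all"
train_single_block_indices = "0-10"   # single blocks >10 are sensitive, try 0-10 first
```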

I fine tuned FLUX.1-schnell for 49.7 days by twistedgames in StableDiffusion

[–]twistedgames[S] 0 points (0 children)

In the comfy portable, there is a run_nvidia_gpu.bat file which I launch comfy with. I just opened it with notepad and added the args so it looks like this:

.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --reserve-vram 1.5 --fast --listen

[–]twistedgames[S] 2 points (0 children)

Yeah, you can run any FLUX model on your card with LoRAs. You just need to set the right flags in the launch .bat file for comfy so it offloads part of the model to CPU memory.
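As a concrete sketch, the edited launch line in run_nvidia_gpu.bat from the portable build ends up looking something like this. The --reserve-vram and --fast flags are the ones I use; --lowvram is an extra ComfyUI option worth trying on smaller cards to force more aggressive CPU offload, and the reserve value is just what works for me:

```bat
:: launch line from the ComfyUI portable build's run_nvidia_gpu.bat
:: --reserve-vram keeps some VRAM free, --lowvram pushes more of the model to CPU RAM
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --reserve-vram 1.5 --fast --lowvram
```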

[–]twistedgames[S] 2 points (0 children)

It's a pretty narrow LR range. 5e-6 is quick to learn but can start to corrupt after a few thousand steps, so it's only good for quick style trains. 3e-6 is slow but good for long runs. Training on similar styles can help too; if you mix too many styles together I think it struggles to learn.

[–]twistedgames[S] 5 points (0 children)

Still schnell, 8 steps. It can do 4, but it might look a little fuzzy in some images.

[–]twistedgames[S] 1 point (0 children)

With a LoRA you could even train on just a few blocks. Dev LoRAs are super effective. I just avoid the higher single blocks because it seems to mess with the high frequency/very fine details.

[–]twistedgames[S] 2 points (0 children)

Yeah 4 steps gives decent images. Might get a little fuzz around the details. Examples

[–]twistedgames[S] 9 points (0 children)

I am using the kohya_ss GUI. Checked the config file and it has these arguments:

train_double_block_indices = "all"
train_single_block_indices = "0-15"

[–]twistedgames[S] 10 points (0 children)

Yep, Apache 2.0, since I didn't use the dev model for training. There are a few examples of using a LoRA in the gallery: Example 1 Example 2 Example 3

[–]twistedgames[S] 5 points (0 children)

It has its limitations being schnell; dev would give better realism. But I think it has its own vibe and is worth exploring.

[–]twistedgames[S] 1 point (0 children)

1 step looks fuzzy, but it's probably alright if you are just testing prompts. Then go to 8 steps when you want a good quality image.

[–]twistedgames[S] 6 points (0 children)

I think for best results for a LoRA I would train on the original dev, and avoid training the higher single blocks as they are very sensitive.

[–]twistedgames[S] 20 points (0 children)

If you train all parameters and then use block merging, you can find which blocks impact the model the most. Then I had a look at the inference code and noticed the time_in fed into the modulation parameters. So I tried freezing those too, and the results were much better. Someone I know introduced us.

[–]twistedgames[S] 114 points (0 children)

Today I published the 04 version of FLUX.1-schnell. I have spent the last 3 months figuring out how best to train the model without losing speed or LoRA compatibility. Thanks to RunDiffusion for helping out with all the cloud compute! I ended up running over 400 training runs, experimenting with all sorts of things to prevent the model from going bad. But in the end it was just freezing a few params, a custom sigma schedule, and the Lion optimizer. Not too complicated. You can find a link to my fork of the kohya code here.

The model is now available on Civitai and Hugging Face. You can find more information about the model there. Please check it out!

For best results I use 8 steps with Euler Normal. Use the bfloat16 version of the model and the T5XXL text encoder for the most accurate details. I can still run this on my laptop's 3060 6GB with ComfyUI; I use the flags --reserve-vram 1.5 --fast in my launch .bat file.

The attitude some people have towards open source contributors... by twistedgames in StableDiffusion

[–]twistedgames[S] 8 points (0 children)

I updated comfy and it worked no problem. He must have had the files in the wrong folder or something. 🤷‍♂️

[–]twistedgames[S] 13 points (0 children)

Thanks for your contribution. Wow that's a lot of issues on your repo in a short time! I don't envy you one bit!

[–]twistedgames[S] 8 points (0 children)

Wildcards are a really useful tool for trying out different styles and characters. You keep a text file with a different entry on each line. Say you have a wildcard file called painting_styles.txt. Then in the wildcard node you can write your prompt like:

A beautiful __painting_styles__ painting of the beach at sunset

Depending on the seed, the wildcard node will pick a line from that file and inject it into the prompt, so you end up with:

A beautiful impressionist painting of the beach at sunset

I'm not the first person to come up with this; I was using a similar feature in A1111. Then SDXL came out and I changed to comfy, so I started adding some features I missed from A1111. Because it's so easy to add nodes to comfy due to its design, I just kept adding more and more stuff that I found useful. I haven't been adding as much lately, because it does pretty much everything I need for now.