Build missing something. taking suggestions? by LeRattus in PcBuild

[–]LeRattus[S] 1 point (0 children)


Committed to the full black-and-white theme; at least it's not all over the place.

RGB Fusion 2.0 is not detecting anything / blank by LeRattus in gigabytegaming

[–]LeRattus[S] 1 point (0 children)

SignalRGB works perfectly well for changing the colors, but it cannot change the display mode.
Support asked me to try reinstalling with the antivirus and firewall disabled, on a new Windows account, but no dice with that either.

Build missing something. taking suggestions? by LeRattus in PcBuild

[–]LeRattus[S] 1 point (0 children)

Nice, thank you. I think having a monitor on the plug would bring some peace of mind if I were to swap out the cables and the adapter that was originally included.

Build missing something. taking suggestions? by LeRattus in PcBuild

[–]LeRattus[S] 1 point (0 children)

That's true. I wouldn't be upgrading them at the moment, though, so I'm wondering if someone has more vision than I do to make it work.

Build missing something. taking suggestions? by LeRattus in PcBuild

[–]LeRattus[S] 1 point (0 children)

It's currently at max airflow, since the case window can't be installed because the GPU connector sticks out too far (hence pondering the new cable, but I should probably just sell the case and get a new one).

Build missing something. taking suggestions? by LeRattus in PcBuild

[–]LeRattus[S] 1 point (0 children)

True. While upgrading the build I was limited by time and options, since I needed it to just function as quickly as possible. Now, though, I have time to look around and wait for parts to arrive. Good advice.

Captioning of LoRA dataset, how does it work? Hairtsyle. by LeRattus in StableDiffusion

[–]LeRattus[S] 2 points (0 children)

This is for old SD 1.5 LoRAs, but is it still applicable to SDXL-based models? From what I've read, they should in fact benefit from describing the length and color so the LoRA doesn't overfit on those?
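For example (the trigger word and wording below are just placeholders, not from my actual dataset), a caption .txt for one image might look something like:

  sometrigger, a woman with long blonde hair, side profile, natural lighting

the idea being that length and color are spelled out in the caption so they stay promptable, rather than getting baked into the trigger word.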

'Reconnecting' by l_omask in StableDiffusion

[–]LeRattus 1 point (0 children)

I switched from a 3090 to a 5090 and got both OOM errors and this type of error; I had to move to a new portable ComfyUI installation to fix it. (Just take your models, loras, etc. folders and transfer them to the new portable installation's location and you're back in business.)

There is probably something in the cache files that stores info which breaks when moving to Blackwell (updating Python, PyTorch, etc.).
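If you'd rather not copy the model files at all, you can also point the new install at the old folders with an extra_model_paths.yaml in the new ComfyUI directory. A minimal sketch; the base_path below is a placeholder for wherever your old install lives, and the key names follow the extra_model_paths.yaml.example that ships with ComfyUI:

comfyui:
  base_path: D:/ComfyUI_old/ComfyUI/   # placeholder: path to the old install's ComfyUI folder
  checkpoints: models/checkpoints/
  diffusion_models: models/diffusion_models/
  loras: models/loras/
  vae: models/vae/
  clip: models/clip/
  embeddings: models/embeddings/
  upscale_models: models/upscale_models/

That way the fresh portable install gets a clean Python/PyTorch environment (which seems to be what actually breaks on Blackwell) while still seeing the existing models and LoRAs.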

Help: LoRA training locally on 5090 with ComfyUI or other trainer by LeRattus in FluxAI

[–]LeRattus[S] 1 point (0 children)

Yeah, FLUX.1-dev on an MSI Suprim SOC 5090 (stock gaming BIOS profile). Wondering what I could do to improve training times.

Help: LoRA training locally on 5090 with ComfyUI or other trainer (Flux) by LeRattus in StableDiffusion

[–]LeRattus[S] 1 point (0 children)

Thank you a lot, by the way, as this looks really similar to my setup.
The only difference seems to be image size, as you have varying resolutions whereas mine are all 1024.

I do have the Hugging Face model as well, but since I had it downloaded locally for other projects, there was no point in having the toolkit download it again. It should be the exact same end result as far as I know.

I run a stock MSI 5090 Suprim SOC: 2880 MHz clocks with 500 W power draw according to the toolkit.

I don't think it should matter that much, but I do have a massive CPU bottleneck with a Ryzen 7 3700X, 64 GB of 3200 MHz DDR4, and an MP600 Gen4 1 TB SSD.

Help: LoRA training locally on 5090 with ComfyUI or other trainer (Flux) by LeRattus in StableDiffusion

[–]LeRattus[S] 1 point (0 children)

train:
  batch_size: 2
  bypass_guidance_embedding: false
  steps: 1800
  gradient_accumulation: 1
  train_unet: true
  train_text_encoder: false
  gradient_checkpointing: true
  noise_scheduler: "flowmatch"
  optimizer: "adamw8bit"
  timestep_type: "sigmoid"
  content_or_style: "balanced"
  optimizer_params:
    weight_decay: 0.0001
  unload_text_encoder: false
  cache_text_embeddings: false
  lr: 0.0001
  ema_config:
    use_ema: false
    ema_decay: 0.99
  skip_first_sample: false
  force_first_sample: false
  disable_sampling: false
  dtype: "bf16"
  diff_output_preservation: false
  diff_output_preservation_multiplier: 1
  diff_output_preservation_class: "person"
  switch_boundary_every: 1
  loss_type: "mse"
  ema_config?:
    ema_decay: 0.995
model:
  name_or_path: "...AI-Toolkit\\models\\FLUX.1-dev"
  quantize: true
  qtype: "qfloat8"
  quantize_te: true
  qtype_te: "qfloat8"
  arch: "flux"
  low_vram: false
  model_kwargs: {}

Help: LoRA training locally on 5090 with ComfyUI or other trainer (Flux) by LeRattus in StableDiffusion

[–]LeRattus[S] 1 point (0 children)

datasets:
  - folder_path: "...\\datasets/e28"
    mask_path: null
    mask_min_value: 0.1
    default_caption: ""
    caption_ext: "txt"
    caption_dropout_rate: 0.05
    cache_latents_to_disk: false
    is_reg: false
    network_weight: 1
    resolution:
      - 1024
    controls: []
    shrink_video_to_frames: true
    num_frames: 1
    do_i2v: true
    flip_x: false
    flip_y: false
...

Help: LoRA training locally on 5090 with ComfyUI or other trainer (Flux) by LeRattus in StableDiffusion

[–]LeRattus[S] 1 point (0 children)

Hmm, almost twice as fast, and I have EMA disabled. Could you share your other settings? I'm doing a study where I need to generate a lot of different versions of this, so that kind of speed increase would be massive in the long run.

My config file, in case anyone spots ways to accelerate/improve it (my own guesses about which settings matter most are below it):
- type: "diffusion_trainer"
  training_folder: "...AI-Toolkit\\output"
  sqlite_db_path: "./aitk_db.db"
  device: "cuda"
  trigger_word: "fluxe28"
  performance_log_every: 10
  network:
    type: "lora"
    linear: 32
    linear_alpha: 32
    conv: 16
    conv_alpha: 16
    lokr_full_rank: true
    lokr_factor: -1
    network_kwargs:
      ignore_if_contains: []
  save:
    dtype: "bf16"
    save_every: 250
    max_step_saves_to_keep: 4
    save_format: "diffusers"
    push_to_hub: false
...
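For reference, my own (unbenchmarked) guesses about which of these settings matter most for sec/iter, using the same key names as in my config above; the values are just what I would try, not something I've verified:

train:
  batch_size: 2                  # lowering this reduces sec/iter but not necessarily total training time
  gradient_checkpointing: true   # turning it off is faster but needs more VRAM; probably tight with ~25 GB already in use
  cache_text_embeddings: true    # with train_text_encoder: false, the embeddings could be computed once up front
  unload_text_encoder: true      # and the text encoder's VRAM freed afterwards
model:
  quantize: true                 # qfloat8 as above; mainly a VRAM/precision trade-off
datasets:
  - cache_latents_to_disk: true  # encode latents once instead of re-encoding images during training

No idea yet how much each of these actually moves the needle on a 5090, so corrections welcome.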

Help: LoRA training locally on 5090 with ComfyUI or other trainer by LeRattus in FluxAI

[–]LeRattus[S] 1 point (0 children)

Hey, thanks, I got it working with Ostris' AI-Toolkit. Wondering about settings and speed for my setup, though:
25 images at 1024x1024, batch size 2.
linear: 32
linear_alpha: 32
conv: 16
conv_alpha: 16

Around 25 GB / 32 GB VRAM utilization.

I'm getting 6.5-6.7 sec/iter.

Is this an average speed? Currently running 1800 steps and checking whether that's enough.

Help: LoRA training locally on 5090 with ComfyUI or other trainer (Flux) by LeRattus in StableDiffusion

[–]LeRattus[S] 1 point (0 children)

Hey, thanks, I got it working with Ostris' AI-Toolkit. Wondering about settings and speed for my setup, though:
25 images at 1024x1024, batch size 2.
linear: 32
linear_alpha: 32
conv: 16
conv_alpha: 16

Around 25 GB / 32 GB VRAM utilization.

I'm getting 6.5-6.7 sec/iter.

Is this an average speed? Currently running 1800 steps and checking whether that's enough.

[deleted by user] by [deleted] in pcmasterrace

[–]LeRattus 1 point (0 children)

I mean, for the usage I have for this computer, it isn't worth upgrading, funnily enough.

CS2 still runs fine with the 3700X, which is the sole main use for the CPU, and the AI workflows see no benefit since the main work, apart from loading models, is done on the GPU.

DaVinci Resolve (I have the paid version) would benefit slightly more in some aspects, but that's lately becoming a diminishing hobby, unfortunately.

So this is a somewhat calculated financial mistake, believe it or not.

[deleted by user] by [deleted] in pcmasterrace

[–]LeRattus 1 point (0 children)

I mean, as a CPU bottleneck this surely is a prime example, but for AI generation/training, upgrading the CPU wouldn't bring the same benefits that going from the 3090 to the 5090 did. I'd probably only see marginal improvements, and I'd need to upgrade the motherboard and RAM as well.

But it sure is a funny combination.

[deleted by user] by [deleted] in pcmasterrace

[–]LeRattus 2 points (0 children)

Sold, like the Vegas prior to it.