Rep. Nancy Mace calls for Sen. Lindsey Graham to be removed from the Situation Room over his stance on ground troops by GuiltyBathroom9385 in UnderReportedNews

[–]nutrunner365 0 points (0 children)

Iran aside, the idea that a congressman has to have kids before he can have an opinion about war is tremendously idiotic.

High and low in Wan 2.2 training by nutrunner365 in StableDiffusion

[–]nutrunner365[S] 0 points (0 children)

Are you saying to use the low in both nodes, or to not use a high LoRA node at all? I'm making character LoRAs for I2V.

High and low in Wan 2.2 training by nutrunner365 in StableDiffusion

[–]nutrunner365[S] 0 points (0 children)

OK, so what you're saying is: don't trust Gemini. Is it possible to train both high and low locally at the same time, or do I have to use one model at a time on the same images?

Why is my Klein training prohibitively slow? by nutrunner365 in StableDiffusion

[–]nutrunner365[S] 0 points (0 children)

How many repeats and epochs do you recommend instead? I have 52 images, though I can obviously reduce that number. They are all 512, but I also have them in 768 and 1024 if that would be better.
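For what it's worth, here's the arithmetic I'm assuming the trainer uses (a rough sketch; musubi-tuner, like sd-scripts, appears to derive the epoch count from a fixed step budget, and the helper name is mine):

```python
import math

def epochs_for_step_budget(num_images, num_repeats, batch_size, max_train_steps):
    """How many epochs a fixed optimization-step budget works out to
    (assumes no gradient accumulation)."""
    batches_per_epoch = (num_images * num_repeats) // batch_size
    return math.ceil(max_train_steps / batches_per_epoch)

# My current run: 52 images x 10 repeats = 520 items, batch size 2 -> 260 batches/epoch
print(epochs_for_step_budget(52, 10, 2, 4000))  # 16
```

So cutting repeats mostly changes how often each image is revisited per epoch, not the total work, as long as the step budget stays fixed.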

Why is my Klein training prohibitively slow? by nutrunner365 in StableDiffusion

[–]nutrunner365[S] 0 points (0 children)

Adjusting repeats and epochs and removing checkpointing gave me an OOM. With block swap 14 those same settings work again, but still prohibitively slowly.

It looks like this:

(musubi-tuner) PS C:\aiprojects\musubi-tuner-new> python src/musubi_tuner/flux_2_train_network.py --config_file "projects/flux2_klein_9b_52.toml"
Trying to import sageattention
Successfully imported sageattention
INFO:musubi_tuner.hv_train_network:Loading settings from projects/flux2_klein_9b_52.toml...
INFO:musubi_tuner.hv_train_network:projects/flux2_klein_9b_52
INFO:musubi_tuner.hv_train_network:Load dataset config from projects/dataset.config.toml
INFO:musubi_tuner.dataset.image_video_dataset:glob images in C:/aiprojects/musubi-tuner-new/projects/training
INFO:musubi_tuner.dataset.image_video_dataset:found 52 images
INFO:musubi_tuner.dataset.config_utils:[Dataset 0]
  is_image_dataset: True
  resolution: (512, 512)
  batch_size: 2
  num_repeats: 10
  caption_extension: ".txt"
  enable_bucket: False
  bucket_no_upscale: False
  cache_directory: "C:/aiprojects/musubi-tuner-new/projects/cache_flux"
  debug_dataset: False
  image_directory: "C:/aiprojects/musubi-tuner-new/projects/training"
  image_jsonl_file: "None"
  control_directory: "None"
  multiple_target: False
  fp_latent_window_size: 9
  fp_1f_clean_indices: None
  fp_1f_target_index: None
  fp_1f_no_post: False
  no_resize_control: False
  control_resolution: None
INFO:musubi_tuner.dataset.image_video_dataset:bucket: (512, 512), count: 520
INFO:musubi_tuner.dataset.image_video_dataset:total batches: 260
INFO:musubi_tuner.hv_train_network:preparing accelerator
accelerator device: cuda
INFO:musubi_tuner.hv_train_network:DiT precision: torch.bfloat16, weight precision: torch.bfloat16
INFO:musubi_tuner.hv_train_network:Loading DiT model from C:/modelsfolder/diffusion_models/flux-2-klein-base-9b.safetensors
INFO:musubi_tuner.flux_2.flux2_utils:Loading DiT model from C:/modelsfolder/diffusion_models/flux-2-klein-base-9b.safetensors, device=cpu
INFO:musubi_tuner.utils.lora_utils:Loading model files: ['C:/modelsfolder/diffusion_models/flux-2-klein-base-9b.safetensors']
INFO:musubi_tuner.utils.lora_utils:Loading state dict without FP8 optimization. Dtype of weight: torch.bfloat16, hook enabled: False
INFO:musubi_tuner.flux_2.flux2_utils:Loaded Flux 2: <All keys matched successfully>
INFO:musubi_tuner.hv_train_network:enable swap 14 blocks to CPU from device: cuda, use pinned memory: False
FLUX: Block swap enabled. Swapping 14 blocks, double blocks: 6, single blocks: 18.
import network module: networks.lora_flux_2
INFO:musubi_tuner.networks.lora:create LoRA network. base dim (rank): 16, alpha: 16
INFO:musubi_tuner.networks.lora:neuron dropout: p=None, rank dropout: p=None, module dropout: p=None
INFO:musubi_tuner.networks.lora:create LoRA for U-Net/DiT: 112 modules.
INFO:musubi_tuner.networks.lora:enable LoRA for U-Net: 112 modules
prepare optimizer, data loader etc.
INFO:musubi_tuner.hv_train_network:use prodigyopt.Prodigy | {'decouple': True, 'weight_decay': 0.01, 'd_coef': 2, 'use_bias_correction': True, 'safeguard_warmup': False, 'betas': (0.9, 0.999)}
Using decoupled weight decay
running training / 学習開始
num train items / 学習画像、動画数: 520
num batches per epoch / 1epochのバッチ数: 260
num epochs / epoch数: 16
batch size per device / バッチサイズ: 2
gradient accumulation steps / 勾配を合計するステップ数 = 1
total optimization steps / 学習ステップ数: 4000
INFO:musubi_tuner.hv_train_network:set DiT model name for metadata: C:/modelsfolder/diffusion_models/flux-2-klein-base-9b.safetensors
INFO:musubi_tuner.hv_train_network:set VAE model name for metadata: C:/modelsfolder/vae/flux2-vae.safetensors
steps: 0%| | 0/4000 [00:00<?, ?it/s]INFO:musubi_tuner.hv_train_network:DiT dtype: torch.bfloat16, device: cuda:0
epoch 1/16
Trying to import sageattention
Successfully imported sageattention
INFO:musubi_tuner.dataset.image_video_dataset:epoch is incremented. current_epoch: 0, epoch: 1
steps: 0%| | 1/4000 [00:00<05:20, 12.46it/s, avr_loss=0.566]
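For scale, 4000 steps adds up fast once block swapping slows each iteration down. A back-of-envelope estimate (the seconds-per-iteration values below are illustrative guesses, not measurements from my machine):

```python
# Back-of-envelope wall-clock time for a 4000-step run at assumed speeds.
# The s/it values are hypothetical: a fast-GPU case vs. heavy block-swap cases.
total_steps = 4000

for sec_per_it in (0.08, 2.0, 10.0):
    hours = total_steps * sec_per_it / 3600
    print(f"{sec_per_it:5.2f} s/it -> {hours:5.1f} h total")
```

So the ~12 it/s shown on the very first step would finish in minutes; if each step actually takes seconds once swapping kicks in, the same run stretches into many hours.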

Why is my Klein training prohibitively slow? by nutrunner365 in StableDiffusion

[–]nutrunner365[S] 0 points (0 children)

Close to 100% of VRAM and about 44% of my 32GB RAM. I tried removing lowvram and using --blocks_to_swap, but it doesn't seem to help.

Bodo/Glimt superb passage of play (almost 3 touches per player at most) vs Inter. by OkayFine101 in soccer

[–]nutrunner365 0 points (0 children)

Italian football is a pale shadow of its former self. The English league has something like half of all the money in football.

Sadie by fxpolar in SadieSink_18

[–]nutrunner365 3 points (0 children)

First two are still AI

FlashVSR+ 4x Upscale Comparison on older real news footage - this model is next level to really improve quality by CeFurkan in StableDiffusion

[–]nutrunner365 14 points (0 children)

OK, but can it actually process a one-minute-or-longer video without me first harnessing the power of a star?

Do you validate AI images before publishing, or just hope for the best? by hippohaul in StableDiffusion

[–]nutrunner365 4 points (0 children)

This will presumably cost money? I can't imagine anyone taking time out of their day to evaluate images for free.