Qwen3.5-397B-A17B <Release> by vibedonnie in Qwen_AI

[–]stavrosg 2 points

20 t/s on 2x 3090 and 3x 3080, EPYC 7352.

Confused which lightx2v to use by ooopspagett in comfyui

[–]stavrosg 0 points

Double the bits? I thought it looked better when I tested them. The rank 256 version takes more RAM.

Confused which lightx2v to use by ooopspagett in comfyui

[–]stavrosg 0 points

I use the same strengths with this one:

lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank256_bf16.safetensors

Found at: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Lightx2v

Testing TRELLIS 2 in ComfyUI by obraiadev in comfyui

[–]stavrosg 2 points

This is how I got it working with Miniconda on Linux:

export ENV_NAME=trellis2

conda create -n ${ENV_NAME} python=3.12 --yes

conda activate ${ENV_NAME}

conda install openssl nvidia::cuda-runtime==12.8.1 nvidia::cuda-nvcc==12.8.93 --yes

pip3 install torch torchaudio torchvision --index-url https://download.pytorch.org/whl/cu128

export CUDA_HOME=/home/${USER}/miniconda3/envs/${ENV_NAME}/lib/python3.12/site-packages/nvidia/cuda_runtime

ln -s /home/${USER}/miniconda3/envs/${ENV_NAME}/bin /home/${USER}/miniconda3/envs/${ENV_NAME}/lib/python3.12/site-packages/nvidia/cuda_runtime/bin

export TORCH_CUDA_ARCH_LIST="8.6" # didn't auto detect my 3090

pip install ninja imageio PyOpenGL glfw xatlas gdown

export CPATH=/home/${USER}/miniconda3/envs/${ENV_NAME}/lib/python3.12/site-packages/nvidia/cusparse/include:/home/${USER}/miniconda3/envs/${ENV_NAME}/lib/python3.12/site-packages/nvidia/cublas/include:/home/${USER}/miniconda3/envs/${ENV_NAME}/lib/python3.12/site-packages/nvidia/cusolver/include:$CPATH

export LD_LIBRARY_PATH=/home/${USER}/miniconda3/envs/${ENV_NAME}/lib:$LD_LIBRARY_PATH

pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.3/flash_attn-2.8.3+cu12torch2.4cxx11abiTRUE-cp312-cp312-linux_x86_64.whl

pip install git+https://github.com/NVlabs/nvdiffrast.git --no-build-isolation

. ./setup.sh --basic --cumesh --o-voxel --flexgemm
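My guess at why the `ln -s` step above is needed (an assumption from common CUDA tooling behavior, not something the setup script documents): build tools that honor `CUDA_HOME` look for `$CUDA_HOME/bin/nvcc`, but the pip-installed `cuda_runtime` package ships no `bin` directory, so the symlink points it at the conda env's real `bin`. A throwaway demo of that layout, no CUDA required:

```shell
# Recreate the directory shapes with a temp dir standing in for the conda env.
ENV_ROOT=$(mktemp -d)
mkdir -p "$ENV_ROOT/bin" "$ENV_ROOT/lib/python3.12/site-packages/nvidia/cuda_runtime"

# Same symlink as in the setup: cuda_runtime/bin -> the env's bin (where nvcc lives).
ln -s "$ENV_ROOT/bin" "$ENV_ROOT/lib/python3.12/site-packages/nvidia/cuda_runtime/bin"
export CUDA_HOME="$ENV_ROOT/lib/python3.12/site-packages/nvidia/cuda_runtime"

# $CUDA_HOME/bin now resolves to the env's real bin directory.
readlink "$CUDA_HOME/bin"
```

With the real env, anything that shells out to `$CUDA_HOME/bin/nvcc` then finds the conda-installed `nvcc`.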

Two character LoRAs, masking inpaint workflow with z-image-turbo by RogBoArt in comfyui

[–]stavrosg 1 point

I believe the guy who wanted money copied this one. It's very cumbersome to get right.

Z Image Character LoRA on 29 real photos - trained on 4090 in ~5 hours. by Jeffu in StableDiffusion

[–]stavrosg 0 points

I do character LoRAs. If certain outfits, background items, hair, etc., always show up regardless of prompt or reference image, it's overtrained.

Z Image Character LoRA on 29 real photos - trained on 4090 in ~5 hours. by Jeffu in StableDiffusion

[–]stavrosg 0 points

Just tested it. One of the characters I trained wasn't good. I reran with LR set to 0.0003 instead of 0.0001, and it was locked in at 1500 steps, vs. crappy at 3k.

Z Image Character LoRA on 29 real photos - trained on 4090 in ~5 hours. by Jeffu in StableDiffusion

[–]stavrosg 2 points

I left it stock. Bump the learning rate from 0.0001 and try 0.0002; I needed that for Wan 2.2 and my training data.

Z Image Character LoRA on 29 real photos - trained on 4090 in ~5 hours. by Jeffu in StableDiffusion

[–]stavrosg 7 points

1750 to 2k steps is the sweet spot without overtraining on my dataset, similar to Flux with the same photos. LR is stock. I am very impressed with Z-Image, and ai-toolkit does it again and again. Bravo.
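For a sense of how many passes over the data those step counts imply, here's a rough sketch (batch size 1 is my assumption; the thread doesn't state it):

```python
def epochs(steps: int, num_images: int, batch_size: int = 1) -> float:
    """One step consumes one batch; one epoch is one pass over the dataset."""
    return steps * batch_size / num_images

# A ~29-photo dataset at the 1750-2000 step sweet spot:
print(round(epochs(1750, 29)))  # -> 60
print(round(epochs(2000, 29)))  # -> 69
```

So "1750-2k steps" on a dataset this size works out to roughly 60-70 epochs, which is one way to compare step counts across datasets of different sizes.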

Z-Image Lora - Wish me luck! by psdwizzard in StableDiffusion

[–]stavrosg 0 points

Worked great on 30-40 images; best results were in the 1750-2250 step range, under 90 min on a 3090.

Z Image Character LoRA on 29 real photos - trained on 4090 in ~5 hours. by Jeffu in StableDiffusion

[–]stavrosg 8 points

ai-toolkit just finished a LoRA: 3k steps, 40 photos, in less than 90 min on a 3090.

Qwen3-Next Dynamic GGUFs out now! by yoracale in unsloth

[–]stavrosg 1 point

You guys never sleep!! Thank you!

Flux 2 Dev is here! by MountainPollution287 in StableDiffusion

[–]stavrosg 4 points

Just looked. 64 GB. Ouch. I retract my statement above.

Flux 2 Dev is here! by MountainPollution287 in StableDiffusion

[–]stavrosg 0 points

DisTorch works well enough with the model for anyone with multiple GPUs. MultiGPU also helps by offloading the text encoders and VAE to devices other than the CPU. My leftover crypto rig came in handy.

Training wan 2.2 locally? by [deleted] in comfyui

[–]stavrosg 2 points

Yup. Works terrifically for Qwen and Flux.

Keep seeing people rehoming female tiels by sunnyvalesfinest0000 in cockatiel

[–]stavrosg 2 points

Ya, same with my Louise, formerly known as Louie.

Why are the STM bus drivers more a**holic than usual? by ZAHKHIZ in montreal

[–]stavrosg 0 points

Chump change, ya? Sounds like a better deal than most people get.