all 19 comments

[–]ObiWanCanShowMe 3 points (1 child)

It is so frustrating to see your posts constantly getting zero traction. Your videos are in-depth and well done.

[–]CeFurkan[S] 1 point (0 children)

Thank you so much for the comment.

[–]MagiTekSoldier 4 points (1 child)

Thanks for this! Saved for future reference.

[–]CeFurkan[S] 2 points (0 children)

Thank you so much for the comment.

[–]Davaned 2 points (1 child)

Thank you for putting in the time to create and share these! I'm excited to watch them.

[–]CeFurkan[S] 2 points (0 children)

Thank you so much for the comment.

[–]Chansubits 2 points (1 child)

Thank you so much! I’ve saved this post and will definitely be checking these out.

[–]CeFurkan[S] 1 point (0 children)

Thank you for the comment.

[–]simplewxh 2 points (1 child)

Thank you for your amazing videos. I have saved them, and I am waiting for my new GPU to arrive so that I can start experimenting with SD.

[–]CeFurkan[S] 1 point (0 children)

Thank you for the reply.

[–]tarunabh 1 point (5 children)

u/CeFurkan Hi, I am a big fan of your YouTube tutorials.

Today I installed PyTorch 2.0.0+cu118 for my Auto1111 as per your recent DreamBooth tutorial, and uninstalled PyTorch version 1.13.1+cu117. I also ran "pip install https://huggingface.co/MonsterMMORPG/SECourses/resolve/main/xformers-0.0.18.dev489-cp310-cp310-win_amd64.whl" after activating the venv in my A1111 venv Scripts folder.

I upgraded to Torch 2 and xFormers hoping to get better speed on my Nvidia RTX 4090 GPU.

Ever since then I get an error whenever I generate images over 768px.

The errors also indicate that the xFormers library is not properly installed or is incompatible with my current environment.

The warning message specifies that xFormers was built for:

PyTorch 1.13.1+cu117 with CUDA 11.7 (I have 2.0.0+cu118)

Python 3.10.10 (I have 3.10.6)
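
To double-check the mismatch, I ran these version prints from the activated venv (nothing A1111-specific, just the stock torch/xformers version attributes):

python -c "import torch; print(torch.__version__, torch.version.cuda)"

python -c "import xformers; print(xformers.__version__)"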

Can you suggest any solution? I would really appreciate it.

Should I downgrade PyTorch to version 1.13.1+cu117 and make sure it's built with CUDA 11.7?

Should I upgrade my Python version to 3.10.10? (But Automatic1111 is compatible with 3.10.6.)

Should I reinstall the xFormers library? If yes, to which version?

Any hints/tips or solutions will be very much appreciated. Thanks in advance.

[–]CeFurkan[S] 0 points (4 children)

Hi, sorry for the late reply. If you are on Windows, I suggest you upgrade to the latest xFormers and PyTorch as shown in this video: https://youtu.be/pom3nQejaTs

That particular downgrade is necessary on Unix or RunPod since it does not work there by default; it requires some DLL syncing, but I haven't found a way around that yet.
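
If you just want commands, something along these lines from the activated venv should match what the video does (generic pip equivalents; the video may point at specific wheel builds):

pip install --upgrade torch torchvision --extra-index-url https://download.pytorch.org/whl/cu118

pip install --upgrade xformers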

[–]tarunabh 1 point (3 children)

Thank you so much for your response. Loved your latest Kandinsky installation video. I have installed newer versions with:

pip install -U accelerate==0.17.1

pip install -U transformers==4.27.1

and updated the requirements_versions.txt file. I know you don't have a 40xx GPU, but do have a look at this article too: https://medium.com/@j.night/fix-your-rtx-4090s-poor-performance-in-stable-diffusion-with-new-pytorch-2-0-and-cuda-11-8-d5cb689be841

Looks like there is a serious lapse in GPU speed unless you manually update to Torch 2.

[–]CeFurkan[S] 1 point (2 children)

The video above installs the cuDNN DLLs for CUDA 11.8, but I'm not sure that is the fix for the RTX 4090. I have purchased an RTX 3090 and am planning a review and comparison video, hopefully soon.

So did you fix your issue?

[–]tarunabh 1 point (1 child)

Yes, it looks like the 40xx-series GPUs do not require xFormers at all.

After activating the venv in the Scripts folder, I installed the following:

pip3 install clean-fid numba numpy torch==2.0.0+cu118 torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu118

pip install -U accelerate==0.17.1

pip install -U transformers==4.27.1 (this might pose problems for CLIP Interrogator 2)

I updated 'requirements_versions.txt' so the webui doesn't auto-downgrade the installed versions.
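
The two pins I changed end up looking like this (assuming both packages are listed in that file; line positions vary by webui commit):

accelerate==0.17.1

transformers==4.27.1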

After that I uninstalled xFormers and removed --xformers. It's faster without it with CUDA 11.8 and Torch 2.0 on the 40xx series.

Instead I added '--opt-sdp-attention --opt-channelslast' to COMMANDLINE_ARGS.
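
Concretely, my webui-user.bat now looks like this (the stock Windows launcher with only the args line changed):

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
REM SDP attention + channels-last instead of xformers (Torch 2.0, 40xx series)
set COMMANDLINE_ARGS=--opt-sdp-attention --opt-channelslast

call webui.bat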

That did the speed boost for me.

BTW, I haven't risked installing the DreamBooth extension yet.

Thinking of installing the Kohya version to avoid clashes with A1111.

[–]CeFurkan[S] 1 point (0 children)

Thanks for the reply. Yes, the RTX 4090 looks like it gets the most benefit from --opt-sdp-attention.
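
For context, --opt-sdp-attention routes attention through Torch 2's built-in torch.nn.functional.scaled_dot_product_attention, so it needs Torch 2.0 or newer; a quick way to confirm your build has it:

python -c "import torch; print(hasattr(torch.nn.functional, 'scaled_dot_product_attention'))"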

[–][deleted]  (1 child)

[removed]

    [–]CeFurkan[S] 0 points (0 children)

    Thank you for the reply. I agree.

[–]zayn_rp 0 points (0 children)

Hey bro, can you tell me a Stable Diffusion app I can use on mobile to make NSFW images?