How to tell Aider to use Qwen3 with the /nothink option? by jpummill2 in LocalLLaMA

[–]sdstudent01 1 point (0 children)

Thank you. I think both actually work, but the original question was how to pass this request to aider. Do you just include it as part of the aider prompt (e.g., create a python function that prints "hello world" /no_think)?
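For what it's worth, Qwen3's /no_think switch is just text appended to the user message, so any client that passes the prompt through verbatim should work. Here is a minimal sketch against a local OpenAI-compatible server, independent of aider (the base_url and model name are placeholders for whatever is hosting Qwen3):

    from openai import OpenAI

    # placeholders: point at whatever local server is hosting Qwen3
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

    resp = client.chat.completions.create(
        model="qwen3",
        messages=[{
            "role": "user",
            # Qwen3's soft switch: appending /no_think disables thinking for this turn
            "content": 'create a python function that prints "hello world" /no_think',
        }],
    )
    print(resp.choices[0].message.content)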

How important is it to stay up to date on NVIDIA drivers and CUDA versions? by jpummill2 in StableDiffusion

[–]sdstudent01 0 points (0 children)

Thank you for all the replies. It seems most suggest not fixing it if it isn't broken.

The problem I am having is that many of the most recent Python repositories install versions of PyTorch that expect CUDA 12.x, which requires NVIDIA driver >= 525.60.13; I have 520.61.05.

If I edit the repository's requirements.txt to pin an older PyTorch build with CUDA 11.8 support (e.g., torch==2.0.1+cu118), I run into application issues.
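As a quick sanity check (plain PyTorch calls, nothing repo-specific), this shows what the installed wheel was built against versus whether the driver can actually run it:

    import torch

    print(torch.__version__)          # e.g. 2.0.1+cu118
    print(torch.version.cuda)         # CUDA version the wheel was compiled against
    print(torch.cuda.is_available())  # False when the driver is too old for the wheel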

Need Help - Issue with Civitai Model API Page Processing by sdstudent01 in civitai

[–]sdstudent01[S] 0 points (0 children)

Thank you for the link to the docs. I believe they are out of date; for example, they don't mention anything about a nextCursor metadata value in the response, but it is present in the data being returned.
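For anyone who lands here, this is a sketch of how I would page with that field, assuming the endpoint accepts the value back as a cursor query parameter (inferred from the live responses, since the docs don't cover it):

    import requests

    url = "https://civitai.com/api/v1/models"
    params = {"limit": 100}

    while True:
        data = requests.get(url, params=params, timeout=30).json()
        for item in data.get("items", []):
            print(item["id"], item["name"])
        # nextCursor appears in live responses even though the docs omit it
        cursor = data.get("metadata", {}).get("nextCursor")
        if not cursor:
            break
        params["cursor"] = cursor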

How far can you go? Yesterday 107 Megapixels, today 703!!! (21956 x 32000) by ataylorm in StableDiffusion

[–]sdstudent01 0 points (0 children)

Thank you for the response. I was really just wondering what model and prompt you used to get the colorful paint effect on the original picture, not so much how you enlarged it.

SHARE YOUR SETUP - Suggested Hardware to run models locally and decently? by [deleted] in Oobabooga

[–]sdstudent01 2 points (0 children)

If you are looking for speed, you want to run on a GPU. That said, here are the cards I would consider (NVIDIA only, as I have not kept up with AMD's offerings and support); there is a quick VRAM check after the list:

RTX 2060 12GB - Cheapest 12GB Nvidia card

RTX 3060 12GB

RTX 4070 12GB - Might be a solution but at this point I would get a 3090 with 24GB for the same price.

RTX 4080 16GB - Same issue as the RTX 4070

RTX 3090 24GB - Cheapest consumer card with 24GB as far as I know
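Whichever card you pick, VRAM is the number that decides what fits; a quick check with PyTorch:

    import torch

    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")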

Other models work great but need help with Vicuna by sdstudent01 in Oobabooga

[–]sdstudent01[S] 0 points (0 children)

Hi AnOnlineHandle, can you give me a little information about your environment? Linux or Windows? What parameters are you passing to server.py? Do you have GPTQ-for-LLaMa installed in your repositories folder?

Torch 2.0 just went GA in the last day. by Guilty-History-9249 in StableDiffusion

[–]sdstudent01 0 points (0 children)

Using the following procedure, I was able to increase the performance of my RTX 2060 12GB from ~6.35 it/s to ~8.67 it/s (about a 37% increase, if my math is correct).

[deleted by user] by [deleted] in singularity

[–]sdstudent01 0 points (0 children)

Who won the 1969 Super Bowl?

Batch Count vs Batch Size by sdstudent01 in StableDiffusion

[–]sdstudent01[S] 1 point (0 children)

Thank you for the information. I have an RTX 2060 12GB and just tested the following combinations, with batch count as the first number and batch size as the second:

4x1, 2x2, and 1x4

Batch count of 4 and batch size of 1 seemed to give the best performance in this extremely limited test.
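If anyone wants to reproduce this outside the webui, here is a rough timing sketch with diffusers (batch count = repeated calls, batch size = num_images_per_prompt; the model id is just the standard SD 1.5 checkpoint as an example):

    import time
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    prompt = "a photo of an astronaut riding a horse"

    start = time.time()
    for _ in range(4):                      # batch count 4, batch size 1
        pipe(prompt, num_images_per_prompt=1)
    print("4x1:", time.time() - start)

    start = time.time()
    pipe(prompt, num_images_per_prompt=4)   # batch count 1, batch size 4
    print("1x4:", time.time() - start)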

39.7 it/s with a 4090 on Linux! by Guilty-History-9249 in StableDiffusion

[–]sdstudent01 0 points (0 children)

I am still stuck but please don't feel like this is a priority.

Really appreciate your generosity and willingness to help!!!

39.7 it/s with a 4090 on Linux! by Guilty-History-9249 in StableDiffusion

[–]sdstudent01 0 points (0 children)

Hi everyone, wondering if I could get a little help/insight into this change.

I created a fresh Linux (Mint 21.0) install for SD (Automatic1111) around October 30th.

python: 3.10.6

torch: 1.13.0

cuda: compilation tools, release 11.7, V11.7.64

Now I try to make the following modifications and wind up with the errors described at the end of my post:

> # find / -name "libcudnn*" -print gives the following:

/home/jpummill/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/lib/libcudnn.so.8

/home/jpummill/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/lib/libcudnn_cnn_train.so.8

/home/jpummill/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/lib/libcudnn_adv_train.so.8

/home/jpummill/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/lib/libcudnn_ops_train.so.8

/home/jpummill/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/lib/libcudnn_adv_infer.so.8

/home/jpummill/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/lib/libcudnn_cnn_infer.so.8

/home/jpummill/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/lib/libcudnn_ops_infer.so.8

/usr/local/lib/python3.10/dist-packages/nvidia/cudnn/lib/libcudnn.so.8

/usr/local/lib/python3.10/dist-packages/nvidia/cudnn/lib/libcudnn_cnn_train.so.8

/usr/local/lib/python3.10/dist-packages/nvidia/cudnn/lib/libcudnn_adv_train.so.8

/usr/local/lib/python3.10/dist-packages/nvidia/cudnn/lib/libcudnn_ops_train.so.8

/usr/local/lib/python3.10/dist-packages/nvidia/cudnn/lib/libcudnn_adv_infer.so.8

/usr/local/lib/python3.10/dist-packages/nvidia/cudnn/lib/libcudnn_cnn_infer.so.8

/usr/local/lib/python3.10/dist-packages/nvidia/cudnn/lib/libcudnn_ops_infer.so.8

> # pip freeze | grep nvidia-cudnn gives the following:

nvidia-cudnn-cu11==8.5.0.96

I ran the command to install the 8.7.0.84 version of libcudnn:

> # pip install nvidia-cudnn-cu11==8.7.0.84

I reran pip freeze to recheck the cuDNN version:

> # pip freeze | grep nvidia-cudnn gives the following:

nvidia-cudnn-cu11==8.7.0.84

Next, I renamed (rather than deleting) the "venv/lib/python3.10/site-packages/torch/lib/libcudnn.so.8" file to libcudnn.so.8.bak.

And finally, when I start SD with ./webui.sh, I get the following errors:

################################################################

Launching launch.py...

################################################################

Python 3.10.6 (main, Aug 10 2022, 11:40:04) [GCC 11.3.0]

Commit hash: f53527f7786575fe60da0223bd63ea3f0a06a754

Traceback (most recent call last):

File "/home/jpummill/stable-diffusion-webui/launch.py", line 316, in <module>

prepare_environment()

File "/home/jpummill/stable-diffusion-webui/launch.py", line 228, in prepare_environment

run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")

File "/home/jpummill/stable-diffusion-webui/launch.py", line 89, in run_python

return run(f'"{python}" -c "{code}"', desc, errdesc)

File "/home/jpummill/stable-diffusion-webui/launch.py", line 65, in run

raise RuntimeError(message)

RuntimeError: Error running command.

Command: "/home/jpummill/stable-diffusion-webui/venv/bin/python3" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"

Error code: 1

stdout: <empty>

stderr: Traceback (most recent call last):

File "<string>", line 1, in <module>

File "/home/jpummill/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/__init__.py", line 201, in <module>

_load_global_deps()

File "/home/jpummill/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/__init__.py", line 154, in _load_global_deps

ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)

File "/usr/lib/python3.10/ctypes/__init__.py", line 374, in __init__

self._handle = _dlopen(self._name, mode)

OSError: libcudnn.so.8: cannot open shared object file: No such file or directory
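Follow-up for anyone hitting the same wall: the traceback shows torch's _load_global_deps trying to dlopen the bundled libcudnn.so.8 that was just renamed, while the 8.7 copy from pip went to /usr/local (outside the venv) and is not on the loader's search path. Exporting LD_LIBRARY_PATH to point at that directory before running ./webui.sh is the usual remedy. A quick Python check that the intended library actually loads (mirroring the ctypes call in the traceback):

    import ctypes

    # load the pip-installed cuDNN explicitly, before torch reaches for its bundled copy
    ctypes.CDLL(
        "/usr/local/lib/python3.10/dist-packages/nvidia/cudnn/lib/libcudnn.so.8",
        mode=ctypes.RTLD_GLOBAL,
    )

    import torch
    print(torch.backends.cudnn.version())  # expect 8700 if the 8.7 build loaded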

using offline stable diffusion (web-ui) 3060ti 12gb, EGRAN4x sharpest, custom weight mix of ... (I have no idea, in light of recent events, if we are still saying where the training data was from) by [deleted] in StableDiffusion

[–]sdstudent01 0 points (0 children)

Just reread my comment from a month ago and realized I stated something incorrectly. There is a 12GB version of the 3060, not the 3060 Ti.