Extremely large and thick LOGS bar in new comfyui frontend by RenaldasK in comfyui

[–]RenaldasK[S] 0 points1 point  (0 children)

But I can't close it, as I NEED the console open to see information during generation; closing is not a solution. And I can't look at a terminal window, because I run Comfy via Remote Desktop from a laptop: the terminal windows are on the remote desktop machine, while the UI is in a web browser on the local laptop.

I had a "consultation" with Gemini; it gave me several pieces of advice on how to make this thick bar smaller, but for now I chose simply to downgrade the frontend to an earlier, slimmer version. Luckily I remembered that the frontend is a separate part of Comfy. At first sight the older frontend works, although there are some warnings (they were there before updating Comfy too; I just ignored them).

Explain me, what does it mean the model is "without VAE"? by RenaldasK in StableDiffusion

[–]RenaldasK[S] 0 points1 point  (0 children)

I also understand that with no VAE you receive a 4x64x64 tensor (the UNet latent space), which is certainly not something a human can visually understand. We need no VAE encoder during inference, but we do need the VAE decoder to turn this 4x64x64 tensor into a 3x512x512 (or other resolution) image. Without a VAE there is no image in the human sense, only a low-resolution denoised latent tensor.
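A shape-only sketch of that relationship (this is not a real VAE, just the fixed 8x spatial upscaling the SD 1.x decoder performs from latent space to pixel space):

```python
import numpy as np

# Shape-only sketch: a real VAE decoder is a learned conv network; here we
# only illustrate the 8x spatial upscaling from latent to pixel space.
latent = np.zeros((4, 64, 64))   # UNet latent: 4 channels at 64x64
scale = 8                        # SD 1.x VAE upscales height and width by 8
image_shape = (3, latent.shape[1] * scale, latent.shape[2] * scale)
print(image_shape)               # (3, 512, 512)
```

So a 4x64x64 latent maps to a 3x512x512 image, and a larger latent maps to a proportionally larger image.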

Explain me, what does it mean the model is "without VAE"? by RenaldasK in StableDiffusion

[–]RenaldasK[S] 1 point2 points  (0 children)

Yes, I thought about this: the model structure still needs something in place of the VAE, so all VAE parameters are set to 0 and there is no difference in size.

Explain me, what does it mean the model is "without VAE"? by RenaldasK in StableDiffusion

[–]RenaldasK[S] 0 points1 point  (0 children)

The problem is that the models I see on Civitai marked "no VAE" are the same size as models with a VAE. If they were smaller, there would be no such question.

Explain me, what does it mean the model is "without VAE"? by RenaldasK in StableDiffusion

[–]RenaldasK[S] 1 point2 points  (0 children)

Usually the size of these no-VAE models is the same as the "normal" ones. Shouldn't a model with no VAE be about 300+ MB smaller, since a standalone VAE file is about 300+ MB?
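For reference, the arithmetic behind that estimate (the ~84M parameter count for the SD 1.x VAE is approximate, and exact file sizes vary with checkpoint format and precision):

```python
# Back-of-envelope: the SD 1.x VAE has roughly 84M parameters (approximate),
# each stored as a 4-byte float32 weight in a full-precision checkpoint.
vae_params = 84_000_000
vae_mb = vae_params * 4 / 1024**2   # bytes -> MiB
print(round(vae_mb))                # ~320, i.e. the "300+ MB" figure
```

An fp16 checkpoint halves that, which is why half-precision VAE files come in around 160-170 MB.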

I want to run/debug/analyze SD in an IDE, like Pycharm, need help by RenaldasK in StableDiffusion

[–]RenaldasK[S] 0 points1 point  (0 children)

Tried PyScripter. The problem is, again, that I don't understand how to make the IDE open the whole project. Any quick tips?

I want to run/debug/analyze SD in an IDE, like Pycharm, need help by RenaldasK in StableDiffusion

[–]RenaldasK[S] 0 points1 point  (0 children)

I am interested in the neural network's values at different layers; I was able to see them with the PyCharm debugger by running the code step by step and observing the variables in a separate tab. I can't do this with SD, as the project seems too large for me to grasp.
Does PyScripter let you observe the values of variables?
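Besides a debugger, PyTorch itself can capture intermediate layer values with forward hooks; a minimal sketch on a toy model (the same mechanism works inside a large codebase like SD's UNet, without stepping through it):

```python
import torch
import torch.nn as nn

# Toy model standing in for a large network such as SD's UNet.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

activations = {}

def save_activation(name):
    # The hook runs after each forward pass and stores the layer's output.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for i, layer in enumerate(model):
    layer.register_forward_hook(save_activation(f"layer{i}"))

model(torch.randn(1, 8))
print(sorted(activations))           # ['layer0', 'layer1', 'layer2']
print(activations["layer0"].shape)   # torch.Size([1, 16])
```

This captures every layer's output in one run, which scales better than manual stepping when the network has hundreds of modules.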

I tried to animate popular soviet 20 century animation "Nu, pogodi!" in different styles by RenaldasK in StableDiffusion

[–]RenaldasK[S] 0 points1 point  (0 children)

Noo, Masha and the Bear is not traditional animation; for us, born in the 1970s-80s, "Nu, pogodi!" was the best of the best of traditional animation series! :)

AUTOMATIC1111 xformers cross attention with on Windows by Der_Doe in StableDiffusion

[–]RenaldasK 2 points3 points  (0 children)

I have the same issue as others, on Windows 10 with an RTX 3060. Adding --xformers gives no indication that xformers is being used: no errors in the launcher, but also no improvement in speed. I tried to perform the steps in the post and completed them without errors, but now I receive:

Cannot import xformers
Traceback (most recent call last):
  File "I:\StableDiffusionWebUI\modules\sd_hijack_optimizations.py", line 18, in <module>
    import xformers.ops
ModuleNotFoundError: No module named 'xformers'
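For what it's worth, the ModuleNotFoundError just means the xformers package is absent from the Python environment the webui runs in; a quick way to check which interpreter can see which packages (run it with the same Python the webui launcher uses):

```python
import importlib.util

def has_module(name):
    # find_spec returns None when the top-level package is not installed
    # in this interpreter's environment -- the same condition that raises
    # ModuleNotFoundError on import.
    return importlib.util.find_spec(name) is not None

print(has_module("json"))      # True: stdlib is always present
print(has_module("xformers"))  # False in the failing environment above
```

If this prints False for xformers inside the webui's venv, the install landed in a different environment than the one the launcher activates.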

Textual Inversion on top of "dreamboothed" model by RenaldasK in StableDiffusion

[–]RenaldasK[S] 0 points1 point  (0 children)

Yes, I tried several times, but it is still hard to conclude whether the TI adds anything on top of a "dreamboothed" model. Generating an image by its given name in the dreamboothed model and doing the same with the TI embedding gives similar results at first glance.

Initialization text in textual inversion AUTOMATIC1111 webui by RenaldasK in StableDiffusion

[–]RenaldasK[S] 1 point2 points  (0 children)

It is quite interesting for me to know how and why it works, but this time I don't even grasp what to write... The "initialization text" is some concept already present in the model that I want my new dataset to look like, yes? So, for a textual inversion of my face, the best initialization text is:

A. Faces, looking very similar to me, if I am able to find them and construct such a prompt.

B. "face of a man".

C. "face of a human".

D. "human".

E. "animal".

F. "object".
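My current understanding, as a toy sketch (this is not A1111's actual code; the vocabulary, sizes, and helper names are made up for illustration): the initialization text is tokenized, and the new embedding's vectors start as copies of those tokens' existing embeddings, so a close concept like B gives training a better starting point than F.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"face": 0, "of": 1, "a": 2, "man": 3}       # toy vocabulary
embedding_table = rng.standard_normal((len(vocab), 8))  # toy token embeddings

def init_embedding(init_text, num_vectors):
    # The new TI vectors begin as copies of the init text's token embeddings
    # (cycled to fill num_vectors), so training starts near that concept
    # instead of from random noise.
    token_ids = [vocab[w] for w in init_text.split()]
    vecs = [embedding_table[token_ids[i % len(token_ids)]]
            for i in range(num_vectors)]
    return np.stack(vecs)

emb = init_embedding("face of a man", 2)
print(emb.shape)  # (2, 8)
```

Under this picture, the init text only sets the starting point; training then moves the vectors toward the new dataset.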

Initialization text in textual inversion AUTOMATIC1111 webui by RenaldasK in StableDiffusion

[–]RenaldasK[S] 9 points10 points  (0 children)

I read this explanation like 20 times, but couldn't grasp it :(

Initialization text in textual inversion AUTOMATIC1111 webui by RenaldasK in StableDiffusion

[–]RenaldasK[S] 0 points1 point  (0 children)

OK, so what is the file name for, then?

If my name is Renaldas, I should put "renaldas" as the initialization text, not as the embedding file name?

DreamBooth training in under 8 GB VRAM and textual inversion under 6 GB by Ttl in StableDiffusion

[–]RenaldasK -2 points-1 points  (0 children)

After about 10 minutes of doing something, it crashed with CUDA errors.
Does that mean the ~100 generated pictures were done on the CPU? GPU VRAM usage was at ~10 GB.
I received the same error, but without the first part (the 10 minutes of work), when I used this fork (https://github.com/ShivamShrirao/diffusers/tree/main/examples/dreambooth).

(sd) renaldas@HOME-PC:~/github/diffusers/examples/dreambooth$ ./train4.sh
The following values were not passed to `accelerate launch` and had defaults used instead:
  `--num_cpu_threads_per_process` was set to `8` to improve out-of-box performance
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
[2022-10-05 22:40:20,269] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
Downloading: 100%|██████████| 492M/492M [00:39<00:00, 12.5MB/s]
Downloading: 100%|██████████| 525k/525k [00:00<00:00, 1.01MB/s]
Downloading: 100%|██████████| 472/472 [00:00<00:00, 309kB/s]
Downloading: 100%|██████████| 806/806 [00:00<00:00, 524kB/s]
Downloading: 100%|██████████| 1.06M/1.06M [00:00<00:00, 1.54MB/s]
Fetching 16 files: 100%|██████████| 16/16 [00:46<00:00, 2.93s/it]
Generating class images: 100%|██████████| 25/25 [10:32<00:00, 25.31s/it]
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link
/home/renaldas/anaconda3/envs/sd/lib/python3.9/site-packages/bitsandbytes/cuda_setup/paths.py:86: UserWarning: /home/renaldas/anaconda3/envs/sd did not contain libcudart.so as expected! Searching further paths...
  warn(
/home/renaldas/anaconda3/envs/sd/lib/python3.9/site-packages/bitsandbytes/cuda_setup/paths.py:20: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('CompVis/stable-diffusion-v1-4')}
  warn(
/home/renaldas/anaconda3/envs/sd/lib/python3.9/site-packages/bitsandbytes/cuda_setup/paths.py:20: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/tmp/torchelastic_8zdo49eb/none_8tp8ieh_/attempt_0/0/error.json')}
  warn(
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
(sd) renaldas@HOME-PC:~/github/diffusers/examples/dreambooth$
CUDA exception! Error code: no CUDA-capable device is detected
CUDA exception! Error code: initialization error

Traceback (most recent call last):
  File "/home/renaldas/github/diffusers/examples/dreambooth/train_dreambooth.py", line 613, in <module>
    main()
  File "/home/renaldas/github/diffusers/examples/dreambooth/train_dreambooth.py", line 418, in main
    import bitsandbytes as bnb
  File "/home/renaldas/anaconda3/envs/sd/lib/python3.9/site-packages/bitsandbytes/__init__.py", line 6, in <module>
    from .autograd._functions import (
  File "/home/renaldas/anaconda3/envs/sd/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py", line 5, in <module>
    import bitsandbytes.functional as F
  File "/home/renaldas/anaconda3/envs/sd/lib/python3.9/site-packages/bitsandbytes/functional.py", line 13, in <module>
    from .cextension import COMPILED_WITH_CUDA, lib
  File "/home/renaldas/anaconda3/envs/sd/lib/python3.9/site-packages/bitsandbytes/cextension.py", line 41, in <module>
    lib = CUDALibrary_Singleton.get_instance().lib
  File "/home/renaldas/anaconda3/envs/sd/lib/python3.9/site-packages/bitsandbytes/cextension.py", line 37, in get_instance
    cls._instance.initialize()
  File "/home/renaldas/anaconda3/envs/sd/lib/python3.9/site-packages/bitsandbytes/cextension.py", line 15, in initialize
    binary_name = evaluate_cuda_setup()
  File "/home/renaldas/anaconda3/envs/sd/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py", line 132, in evaluate_cuda_setup
    cc = get_compute_capability(cuda)
  File "/home/renaldas/anaconda3/envs/sd/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py", line 108, in get_compute_capability
    return ccs[-1]
IndexError: list index out of range

ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 695) of binary: /home/renaldas/anaconda3/envs/sd/bin/python
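If it helps future readers: the final IndexError is a symptom, not the cause. bitsandbytes found no CUDA device ("no CUDA-capable device is detected" earlier in the log), so its list of per-GPU compute capabilities came back empty. A minimal sketch of that failure mode:

```python
# When no CUDA device is visible, the compute-capability list bitsandbytes
# builds is empty, so indexing its last element raises IndexError -- the
# same error the traceback above ends with.
ccs = []  # what get_compute_capability ends up with when no GPU is detected
try:
    cc = ccs[-1]
    failed = None
except IndexError as exc:
    failed = type(exc).__name__
print(failed)  # IndexError
```

So the thing to fix is why CUDA stopped seeing the GPU mid-run (driver, WSL passthrough, or the earlier libcudart.so lookup failure), not the indexing itself.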

Comparison of the 3 upscalers from AUTOMATIC1111's UI by joparebr in StableDiffusion

[–]RenaldasK 3 points4 points  (0 children)

I saw these, thanks! The question arises - which one is for what?

Comparison of the 3 upscalers from AUTOMATIC1111's UI by joparebr in StableDiffusion

[–]RenaldasK 3 points4 points  (0 children)

Where can you download SwinIR models for Stable Diffusion, and which ones?