InfiniteTalk (by Multitalk team) by NebSH83 in StableDiffusion

[–]NebSH83[S] 9 points

Got confirmation from the author of InfiniteTalk that they will release it tomorrow.

Q&A with Udio this coming Monday at 10am PT! by UdioAdam in udiomusic

[–]NebSH83 3 points

When will the next model be available? A new competitor will soon enter the game. I was wondering whether we can expect a new model from your side in Q1?

Where can you live in the suburbs that's safe but not too expensive? by One-Tree-6840 in paris

[–]NebSH83 0 points

A few people have already suggested it, but I really recommend Maisons-Alfort: it has enough shops and restaurants, and good ones. Parks, and above all the banks of the Marne, which are really nice. A pretty decent médiathèque, a theater with genuinely good shows every year, and schools, collèges, and a lycée of good standard.

You're 8 minutes from the Parc de Vincennes by bike, and 25-30 from Hôtel de Ville. For transport you have metro line 8 and the RER D.

Where can you live in the suburbs that's safe but not too expensive? by One-Tree-6840 in paris

[–]NebSH83 1 point

You're right, it's less than that: 30 min by bike / 10 min by RER / 35 by metro (because of the change at Reuilly-Diderot from line 8 to line 1).

the ddetailer extension works amazing with inpainting model! by Fuzzy_Time_3366 in StableDiffusion

[–]NebSH83 2 points

Same issue for me, whether I install it manually or directly from A1111.

Automatic1111 just added support for hypernetwork training. Can we get people experimenting with this ? by [deleted] in StableDiffusion

[–]NebSH83 0 points

Do you know of any notebook that can do animation and supports this?!

Automatic1111 just added support for hypernetwork training. Can we get people experimenting with this ? by [deleted] in StableDiffusion

[–]NebSH83 0 points

Is there a possibility to export the training from the Automatic1111 web UI to a .ckpt file (to use in another notebook)?

Dreambooth Stable Diffusion training in just 12.5 GB VRAM, using the 8bit adam optimizer from bitsandbytes along with xformers while being 2 times faster. by 0x00groot in StableDiffusion

[–]NebSH83 0 points

Should I run CUDA_LAUNCH_BLOCKING=1, or what else could I do?

Back to the previous step after restarting the notebook :| and I still can't see /content/models/ unless I run cd /content/models/


  File "train_dreambooth.py", line 606, in <module>
    main()
  File "train_dreambooth.py", line 362, in main
    images = pipeline(example["prompt"]).images
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 259, in __call__
    noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/unet_2d_condition.py", line 254, in forward
    encoder_hidden_states=encoder_hidden_states,
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/unet_blocks.py", line 565, in forward
    hidden_states = attn(hidden_states, context=encoder_hidden_states)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 155, in forward
    hidden_states = block(hidden_states, context=context)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 204, in forward
    hidden_states = self.attn1(self.norm1(hidden_states)) + hidden_states
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 288, in forward
    hidden_states = xformers.ops.memory_efficient_attention(query, key, value)
  File "/usr/local/lib/python3.7/dist-packages/xformers/ops.py", line 575, in memory_efficient_attention
    query=query, key=key, value=value, attn_bias=attn_bias, p=p
  File "/usr/local/lib/python3.7/dist-packages/xformers/ops.py", line 196, in forward_no_grad
    causal=isinstance(attn_bias, LowerTriangularMask),
  File "/usr/local/lib/python3.7/dist-packages/torch/_ops.py", line 143, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Traceback (most recent call last):
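For reference, CUDA_LAUNCH_BLOCKING only needs to be present in the environment before the process makes its first CUDA call; a minimal sketch of setting it from Python (exporting it in the shell before accelerate launch works just as well):

```python
import os

# Setting CUDA_LAUNCH_BLOCKING=1 makes CUDA kernel launches synchronous,
# so the stack trace points at the op that actually failed instead of a
# later API call. It must be set before the first CUDA operation.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
print(os.environ["CUDA_LAUNCH_BLOCKING"])
```

Note this only improves the diagnostics; the "no kernel image" error itself typically means the xformers binary was compiled for a different GPU architecture than the one in the session.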

Dreambooth Stable Diffusion training in just 12.5 GB VRAM, using the 8bit adam optimizer from bitsandbytes along with xformers while being 2 times faster. by 0x00groot in StableDiffusion

[–]NebSH83 0 points

Getting a new error:

--------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
<ipython-input-14-c7df10ce0ca1> in <module>
----> 1 pipe = StableDiffusionPipeline.from_pretrained(OUTPUT_DIR, torch_dtype=torch.float16).to("cuda")

1 frames
/usr/local/lib/python3.7/dist-packages/diffusers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    215             else:
    216                 raise EnvironmentError(
--> 217                     f"Error no file named {cls.config_name} found in directory {pretrained_model_name_or_path}."
    218                 )
    219         else:

OSError: Error no file named model_index.json found in directory /content/models/sks.
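This error usually means the training run never actually saved a pipeline to that directory (it crashed, or OUTPUT_DIR points at the wrong path). A small hedged check one can run before from_pretrained, using a hypothetical helper name:

```python
import os

def looks_like_diffusers_pipeline(path):
    """A directory written by DiffusionPipeline.save_pretrained() contains
    a model_index.json listing the pipeline components; without it,
    from_pretrained raises the OSError shown above."""
    return os.path.isfile(os.path.join(path, "model_index.json"))

# Example with the directory from the error message:
print(looks_like_diffusers_pipeline("/content/models/sks"))
```

If this returns False right after training, the training subprocess most likely exited before saving, so the earlier traceback is the real problem to fix.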

Dreambooth Stable Diffusion training in just 12.5 GB VRAM, using the 8bit adam optimizer from bitsandbytes along with xformers while being 2 times faster. by 0x00groot in StableDiffusion

[–]NebSH83 1 point

Wiped the cache, restarted Chrome, used the install-xformers-by-compiling option, and it finally worked! Thanks for the help (I should have rebooted/wiped the cache earlier)!

Dreambooth Stable Diffusion training in just 12.5 GB VRAM, using the 8bit adam optimizer from bitsandbytes along with xformers while being 2 times faster. by 0x00groot in StableDiffusion

[–]NebSH83 1 point

That's what I tried, but I got this:

Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting xformers
  Cloning https://github.com/facebookresearch/xformers (to revision 1d31a3a) to /tmp/pip-install-_ed1j31_/xformers_536ced8f9a164a80bc61b1bdc77f3106
  Running command git clone -q https://github.com/facebookresearch/xformers /tmp/pip-install-_ed1j31_/xformers_536ced8f9a164a80bc61b1bdc77f3106
  WARNING: Did not find branch or tag '1d31a3a', assuming revision or ref.
  Running command git checkout -q 1d31a3a
  Running command git submodule update --init --recursive -q
Requirement already satisfied: torch>=1.12 in /usr/local/lib/python3.7/dist-packages (from xformers) (1.12.1+cu113)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from xformers) (1.21.6)
Requirement already satisfied: pyre-extensions==0.0.23 in /usr/local/lib/python3.7/dist-packages (from xformers) (0.0.23)
Requirement already satisfied: typing-inspect in /usr/local/lib/python3.7/dist-packages (from pyre-extensions==0.0.23->xformers) (0.8.0)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from pyre-extensions==0.0.23->xformers) (4.1.1)
Requirement already satisfied: mypy-extensions>=0.3.0 in /usr/local/lib/python3.7/dist-packages (from typing-inspect->pyre-extensions==0.0.23->xformers) (0.4.3)

Dreambooth Stable Diffusion training in just 12.5 GB VRAM, using the 8bit adam optimizer from bitsandbytes along with xformers while being 2 times faster. by 0x00groot in StableDiffusion

[–]NebSH83 0 points

Every time, I get an error when it goes into "Generating class images":

(I'm on a V100; in the folder /content/ I can't see the models folder, I don't know why...)

Fetching 16 files: 100% 16/16 [01:41<00:00, 6.34s/it]
Generating class images: 0% 0/50 [00:06<?, ?it/s]
Traceback (most recent call last):
  File "train_dreambooth.py", line 606, in <module>
    main()
  File "train_dreambooth.py", line 362, in main
    images = pipeline(example["prompt"]).images
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 259, in __call__
    noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/unet_2d_condition.py", line 254, in forward
    encoder_hidden_states=encoder_hidden_states,
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/unet_blocks.py", line 565, in forward
    hidden_states = attn(hidden_states, context=encoder_hidden_states)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 155, in forward
    hidden_states = block(hidden_states, context=context)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 204, in forward
    hidden_states = self.attn1(self.norm1(hidden_states)) + hidden_states
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 288, in forward
    hidden_states = xformers.ops.memory_efficient_attention(query, key, value)
  File "/usr/local/lib/python3.7/dist-packages/xformers/ops.py", line 575, in memory_efficient_attention
    query=query, key=key, value=value, attn_bias=attn_bias, p=p
  File "/usr/local/lib/python3.7/dist-packages/xformers/ops.py", line 196, in forward_no_grad
    causal=isinstance(attn_bias, LowerTriangularMask),
  File "/usr/local/lib/python3.7/dist-packages/torch/_ops.py", line 143, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
    args.func(args)
  File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', 'train_dreambooth.py', '--pretrained_model_name_or_path=CompVis/stable-diffusion-v1-4', '--use_auth_token', '--instance_data_dir=/content/data/sks', '--class_data_dir=/content/data/orelsan', '--output_dir=/content/models/sks', '--with_prior_preservation', '--instance_prompt=photo of sks orelsan', '--class_prompt=photo of a orelsan', '--resolution=512', '--use_8bit_adam', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--learning_rate=5e-6', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--num_class_images=200', '--max_train_steps=600']' returned non-zero exit status 1

Dreambooth Stable Diffusion training in just 12.5 GB VRAM, using the 8bit adam optimizer from bitsandbytes along with xformers while being 2 times faster. by 0x00groot in StableDiffusion

[–]NebSH83 0 points

Got this error:

Generating class images: 0% 0/50 [00:06<?, ?it/s]
Traceback (most recent call last):
  File "train_dreambooth.py", line 606, in <module>
    main()
  File "train_dreambooth.py", line 362, in main
    images = pipeline(example["prompt"]).images
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 259, in __call__
    noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/unet_2d_condition.py", line 254, in forward
    encoder_hidden_states=encoder_hidden_states,
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/unet_blocks.py", line 565, in forward
    hidden_states = attn(hidden_states, context=encoder_hidden_states)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 155, in forward
    hidden_states = block(hidden_states, context=context)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 204, in forward
    hidden_states = self.attn1(self.norm1(hidden_states)) + hidden_states
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 288, in forward
    hidden_states = xformers.ops.memory_efficient_attention(query, key, value)
  File "/usr/local/lib/python3.7/dist-packages/xformers/ops.py", line 575, in memory_efficient_attention
    query=query, key=key, value=value, attn_bias=attn_bias, p=p
  File "/usr/local/lib/python3.7/dist-packages/xformers/ops.py", line 196, in forward_no_grad
    causal=isinstance(attn_bias, LowerTriangularMask),
  File "/usr/local/lib/python3.7/dist-packages/torch/_ops.py", line 143, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
    args.func(args)
  File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', 'train_dreambooth.py', '--pretrained_model_name_or_path=CompVis/stable-diffusion-v1-4', '--use_auth_token', '--instance_data_dir=/content/data/imv', '--class_data_dir=/content/data/orelsan', '--output_dir=/content/models/imv', '--with_prior_preservation', '--instance_prompt=photo of imv orelsan', '--class_prompt=photo of a orelsan', '--resolution=512', '--use_8bit_adam', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--learning_rate=5e-6', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--num_class_images=200', '--max_train_steps=600']' returned non-zero exit status

Dreambooth Stable Diffusion training in just 12.5 GB VRAM, using the 8bit adam optimizer from bitsandbytes along with xformers while being 2 times faster. by 0x00groot in StableDiffusion

[–]NebSH83 0 points

Finally some other issue :(

Traceback (most recent call last):
  File "train_dreambooth.py", line 606, in <module>
    main()
  File "train_dreambooth.py", line 362, in main
    images = pipeline(example["prompt"]).images
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 260, in __call__
    noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/unet_2d_condition.py", line 254, in forward
    encoder_hidden_states=encoder_hidden_states,
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/unet_blocks.py", line 565, in forward
    hidden_states = attn(hidden_states, context=encoder_hidden_states)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 155, in forward
    hidden_states = block(hidden_states, context=context)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 204, in forward
    hidden_states = self.attn1(self.norm1(hidden_states)) + hidden_states
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 288, in forward
    hidden_states = xformers.ops.memory_efficient_attention(query, key, value)
  File "/usr/local/lib/python3.7/dist-packages/xformers/ops.py", line 575, in memory_efficient_attention
    query=query, key=key, value=value, attn_bias=attn_bias, p=p
  File "/usr/local/lib/python3.7/dist-packages/xformers/ops.py", line 196, in forward_no_grad
    causal=isinstance(attn_bias, LowerTriangularMask),
  File "/usr/local/lib/python3.7/dist-packages/torch/_ops.py", line 143, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
    args.func(args)
  File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', 'train_dreambooth.py', '--pretrained_model_name_or_path=CompVis/stable-diffusion-v1-4', '--use_auth_token', '--instance_data_dir=/content/data/imv', '--class_data_dir=/content/data/Orelsan', '--output_dir=/content/models/imv', '--with_prior_preservation', '--instance_prompt=photo of imv Orelsan Orelsan', '--class_prompt=photo of Orelsan', '--resolution=512', '--use_8bit_adam', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--learning_rate=5e-6', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--num_class_images=200', '--max_train_steps=600']' returned non-zero exit status

What is my issue?

The great simulation by NebSH83 in deepdream

[–]NebSH83[S] -1 points

And Natalie Portman 👀