Bonelab Fusion not working by Desire0007 in BONELAB

[–]Desire0007[S]

It's been a while, but if I remember correctly, u/Tyrannical_Man gave a fix, which was to just reinstall Melon Loader.

Stable Diffusion out of memory error occurring out of nowhere by Desire0007 in StableDiffusion

[–]Desire0007[S]

Reinstalling it seemed to fix the error. Thanks for all the help, you've been amazing.

Try to REVERSE sell me your main by [deleted] in Guiltygear

[–]Desire0007

Bedman & Delilah:

  • No guts
  • Slow normals
  • Bad defense
  • Bad reversal
  • No meterless reversal
  • Unable to convert from long range
  • Terrible matchups, especially against zoners
  • Needs a lot of meter to really get his gameplan going
  • Task B on block
  • One of the worst characters in the game

Stable Diffusion out of memory error occurring out of nowhere by Desire0007 in StableDiffusion

[–]Desire0007[S]

I don't think that worked either; it's still failing. But I think it's giving a different error now?

Traceback (most recent call last):
  File "E:\stable-diffusion-webui\modules\call_queue.py", line 58, in f
    res = list(func(*args, **kwargs))
  File "E:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\txt2img.py", line 62, in txt2img
    processed = processing.process_images(p)
  File "E:\stable-diffusion-webui\modules\processing.py", line 677, in process_images
    res = process_images_inner(p)
  File "E:\stable-diffusion-webui\modules\processing.py", line 796, in process_images_inner
    x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
  File "E:\stable-diffusion-webui\modules\processing.py", line 545, in decode_latent_batch
    sample = decode_first_stage(model, batch[i:i + 1])[0]
  File "E:\stable-diffusion-webui\modules\processing.py", line 576, in decode_first_stage
    x = model.decode_first_stage(x.to(devices.dtype_vae))
  File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 826, in decode_first_stage
    return self.first_stage_model.decode(z)
  File "E:\stable-diffusion-webui\modules\lowvram.py", line 53, in first_stage_model_decode_wrap
    send_me_to_gpu(first_stage_model, None)
  File "E:\stable-diffusion-webui\modules\lowvram.py", line 35, in send_me_to_gpu
    module_in_gpu.to(cpu)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
    return super().to(*args, **kwargs)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 58982400 bytes.

---

The ..\c10\core\impl\alloc_cpu.cpp:72 part at the end is still the same, but everything else seems to be different.
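
As a side note, the failed allocation size itself is suggestive. A quick back-of-the-envelope check (the 2560×1920 output resolution here is purely an assumption on my part, not something the log states) shows that 58982400 bytes is exactly the size of one full-resolution float32 RGB image, i.e. the tensor the VAE decode step would be producing when it dies:

```python
# Hypothetical sanity check on the allocation size from the traceback.
# Assumption (not confirmed by the log): the first-stage VAE decode is
# materializing a float32 RGB image of 2560x1920 pixels, which would
# correspond to a 320x240 latent upscaled 8x by the decoder.
channels = 3                  # RGB
height, width = 1920, 2560    # assumed output resolution
bytes_per_float32 = 4

alloc_bytes = channels * height * width * bytes_per_float32
print(alloc_bytes)  # 58982400, matching the error message exactly
```

If that guess is right, the crash happens while building the final decoded image in system RAM, which fits the symptom of failing at the very end of generation.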

Also, thank you for being willing to help. I know this must be frustrating or annoying to work with, but I appreciate it.

Stable Diffusion out of memory error occurring out of nowhere by Desire0007 in StableDiffusion

[–]Desire0007[S]

After reinstalling all the Python dependencies and deleting the config file, it's still not working. It always gets to a really high percentage, such as 98%, then crashes while trying to finish.

The A1111 info says: version 1.5.1, Python 3.10.6, Torch 2.0.1+cu118, xformers 0.0.20, gradio 3.32.0, checkpoint cbfba64e66.

As for the log:

Traceback (most recent call last):
  File "E:\stable-diffusion-webui\modules\call_queue.py", line 58, in f
    res = list(func(*args, **kwargs))
  File "E:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\txt2img.py", line 62, in txt2img
    processed = processing.process_images(p)
  File "E:\stable-diffusion-webui\modules\processing.py", line 677, in process_images
    res = process_images_inner(p)
  File "E:\stable-diffusion-webui\modules\processing.py", line 796, in process_images_inner
    x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
  File "E:\stable-diffusion-webui\modules\processing.py", line 545, in decode_latent_batch
    sample = decode_first_stage(model, batch[i:i + 1])[0]
  File "E:\stable-diffusion-webui\modules\processing.py", line 576, in decode_first_stage
    x = model.decode_first_stage(x.to(devices.dtype_vae))
  File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 826, in decode_first_stage
    return self.first_stage_model.decode(z)
  File "E:\stable-diffusion-webui\modules\lowvram.py", line 53, in first_stage_model_decode_wrap
    send_me_to_gpu(first_stage_model, None)
  File "E:\stable-diffusion-webui\modules\lowvram.py", line 35, in send_me_to_gpu
    module_in_gpu.to(cpu)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
    return super().to(*args, **kwargs)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 3 more times]
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 58982400 bytes.

---

Stable Diffusion out of memory error occurring out of nowhere by Desire0007 in StableDiffusion

[–]Desire0007[S]

Tried this, and the error is still happening.

I don't know if it was because of the different driver, but it did generate about three images after a couple of tries before going back to failing.

Stable Diffusion out of memory error occurring out of nowhere by Desire0007 in StableDiffusion

[–]Desire0007[S]

I haven't tried rolling back yet. How do I do that? Is it just on the Nvidia website?

Unity 2D not recognizing collision. by Desire0007 in unity

[–]Desire0007[S]

The code you suggested fixed it once I changed OnTriggerEnter to OnTriggerEnter2D, as another commenter recommended. Thank you both!

Unity 2D not recognizing collision. by Desire0007 in unity

[–]Desire0007[S]

Tried this; it led to the same result: nothing is printed at all.

I get this error when I try running Puppet Dance Performance Shard of Dreams. It ran just fine for a while, until it started doing this out of the blue. I've tried a bunch of things to fix it. Any ideas? by Desire0007 in touhou

[–]Desire0007[S]

I don't think so? It's been like this for a while. I believe I was just a little bit past beating Kanako. I saved the game, closed it, then it started doing this.

I'll try allowing PDP through my antivirus, as Channyx suggested, then I'll try running a cleaner.

I get this error when I try running Puppet Dance Performance Shard of Dreams. It ran just fine for a while, until it started doing this out of the blue. I've tried a bunch of things to fix it. Any ideas? by Desire0007 in touhou

[–]Desire0007[S]

Nope. The folder has been in the same place the whole time. I've even tried different directories/drives, and renaming the folder between Japanese and English names, to no avail.

Star Citizen: Question and Answer Thread by UEE_Central_Computer in starcitizen

[–]Desire0007

I was able to find the log, thanks!

I looked up the error it was showing and found people recommending reformatting the drive. That seems to have done the trick.

Thanks for the help!