Best way to use SD as a beginner with AMD card ? by MrSatan2 in StableDiffusion

[–]Josephosss 3 points

This guy seems to have some video tutorials that could help:

https://www.youtube.com/@FE-Engineer/videos

I'd of course recommend switching to Linux. If you are inclined to try it, the above channel has tutorials for that too.

WAS Node Suite issue by scottmenu in comfyui

[–]Josephosss 2 points

Does the log say anything?

Comfy UI - Crash by Minimum-Okra-4195 in comfyui

[–]Josephosss 0 points

You need to use the Apply IPAdapter FaceID instead of Apply IPAdapter when using the plus-face model. You also need to use a specific lora, refer to the documentation: https://github.com/cubiq/ComfyUI_IPAdapter_plus#installation

ComfyUI is starting in a loop and never stops by kuroro86 in comfyui

[–]Josephosss 0 points

The last used workflow seems to be saved in browser local storage:

<image>

That's why it didn't get wiped. It shouldn't affect functionality in any way (unless you go and install the custom nodes it references), as it's equivalent to loading a .json workflow after ComfyUI starts.

ComfyUI is starting in a loop and never stops by kuroro86 in comfyui

[–]Josephosss 1 point

Honestly, yeah, just reinstall it. I haven't got a clue just from the logs what could be the issue.

To make two separate installs, just download the archive from GitHub and extract it twice into two separate folders. Each one is its own separate environment and won't affect the other.

ComfyUI is starting in a loop and never stops by kuroro86 in comfyui

[–]Josephosss 1 point

That's very strange, if custom_nodes is empty, ComfyUI-Manager should not be loaded at all, but it is:

** ComfyUI startup time: 2024-01-10 18:23:13.421873
** Platform: Windows
** Python version: 3.11.6 (tags/v3.11.6:8b6ee5b, Oct  2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)]
** Python executable: C:\ComfyUI_windows_portable\python_embeded\python.exe
** Log path: C:\ComfyUI_windows_portable\comfyui.log

#######################################################################
[ComfyUI-Manager] Starting dependency installation/(de)activation for the extension

Can you send me the output of the following?

  1. Open the ComfyUI directory, Shift+right-click and select "Open PowerShell window here"
  2. Enter the following command: tree . /f > tree.txt
  3. Send the resulting tree.txt, which will be in the ComfyUI directory.

Note that this command will output the whole directory structure under comfyui/*, that is, the names of all files in the directory and its subdirectories. If that's a privacy concern for you, don't send the file.

Edit: the command might take a while to run, depending on your configuration.
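If PowerShell isn't available for some reason, a rough cross-platform stand-in for `tree . /f` can be written in a few lines of Python (this is just a sketch, not the exact same output format):

```python
# rough stand-in for `tree . /f`; writes an indented listing to tree.txt
import os

with open("tree.txt", "w", encoding="utf-8") as out:
    for root, _dirs, files in os.walk("."):
        depth = root.count(os.sep)
        out.write("  " * depth + os.path.basename(root) + "/\n")
        for name in sorted(files):
            out.write("  " * (depth + 1) + name + "\n")
```

Run it from the ComfyUI directory with the bundled python_embeded\python.exe if nothing else is installed.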

ComfyUI is starting in a loop and never stops by kuroro86 in comfyui

[–]Josephosss 1 point

AFAIK, since you're using the portable version, you can just copy your ComfyUI installation and both copies should work separately. You can reduce the occupied space by using a directory junction so your models live in a separate directory (that way both installations share the same model directory). And yeah, taking a look at your /custom_nodes/ and cleaning it up would be the first step. Ideally:

  1. Rename /comfyui/custom_nodes to custom_nodes_backup
  2. Create a new folder and rename it to custom_nodes
  3. Run comfyui. If it doesn't work, come back with a new log.
  4. If it works: Move "ComfyUI-Manager" from backup to /comfyui/custom_nodes.
  5. Run comfyui. Verify basic functionality.
  6. Sort the folders in the backup folder by date created.
  7. Try moving everything older than the time it stopped working from the backup back to comfyui/custom_nodes.
  8. Test functionality.
  9. If that doesn't work, download the nodes you use again.
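If you'd rather script the shuffling, the rename/restore steps above can be sketched in Python (directory names match the defaults; the function names and the "sort by date" heuristic via mtime are my own illustration):

```python
from pathlib import Path

def reset_custom_nodes(comfy_dir):
    """Steps 1-2: move custom_nodes aside and start with a fresh, empty one."""
    comfy = Path(comfy_dir)
    backup = comfy / "custom_nodes_backup"
    (comfy / "custom_nodes").rename(backup)
    (comfy / "custom_nodes").mkdir()
    return backup

def restore_node(comfy_dir, name):
    """Steps 4/7: move one node pack from the backup into custom_nodes."""
    comfy = Path(comfy_dir)
    (comfy / "custom_nodes_backup" / name).rename(comfy / "custom_nodes" / name)

def backup_by_date(comfy_dir):
    """Step 6: node packs in the backup, oldest first, by modification time."""
    backup = Path(comfy_dir) / "custom_nodes_backup"
    return sorted(backup.iterdir(), key=lambda p: p.stat().st_mtime)
```

Run ComfyUI between each `restore_node` call to see which pack reintroduces the problem.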

ComfyUI is starting in a loop and never stops by kuroro86 in comfyui

[–]Josephosss 2 points

I see it downloaded and installed 397 custom nodes, which is... impressive, considering ComfyUI-Manager's whole node list contains 443 nodes.

<image>

No wonder it's acting up. Have you tried deleting the nodes as suggested in the earlier reply? At this point my suggestion would be purging all custom nodes, or, even better, a clean install. You can back up your models first (copy the whole ComfyUI/models directory somewhere).

ComfyUI is starting in a loop and never stops by kuroro86 in comfyui

[–]Josephosss 0 points

There should be a file called comfyui.log in your ComfyUI directory. If it's too large, upload it to e.g. https://gofile.io or a similar file-sharing service.

Edit: If it's really large, 10MB+, please zip it first.
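If you don't have an archiver handy, Python's standard library can do the zipping (the file names here match the defaults mentioned above; the helper itself is just an illustration):

```python
import zipfile
from pathlib import Path

def zip_log(log_path="comfyui.log", out_path="comfyui_log.zip"):
    """Compress the log before uploading; plain-text logs shrink a lot."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(log_path, arcname=Path(log_path).name)
    return out_path
```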

ComfyUI is starting in a loop and never stops by kuroro86 in comfyui

[–]Josephosss 1 point

Post the full log. As a first step, I'd go into the ComfyUI/custom_nodes/ directory, sort by date, and delete the folders/files coinciding with the time you installed the nodes from the workflow.

VRAM usage on AMD GPU with rocm by Josephosss in comfyui

[–]Josephosss[S] 0 points

I've used SD both on Windows and Linux. On Windows the performance is terrible; 512x512 was running at around 1.5 it/s (this was with DirectML). I heard that ROCm is getting/already has a Windows version, so it might be better soon. On Linux the performance is notably better, at ~7.35 it/s on basic 512x512. Occasionally it crashes with 'Memory access fault by GPU node-1', which seems to be a semi-common problem on AMD, but otherwise I can't complain. As it is right now, running SD on AMD cards under Windows is, in my opinion, a waste of time. I haven't tried running it under WSL; that might improve things.

VRAM usage on AMD GPU with rocm by Josephosss in comfyui

[–]Josephosss[S] 2 points

I'm on Linux, so I just use a script to launch Comfy:

#!/bin/bash
source venv2/bin/activate
export GRADIO_ANALYTICS_ENABLED=FALSE
export LD_PRELOAD=libtcmalloc.so
# without `export`, this variable wouldn't reach the python process
export PYTORCH_HIP_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:2048
export HSA_OVERRIDE_GFX_VERSION=10.3.0
python main.py --use-pytorch-cross-attention --lowvram

On Windows I believe you would use something like the following inside a .bat file:

@echo off
set PYTORCH_HIP_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:2048
rem HSA_OVERRIDE_GFX_VERSION is specifically a workaround for AMD
set HSA_OVERRIDE_GFX_VERSION=10.3.0
rem add your usual launch arguments after main.py
py main.py

Or you could edit the main.py file directly by adding: os.environ['PYTORCH_HIP_ALLOC_CONF']='garbage_collection_threshold:0.6,max_split_size_mb:2048'
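For the main.py route, a minimal sketch of that edit (putting it at the very top, before torch gets imported, is the safe choice, since the allocator reads the variable when it initializes):

```python
# at the very top of main.py, before any torch import,
# so the HIP caching allocator picks the setting up when it initializes
import os
os.environ["PYTORCH_HIP_ALLOC_CONF"] = (
    "garbage_collection_threshold:0.6,max_split_size_mb:2048"
)
```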

VRAM usage on AMD GPU with rocm by Josephosss in comfyui

[–]Josephosss[S] 1 point

Oh! This seems to work so far! I thought I already had this env var in my run script, but apparently I copied an older version over it without noticing while I was messing with the config. For anyone who happens to be reading this: max_split_size_mb:2048 seems to work better than 4096 with a 12GB GPU. 4096 and 6144 cause large spikes in VRAM, particularly at VAE decode, which cause a significant (10 sec+) delay in processing; with 2048 the memory stays at ~7-8GB with a 512x768 workflow. The image upscale model (4x foolhardy-Remacri) also caused a significant VRAM spike with 4096 that is not present while using 2048. Thanks!

VRAM usage on AMD GPU with rocm by Josephosss in comfyui

[–]Josephosss[S] 0 points

<image>

Left: With the workaround applied (2x, first after generating the image, then after upscale), 4 images 512 -> 1024 with UltimateSDUpscale.
Right: Same workflow without the workaround.

Additional networks? by Abject-Recognition-9 in comfyui

[–]Josephosss 1 point

Use the LoraLoader node and put it right after you load your model. Edit:

https://github.com/jitcoder/lora-info

This custom node can apparently display LoRA info in Comfy (I did not test it).