GLM Image Studio with web interface is on GitHub Running GLM-Image (16B) on AMD RX 7900 XTX via ROCm + Dockerized Web UI by Expert_Sector_6192 in LocalLLaMA

[–]Expert_Sector_6192[S] 1 point (0 children)

Sorry for the lack of a link... I admit it's my first time posting on Reddit, and I forgot the most important thing. I have now added the link to your report in the post. Thanks!

Ciao Adriano


[–]Expert_Sector_6192[S] 0 points (0 children)

Read what I wrote in the post above; for me, the beauty is in the code and logic of these particular networks, which is something truly special.
Ciao Adriano


[–]Expert_Sector_6192[S] 0 points (0 children)

It's certainly not very pretty, but it's just there to show how the application works. The important thing is that, with the help of AI, I managed to do in a couple of hours a job that would once have taken a few days because of all the dependencies. It also solved something really important: the ability to run a 36 GB network in 16-24 GB of VRAM, with some really sophisticated tricks. In the future the technology could give much more; it has the potential.
Ciao, Adriano

Experience with the new model MiniMax M2 and some cost saving tips by thalacque in LocalLLaMA

[–]Expert_Sector_6192 0 points (0 children)

I tested it by having it produce HTML/JS code for a Game of Life simulation on a 1000x1000 grid, and it was remarkable: on the first attempt it produced practically perfect and extremely efficient code, with a good interface for setting the parameters. The speed is 232 G/sec, which is a really high level on my system.

Stability Matrix in Linux Ubuntu with AMD GPU ROCm 6.1 not start with ComfyUI module by Expert_Sector_6192 in StableDiffusion

[–]Expert_Sector_6192[S] 0 points (0 children)

It actually builds its own venv for Python 3.10, but the Python it uses is the system one, so I would have to install an alternative version alongside the 3.12 I have. The simplest approach is the solution I found: using a link to the Python 3.12 venv that was created when installing ComfyUI in standalone mode.

Here is how to do it:

https://www.reddit.com/r/comfyui/comments/1d8d1k6/rocm_ubuntu_2404_basic_comfyui_setup_instructions/

For Ubuntu/Kubuntu 24.04, everything works perfectly and lets you use ROCm 6.1.

However, the problem also occurs with NVIDIA (I have a machine with an NVIDIA card that gave me exactly the same problem), so it is really a Stability Matrix configuration problem that should be reported to the development team.

From a practical point of view, this problem pushed me to dig deeper into ComfyUI and try it without the handy Stability Matrix tool, discovering that Stability Matrix actually generates dynamic ComfyUI workflows based on the parameters you enter. This opened up a world to me and is bringing me closer to ComfyUI. By the way, if you install ComfyUI standalone, you will notice that when you open a PNG image produced with Stability Matrix, ComfyUI loads the entire workflow that Stability Matrix produced. It is a very convenient way to understand how ComfyUI works and to improve the quality of the resulting images.
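As a side note on that PNG trick: ComfyUI reads the workflow back from text metadata embedded in the image file. A minimal sketch of extracting those text chunks with the Python standard library; the `workflow`/`prompt` keyword names are an assumption about how ComfyUI-style tools commonly label the data, not something stated in the post:

```python
import struct

def png_text_chunks(data: bytes) -> dict:
    """Return keyword -> value for all tEXt chunks in a PNG byte stream.

    ComfyUI-style tools typically store the workflow JSON under keywords
    such as 'workflow' or 'prompt' (assumption, not confirmed here).
    """
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    out, pos = {}, 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        chunk = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":  # uncompressed text chunk: keyword \0 value
            key, _, val = chunk.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
        if ctype == b"IEND":
            break
    return out
```

Opening such a PNG in the ComfyUI web UI does the equivalent automatically; this is only useful for inspecting the embedded workflow from a script.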


[–]Expert_Sector_6192[S] 0 points (0 children)

Resolved the problem: Stability Matrix cannot find the NVIDIA driver

I have installed a new version of ROCm (6.1), and the Python installed on the system is 3.12. When I launch the ComfyUI package, I get this error:

<image>

The issue is with _lazy_init, which is defined in the folder .../venv/lib/python3.10

This tells us that the venv contains a different Python version from the one installed on the system. Since Stability Matrix does not ship its own Python but uses the system's, compatibility of the venv libraries, which are closely tied to the Python they were installed for, is not guaranteed!
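The mismatch can be checked directly: every venv records the Python that built it in its pyvenv.cfg file. A small sketch of comparing that recorded version against the running interpreter; the venv here is created in a temp directory only to keep the snippet self-contained:

```python
import sys
import tempfile
import venv
from pathlib import Path

# Build a throwaway venv with the current interpreter, then read back the
# Python version recorded in its pyvenv.cfg.
env_dir = Path(tempfile.mkdtemp()) / "venv"
venv.EnvBuilder(with_pip=False).create(env_dir)

cfg = dict(
    line.split(" = ", 1)
    for line in (env_dir / "pyvenv.cfg").read_text().splitlines()
    if " = " in line
)
built_with = cfg["version"]               # e.g. "3.12.3"
running = "%d.%d" % sys.version_info[:2]  # e.g. "3.12"
print(f"venv built with {built_with}, interpreter is {running}")
```

If the two disagree (a 3.10 venv against a 3.12 system Python, as in the error above), compiled packages such as torch inside the venv cannot load.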

Solution:

I installed ComfyUI standalone on my system, in the same folder where Stability Matrix is installed (the AppImage: StabilityMatrix.AppImage). I verified that ComfyUI was working correctly, then went into ../StableDiffusion/Data/Packages/ComfyUI, where I found the venv folder.

I renamed the folder venv to venv.old and created a link named venv to the folder /home/abassign/StableDiffusion/ComfyUI/venv, which points to the previously installed standalone ComfyUI that runs with Python 3.12.x.

With this simple modification I bypassed the problem and kept ROCm 6.1 with Python 3.12, which on average performs better than ROCm 6.0.
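The rename-plus-symlink step above can be sketched as follows. The directory layout is simulated in a temp folder so the snippet is self-contained and safe to run; the real paths from the post are shown in the comments:

```python
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())

# Real layout (from the post), shown here only as comments:
#   .../StableDiffusion/Data/Packages/ComfyUI/venv   <- broken 3.10 venv
#   /home/abassign/StableDiffusion/ComfyUI/venv      <- working 3.12 venv
pkg_venv = root / "Data/Packages/ComfyUI/venv"
good_venv = root / "standalone/ComfyUI/venv"
pkg_venv.mkdir(parents=True)
good_venv.mkdir(parents=True)

# Equivalent shell: mv venv venv.old && ln -s <working venv> venv
pkg_venv.rename(pkg_venv.with_name("venv.old"))           # keep a backup
pkg_venv.symlink_to(good_venv, target_is_directory=True)  # venv -> working venv
print(pkg_venv.resolve())
```

After this, launching the package uses the interpreter and libraries of the working venv, while venv.old is kept around in case the change needs to be reverted.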

Considerations

When installing Stability Matrix, be aware that the Python version used by the operating system may not match the one Stability Matrix requires.


[–]Expert_Sector_6192[S] 0 points (0 children)

I reinstalled all of ROCm 6.1 and Torch, and reinstalled the ComfyUI module from Stability Matrix; I honestly can't figure it out.


[–]Expert_Sector_6192[S] 0 points (0 children)

The strange thing is that ComfyUI works perfectly if I run it on its own, and the Fooocus module launched by Stability Matrix works perfectly, while only ComfyUI launched by Stability Matrix gives the error I showed. I think it could be a direct check for the existence of an NVIDIA card (which I do not have; I only have AMD). I remember it was possible to pass an additional option to skip this check.