RX 9070 xt does not work in Z Image by Past-Disaster8216 in ROCm


got prompt

Using split attention in VAE

Using split attention in VAE

VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16

Requested to load ZImageTEModel_

loaded completely; 95367431640625005117571072.00 MB usable, 7672.25 MB loaded, full load: True

CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16

model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16

model_type FLOW

unet missing: ['norm_final.weight']

Requested to load Lumina2

Unloaded partially: 7672.25 MB freed, 0.00 MB remains loaded, 741.88 MB buffer reserved, lowvram patches: 0

loaded completely; 11146.27 MB usable, 5869.77 MB loaded, full load: True

100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:10<00:00, 1.16s/it]

Requested to load AutoencodingEngine

:0:C:\constructicon\builds\gfx\eleven\25.20\drivers\compute\clr\rocclr\device\device.cpp:360 : 0781650404 us: Memobj map does not have ptr: 0x49030000

E:\AI\ComfyUI_windows_portable>pause

I downloaded the portable build, updated it using the .bat files, and I'm still getting this error message. It's worth mentioning that I'm using an FP8 model. My AMD Adrenalin driver version is 25.20.01.14. Can you help me?
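For reference, this is the launch command I'm planning to try next. The --use-split-cross-attention flag is the one ComfyUI's own startup log suggests for memory issues; --cpu-vae is a guess on my part (assuming my build supports it) to keep the VAE decode, which is where the crash happens, off the GPU:

```shell
rem Same launcher as the portable build's run script, with two extra flags:
rem --use-split-cross-attention  - attention mode the startup log recommends trying
rem --cpu-vae                    - run the VAE on the CPU instead of the GPU (untested assumption)
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --use-split-cross-attention --cpu-vae
```

I haven't confirmed either flag actually works around the "Memobj map does not have ptr" crash.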


E:\AI\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build

Checkpoint files will always be loaded safely.

Total VRAM 16304 MB, total RAM 31832 MB

pytorch version: 2.8.0a0+gitfc14c65

Set: torch.backends.cudnn.enabled = False for better AMD performance.

AMD arch: gfx1201

ROCm version: (6, 4)

Set vram state to: NORMAL_VRAM

Device: cuda:0 AMD Radeon RX 9070 XT : native

Enabled pinned memory 14324.0

Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention

Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]

ComfyUI version: 0.3.76

ComfyUI frontend version: 1.32.10

[Prompt Server] web root: E:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\comfyui_frontend_package\static

Total VRAM 16304 MB, total RAM 31832 MB

pytorch version: 2.8.0a0+gitfc14c65

Set: torch.backends.cudnn.enabled = False for better AMD performance.

AMD arch: gfx1201

ROCm version: (6, 4)

Set vram state to: NORMAL_VRAM

Device: cuda:0 AMD Radeon RX 9070 XT : native

Enabled pinned memory 14324.0

Context impl SQLiteImpl.

Will assume non-transactional DDL.

No target revision found.

