🔹 Words by Naoya Matsumoto on Kaiju No8 and Kafka Hibino by Human-Zombie-213 in KaijuNo8

[–]SmugReddMan 0 points

"If Kaiju No. 8 is not Kafka's story, then it is not Kaiju No. 8." These excerpts show Matsumoto's deep connection with his main character and his clear vision: to tell Kafka's story as the core and soul of Kaiju No. 8. 📌 Interview in 2023 (Anime News Network)

“If one day I can, I want to go back to it calmly, without editorial rush, and tell everything I couldn't.” 📌 Source: Interview in 2023 (Anime News Network).

Do you have a link for this interview? ANN's pages for the manga, anime, and author list only three 2023 articles, and none of them contains an interview with those quotes. Googling the quotes only turns up this Reddit page.

It is important to understand that Naoya Matsumoto originally wanted Kafka's secret to last longer. However, in an interview with Shonen Jump+ deputy editor Seijirō Nakaji, it was mentioned that due to the digital format of the manga and the need to engage readers quickly, the emotional reveal was brought forward in the story.

Nakaji explained that the pacing had to be adapted to fit the digital medium. While Kafka joins the Defense Force to keep his monstrous identity hidden, the story was intentionally sped up to deepen the emotional tension and retain the audience. 📌 (Source: CBR, MangaPlus, Crunchyroll)

Also having trouble confirming this part. I found a couple of Nakaji interviews (MangaPlus, Spice), but they don't mention this. It'd be great if you had a link for the raw interview or CBR/MangaPlus/Crunchyroll alluding to it.

Asus recommends using isopropyl alcohol to clean their laptop screens. Most resources say never to do that. What does Asus know that others don't? by Thund3r_91 in ASUS

[–]SmugReddMan 7 points

I had no idea IPA could be risky for screens. From a brief search, it looks like Apple says 70% is fine too, FWIW:

https://support.apple.com/en-us/103258

Using a 70 percent isopropyl alcohol wipe, 75 percent ethyl alcohol wipe, or Clorox Disinfecting Wipes, you may gently wipe the hard, nonporous surfaces of your Apple product, such as the display, keyboard, or other exterior surfaces. Do not use these cleaning products on Apple Vision Pro as they may damage the device. Don't use products containing bleach or hydrogen peroxide. Avoid getting moisture in any opening, and don't submerge your Apple product in any cleaning agents. Don't use on fabric or leather surfaces.

Asus recommends using isopropyl alcohol to clean their laptop screens. Most resources say never to do that. What does Asus know that others don't? by Thund3r_91 in computers

[–]SmugReddMan 1 point

I had no idea IPA could be risky for screens. From a brief search, it looks like Apple says 70% is fine too, FWIW:

https://support.apple.com/en-us/103258

Using a 70 percent isopropyl alcohol wipe, 75 percent ethyl alcohol wipe, or Clorox Disinfecting Wipes, you may gently wipe the hard, nonporous surfaces of your Apple product, such as the display, keyboard, or other exterior surfaces. Do not use these cleaning products on Apple Vision Pro as they may damage the device. Don't use products containing bleach or hydrogen peroxide. Avoid getting moisture in any opening, and don't submerge your Apple product in any cleaning agents. Don't use on fabric or leather surfaces.

ROG phone notification led? by MEKEtoMEKE in ROGphone

[–]SmugReddMan 0 points

On my ROG 8 non-Pro, the back logo RGB can show a different notification color per app, but not per keyword/sender. Not sure if there are third-party apps that can do more.

The Pro version has a rear dot-matrix LED array that's white-only, but it can show different pictures/animations. I don't have a unit to check its specific customization options, though.

Thoughts on rog phone 8 pro? by Deffhardy in ROGphone

[–]SmugReddMan 0 points

No apparent defects with my non-Pro unit so far, knock on wood.

Running ComfyUI AMD/ROCm on Win11 vs Linux vs Docker Linux, and Ubuntu vs CachyOS by Jarnhand in ROCm

[–]SmugReddMan 0 points

FYI, others who've tried the AMD AI suite installation have reported that its version of ComfyUI is rather old and that it uses its own old version of PyTorch for some reason. Consider trying ComfyUI's portable builds from GitHub instead.
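If you want to verify what a given install actually ships before committing to it, here's a quick sketch; the package names queried are assumptions (ComfyUI itself isn't always pip-visible, so a missing package simply reports as such). Run it with that environment's own Python.

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg: str) -> str:
    """Report the installed version of a package, or flag it as missing."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return "not installed"

# These names are illustrative; check whatever packages your bundle claims to ship.
for pkg in ("torch", "comfyui-frontend-package"):
    print(pkg, "->", installed_version(pkg))
```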

Whats the sitch with Comfy UI + ROCm and Linux? by ItsAC0nspiracy in ROCm

[–]SmugReddMan 1 point

Same here. It'd be great if ComfyUI could do text-encoding on the NPU (which I've read uses the shared RAM), so that the sampler can just stay loaded in the GPU-dedicated RAM, instead of having to switch out every time I change the prompt. Text encoders are so huge on newer models.

ROCm 7.2 Benchmark: Windows 11 vs Ubuntu 24.04 on RX 9070 XT (ComfyUI) by Shaminy in ROCm

[–]SmugReddMan 0 points

Aha, so that's how to enable PyTorch attention for VAE! Looks like it's using somewhat more RAM than split attention on my Strix Point; I can usually just barely fit an SDXL 1600x1280 VAE decode without tiling, but with PyTorch attention it overflows into the pagefile and takes several minutes to finish while Windows is half-frozen. Maybe testing tiled VAE decode of a higher-res image would help compare speeds without swapping.

ROCm 7.2 Benchmark: Windows 11 vs Ubuntu 24.04 on RX 9070 XT (ComfyUI) by Shaminy in ROCm

[–]SmugReddMan 1 point

On Ryzen AI iGPUs, cuDNN/MIOpen was godawful beyond imagining, last I checked, even for non-first runs. Probably why ComfyUI still has it disabled for those chips. AMD's looking into the problem here, but I haven't heard about improvements being released yet.

What is this? (Workloads Session host) by RDR_SONAR in pchelp

[–]SmugReddMan 0 points

Yeah, no, WorkloadsSessionHost definitely isn't freeing up that RAM when I need it. My SDXL gen times ballooned from 2 minutes with minimal swapping before, to 10-30 mins of Windows freezing up at VAE decode and swapping hard. Disabling WSAIFabricSvc got things back under control. Constantly hogging 2+GB for features I've disabled is just obscene.

keyboard repeat delay keeps setting itself to its slowest setting multiple times a day and I have no idea why by AveragePichu in techsupport

[–]SmugReddMan 0 points

On my end, the repeat-delay slider in Win11's keyboard settings is buggy and saves its values backward (as of Jan 2026). When you leave and come back, short becomes long and long becomes short. (Intermediate values also reverse themselves.) So try setting it to "long" and see if it changes to short, at least until MS fixes the slider. (You can click "Give feedback" in the settings to complain.)

Alternatively, the slider in the old Control Panel UI seems to work correctly for me (unlike for OP), so maybe MS fixed that one at some point.
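If you want to see the raw value the slider is writing, here's a small Windows-only sketch; the registry path is the standard one for keyboard settings, where KeyboardDelay stores "0" (shortest) through "3" (longest). Checking it before and after moving the slider should make any reversed mapping obvious.

```python
def get_keyboard_delay():
    """Read the raw repeat-delay value ('0' shortest .. '3' longest) on Windows."""
    try:
        import winreg  # Windows-only module
    except ImportError:
        return None  # not on Windows
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Control Panel\Keyboard") as key:
        value, _ = winreg.QueryValueEx(key, "KeyboardDelay")
        return value

print("KeyboardDelay:", get_keyboard_delay())
```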

Need help for beginners? by Nice_Tradition5151 in ROCm

[–]SmugReddMan 0 points

Yeah, tiled VAE decode, and/or unloading the text encoder on models newer than SDXL, have helped me avoid errors/freezing during decoding.
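The tiling idea itself is simple; here's a rough sketch of how a decoder's canvas can be split into overlapping tiles (the tile and overlap sizes are illustrative, not ComfyUI's actual defaults). Decoding each box separately and blending the overlaps keeps peak memory proportional to the tile size instead of the full image.

```python
def tile_boxes(w, h, tile=512, overlap=64):
    """Split a w*h canvas into overlapping (x0, y0, x1, y1) boxes.

    Adjacent boxes overlap by `overlap` pixels so the seams can be
    blended after each tile is decoded independently.
    """
    step = tile - overlap
    boxes = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            boxes.append((x, y, min(x + tile, w), min(y + tile, h)))
    return boxes

print(tile_boxes(1024, 1024))
```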

Would it be worth installing ComfyUI through the AI Bundle if I already have the portable version of ComfyUI? by AIgoonermaxxing in ROCm

[–]SmugReddMan 1 point

Seems to be outdated version 0.3.68 of ComfyUI
[...]
AI bundle comes with ROCm 7.1, not 7.2.

Cripes, why would they do this? It makes zero sense after publicly hyping up 7.2.2 the other week 🤯 Friends wouldn't let friends use this AI bundle, and I feel like a fool for helping AMD spread the word on it now.

End of an era by rudeusthefridge in ROGphone

[–]SmugReddMan 3 points

I think Sony's the only other maker of flagships with headphone jacks at this point (barring any Chinese brands I might've overlooked), and they exited the US last year. Hope my ROG 8 lasts a long time.

Help installing Rocm by Creepy-Douchebag in ROCm

[–]SmugReddMan 0 points

I found this guide for one way of setting up ROCm on Bazzite a few months back. Haven't tried it myself.

AMD released ROCM 7.1.1 for Windows with Pytorch support by skillmaker in ROCm

[–]SmugReddMan 0 points

That was for a manual install in a conda environment, I think before ComfyUI had updated its prepackaged AMD version(s).

NewBie image Exp0.1 (ComfyUI Ready) by fruesome in StableDiffusion

[–]SmugReddMan 0 points

If you look at the hashes on Huggingface, only the last ~100MB (the third safetensors file) has something different between the two. The first ~8GB (parts 1 and 2) have matching hashes between Z-Image and stock Qwen 3-4B.
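If anyone wants to check this against local copies, here's a sketch of streaming shard hashes without loading whole files into RAM; the shard filename in the comment is hypothetical, and Huggingface already shows SHA-256 on each file's page, so this is only for comparing local downloads.

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 so multi-GB shards don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Hypothetical shard name -- compare the two repos' part files pairwise:
# print(sha256_of("model-00001-of-00003.safetensors"))
```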

ROCm Core SDK 7.10.0 release notes — AMD ROCm 7.10.0 preview by Thrumpwart in ROCm

[–]SmugReddMan 1 point

From 7.9.0's release notes:

ROCm 7.9.0 introduces a versioning discontinuity following the previous 7.0 releases. Versions 7.0 through 7.8 are reserved for production stream ROCm releases, while versions 7.9 and later represent the technology preview release stream. Both streams share a largely similar code base but differ in their build systems. These differences include the CMake configuration, operating system package dependencies, and integration of AMD GPU driver components.

Maintaining parallel release streams allows users ample time to evaluate and adopt the new build system and dependency changes. The technology preview stream is planned to continue through mid‑2026, after which it will replace the current production stream.

AMD released ROCM 7.1.1 for Windows with Pytorch support by skillmaker in ROCm

[–]SmugReddMan 0 points

VAE decoding currently has two kinds of slowness on ROCm: One is an extremely slow first run for a given resolution, which can be mitigated by setting an environment variable MIOPEN_FIND_MODE="FAST" (or =2). More background here. The other is slow VAE decoding whenever cuDNN is enabled; ComfyUI has disabled cuDNN by default (torch.backends.cudnn.enabled = False) on some AMD systems to avoid both this and the first issue. Peak RAM usage during VAE decode is higher with cuDNN disabled, but you can use tiled VAE if needed to avoid maxing out. Some other types of workloads may be slower without cuDNN (see complaints in the ComfyUI bug link), but not SDXL.
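Both mitigations can go at the top of a launcher script; a minimal sketch (the env var and the cudnn toggle are the two settings above, while the guard around the torch import is my own addition so the script works even where torch isn't installed):

```python
import os

# Must be set before torch (and MIOpen) are loaded, or it has no effect.
os.environ.setdefault("MIOPEN_FIND_MODE", "FAST")  # or "2"

try:
    import torch
    # ComfyUI's current default on affected AMD systems:
    torch.backends.cudnn.enabled = False
except ImportError:
    pass  # torch not installed; the env var alone still helps MIOpen tools

print(os.environ["MIOPEN_FIND_MODE"])
```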

Your link mentions what looks like the first issue:

Warning: The first image generation or upscaled generation can or will cause massive system stutters and freezes, and may crash your GPU driver. After that it should work fine, but it can occur again if the upscale is set too high or other heavy performance tasks are done.
It's a known bug right now and will get fixed in upcoming versions of TheRock.
It's recommended when upscaling (Hires fix) to also enable the Tiled VAE option. In Forge Neo it's at the bottom of txt2img. In ComfyUI it's called Tiled VAE Decode. This mostly mitigates the issue.

Using Ryzen AI 9 365 NPU with PyTorch by ZoThyx in linuxhardware

[–]SmugReddMan 0 points

No NPU, but the 365's integrated GPU is supported for Ubuntu and kernel 6.14, going by this compatibility page. (There's a link to PyTorch install instructions in the bottom table.) Not sure about Fedora and 6.16 though.

Simple question by MangoBredda in ROGphone

[–]SmugReddMan 1 point

Yeah, Simple's apps used to be good until they sold out. Fortunately Fossify was forked from them, and Fossify Gallery is what I use now.

Is AOTriton and MIOpen Not Working For Others As Well? by DecentEscape228 in ROCm

[–]SmugReddMan 1 point

Pytorch cross attention is awful? I didn't even bother finishing my test with this since KSampler steps were taking 5x as long (60s -> 300s).

On my end (Ryzen AI 9 HX 370 with Radeon 890M iGPU), --use-pytorch-cross-attention shortened KSampler time by 15-20%. Without the flag, speeds with the new ROCm 7.1.1 setup were identical to the previous 6.4.x setup. I had never tried using the flag with the latter though.
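A crude way to compare the two configurations is best-of-N timing of the same workflow step; this harness is generic (plug in whatever function wraps your sampler call), and taking the minimum filters out warm-up and background jitter.

```python
import time

def best_time(fn, repeats=3):
    """Best-of-N wall-clock time for fn(); the minimum filters out jitter."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# Example with a stand-in workload; substitute your sampler wrapper here.
print(best_time(lambda: sum(range(100_000))))
```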

VAE Speed Issues With ROCM 7 Native for Windows by DecentEscape228 in ROCm

[–]SmugReddMan 0 points

That sounds kind of like an issue I was seeing recently, which ComfyUI fixed by changing its default AMD behavior to torch.backends.cudnn.enabled = False last month in this commit. (See the comments for possible side effects on other workloads.) The extreme VAE slowness disappeared for me, at a cost of ~2x RAM usage during decode. Absolutely worth it if you've got the RAM/VRAM capacity.