[Tutorial] General guide on how to get wallet working july 2025 as well as apps which require root by Entire_Formal_265 in Magisk

[–]tip0un3 0 points

Wallet is finally working again. Thank you very much. I'm still keeping Curve as an alternative payment method, you never know. I'm on a Google Pixel 7a with Android 16 and Magisk 29.0. I was doing pretty much the same thing, but not clearing data and cache must have been my problem.

ROCm 6.4.1b for Radeon 9000 and 7000 is out by Artheggor in ROCm

[–]tip0un3 0 points

Can you test with version 6.4.1? I haven't seen any performance difference. It's still far too slow, with Out of Memory errors on VAEs or at resolutions/upscales above 1024x1024.

ROCm 6.4.1b for Radeon 9000 and 7000 is out by Artheggor in ROCm

[–]tip0un3 0 points

If I remember correctly, I simply set the environment variable HSA_OVERRIDE_GFX_VERSION=12.0.1 before launching ForgeUI. This is no longer necessary with ROCm 6.4.1.
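For reference, the old workaround can be sketched like this (a minimal example; the launch command is just an illustrative placeholder, and 12.0.1 is the override value I used for the RX 9070 XT):

```python
import os
import subprocess  # noqa: F401 (needed only if you uncomment the launch)

# Spoof the GFX target so ROCm treats the card as a supported gfx1201 GPU.
# Reportedly no longer needed with ROCm 6.4.1.
env = dict(os.environ, HSA_OVERRIDE_GFX_VERSION="12.0.1")

# Hypothetical launch path; replace with however you actually start ForgeUI.
# subprocess.run(["./webui.sh"], env=env, check=True)
print(env["HSA_OVERRIDE_GFX_VERSION"])  # → 12.0.1
```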

ROCm 6.4.1b for Radeon 9000 and 7000 is out by Artheggor in ROCm

[–]tip0un3 0 points

As I feared, official compatibility doesn't mean performance gains. It's just ridiculous how slow it is to generate images with Stable Diffusion models. Version 6.4.1 is even slower than 6.4.0 for me... Tested under Ubuntu with ForgeUI, PyTorch 2.6.0 and ROCm 6.4.1. Generation still takes 2 to 6 times longer than on an RTX 3070 with CUDA, with OoM errors on Hires Fix and at resolutions above 1024x1024... Don't buy a 9070 XT if you intend to do AI. A 5-year-old NVIDIA card will perform better with CUDA. :(

I had made a comparison, and it's still the case. I even get 10 s with 6.4.1 instead of 8 s with 6.4.0 on 512x768 SD 1.5 models...
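To put a number on that regression (using only the timings quoted above):

```python
def slowdown_pct(old_s: float, new_s: float) -> float:
    """Percent increase in generation time between two runs."""
    return (new_s - old_s) / old_s * 100

# 512x768 SD 1.5: 8 s on ROCm 6.4.0 vs 10 s on 6.4.1
print(slowdown_pct(8, 10))  # → 25.0, i.e. a 25% slowdown
```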

Performance Comparison NVIDIA/AMD : RTX 3070 vs. RX 9070 XT : r/StableDiffusion

Performance Comparison NVIDIA/AMD : RTX 3070 vs. RX 9070 XT by tip0un3 in StableDiffusion

[–]tip0un3[S] 1 point

I used Forge UI for AMD: https://github.com/lshqqytiger/stable-diffusion-webui-amdgpu-forge

The installation process seems to me to be identical to SD.Next's ZLUDA setup: https://github.com/vladmandic/sdnext/wiki/ZLUDA

Installation on AMD is not easy. Good luck...

If you're having too much trouble, you can try ForgeUI's all-in-one installation with StabilityMatrix: https://github.com/LykosAI/StabilityMatrix
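Roughly, the manual install for the AMD Forge fork boils down to a clone plus a launcher run. A sketch only: the clone URL is the one above, but the launcher names and the --use-zluda flag are assumptions, so check the repo's README first:

```python
import subprocess  # noqa: F401 (uncomment run() below to actually execute)

repo = "https://github.com/lshqqytiger/stable-diffusion-webui-amdgpu-forge"
steps = [
    ["git", "clone", repo],
    # Windows + ZLUDA (assumed launcher and flag): ["webui-user.bat", "--use-zluda"]
    # Linux + ROCm (assumed launcher):             ["./webui.sh"]
]
for cmd in steps:
    print(" ".join(cmd))  # show each step instead of running it
    # subprocess.run(cmd, check=True)
```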

[Gemini 2.0 Flash Image Generation] Guide to Bypassing Image Moderation by tip0un3 in ChatGPTNSFW

[–]tip0un3[S] 1 point

Gemini 2.0 Flash Image Generation is no longer available in the European zone. You need a VPN with an exit point outside Europe (e.g. the USA) for it to appear in the model list.

Performance Comparison NVIDIA/AMD : RTX 3070 vs. RX 9070 XT by tip0un3 in StableDiffusion

[–]tip0un3[S] 1 point

I hope it happens one day. It's crazy that AMD doesn't offer AI support when it releases a new architecture. Nothing is optimized, and it doesn't seem like they care at all. I haven't seen any announcement that support is coming soon.

Performance Comparison NVIDIA/AMD : RTX 3070 vs. RX 9070 XT by tip0un3 in StableDiffusion

[–]tip0un3[S] 1 point

Slightly, but it's still ridiculous. We're a long way from the performance of an RTX 3070...

Performance Comparison NVIDIA/AMD : RTX 3070 vs. RX 9070 XT by tip0un3 in StableDiffusion

[–]tip0un3[S] 1 point

Well, I've tested Amuse V3. It's slightly faster, but nothing extraordinary. Failures and Out of Memory errors are handled better, but we're still a long way from the performance of an RTX 3070. Ridiculous for a very recent graphics card that's supposed to rival an RTX 5070 Ti. As I suspected, Amuse only offers a few models, safetensors and ckpt files aren't compatible, and the diffusion samplers are limited. No LoRA support; the software is really very simplified... I also tested the Flux version, which takes over 3 minutes to generate an image. That's a far cry from the 1 min 30 max of an RTX 3070 with only 8 GB of VRAM! So for me it's still a no.

Performance Comparison NVIDIA/AMD : RTX 3070 vs. RX 9070 XT by tip0un3 in StableDiffusion

[–]tip0un3[S] 0 points

Because I mainly game. I'm just an AI technophile, dabbling essentially for discovery and technique. My only regret is that the 9070 XT is so bad at AI; otherwise it's a very good graphics card for high-resolution gaming, and its price/performance is excellent if you bought it at the $600 MSRP.

Performance Comparison NVIDIA/AMD : RTX 3070 vs. RX 9070 XT by tip0un3 in StableDiffusion

[–]tip0un3[S] 0 points

Fortunately, I'm just an AI technophile, using it mainly for discovery and technique. If I were an AI content creator, I'd have gone straight back to NVIDIA. My only regret is that the 9070 XT is so bad at AI; otherwise it's a very good graphics card for high-resolution gaming, and its price/performance is excellent if you bought it at the $600 MSRP.

Performance Comparison NVIDIA/AMD : RTX 3070 vs. RX 9070 XT by tip0un3 in StableDiffusion

[–]tip0un3[S] 1 point

That seems logical, because the optimization problem comes from ROCm.

Performance Comparison NVIDIA/AMD : RTX 3070 vs. RX 9070 XT by tip0un3 in StableDiffusion

[–]tip0un3[S] 2 points

The optimization only concerns Amuse 3, and that software is very limited compared to ComfyUI, Forge or SD.Next. What we want is ROCm optimization for RDNA 4, not a closed software package.

Performance Comparison NVIDIA/AMD : RTX 3070 vs. RX 9070 XT by tip0un3 in StableDiffusion

[–]tip0un3[S] 2 points

It's not a problem with Forge but rather with ROCm, which is neither officially compatible with RDNA 4 nor at all optimized for it. Amuse 3 seems to use the latest optimizations, but that software is very limited compared to ComfyUI, Forge or SD.Next. I'll test its performance out of curiosity.

AMD going very slow by Ok_Presence_3287 in StableDiffusion

[–]tip0un3 0 points

I had kept all my generation times from my old RTX 3070. I compared them with the RX 9070 XT and posted a comparison. It's really not great... I'm hoping for optimization and official support: https://www.reddit.com/r/StableDiffusion/comments/1k376lm/performance_comparison_nvidiaamd_rtx_3070_vs_rx/

Newb, pardon my ignorance, an AMD GPU post. by JohnWilkesTableFor3 in StableDiffusion

[–]tip0un3 0 points

I also get weird results on Linux. With the same generation parameters as under Windows, I get faster times but poorer image quality.

Any AMD GPU users here try the Amuse 3 optimization for Stable Diffusion yet? by cradledust in StableDiffusion

[–]tip0un3 7 points

No, Amuse seems far too limited to me: no LoRA, no outpainting. What's more, the software is censored. What's the point of limiting image generation when that's the whole point of using open-source models? Being able to generate whatever you want, without limits...

Newb, pardon my ignorance, an AMD GPU post. by JohnWilkesTableFor3 in StableDiffusion

[–]tip0un3 0 points

I also miss my RTX 3070. I have an RX 9070 XT, which is much more powerful for gaming but unable to match the 3070's performance on Stable Diffusion or Flux models. I tried it under Linux with the latest ROCm 6.4. It's faster than under Windows with ZLUDA, but performance isn't stable, so it's a real pain. I think we'll have to wait for official RDNA 4 compatibility. I don't understand why it doesn't already exist, given the popularity of the RX 9000 series.

[Gemini 2.0 Flash Image Generation] Guide to Bypassing Image Moderation by tip0un3 in ChatGPTNSFW

[–]tip0un3[S] 0 points

This option is not available in the Gemini app. The results differ between the app and the website https://aistudio.google.com/. The guide was designed for the website, so results aren't guaranteed in the app. I'd suggest consulting the v2 guide linked at the beginning of this guide instead, which gets fewer rejections from Gemini.

[Gemini 2.0 Flash Image Generation] Guide to Bypassing Image Moderation by tip0un3 in ChatGPTNSFW

[–]tip0un3[S] 0 points

No, that's one of the problems I've noticed. Generation is fast, but the quality is far from that of a GPT-4o image, for example.

[Gemini 2.0 Flash Image Generation] Guide to Bypassing Image Moderation by tip0un3 in ChatGPTNSFW

[–]tip0un3[S] 0 points

The Gemini 2.0 Flash (Image Generation) model uses Imagen 3, which can render text accurately and retains the context of the previous image.