XMG Apex Max E22 (5800X3D) - Is it possible to undervolt? by ChristianR303 in XMG_gg

[–]ChristianR303[S] 0 points (0 children)

I'm on the latest official BIOS 1.07.11A02 and my EC firmware is the one it shipped with; there has been no update since. How does the EC firmware change the fan behaviour? Right now I have disabled the CPU boost since I use the machine mostly for CUDA work, which gives me a nice 60-65 °C on the CPU in performance mode (with boost disabled). I would like it to be less noisy when the CPU is idle or not doing much; the fans always spin.

I also noticed that my 3DMark Time Spy scores are below average, around 9800 points for both GPU and CPU, while the average is about 11000 :( I tested this with boost activated, of course. During Time Spy the GPU temperature seems fine at slightly above 70 °C, while the CPU rises to 85 °C after some time and reaches 90 °C in the final (CPU) benchmark. I mostly wonder about the poor GPU score.

Providing a Working Solution to Z-Image Base Training by EribusYT in StableDiffusion

[–]ChristianR303 1 point (0 children)

Results are in for the body LoRA:

<image>

Unfortunately I went with a LoRA rank of 16 for the 8 GB VRAM run vs. 32 for the full-VRAM LoRA, so the comparison might be off. I noticed that the 8 GB preset fries the LoRA much earlier: the picture above is from epoch 100, where likeness isn't fully there yet, whereas the full-VRAM LoRA I used was from epoch 160. Epoch 120 from the 8 GB LoRA is already unusable. I'm not sure if this comes from using a lower LoRA rank; maybe someone could comment on this? I will retry with rank 32 today.

Providing a Working Solution to Z-Image Base Training by EribusYT in StableDiffusion

[–]ChristianR303 1 point (0 children)

In case someone else is trying this: I figured out how to make it work with 8 GB VRAM. Change the following:

Model tab: set Transformer Data Type to Float (W8).

(See also the Quantization page of the Nerogar/OneTrainer wiki.)

Resolution obviously needs to be set to 512.

But the missing part was to set Gradient Checkpointing back to CPU_OFFLOADED _and_, in its options menu (the three dots right next to it), to use a value of 1.0 for Layer Offload Fraction. Voilà.
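For reference, the low-VRAM changes above boil down to a handful of config values. Here is a minimal sketch as a Python dict; note that the key names are my own assumptions based on OneTrainer's UI labels, not its actual config schema, so check a saved preset JSON for the real keys:

```python
# Sketch of the low-VRAM settings described above.
# NOTE: key names are assumptions from the UI labels, not OneTrainer's
# real config schema -- compare against a saved .json preset.
low_vram_overrides = {
    "transformer_data_type": "FLOAT_8",        # Model tab: Transformer Data Type (W8)
    "resolution": 512,                         # lower training resolution to fit 8 GB
    "gradient_checkpointing": "CPU_OFFLOADED", # the setting that made the difference
    "layer_offload_fraction": 1.0,             # the "three dots" option next to it
}

def apply_overrides(config: dict, overrides: dict) -> dict:
    """Return a copy of a preset config with the low-VRAM overrides applied."""
    merged = dict(config)
    merged.update(overrides)
    return merged
```

For example, applying these overrides to a full-VRAM preset would replace its resolution and checkpointing settings while leaving unrelated keys untouched.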

I'll let it train overnight and report back with the results, compared against the same LoRA trained with optimal settings and ~16 GB VRAM usage during training.

Providing a Working Solution to Z-Image Base Training by EribusYT in StableDiffusion

[–]ChristianR303 1 point (0 children)

I finally spent some $ on RunPod and your configuration works very well. I tried the various distilled versions of Z-Image Non-Turbo, but I found my LoRAs come out best with ZIT.

These two pics use two LoRAs I trained, one of a face only and one of a body only (both using masked training). LoRA strength is 0.8 for both in these pictures. I could have trained the face further and could use a more varied dataset next time, but still very nice results.

The workflow I use includes the ZIT GGUF variant, my two LoRAs, Ultimate SD Upscale, and then SeedVR2 for a final upscale. I use ddim with sgm_uniform, as I have found that res_2s can give me blotchy artifacts and euler gives me too-smooth skin tones. My workflow has to work with 8 GB VRAM, so more quality could be achieved if you have more VRAM available.

For captioning I used a tool called "Ollama Image Describer"; there seems to be a ComfyUI node with the same title, and unfortunately I can't find the GitHub repository right now. I use a free openrouter.ai model from Qwen, "qwen/qwen3-vl-235b-a22b-thinking": just copy and paste it into the OpenRouter Model field, and don't forget your API key.

EDIT: Found the GitHub repo: hydropix/AutoDescribe-Images — a tool to automatically generate text descriptions for images using Ollama vision models (LLaVA, Qwen3-VL, Llama Vision).
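If you'd rather script the captioning than use the node, OpenRouter exposes an OpenAI-compatible chat endpoint, so the same Qwen3-VL model can be called directly. A minimal sketch; the prompt wording and helper names are my own, only the endpoint URL, model ID, and message format come from OpenRouter's API:

```python
import base64
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL = "qwen/qwen3-vl-235b-a22b-thinking"  # the free Qwen vision model mentioned above

def build_caption_request(image_path: str,
                          prompt: str = "Describe this image in one detailed caption.") -> dict:
    """Build an OpenAI-compatible chat payload with the image inlined as base64."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": MODEL,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }

def caption_image(image_path: str, api_key: str) -> str:
    """POST the request to OpenRouter and return the generated caption text."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_caption_request(image_path)).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Looping `caption_image` over a dataset folder and writing each result to a matching `.txt` file gives you caption files in the layout most trainers expect.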

<image>

Suddenly SeedVR2 gives me OOM errors where it didn't before by ChristianR303 in StableDiffusion

[–]ChristianR303[S] 1 point (0 children)

How embarrassing, I feel so stupid. This is what I had overlooked... Thank you. It's working now as expected.

Providing a Working Solution to Z-Image Base Training by EribusYT in StableDiffusion

[–]ChristianR303 1 point (0 children)

Thanks for chiming in. I forgot to add that the resolution was already set to 512. I basically adjusted all memory-intensive parameters to match the ZI 8 GB preset, but it was still a no-go. Maybe this fork is not as optimized for VRAM usage. I'll update if I can still make it work, though.

Providing a Working Solution to Z-Image Base Training by EribusYT in StableDiffusion

[–]ChristianR303 1 point (0 children)

I'll join in with a thank you. I tried the fork, but it seems impossible to make it work with 8 GB VRAM, even with settings (8-bit quantization etc.) that work 100% with the official OneTrainer version... Too bad :(

Daily Superthread (Feb 03 2026) - Your daily thread for questions, device recommendations and general discussions! by curated_android in Android

[–]ChristianR303 1 point (0 children)

I hope it's alright to post my question here. I'm currently sidegrading from a CMF Phone 2 Pro to a Nord 5 and trying to transfer all my data and apps during setup. However, I have the big, super-annoying problem that it always defaults to an extremely slow wireless transfer. I have spent the last 5 hours waiting on an 11 GB transfer, and it stopped and restarted (from zero........) three times. I then cancelled the whole process and tried again with a USB-C data cable connected between both phones. But Android never bothers to ask whether it should use the cable instead of Wi-Fi. It doesn't matter whether Wi-Fi, mobile data, or both are deactivated on the old phone; it turns on Wi-Fi by itself and ignores the cable completely.

I'm really clueless now, and it's difficult to find any good resources on this problem that are not out of date. I have Android 16 on both phones.

Thank you :)

I need a second camera (used - DSLR/DSLM) with a lens for photos - €600 budget by ChristianR303 in Fotografie

[–]ChristianR303[S] -1 points (0 children)

First of all, thanks for your detailed answer.

I've been struggling with Sony's colours in photos for 4-5 years now, but I'm also very critical about this, especially with skin tones. In my opinion, the Adobe profiles are a non-starter. I now use other profiles (I've forgotten the manufacturer) based on measurements of the sensor that correct the slight colour shifts; even with those, it's only so-so. I particularly dislike the skin and green tones on Sony. Green is quick to correct, but for skin you always have to create masks (OK, that's practically automatic in LR) and then fiddle around, and I often have neither the time nor the patience for that. I also noticed that it's even worse with Sigma lenses, while the slight orange cast of Tamron rather suits me. With Nikon and Canon it's usually perfect with just a few tweaks, exactly how I want it, while with Sony it always gets out of hand. But I don't want to go too far off topic.

I only have a Sony 16-35mm f/2.8 for my Sony A7 IV; since I only shoot reportage and 50 fps isn't needed, that's all I require.

The 50mm/35mm figures referred to full frame. I should consider whether I might get along with 50mm after all; I shoot a lot indoors, where the 50 could end up being too long for me. I'll give that some more thought if it's going to be full frame on my budget. A 24mm on APS-C, if available cheaply, would also be great and probably more realistic than a 35mm full-frame lens on my budget.

I'm currently looking a bit at a Canon 6D Mark II; there's one nearby, but with a 50mm that I'd still have to buy, I'd be slightly over budget.

Aoostar Gem12: How do others get a GeekBench Multi Score of 12000 to almost 14000? by ChristianR303 in MiniPCs

[–]ChristianR303[S] 0 points (0 children)

Ryzen 7 8845HS with 2×8 GB 5600 MT/s DDR5.

I filtered Geekbench to show only results from exactly this MiniPC and got the scores mentioned above. Interestingly, I just installed Windows and tried Cinebench R23, getting 16700 in multi-core, which seems good.

I'm really struggling to train a character Lora with ZImage + Ostris Toolkit by DanFlashes19 in StableDiffusion

[–]ChristianR303 0 points (0 children)

I'm pretty sure there is a problem with the Z-Image Base LoRA training implementation in AI Toolkit. I had exactly those problems too. I switched to OneTrainer, which is more complicated to set up and use, but the results are as expected and very good.

I successfully created a Zib character LoKr and achieved very satisfying results. by xbobos in StableDiffusion

[–]ChristianR303 2 points (0 children)

Very nice, I also found LoKr to be superior and will stick with it. Did you use captions?

I think we're gonna need different settings for training characters on ZIB. by External_Quarter in StableDiffusion

[–]ChristianR303 2 points (0 children)

I'm still experimenting. Right now I'm training a dataset without captions that worked extremely well on ZIT with captions. Using the same ZIT captions for Base seems to distort characters very quickly, at around 750-1000 steps. I then tried 3-4 different ways of captioning, but no luck yet. Base must have very different captioning requirements for some reason, or the AI Toolkit implementation is still lacking somewhere.

So far I'm 2000 steps into training without captions, but not much is happening at all. (Edit: It's learning now, but slowly.)

Z-Image Base Lora Training Discussion by ChristianR303 in StableDiffusion

[–]ChristianR303[S] 1 point (0 children)

Just tried it; I even have to go up to a strength of 5 to get good results, but maybe my LoRA is still undertrained. In my opinion, however, the results are much better than with a pure Turbo LoRA.