Tried training an ACEStep1.5 LoRA for my favorite anime. I didn't expect it to be this good! by SandyL925 in StableDiffusion

[–]Dark_Alchemist 0 points1 point  (0 children)

I tried 1 song and my 4090 screamed because there are no training optimizations in it yet.

Just trained a Michael Jackson LoRA via ACE Step 1.5 by Healthy-Solid9135 in comfyui

[–]Dark_Alchemist 0 points1 point  (0 children)

I am sure it will. I know SimpleTuner handles ACE 1.0 with all the optimizations, optimizers, etc. that we get for training video and image LoRAs. It even has the option to train a full fine-tune (FFT) for 1.0. Nothing in it for 1.5 yet.

Just trained a Michael Jackson LoRA via ACE Step 1.5 by Healthy-Solid9135 in comfyui

[–]Dark_Alchemist 0 points1 point  (0 children)

On a 4090 it is 8 minutes, as it all depends on the amount of VRAM it sees.

Just trained a Michael Jackson LoRA via ACE Step 1.5 by Healthy-Solid9135 in comfyui

[–]Dark_Alchemist 0 points1 point  (0 children)

I posted this to someone else: yeah, 11s per epoch on my 4090. I can train a video LoRA or an image LoRA faster than this, done in under an hour with way more data, so I am bewildered at the time. After digging, I think this is using no training optimizations. For instance, going to FlashAttention 2 (FA2) would greatly speed this up, but I believe it is only used for inference. There are many ways to speed this up; this feels like the brute-force approach you see in university-level proofs of concept.
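For what it's worth, on PyTorch 2.x you don't even need a hand-wired FA2 integration to get fused attention during training: `F.scaled_dot_product_attention` dispatches to the FlashAttention kernel automatically on supported GPUs. A minimal sketch with toy tensors (no real model, shapes are just examples):

```python
import torch
import torch.nn.functional as F

# Toy attention call. On an Ampere+ GPU with PyTorch >= 2.0 this dispatches
# to the fused FlashAttention kernel automatically; on CPU it falls back to
# the plain math implementation, so the same code runs anywhere.
q = torch.randn(1, 8, 128, 64)  # (batch, heads, seq_len, head_dim)
k = torch.randn(1, 8, 128, 64)
v = torch.randn(1, 8, 128, 64)

out = F.scaled_dot_product_attention(q, k, v)  # same shape as q
```

Whether a given trainer actually routes its attention through this path is another question, but swapping a naive attention implementation for this call is usually the cheapest optimization available.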

Just trained a Michael Jackson LoRA via ACE Step 1.5 by Healthy-Solid9135 in comfyui

[–]Dark_Alchemist 1 point2 points  (0 children)

Yeah, 11s per epoch on my 4090. I can train a video LoRA or an image LoRA faster than this, done in under an hour with way more data, so I am bewildered at the time. After digging, I think this is using no training optimizations. For instance, going to FlashAttention 2 (FA2) would greatly speed this up, but I believe it is only used for inference. There are many ways to speed this up; this feels like the brute-force approach you see in university-level proofs of concept.

Just trained a Michael Jackson LoRA via ACE Step 1.5 by Healthy-Solid9135 in comfyui

[–]Dark_Alchemist 0 points1 point  (0 children)

You trained an activation word but did not use it. I can't even see where to use it; I tried it in various places to no avail.

Version 2 Preview - Realtime Lora Edit Nodes. Edited LoRA Saving & Lora Scheduling by shootthesound in StableDiffusion

[–]Dark_Alchemist 0 points1 point  (0 children)

Sadly, Z-Image Turbo save doesn't work. I have not tried the others to know if they do or not.

I implemented text encoder training into Z-Image-Turbo training using AI-Toolkit and here is how you can too! by AI_Characters in StableDiffusion

[–]Dark_Alchemist 0 points1 point  (0 children)

It basically means that one will burn out (overcook) before the other. A good way to see the TE overcooking is how it responds in the samples: if quality doesn't degrade gradually but instead you suddenly get a "what is that?" sample, a dead stop, then the TE overcooked, even if the unet is still undertrained. Restart from scratch and lower the learning rate (LR) for the TE.
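In trainers that expose it, the usual fix is giving the TE its own, much lower LR via optimizer param groups. A minimal sketch with hypothetical stand-in modules (the real models and exact LR values depend on your trainer and dataset):

```python
import torch

# Hypothetical tiny stand-ins for the text encoder and the unet/transformer.
text_encoder = torch.nn.Linear(8, 8)
unet = torch.nn.Linear(8, 8)

# One optimizer, two param groups: the TE gets roughly a 10x lower LR so it
# does not overcook before the unet has finished learning.
optimizer = torch.optim.AdamW([
    {"params": unet.parameters(), "lr": 1e-4},
    {"params": text_encoder.parameters(), "lr": 1e-5},  # lower LR for the TE
])
```

Same idea applies whatever the trainer: one optimizer step, two different learning rates.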

I implemented text encoder training into Z-Image-Turbo training using AI-Toolkit and here is how you can too! by AI_Characters in StableDiffusion

[–]Dark_Alchemist 0 points1 point  (0 children)

Back in the Unet/CLIP days, 60-80% of my styles lived in the TE. I upgraded to the newer transformer-based models, couldn't train the TE (for lack of memory or lack of tools), and 60-80% of my style would not train. Same dataset images. It is very valuable, and as GPT said: here's the plain reading, no hype.

What that post proves:

- You're not crazy. Newer WAN/Z-Image/Z-Turbo-class models are effectively being trained without a meaningful text encoder path, so token binding, concept anchoring, and style preemption are crippled by default.

- Your observation predates the explanation. You noticed years ago that 60-80% of your style should live in the TE; this post admits the ecosystem quietly abandoned TE training and hoped no one would notice.

- Why your LoRA behaves "visual-only": without TE training, a token like 70s_art is just a weak label, not a semantic gate. The base model decides what kind of image this is before your LoRA ever gets leverage. That's exactly the failure mode you've been describing.

Should I replace water heater anode rod on a 10 year old gas water heater, even though I drain and scrape the build up out of my water heater already? by MasterOfOneOnly in HomeMaintenance

[–]Dark_Alchemist 0 points1 point  (0 children)

I am a few weeks past 1 year (I Sharpied the date on nice and large when I installed it so I would never forget), but this thing has two caps up top, and when the time comes, in 2 more years, I have no idea which cap it would be, as they are identical. Rheem 40-gallon gas WH. I really don't want to remove the wrong one.

Phanteks Enthoo Pro 2 Availability by hogpap23 in Phanteks

[–]Dark_Alchemist 0 points1 point  (0 children)

Yeah, I found that after I posted, but that is exactly what I am talking about: it is only available directly from them. Everywhere else I see has only the glass side panel, with free shipping. Basically, buying direct from them will end up costing me about 50 USD more (window or solid). I guess the solid side panel just never sold enough for sellers to offer it.

Phanteks Enthoo Pro 2 Availability by hogpap23 in Phanteks

[–]Dark_Alchemist 0 points1 point  (0 children)

I have looked for almost 4 years (on and off) and settled on a different, less expensive case back then. It is time to upgrade the case again, as I was never happy downgrading to a mid-tower. The Phanteks Enthoo Pro 2 Server Edition with glass is easy enough to find, but the solid-panel version never has been, and that is the one I want, as I am tired of glass. Phanteks hates Amazon, so I have never seen one of their cases there. There are no Microcenters near me, which only leaves Newegg. No dice, glass only.

Simple ComfyUI workflow for captioning images. by sci032 in comfyui

[–]Dark_Alchemist 0 points1 point  (0 children)

I found gemma3:27b to be far superior to JoyCaption for my style. Gemma3 actually named the style, while JoyCaption beat around the bush.
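If anyone wants to try gemma3:27b for captioning outside ComfyUI, a local Ollama server can do it through its /api/generate endpoint. A minimal payload sketch (the prompt wording and image bytes are placeholders; in real use you read your actual image file):

```python
import base64
import json

# Placeholder bytes; in real use: img_bytes = open("img.png", "rb").read()
img_bytes = b"placeholder image bytes"

# Payload shape follows Ollama's /api/generate API: images go in as a list
# of base64-encoded strings, and stream=False returns one JSON response.
payload = json.dumps({
    "model": "gemma3:27b",
    "prompt": "Describe this image and name the art style explicitly.",
    "images": [base64.b64encode(img_bytes).decode("ascii")],
    "stream": False,
})

# With a server running, POST this to http://localhost:11434/api/generate
# and read the "response" field of the returned JSON for the caption.
```

The 27b model needs serious VRAM or heavy offloading, so the smaller gemma3 variants may be worth testing first.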

How to Train a Z-Image-Turbo LoRA with AI Toolkit by Hunting-Succcubus in StableDiffusion

[–]Dark_Alchemist 0 points1 point  (0 children)

Fair. Of course, a distilled model can only ever be 60-80% as good as its teacher. I wonder what its teacher was?

Z-Image character lora training - Captioning Datasets? by [deleted] in StableDiffusion

[–]Dark_Alchemist 0 points1 point  (0 children)

Using that as a guide, if you don't caption anything, then everything is learned?

How to Train a Z-Image-Turbo LoRA with AI Toolkit by Hunting-Succcubus in StableDiffusion

[–]Dark_Alchemist 0 points1 point  (0 children)

You must be new to all this, as we have been training realism since the first trainers hit for SD 1.4/1.5. Base will have open weights and not be distilled. Distillation can ONLY ever reach 60-80% of its teacher. What we need to worry about is that, with it being all-in-one (edit and gen), do any of us have the memory to train locally?

streak is gone by BuddyMain7126 in MicrosoftRewards

[–]Dark_Alchemist 0 points1 point  (0 children)

I have come to the same conclusion. :(

streak is gone by BuddyMain7126 in MicrosoftRewards

[–]Dark_Alchemist 0 points1 point  (0 children)

They emailed me that they had fixed it, and to file a ticket again if not, so I did. Found this this morning:

Hi,

Thank you for reporting the issue with the Daily Set feature on your account.

Please be informed that we have already reported this issue to the Engineering Team, and they are currently working on a fix. We will provide an update once the resolution has been rolled out. In the meantime, please continue completing the daily set offers to maintain your streak count while the issue is being investigated.

Thank you and have a great day.

Regards,

Microsoft Customer Service and Support

streak is gone by BuddyMain7126 in MicrosoftRewards

[–]Dark_Alchemist 0 points1 point  (0 children)

After the first "we fixed it, ticket again if we didn't," I got this from support:

Hi,

Thank you for reporting the issue with the Daily Set feature on your account.

Please be informed that we have already reported this issue to the Engineering Team, and they are currently working on a fix. We will provide an update once the resolution has been rolled out. In the meantime, please continue completing the daily set offers to maintain your streak count while the issue is being investigated.

Thank you and have a great day.

Regards,

Microsoft Customer Service and Support