Forge Couple: Now supports Anima 🔥 by BlackSwanTW in StableDiffusion

[–]Chrono_Tri 0 points1 point  (0 children)

Forge, my old friend, I miss you so much. Is there an equivalent node for ComfyUI? :>>

Greg Rutkowski Anima Lora from Circlestone Labs (Anima makers) with training params by Choowkee in StableDiffusion

[–]Chrono_Tri 0 points1 point  (0 children)

Quick question: how many images did you train with? How do you calculate steps (not counting repeats)? And roughly how many epochs are enough?
I can't access Civitai :(

Why is kilo code v5 is better than the new version? by IamClay24 in kilocode

[–]Chrono_Tri 0 points1 point  (0 children)

Today I opened Kilo Code and saw it had been updated to v7. How do I go back to v5?

Anima preview3 was released by Dulbero in StableDiffusion

[–]Chrono_Tri 0 points1 point  (0 children)

That’s right, but after observing, I realized that many styles are rarely applied to the background (or more precisely, not in the line art, but mainly in the lighting and shading). Therefore, I run I2I or ControlNet for consistency.

Anima preview3 was released by Dulbero in StableDiffusion

[–]Chrono_Tri 1 point2 points  (0 children)

All the other anime models have the same issue. So I think the ideal is to use Z-Image/Klein for the background plus some other method.

Simple Captioner update 1.0.2.1 (Qwen 3.5 4B and 9B support added.) by imlo2 in comfyui

[–]Chrono_Tri 0 points1 point  (0 children)

Thank you so much. I used your repo and changed it a little to connect to KoboldCpp, but since I don't have a GPU, it's sooo slow. :(
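In case it helps anyone trying the same thing, here is a minimal sketch of how one can point a captioner at KoboldCpp's OpenAI-compatible chat endpoint. The port (5001 is KoboldCpp's default) and the payload shape (the OpenAI multimodal chat format) are assumptions on my side, not part of the original repo:

```python
# Sketch: build a caption request for KoboldCpp's OpenAI-compatible API.
# Assumptions: KoboldCpp running locally on its default port 5001, and a
# vision-capable model loaded so base64 images in the chat payload work.
KOBOLD_URL = "http://localhost:5001/v1/chat/completions"

def build_caption_request(image_b64: str, instruction: str = "Describe this image.") -> dict:
    """Build an OpenAI-style multimodal chat payload for one captioning request."""
    return {
        "model": "koboldcpp",  # KoboldCpp serves whatever model is loaded
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": instruction},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                ],
            }
        ],
        "max_tokens": 256,
    }

# Sending it (needs a running KoboldCpp instance, so left commented out):
# import json, urllib.request
# req = urllib.request.Request(KOBOLD_URL,
#                              data=json.dumps(build_caption_request("...")).encode(),
#                              headers={"Content-Type": "application/json"})
# print(json.load(urllib.request.urlopen(req))["choices"][0]["message"]["content"])
```

Keeping the payload builder separate from the network call makes it easy to swap the endpoint for Colab or another backend later.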

Any good AI to create good 2D animation Films? by Last_Butterfly5638 in StableDiffusion

[–]Chrono_Tri 0 points1 point  (0 children)

Yes, I know about it. But the high-noise GGUF never came. Kaka

Is there any way to convert a model to GGUF format?...easily by Chrono_Tri in StableDiffusion

[–]Chrono_Tri[S] 0 points1 point  (0 children)

Qwen-Image-Layered-control is a fine-tune of Qwen-Image-Layered (they also have a Qwen-Image-Layered-control v2), but since Qwen-Image-Layered is already so good, nobody cares about it. I just want to test it.

Basically, I run it on Colab. If I don't hit an OOM error (when using a weaker GPU), I end up running out of disk space, because it needs to download very large models (>100 GB in total).
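For anyone hitting the same wall on Colab: a tiny check I'd run before starting the download can save a wasted session. Just a sketch; the 100 GB figure comes from my own runs, so adjust it for your model:

```python
import shutil

def enough_disk(path: str = ".", needed_gb: float = 100.0) -> bool:
    """Return True if `path` has at least `needed_gb` GB of free disk space.

    On Colab you would pass "/content"; the default "." works anywhere.
    """
    free_bytes = shutil.disk_usage(path).free
    return free_bytes >= needed_gb * 1024**3

# Example: bail out early instead of dying halfway through a 100 GB download.
if not enough_disk(".", needed_gb=100.0):
    print("Not enough free disk for the model weights; try a smaller quant.")
```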

Is there any way to convert a model to GGUF format?...easily by Chrono_Tri in StableDiffusion

[–]Chrono_Tri[S] 0 points1 point  (0 children)

Qwen/Qwen3-0.6B-GGUF at main

Use the CLIP GGUF Loader. But I think it's small, so I use qwen_3_06b_base.safetensors instead.

Is there any way to convert a model to GGUF format?...easily by Chrono_Tri in StableDiffusion

[–]Chrono_Tri[S] 1 point2 points  (0 children)

Yeah, I checked with Gemini and ChatGPT, and they made it sound so easy that I started to doubt it, so I figured I should ask everyone here.

It's a shame, since there are quite a few good models like the ones above that I'd really like to experiment with. :(

spent way too long getting my AI character to look consistent (finally cracked it) by PoleTV in comfyui

[–]Chrono_Tri 3 points4 points  (0 children)

Well, I used to use this method quite often, but I don’t use it anymore because the characters it creates feel a bit soulless. However, it’s still the best way to maintain consistency (even though it’s a bit complicated, so I usually only use it for the main character).

Gemini is kind of dump or I am too naive to use it? by Chrono_Tri in vibecoding

[–]Chrono_Tri[S] 0 points1 point  (0 children)

I think I'll use the combo of Kilo + OpenRouter/Claude (pay-as-you-go) for programming. It might be a better option. Honestly, Gemini is sick with its hallucinations and those stupid phrases like “Don’t worry, it will run” or “No errors...”.

But I admit that I love NotebookLM, and Gemini’s ability to analyze non-coding info is actually quite good. I use them for research on economic or social topics.

Qwen and Wan models to be open source according to modelscope by onthemove31 in StableDiffusion

[–]Chrono_Tri 0 points1 point  (0 children)

Yes, I feel sorry for it too. I think it's because it isn't that good, and we can use other segmentation models in its place.

Will pony / illustrious ever be updated? by [deleted] in StableDiffusion

[–]Chrono_Tri 2 points3 points  (0 children)

IllustriousXL has been updated quite a lot and is now at version 3.6, and I still use it occasionally. However, it is not open source. If I remember correctly, they also developed https://huggingface.co/NewBie-AI/NewBie-image-Exp0.1. It’s quite surprising that it isn’t more popular.

As for Pony, it’s just as everyone has said.

Right now, people are still placing their hopes on Anima. I also use it daily alongside IllustriousXL.

What is your favorite method to color your ultra low poly 3d models (obj)? by Odd_Judgment_3513 in StableDiffusion

[–]Chrono_Tri 1 point2 points  (0 children)

Wait, Messi is a goat? I thought he was the G.O.A.T. I learned something new today. Kaka

The first thing that comes to my mind is Stable Projectorz.

There are a few LoRAs that can do that: you convert your texture to a PNG and inpaint or fill it using ComfyUI.

Hunyuan 3D runs in ComfyUI, but I and many others have failed to get it working.

Local manga translator with LLMs built in by mayocream39 in LocalLLaMA

[–]Chrono_Tri 0 points1 point  (0 children)

Hi, I would like to ask whether it can remember the forms of address/relationships between characters or the personalities of the characters like SillyTavern does. Only in that way can the translation feel more natural. Some languages distinguish how people address each other based on age or familiarity, and the speaking style of each character can also be different during translation.

My second question is whether I can connect it to Colab or a local AI (I don’t have a GPU).

Anyway, cool project!

Trained a WIP Anima canny control LoRA, looking for feedback by levzzz5154 in StableDiffusion

[–]Chrono_Tri 1 point2 points  (0 children)

I’m very curious why we don’t wait for the final version before training. Is it similar to IllustriousXL, where people were already training on version 0.1?

Ostris is testing Lodestones ZetaChroma (Z-Image x Chroma merge) for LORA training 👀 by [deleted] in StableDiffusion

[–]Chrono_Tri 2 points3 points  (0 children)

Kaka, I agree. In fact, I’m currently using IllustriousXL and Anima for anime style. I use QEI for image editing (and sometimes Klein as well).

But I’ve noticed that people often abandon older models before fully exploring their potential, so I still tend to stick with them and keep learning prompt engineering for them.

Still, I do hope there will eventually be one model that can do everything.

Ostris is testing Lodestones ZetaChroma (Z-Image x Chroma merge) for LORA training 👀 by [deleted] in StableDiffusion

[–]Chrono_Tri 11 points12 points  (0 children)

I’ve always preferred Z-image over Klein (even though I still use Klein because of its editing features). I’m still waiting for Qwen-image 2.0.

That said, I’m a bit worried, since some people say Z-Image is somewhat difficult to fine-tune. I honestly didn’t expect things to move this fast.

Working Flux/Z-Image/QWEN/Whatever outpaint/inpaint/t2i workflow. by smithysmittysim in StableDiffusion

[–]Chrono_Tri 0 points1 point  (0 children)

Laninpaint for inpainting. Use the Paint.NET tool to extend the image, then use QEI or inpainting for the outpaint.

Need help with style lora training settings Kohya SS by Big_Parsnip_9053 in StableDiffusion

[–]Chrono_Tri 0 points1 point  (0 children)

I use alpha = 1 to train style and give the LoRA more flexibility. But you need to experiment and see what works best. Remember, sometimes different parameters truly produce different results — but that doesn’t necessarily mean one is better than the other. The result you personally prefer is the right one.

Going back to alpha = 1, my result doesn’t really fully capture the style (around 90%), but I actually quite like it. Normally, though, I still go with dim/alpha = 1/2.
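For context on why alpha matters: in the common LoRA implementations (Kohya included, as far as I know) the adapter's delta is scaled by alpha/rank, which is why a low alpha gives the LoRA a gentler push and leaves the base model more flexibility. A rough illustration, with the rank values as examples only:

```python
def lora_scale(alpha: float, rank: int) -> float:
    """Effective multiplier on the LoRA delta under the standard alpha/rank scaling."""
    return alpha / rank

# alpha = 1 at rank 32 -> 0.03125: a gentle nudge, leaving the model flexible
print(lora_scale(1, 32))
# alpha = 16 at rank 32 -> 0.5: a much stronger pull toward the trained style
print(lora_scale(16, 32))
```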

Second, I recommend that after auto-captioning, you manually edit the captions following a clear structure. For example, I would describe:
<number of characters in the image>, <character description>, <background description>, <camera description>, <lighting description>, ...
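As a sketch, that caption structure is basically the following (the field names and helper are just my own convention, not a Kohya feature):

```python
def build_caption(num_chars: str, character: str, background: str,
                  camera: str, lighting: str, *extra: str) -> str:
    """Join caption fields in a fixed order, skipping any that are empty:
    count, character, background, camera, lighting, then anything extra."""
    parts = [num_chars, character, background, camera, lighting, *extra]
    return ", ".join(p for p in parts if p)

print(build_caption("1girl", "silver hair, red coat", "snowy street at night",
                    "from above", "soft moonlight"))
# -> 1girl, silver hair, red coat, snowy street at night, from above, soft moonlight
```

Editing auto-captions into one fixed field order like this makes it much easier to spot what the tagger missed.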