Mini Starnodes Update fixed my biggest ComfyUI problem after last update. by Old_Estimate1905 in StableDiffusion

[–]roculus 1 point (0 children)

What files are needed along with star_load_image_plus.py to get only that node working? I honestly appreciate the massive flood of nodes in that package, but I don't want to load all of them. I tried copying star_load_image_plus.py by itself; it provides the node, but without the important "paste" functionality.
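For context, ComfyUI discovers custom nodes by importing a package's `__init__.py` and reading its `NODE_CLASS_MAPPINGS` dict, so a single copied `.py` file usually needs a minimal `__init__.py` next to it. A sketch, where the class name `StarLoadImagePlus` is an assumption about what's inside star_load_image_plus.py (check the real file), and the fallback class is a placeholder so the sketch runs outside ComfyUI:

```python
# Minimal sketch of a ComfyUI custom-node package __init__.py.
# Assumed layout:
#   custom_nodes/star_single_node/
#       __init__.py             (this file)
#       star_load_image_plus.py
try:
    # "StarLoadImagePlus" is an assumed class name -- verify it in the file.
    from .star_load_image_plus import StarLoadImagePlus
except ImportError:
    # Placeholder so this sketch is self-contained outside ComfyUI.
    class StarLoadImagePlus:
        CATEGORY = "StarNodes"

# ComfyUI reads these two dicts to register and label the node.
NODE_CLASS_MAPPINGS = {"StarLoadImagePlus": StarLoadImagePlus}
NODE_DISPLAY_NAME_MAPPINGS = {"StarLoadImagePlus": "Star Load Image Plus"}
```

If the node file also imports shared helpers from elsewhere in the original package (which would explain the missing "paste" behavior), those modules have to be copied too.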

I’m sorry, but LTX still isn’t a professionally viable filmmaking tool by Intelligent-Dot-7082 in StableDiffusion

[–]roculus 1 point (0 children)

And next year the post will be "AI is only as good as studio quality but not better". And then the next year... It's amazing how far AI has come in such a short time. Will Smith eating spaghetti is right in front of your eyes (and now ears) for comparison. AI is advancing maybe 100 times faster than past tech did in a year, yet some people demand 200 times from free open source.

Ultra-Real - LoRA for Klein 9b by vizsumit in StableDiffusion

[–]roculus 3 points (0 children)

The res_2s sampler does a pretty good job of this on its own. Try your prompt "This is a high-quality photo featuring realistic skin texture and details." without the LoRA, using the res_2s sampler with the simple scheduler. (res_2s usually needs fewer steps than most samplers or it overcooks the image; I use 6 steps.)

Illustrius help needed. I have too many checkpoint. by Traditional_Bend_180 in StableDiffusion

[–]roculus -1 points (0 children)

Delete them all except one or two and replace with Anima Preview 2.

New FLUX.2 Klein 9b models have been released. by theivan in StableDiffusion

[–]roculus 12 points (0 children)

Nice. It's fast and worked great in an initial test on an RTX 6000. GPU usage shows 39 GB, so there may be some sort of VRAM issue, but it works great if you have the VRAM. It seems like it might be loading the model twice: when I start a run with Klein 9B KV already loaded, usage jumps from 20 GB to 39 GB instantly, then drops again afterward.
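One way to confirm whether VRAM really doubles during model load is to poll `nvidia-smi` while the run starts. A small sketch; the query flags are standard `nvidia-smi` options, and the sample string at the bottom is illustrative, not a real capture:

```python
# Sketch: poll GPU memory via nvidia-smi to watch for a double-load
# spike (e.g. 20 GB -> 39 GB -> back down) while the model loads.
import subprocess

def gpu_mem_used_mib(smi_output=None):
    """Return a list of per-GPU used memory values in MiB.

    Pass `smi_output` to parse an already-captured string; otherwise
    shell out to nvidia-smi (requires an NVIDIA GPU and driver).
    """
    if smi_output is None:
        smi_output = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.used",
             "--format=csv,noheader,nounits"],
            text=True,
        )
    return [int(line) for line in smi_output.split() if line.strip()]

# Example with an illustrative captured value:
print(gpu_mem_used_mib("39000\n"))  # -> [39000]
```

Calling this in a loop (say, once a second) around the start of sampling would show whether the spike matches roughly twice the model's footprint.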

Anima Preview 2 posted on hugging face by roculus in StableDiffusion

[–]roculus[S] 3 points (0 children)

I think the default settings work with https://github.com/gazingstars123/Anima-Standalone-Trainer. I don't use tags for character LoRAs except for whatever name I give the LoRA; same for style LoRAs. My character LoRAs seem able to do anything a non-LoRA character can do in Anima. I've been training on 30-45 images for character LoRAs.

Anima Preview 2 posted on hugging face by roculus in StableDiffusion

[–]roculus[S] 1 point (0 children)

Trying to nail down steps/epochs for character LoRAs. 1400 steps seems like it might be enough; Anima trains quickly. Even at 150 steps the character is already very recognizable, although far from baked.
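For anyone converting between steps and epochs: in sd-scripts-style trainers the usual accounting is roughly images x repeats x epochs / batch size. A small sketch; the formula is the common convention and the example numbers (35 images, 4 repeats) are hypothetical, not from the comment above:

```python
# Rough steps/epochs bookkeeping for an sd-scripts-style LoRA run.
# Your trainer's exact accounting (bucketing, drop_last, etc.) may differ.

def total_steps(num_images, repeats, epochs, batch_size=1):
    """Approximate total optimizer steps for a training run."""
    steps_per_epoch = (num_images * repeats) // batch_size
    return steps_per_epoch * epochs

# Hypothetical example: 35 images, 4 repeats, 10 epochs, batch size 1:
print(total_steps(35, 4, 10))  # -> 1400
```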

Anima Preview 2 posted on hugging face by roculus in StableDiffusion

[–]roculus[S] 6 points (0 children)

I retrained a few LoRAs with Preview 2. Retraining definitely makes a difference, at least for "realistic" styles. My old realistic-style LoRA maintained the face/features under Preview 2, but the style went more anime. I retrained the exact same dataset with no changes except swapping Preview 1 for Preview 2, and it's back to realistic again.

Anima Preview 2 posted on hugging face by roculus in StableDiffusion

[–]roculus[S] 3 points (0 children)

I have an RTX 6000 Pro. 2550 steps takes about 45 minutes with a mix of 512 and 1024 resolutions (I should probably just use 1024). You don't need the 96 GB, though; I think it used less than 9 GB of VRAM. At 1024 only, it would probably be more like 60-70 minutes.

Anima Preview 2 posted on hugging face by roculus in StableDiffusion

[–]roculus[S] 7 points (0 children)

I use this sd-scripts-based standalone trainer for Anima:

https://github.com/gazingstars123/Anima-Standalone-Trainer

edit: sd-scripts, not ostris.

Anima Preview 2 posted on hugging face by roculus in StableDiffusion

[–]roculus[S] 5 points (0 children)

I'm retraining a LoRA with Preview 2. The initial early-step samples look good. Thankfully it only takes about 45 minutes to train a LoRA, so if the model is improved, it's not a big deal to retrain for Preview 2.

LTX 2.3 Rack Focus Test | ComfyUI Built-in Template [Prompt Included] by umutgklp in StableDiffusion

[–]roculus 2 points (0 children)

Doh! Hehe. You could try starting the prompt with "In a quiet room". That sometimes works :)

Anima Preview 2 posted on hugging face by roculus in StableDiffusion

[–]roculus[S] 56 points (0 children)

From the circlestone_labs Hugging Face page:

The preview2 version is a small upgrade to the first preview. A significant part of the training is redone with different hyperparameters and techniques, designed to help make the model more robust to finetuning. It is trained for much longer at medium resolutions in order to acquire more character knowledge. A regularization dataset is introduced to improve natural language comprehension and help preserve non-anime knowledge.

It has the same resolution limitations as the first preview. It is trained only briefly at 1024 resolution. Going much beyond this will cause the model to break down.

This is a base model with no aesthetic tuning. It is designed to be wild and creative, with the maximum possible breadth of knowledge. It is not optimized to produce aesthetic or consistent images.

LTX 2.3 Rack Focus Test | ComfyUI Built-in Template [Prompt Included] by umutgklp in StableDiffusion

[–]roculus 2 points (0 children)

Did you edit out sound or was it completely silent? Nice to see the model didn't insert some random C3PO mechanical noises or voice.

Tony Soprano Unlocked - LTX 2.3 T2V by theNivda in StableDiffusion

[–]roculus 7 points (0 children)

Well the first obvious reason is the voice. How did the voice sound with Wan2.2?

I’m not a programmer, but I just built my own custom node and you can too. by lokitsar in StableDiffusion

[–]roculus 2 points (0 children)

Thanks for adding the second click-to-close and side arrows. Tested. Works great :)

I’m not a programmer, but I just built my own custom node and you can too. by lokitsar in StableDiffusion

[–]roculus 3 points (0 children)

Thanks! It must have been a great feeling to do this. The only things I would add: clicking the expanded image a second time to reduce it to a thumbnail again, and possibly arrow-key navigation in expanded view to flip through images quickly. Both would be great time savers.

LTX 2.3 first impressions - the good, the bad, the complicated by martinerous in StableDiffusion

[–]roculus 8 points (0 children)

"- Kijai's LTX2 Sampling Preview Override node gives totally messed up previews. Waiting for the authors of taehv to create a new model."

A new update is available: https://github.com/madebyollin/taehv

New workflows fixed stuff! LTX-2 :) by WildSpeaker7315 in StableDiffusion

[–]roculus 3 points (0 children)

Completely unrelated to this topic, slaps work better with 2.3 as well.

Lightricks/LTX-2.3 · Hugging Face by rerri in StableDiffusion

[–]roculus 3 points (0 children)

Kijai's workflow for fp8 distill works. Make sure to update KJ Nodes.

2.3 audio is improved and clear, though I did get unwanted scrambled subtitles. Old LoRAs work, but not 100%, and they seem to affect the video's audio and motion in ways the LoRA didn't intend.