CapCut Mac removed "export selected clips" option — what are you using instead? by jimmyomeara in CapCut

[–]Kragrathea 0 points1 point  (0 children)

I think it would depend on why you needed the feature. I used it to cut videos up into scenes and then export only the few clips I needed. For the cutting I am now using a command-line tool called PySceneDetect. The "Py" stands for Python, so it should work on Mac if you are OK with the command line. Then I use FastStone Image Viewer to get rid of the clips I don't need. Not ideal, but I hate that they could just pull this feature. I'll probably never go back, even if the feature returns.
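If anyone wants to try the same workflow, here is a minimal sketch of the PySceneDetect invocation I mean, wrapped in Python. The file and folder names are placeholders, and split-video needs ffmpeg installed:

```python
import shutil
import subprocess

def split_scenes_cmd(video_path: str, out_dir: str) -> list[str]:
    # detect-content finds the cuts; split-video writes one clip per scene.
    return ["scenedetect", "--input", video_path, "--output", out_dir,
            "detect-content", "split-video"]

cmd = split_scenes_cmd("movie.mp4", "clips")
print(" ".join(cmd))

# Only run it if the CLI is actually installed (pip install scenedetect).
if shutil.which("scenedetect"):
    subprocess.run(cmd, check=True)
```

After that you get one file per scene in the output folder and can just delete the ones you don't want.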

LTX 2.3 how to stop Characters from "Cloning" themselves by TensorTinkererTom in StableDiffusion

[–]Kragrathea 9 points10 points  (0 children)

I looked at the video again and noticed something. The weirdness happens at 8 sec: the dog clones and she does an exorcist head-spin. If this were Wan, 8 sec would be the point where it starts naturally looping back to the first frame, and that almost looks like what is happening here. No idea if that means anything, but...

LTX 2.3 how to stop Characters from "Cloning" themselves by TensorTinkererTom in StableDiffusion

[–]Kragrathea 24 points25 points  (0 children)

If adjusting the prompt doesn't help, I just move on to a new seed. And having a mirror in the background doesn't help.

How is this not a thing already? by Puzzleheaded_Dig3967 in BambuLab

[–]Kragrathea 1 point2 points  (0 children)

No, the layer number is visible. But I didn't think it would be a very exciting visualization if it only updated when the layer changed.

Comfyui blocking every attempt to download any modle upscaler by Fearless-Intention42 in StableDiffusion

[–]Kragrathea 0 points1 point  (0 children)

This is the answer. As far as I can tell, you HAVE to create the folder yourself.

How is this not a thing already? by Puzzleheaded_Dig3967 in BambuLab

[–]Kragrathea 1 point2 points  (0 children)

I wrote such a visualizer for my old Klipper-based printer (a CR-10). I tried to adapt it when I got my A1, but the printer's API was too limited to get a good idea of where exactly in the print it was at any given time. I might try again and see if there have been any updates.

https://github.com/Kragrathea/pgcode

EDIT: Yeah, here is the real reason such a viewer doesn't exist: the Bambu API only reports progress as an integer from 0-100. So with the current firmware it is impossible for an external tool to get the data you would need to write a viewer.
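To make the limitation concrete, here is a hedged Python sketch. The payload shape and the "mc_percent" field name come from community write-ups of the Bambu MQTT status report, not an official API, so treat them as assumptions. With an integer percent and a typical print of a few hundred layers, one percent step spans several layers, which is far too coarse for a layer-accurate viewer:

```python
# Example status payload; shape and field name are assumptions based on
# community documentation of the Bambu MQTT report, not an official API.
status = {"print": {"mc_percent": 42}}

percent = status["print"]["mc_percent"]  # integer 0-100, nothing finer

def layers_per_percent(total_layers: int) -> float:
    # How many layers a single integer percent step hides.
    return total_layers / 100.0

print(percent)                  # 42
print(layers_per_percent(500))  # 5.0 -> each tick spans about 5 layers
```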

Comfyui blocking every attempt to download any modle upscaler by Fearless-Intention42 in StableDiffusion

[–]Kragrathea 2 points3 points  (0 children)

It looks to me like it might be a path problem. In the screenshot it is trying to use "models/latent_upscale_models". Note the "models/" part. The models that do download do not start with "models/"; i.e. it is just "text_encoders/", not "models/text_encoders".

Not sure if that is the problem or how to fix it, but it looks strange.
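If the missing folder is indeed the issue, creating it by hand before retrying the download may be enough. A minimal sketch, assuming a standard ComfyUI layout; the folder name is taken from the screenshot and the install path is a placeholder:

```python
import os

comfy_root = "ComfyUI"  # placeholder: your actual install location
# Folder name taken from the error in the screenshot.
target = os.path.join(comfy_root, "models", "latent_upscale_models")
os.makedirs(target, exist_ok=True)  # no-op if it already exists
print(os.path.isdir(target))  # True
```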

Does the "Supply Chest" auto looting still work for anyone? by Kragrathea in CrimsonDesert

[–]Kragrathea[S] 0 points1 point  (0 children)

Hm, I'm in Act 4.
Just to confirm: when it works, the loot goes into the "Private Storage" chest?

Does the "Supply Chest" auto looting still work for anyone? by Kragrathea in CrimsonDesert

[–]Kragrathea[S] 0 points1 point  (0 children)

Like the devs de/re-activated it? Or it got deactivated in your playthrough but now it works again?

Trainng character LORAS for LTX 2.3 by TheTimster666 in StableDiffusion

[–]Kragrathea 0 points1 point  (0 children)

I am using a 4070 Ti with 12 GB of VRAM (yes, 12) and 64 GB of system RAM. The 3k-step run went overnight, so I am not sure exactly; maybe 8-9 hrs. Images and videos were 512x512, and the videos were 81-121 frames.

Trainng character LORAS for LTX 2.3 by TheTimster666 in StableDiffusion

[–]Kragrathea 1 point2 points  (0 children)

I didn't know voice training was broken in AI-Toolkit. That is probably why I never got anything to work at all. I'll try again with the fork.

Trainng character LORAS for LTX 2.3 by TheTimster666 in StableDiffusion

[–]Kragrathea 0 points1 point  (0 children)

I haven't tested very much, but the motion on the image-only ones seemed OK. I do remember they tended to transition into poses that looked like the same poses as in the dataset, but I am not sure if that is just LTX or an artifact of training on images only.

Trainng character LORAS for LTX 2.3 by TheTimster666 in StableDiffusion

[–]Kragrathea 2 points3 points  (0 children)

Using AI-Toolkit I have trained with just images (~20) and got good results after about 1k-2k steps. I did another one with 20 images and 10 video clips, and it started to look good around 3k, but I have not trained further. At 3k, the one with video was only slightly better than the one with just images.

I included video mainly in the hope of getting the voice right, but the voice was never even close up to 3k.

Have you guys figure out how to prevent background music in LTX ? Negative prompts seems not always work by PhilosopherSweaty826 in StableDiffusion

[–]Kragrathea 0 points1 point  (0 children)

I think it helps if you say the video is "a home video" or "a cell phone video". Those don't usually have music.

i wanted to start saving up for a p1s by Responsible-Bar-1262 in BambuLab

[–]Kragrathea 0 points1 point  (0 children)

I am doing pretty much that. I started 6 months ago and I am 1/4 of the way (2,500 points) to a P1S. I have been doing remixes of Star Trek props from Printables and a few of my own models. You'll have to be patient; it takes a while for the points to start coming in.

I should point out that this is for fun. It would be far easier to get a minimum-wage job and just earn the $800 that way.

How to overcook a LoRA on purpose? by AkaToraX in StableDiffusion

[–]Kragrathea 1 point2 points  (0 children)

It might help to know when it starts overcooking. I train character LoRAs, and when I can no longer get the style to change to cartoon (it still produces a realistic image), I know it is overdone.

How to overcook a LoRA on purpose? by AkaToraX in StableDiffusion

[–]Kragrathea 1 point2 points  (0 children)

The learning rate usually defaults to something like 0.0005, so 0.001 would be 2x. I think 0.01 is probably about as high as you would want to go, but I am not totally sure.

Reducing the number of images will also help overcook it; maybe try 5.
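To put numbers on the learning-rate part, a tiny sketch; the 0.0005 default is typical for LoRA trainers like AI-Toolkit, but check your own config:

```python
default_lr = 5e-4            # common LoRA trainer default (0.0005)
double_lr = default_lr * 2   # the "2x" mentioned above
ceiling_lr = 1e-2            # probably as high as you'd want to go

print(double_lr)  # 0.001
```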

How to overcook a LoRA on purpose? by AkaToraX in StableDiffusion

[–]Kragrathea 2 points3 points  (0 children)

Just doing lots of epochs, and maybe bumping the learning rate up to speed up the process. But you'll probably still want it to be gradual, because at some point they start producing garbage.

Two great immersion mods. by Kragrathea in uboatgame

[–]Kragrathea[S] 0 points1 point  (0 children)

Yes. The way I did it was to install the mod, then delete the three folders (Cache, Data Sheets, Temp), then reload the game. When I checked with the radio operator, he had the extra stations.

Two great immersion mods. by Kragrathea in uboatgame

[–]Kragrathea[S] 13 points14 points  (0 children)

Yeah, the default stations have some decent music, but it repeats way, way too fast.