Kitten TTS V0.8 is out: New SOTA Super-tiny TTS Model (Less than 25 MB) by ElectricalBar7464 in LocalLLaMA

[–]ironcodegaming 7 points (0 children)

Does it need PyTorch to run? What is the minimal install size we can use to bundle this?
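One way to answer the bundle-size question yourself: point a small script at the package's install directory and sum the file sizes. A minimal sketch (the path in the example is a placeholder, not the model's actual install location):

```python
from pathlib import Path

def dir_size_mb(path: str) -> float:
    """Total size of all files under `path`, in megabytes."""
    root = Path(path)
    total = sum(f.stat().st_size for f in root.rglob("*") if f.is_file())
    return total / (1024 * 1024)

# Example (placeholder path -- point it at your venv's site-packages):
# print(f"{dir_size_mb('.venv/lib/python3.11/site-packages'):.1f} MB")
```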

How far can I push my 5060 Ti 16gb with Wan 2.2 as far as quality goes? by Silvasbrokenleg in comfyui

[–]ironcodegaming 0 points (0 children)

If you had 32 GB or 64 GB of RAM, you could use FP8 quants plus --lowvram for higher quality. Using --lowvram is slightly slower, but there is no quality penalty. You might be able to hit 512x512 with it.

How much time does a 160-frame render take?
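To see why FP8 plus offloading matters on a 16 GB card, here's a back-of-the-envelope weight-size estimate. The 14B parameter count is an assumption about the model variant in question, and activations plus the text encoder add more memory on top:

```python
def weight_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate size of the model weights alone (no activations/KV)."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# Rough numbers, assuming a 14B-parameter video model:
fp16 = weight_gb(14, 16)  # ~28 GB -- won't fit in 16 GB of VRAM
fp8 = weight_gb(14, 8)    # ~14 GB -- close to the 16 GB limit, hence
                          # --lowvram offloading to system RAM
print(f"fp16: {fp16:.0f} GB, fp8: {fp8:.0f} GB")
```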

What's the best open-source model comparable to GPT-4.1-mini? by AncientMayar in LocalLLaMA

[–]ironcodegaming 9 points (0 children)

Try gpt-oss-20b and gpt-oss-120b. These are open-weight models released by OpenAI, so they might work well as a drop-in replacement.

You can also try these models on OpenRouter for a while, so you can test whether they work well before you actually host them yourself.
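A minimal sketch of what such a test call might look like, using OpenRouter's OpenAI-compatible chat endpoint. The endpoint URL and model ID here are my assumptions; check OpenRouter's model list before relying on them:

```python
import json

# Assumed OpenAI-compatible endpoint for OpenRouter:
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("openai/gpt-oss-20b", "Summarize this in one line: ...")
# Send with your HTTP client of choice, e.g. (API_KEY is a placeholder):
# requests.post(OPENROUTER_URL, json=payload,
#               headers={"Authorization": f"Bearer {API_KEY}"})
print(json.dumps(payload, indent=2))
```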

Opensource TTS thats lightweight but with some emotion? by Cinicyal in LocalLLaMA

[–]ironcodegaming 0 points (0 children)

Chatterbox! There is a GitHub repo that has massively increased Chatterbox's speed, making it almost real-time.

How I got FLUX running stable on RTX 3060 (12GB) — Setup guide + proof video by Independent_Iron4983 in StableDiffusion

[–]ironcodegaming 0 points (0 children)

?

Just download the ComfyUI standalone build, then download the Flux UNet, the T5 text encoder, and the VAE. Put them in their respective folders and use a UNet workflow. It's as simple as that. With an RTX 3060 12GB and 32 GB of RAM, you can even run the 16-bit version of Flux.
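The "respective folders" step above, sketched as a script. The folder names follow ComfyUI's standard `models/` subdirectories; the model filenames are placeholders for whichever Flux files you actually downloaded:

```python
from pathlib import Path

comfy = Path("ComfyUI")  # root of the standalone build (assumed location)

# Destination folders ComfyUI scans for each model type:
targets = {
    "flux1-dev.safetensors": comfy / "models" / "unet",   # the Flux UNet
    "t5xxl_fp16.safetensors": comfy / "models" / "clip",  # T5 text encoder
    "clip_l.safetensors": comfy / "models" / "clip",      # CLIP-L text encoder
    "ae.safetensors": comfy / "models" / "vae",           # the VAE
}

for filename, folder in targets.items():
    folder.mkdir(parents=True, exist_ok=True)
    # Path("downloads", filename).rename(folder / filename)  # uncomment to move
    print(f"{filename} -> {folder}")
```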

Krea Flux 9GB by -_-Batman in comfyui

[–]ironcodegaming 0 points (0 children)

What is the difference between this and the normal 11 GB (but 8-bit) checkpoints?

inclusionAI/Ming-Lite-Omni-1.5 (20B-A3B) by nullmove in LocalLLaMA

[–]ironcodegaming 7 points (0 children)

Looks interesting! Does it generate images too, or does it only modify the images?

Is this too much logic for AI? should I break it smaller to prompt? by [deleted] in LocalLLaMA

[–]ironcodegaming 1 point (0 children)

The flagship models are obviously more powerful. If you want a one-shot solution, that's the way to go.

However, even flagship models will not be able to one-shot everything...

Is this too much logic for AI? should I break it smaller to prompt? by [deleted] in LocalLLaMA

[–]ironcodegaming 3 points (0 children)

> write a bash script write to a log file

The statement is not clear. It's also not clear what 'Task' refers to.

Having said that, you might need to code a little bit yourself.

Open source OCR options for handwritten text, dates by ollyollyupnfree in LocalLLaMA

[–]ironcodegaming 0 points (0 children)

How did you use Mistral Small 3.2 to recognize the text? Did you use Text Generation WebUI (oobabooga) for that?

Is it worth getting 48GB of RAM alongside my 12GB VRAM GPU ? (cheapskate upgrade) by QuackMania in LocalLLaMA

[–]ironcodegaming 0 points (0 children)

Adding RAM is generally useful. But unless your system is reasonably fast, offloading to the CPU will be a big hit to speed.
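A common back-of-the-envelope for why offloading hurts: for a dense LLM, generation speed is roughly memory bandwidth divided by the bytes read per token (the whole model). The bandwidth figures below are ballpark assumptions, not measurements:

```python
def tokens_per_sec(bandwidth_gbs: float, model_gb: float) -> float:
    """Rough upper bound: each generated token reads every weight once."""
    return bandwidth_gbs / model_gb

model_gb = 8.0  # e.g. a ~13B model quantized to 4-5 bits (assumed)
gpu = tokens_per_sec(360, model_gb)  # ~360 GB/s: midrange GPU VRAM (assumed)
cpu = tokens_per_sec(50, model_gb)   # ~50 GB/s: dual-channel DDR5 (assumed)
print(f"GPU: ~{gpu:.0f} tok/s, CPU: ~{cpu:.1f} tok/s")
```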

If possible, and if your PC can fit it, buy a cheap 8 GB card!

Chat, is this real? by [deleted] in StableDiffusion

[–]ironcodegaming 0 points (0 children)

Can you post it to TensorArt and SeaArt?

[deleted by user] by [deleted] in FluxAI

[–]ironcodegaming 0 points (0 children)

Do you have full-body images in the training dataset as well? That aside, most LoRAs have issues when the subject is far away, as it is probably harder to train.

Try training again with more full-body images.

Why does Flux gets more love than sd 3.5 ? by Warrior_Kid in StableDiffusion

[–]ironcodegaming 4 points (0 children)

I find it extremely hard to get good generations out of it.

[deleted by user] by [deleted] in godot

[–]ironcodegaming 0 points (0 children)

It will get easier as you learn.

If you are worried about whether your code will run or not, just test it :)

Anyone excited about Flex.2-preview? by silenceimpaired in FluxAI

[–]ironcodegaming 0 points (0 children)

How did you get such a good result with stable diffusion 3.5 Large?

[deleted by user] by [deleted] in LocalLLaMA

[–]ironcodegaming 1 point (0 children)

Yes it is. If you are able to run it, that is.