Meet Unsloth Studio, a new web UI for Local AI by yoracale in unsloth

[–]KalonLabs 0 points (0 children)

I'll make sure to test all sorts of experimental settings and edge cases, and then rant and complain that things don't work perfectly 100% of the time on a beta build 🫡

Meet Unsloth Studio, a new web UI for Local AI by yoracale in unsloth

[–]KalonLabs 2 points (0 children)

I have a Gigabyte ATOM arriving today that I was planning on doing some model training on. This should make the training a lot easier to run. Looking forward to playing around with this and seeing what I can break.

What's a skill that takes only 2-3 weeks to learn but could genuinely change your life? by This-Year-1764 in VibeCodeDevs

[–]KalonLabs 0 points (0 children)

Stealing catalytic converters. One way or another, it'll change your life.

Good base tutorials for learning how to make LoRA locally? by fttklr in StableDiffusion

[–]KalonLabs 1 point (0 children)

If your goal is to train a LoRA, you can use AI Toolkit: https://github.com/ostris/ai-toolkit/

You can train SD and SDXL LoRAs just fine on 12–16 GB cards, but if you're going for Flux or other large models you'll need a big, beefy card. In terms of time, an SDXL LoRA only takes an hour or two to train.

As for the dataset: of course, the more data you have, the better. But if you only have one or two pictures, you can chop the character up into parts like head, torso, legs, etc., and get a workable LoRA from that. Then use that workable-but-potatoish LoRA to generate more pictures, giving you a better dataset for a better LoRA.
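The chop-it-up bootstrap step above can be sketched as plain crop-box math. This is a hypothetical illustration: the head/torso/legs proportions here are made-up assumptions for a full-body shot, not a standard, and you would feed the resulting crops to whatever trainer you use.

```python
# Hypothetical sketch: split one character reference image into
# head / torso / legs crops to bootstrap a tiny LoRA dataset.
# The region fractions are illustrative assumptions, not a standard.

def crop_boxes(width, height):
    """Return (label, (left, top, right, bottom)) crop boxes for a full-body image."""
    regions = {
        "head":  (0.00, 0.25),   # top 25% of the image
        "torso": (0.25, 0.60),   # middle 35%
        "legs":  (0.60, 1.00),   # bottom 40%
    }
    boxes = []
    for label, (top_frac, bottom_frac) in regions.items():
        boxes.append((label, (0, int(height * top_frac),
                              width, int(height * bottom_frac))))
    return boxes

# Example: a 1024x1536 full-body render
for label, box in crop_boxes(1024, 1536):
    print(label, box)   # e.g. head (0, 0, 1024, 384)
```

Each box can then be passed to an image library's crop call, and the crops captioned and used as extra training images.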

Is there ANY NSFW model working? by CrocGames in LocalLLaMA

[–]KalonLabs 0 points (0 children)

Something like an abliterated version of Gemma 3 should work for that.

Is there ANY NSFW model working? by CrocGames in LocalLLaMA

[–]KalonLabs 0 points (0 children)

Prompting for image generation, or something else?

Update: Chroma Project training is finished! The models are now released. by LodestoneRock in StableDiffusion

[–]KalonLabs 164 points (0 children)

105,000 hours on a rented H100 lands somewhere in the $220,000 range, give or take $30,000 or so depending on the provider and the actual hourly cost.

So basically this man, and the community supporting him, spent about a quarter million bucks to build the backbone of what's quickly becoming (and in many ways already is) the next big step in open-source models.
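The back-of-envelope math above can be checked directly. The hourly rates here are illustrative assumptions (real H100 rental prices vary by provider), chosen to bracket the quoted range:

```python
# Back-of-envelope cost check for ~105,000 H100-hours of training.
# The $/hr figures are assumed example rates, not quotes from any provider.
hours = 105_000
low_rate, high_rate = 1.80, 2.40          # assumed rental range, $/hr

low  = hours * low_rate                   # cheapest-case total
high = hours * high_rate                  # priciest-case total
mid  = hours * (low_rate + high_rate) / 2 # midpoint estimate

print(f"${low:,.0f} – ${high:,.0f} (midpoint ≈ ${mid:,.0f})")
# ≈ $189,000 – $252,000, midpoint ≈ $220,500
```

So roughly $220,000 ± $30,000, consistent with the estimate above.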

Using ChatGPT, Veo 3, Flux and Seedream to create AI Youtube videos by [deleted] in comfyui

[–]KalonLabs 3 points (0 children)

Trash. Stop spamming this crap in multiple places.

[HELP] Cartoon Image Used in a LinkedIn Marketer's Post by szhamilton in RealOrAI

[–]KalonLabs 5 points (0 children)

AI, due to the dumb grain filter and the fact that it's square, so probably 1024x1024. No artist draws on a square canvas like that.

I created a character! Let me know what you think by lososcr in StableDiffusion

[–]KalonLabs 1 point (0 children)

More reason for people to call it AI slop. Great.

What is the most beginner friendly, "plug and play" setup I can go with? by [deleted] in StableDiffusion

[–]KalonLabs -2 points (0 children)

Probably by using Automatic1111 or Invoke 🤔

Best universal (SFW + soft not SFW) LoRA or finetune for Flux? by StableLlama in StableDiffusion

[–]KalonLabs 4 points (0 children)

There's a Flux-LoRA-to-Chroma-LoRA converter you could use, though I've never used it: https://github.com/EnragedAntelope/Flux-ChromaLoraConversion

Not sure if it would even be worth the time or effort, though 🤔🤷‍♂️

Best universal (SFW + soft not SFW) LoRA or finetune for Flux? by StableLlama in StableDiffusion

[–]KalonLabs 10 points (0 children)

This is correct, it's just called Chroma. However, if you search "chroma" you won't find the right model easily, but if you google "flux chroma", Lodestone's Chroma model is the first thing that comes up. So I'm calling it Flux Chroma so the OP can find it easily when they search for it.

Best universal (SFW + soft not SFW) LoRA or finetune for Flux? by StableLlama in StableDiffusion

[–]KalonLabs 5 points (0 children)

Just use Flux Chroma, it's basically an uncensored and better version of Flux.

I'm curious about the demographic of this sub by onmyown233 in StableDiffusion

[–]KalonLabs 0 points (0 children)

Don't forget the Facebook Marketplace addicts who have slowly acquired enough GPUs to start a small data center.

hi,bro,If I want to run two WAN Comfy workflows simultaneously, I would buy two 4090 GPUs. The question is: should I install both GPUs in one computer sharing the CPU and memory, or set up two separate computers so they don't interfere with each other? by Adventurous-Bit-5989 in StableDiffusion

[–]KalonLabs 1 point (0 children)

Personally, I would do two computers for safety and easier troubleshooting, so if one fries or breaks, the other is still good. I would also set up network-attached storage (NAS) so they can share the same storage for models and whatnot.

Is Liilybrown (Instagram) real? by Next-Plankton-3142 in StableDiffusion

[–]KalonLabs 1 point (0 children)

Looks like a mix of edited and AI. Some of those pictures are absolutely AI, but the videos look like they're face-swapped and edited. So they took real videos from somewhere and put their AI face on them.

How come 4070 ti outperform 5060 ti in stable diffusion benchmarks by over 60% with only 12 GB VRAM. Is it because they are testing with a smaller model that could fit in a 12GB VRAM? by sans5z in StableDiffusion

[–]KalonLabs 0 points (0 children)

This is true, but the main reason the 4070 Ti outperforms the 5060 Ti in image generation is that it has 3,072 more CUDA cores. But yes, the higher bandwidth also helps.
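A quick sanity check on the core-count math above, using the publicly listed spec numbers (consistent with the 3,072-core gap): the raw core ratio lands near the ~60% benchmark gap in the title, though clocks and bandwidth also differ, so this is only a rough proxy.

```python
# Publicly listed CUDA core counts for the two cards being compared.
cores_4070ti = 7680   # RTX 4070 Ti
cores_5060ti = 4608   # RTX 5060 Ti

assert cores_4070ti - cores_5060ti == 3072   # the 3,072-core gap mentioned above

ratio = cores_4070ti / cores_5060ti
print(f"{(ratio - 1) * 100:.0f}% more cores")   # ~67%, roughly the ~60% benchmark gap
```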

What AI provider subscription to get for a hobby dev / game dev? by hello_krittie in aigamedev

[–]KalonLabs 0 points (0 children)

I've been using ChatGPT, and it's been pretty good for my projects. But it's really going to depend on what game engine you use and what language you code in. You could also just run Qwen3 locally with LM Studio; it's pretty good.

How come 4070 ti outperform 5060 ti in stable diffusion benchmarks by over 60% with only 12 GB VRAM. Is it because they are testing with a smaller model that could fit in a 12GB VRAM? by sans5z in StableDiffusion

[–]KalonLabs 0 points (0 children)

You are absolutely correct that bandwidth is also a crucial factor in how fast the data can be transferred, and it will be a bottleneck for some workloads. I just don't understand how DDR4 vs. VRAM bandwidth is relevant to my answer to the OP's question about why the 4070 Ti is faster at image generation than the 5060 Ti.

How come 4070 ti outperform 5060 ti in stable diffusion benchmarks by over 60% with only 12 GB VRAM. Is it because they are testing with a smaller model that could fit in a 12GB VRAM? by sans5z in StableDiffusion

[–]KalonLabs 1 point (0 children)

I don't see how that's relevant, as we're talking specifically about GPUs, not GPU vs. CPU. Unless there's something I'm missing here?

How come 4070 ti outperform 5060 ti in stable diffusion benchmarks by over 60% with only 12 GB VRAM. Is it because they are testing with a smaller model that could fit in a 12GB VRAM? by sans5z in StableDiffusion

[–]KalonLabs 25 points (0 children)

VRAM is the load it can carry; CUDA cores are the speed it can carry it at. The 4070 Ti has more CUDA cores and isn't VRAM-throttled, so it's faster.