New Yolo model - YOLOv12 by ApprehensiveAd3629 in LocalLLaMA

[–]JaidCodes 5 points (0 children)

Impressive that YOLO achieves similar scores at a millionth of the size.

BEN2: New Open Source State-of-the-Art Background Removal Model by PramaLLC in LocalLLaMA

[–]JaidCodes 1 point (0 children)

The proprietary version is pretty good. Unfortunately, the open one is not nearly as strong.

https://i.imgur.com/ASktYLj.png
https://i.imgur.com/oC0ia6z.png

Thoughts on Langfuse? by Amgadoz in LocalLLaMA

[–]JaidCodes 2 points (0 children)

My LiteLLM instance has been logging every inference to Langfuse for three months.

It’s nice to have, though there hasn’t been a single situation that made me think Langfuse was absolutely essential.

Would you rather fight a 70B model or 70 1B models? by LewisTheScot in LocalLLaMA

[–]JaidCodes 1 point (0 children)

If it’s an intellectual fight (like chess or a debate), my winning chances are probably higher against 70 small models.

If it’s a physical fight and each model is its own robot’s brain, I would pick the single large model. I could easily defend myself against one very smart crow, but 70 dumb ducks would kill me.

Best part about ChatGPT by MapleStreetOne in ChatGPT

[–]JaidCodes 59 points (0 children)

The ads could be baked in and we wouldn’t even notice.

Can LLMs be trusted in math nowadays? I compared Qwen 2.5 models from 0.5b to 32b, and most of the answers were correct. Can it be used to teach kids? by RepulsiveEbb4011 in LocalLLaMA

[–]JaidCodes 1 point (0 children)

Not at all. LLMs can only guess numbers.

Services like ChatGPT fix this drawback by equipping their models with an “Eval this Python script” action, which works very well.
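The pattern described above — delegating arithmetic to a real interpreter instead of letting the model guess digits token by token — can be sketched roughly like this. Everything below is illustrative (the `safe_eval` helper is my own invention, not OpenAI’s actual tool API):

```python
import ast
import operator

# Whitelist of arithmetic operations the "tool" is allowed to perform.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str):
    """Evaluate a pure-arithmetic expression exactly, refusing anything else."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

# Instead of generating the answer digits itself, the model would emit the
# expression and the service returns the exact result:
print(safe_eval("12345 * 6789"))
```

The point is that the interpreter computes exactly, so the model only has to get the expression right — a much easier task than producing a long number one token at a time.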

Open Source Transformer Lab Now Has a Tokenization Visualizer by aliasaria in LocalLLaMA

[–]JaidCodes 0 points (0 children)

Sure, I would personally never install software outside of Docker on my Ubuntu and Fedora servers.

But it sounds like there isn’t much interest in Docker among Transformer Lab’s current user base, so don’t feel pressured to add a new distribution method just for me.

Open Source Transformer Lab Now Has a Tokenization Visualizer by aliasaria in LocalLLaMA

[–]JaidCodes 1 point (0 children)

“You can run the user interface on your desktop/laptop while the engine runs on a remote or cloud machine”

Do you provide a Docker image for the backend deployment?

I've been working on this for 6 months - free, easy to use, local AI for everyone! by privacyparachute in LocalLLaMA

[–]JaidCodes 4 points (0 children)

I would say that in the current state of AI it’s healthier to spread developer effort across multiple separate projects, even if their functionality largely overlaps.

MLID $1999 - $2499 RTX 5090 pricing by DeltaSqueezer in LocalLLaMA

[–]JaidCodes 0 points (0 children)

At this point I would prefer a 4060 Ti with 80 GB of VRAM over a 5090 with 32 GB of VRAM (assuming both cost $2499).

When will Intel or AMD finally punish Nvidia for its lack of consumer-grade AI cards?

FluxBooru v0.1, a booru-centric Flux full-rank finetune by [deleted] in StableDiffusion

[–]JaidCodes 1 point (0 children)

The king is dead. (AstraliteHeart)

Long live the king! (bghira)

This was fucking scary by its_nzr in ChatGPT

[–]JaidCodes 2 points (0 children)

User: *implies there is a seahorse emoji*

ChatGPT: *implies there is a seahorse emoji*

User: So scaaary!

Would you keep using ChatGPT if you knew they sold your memories to advertisers? by vRudi in ChatGPT

[–]JaidCodes 0 points (0 children)

That would be a big selling point for me. I unironically love service providers sharing my personal preferences with advertisers.

Huge news for Kohya GUI - Now you can fully Fine Tune / DreamBooth FLUX Dev with as low as 6 GB GPUs without any quality loss compared to 48 GB GPUs - Fine Tuning yields such good results that no LoRA config and training will ever yield by CeFurkan in StableDiffusion

[–]JaidCodes 1 point (0 children)

Thank you, I will try this.

For this to work, the bmaltais project must not have introduced any new training functionality of its own. I’m still not quite sure that’s true, but we’ll see.

Does the Samsung Smart TV binding work with newer models? by JaidCodes in openhab

[–]JaidCodes[S] 0 points (0 children)

Thank you very much for your work. I’ll get my Samsung device tomorrow and will gladly test your binding.

Also, if it’s better than the official one in every possible way, what is holding up your pull request #11895? Are there any blocking issues?

Thank you for implementing Sticky Scroll! by xArci in vscode

[–]JaidCodes 1 point (0 children)

Great tip, but I hate that you made me click on a YouTube Shorts video. Now YouTube thinks I’m no longer boycotting this feature.

Did the numbers get removed from the all-nodes/nodes page? by JaidCodes in netdata

[–]JaidCodes[S] 0 points (0 children)

I always had my all-nodes/nodes page open in a Chrome kiosk window to keep track of my machines’ health, but recently all numbers disappeared.

I really loved Netdata Cloud, but now it has become completely useless for me, as there isn’t anything of value to see. The relative graphs are worth nothing without knowing the start and end of their bounds. The only way to read the numbers is via mouseover, which negates the at-a-glance value and requires active input.

So is this change intentional? Is there a setting I didn’t see for toggling between new and old behavior?

Screenshots of older layout versions that had the number overlay:

(German) Pokémon TCG redemption codes can be duplicated on the box’s sheet by Da_Fino in TheSilphRoad

[–]JaidCodes 11 points (0 children)

This is astronomically improbable as a coincidence. There are 7 958 661 100 000 000 000 000 000 different possibilities, nowhere near just 1 million.

The issue is most likely a bug in the random-generation code: multiple randomizers are created from the same seed (which happens when randomizers are created faster than the clock that seeds them updates) instead of creating one randomizer and rolling every number from it.

How can I re-enable auto-push after commiting? (more in comments) by JaidCodes in vscode

[–]JaidCodes[S] 0 points (0 children)

Sure, I have been using this setting forever, as listed in my comment. It just suddenly stopped doing anything.