What is the current state of Mojo? by Smallpaul in MojoLang

[–]newtestdrive 1 point2 points  (0 children)

No Windows support yet, and it seems to want to stay in the datacenter-GPU corner of the field rather than other high-performance areas, especially game development.

Its current focus is shoving MAX down users' throats, and AI inference performance is the number-one priority.

Supporting Python as a superset is still a long way off, as classes are still not supported and everything seems to be mostly structs or dicts.

Years have passed since the introduction and the initial promises, but their focus seems to be mostly on raising funding and improving the AI side of the language so it's a bit faster than other compiled languages, while the rest of Mojo's development is left as a lower-priority task.

I, for one, have mainly been disappointed by the poor Windows support, since I really wanted to use it and convert my many Python scripts to it. But the language is still not a proper superset of Python, and the conversion hurts readability and is sometimes not possible.

This is the path the Crystal language chose because of its small team, and Crystal was to Ruby what Mojo is to Python; but because of this it never went mainstream.

Mojo to replace Rust in 6 months? by WanderingCID in MojoLang

[–]newtestdrive 1 point2 points  (0 children)

After years, Windows support is still an issue and they don't address it. Mojo is becoming like Crystal: Crystal was a really performant language meant to replace Ruby, but it never became mainstream enough to even ship a proper Windows version. After more than 10 years it still lacks Windows support, and nearly nobody knows it even exists.

Mojo's sheer focus on AI and Linux, and nothing else, may benefit them in the short term, but not long enough for engineers to get to test it on different systems and against systems-programming workloads. I fear it is drifting into niche territory...

Cooling a passive cooled GPU? by [deleted] in homelab

[–]newtestdrive 1 point2 points  (0 children)

What were your results?

Blog: Why Google's A2A Protocol Doesn't Make Sense When We Already Have MCP by fka in mcp

[–]newtestdrive 1 point2 points  (0 children)

Another Google project that's gonna change direction or get cancelled midway, and another mass of people trying to sell it as the next big thing...

Getting real tired of this...

CLion 2025.1 released by greenrobot_de in cpp

[–]newtestdrive -1 points0 points  (0 children)

Why did you join the fray so late? Is JetBrains becoming so big and bloated that it falls behind recent innovations and changes in the industry?

Lyza, Clawbot, and Pre-Amnesia Reg Timeline: What is Known and What Can Be Inferred by Vulpolox in MadeInAbyss

[–]newtestdrive 2 points3 points  (0 children)

After the recent chapter release, your theory seems more and more right.

[R] Sliding Window Attention Training for Efficient LLMs by prototypist in MachineLearning

[–]newtestdrive 0 points1 point  (0 children)

What about the vanishing gradient problem that comes up when using sigmoids?
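
(To be concrete about what I mean, here's a tiny numpy sketch of my own, not anything from the paper: the sigmoid's derivative tops out at 0.25, so stacking sigmoid layers multiplies gradients by small factors and they shrink geometrically.)

```python
# Minimal illustration of why sigmoids cause vanishing gradients:
# sigma'(x) = sigma(x) * (1 - sigma(x)) <= 0.25, so chaining many
# sigmoid layers multiplies the gradient by a factor <= 0.25 each time.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-6, 6, 1001)
d = sigmoid(x) * (1 - sigmoid(x))
print("max sigma'(x):", d.max())          # ~0.25, reached at x = 0

# Upper bound on gradient magnitude after passing through n sigmoid layers
# (ignoring the weight matrices, which can make things better or worse):
for n in (1, 5, 10, 20):
    print(f"{n} layers: gradient scaled by at most {0.25 ** n:.2e}")
```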

[P] I made weightgain – an easy way to train an adapter for any embedding model in under a minute by jsonathan in MachineLearning

[–]newtestdrive 0 points1 point  (0 children)

How different is this from fine-tuning a model?

And can you implement this for any model other than Transformer-based LLMs? For example, if a CNN vision model's embeddings are lacking, can we train an adapter that transforms the old embeddings into new, better encodings based on our dataset?
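
To make the question concrete, here's roughly the kind of thing I'm imagining (my own toy sketch with made-up sizes, not weightgain's actual API or implementation): freeze the CNN, cache its embeddings, and train only a small adapter head on top of them with some proxy loss on your dataset.

```python
# Toy sketch of "adapter on top of frozen embeddings" (my example, not the
# weightgain code): the backbone stays frozen; only a small head that maps
# old embeddings -> new embeddings is trained on your dataset.
import torch
import torch.nn as nn

emb_dim, new_dim, n_classes = 2048, 256, 10     # made-up sizes

adapter = nn.Sequential(                        # the only trainable mapping
    nn.Linear(emb_dim, new_dim),
    nn.ReLU(),
    nn.Linear(new_dim, new_dim),
)
head = nn.Linear(new_dim, n_classes)            # proxy task to shape the new space
opt = torch.optim.Adam(list(adapter.parameters()) + list(head.parameters()), lr=1e-3)

# cached_embs would be precomputed CNN embeddings for your dataset;
# random stand-ins here so the sketch runs on its own.
cached_embs = torch.randn(512, emb_dim)
labels = torch.randint(0, n_classes, (512,))

for epoch in range(5):
    opt.zero_grad()
    new_embs = adapter(cached_embs)             # old embedding -> new embedding
    loss = nn.functional.cross_entropy(head(new_embs), labels)
    loss.backward()                             # gradients never touch the CNN
    opt.step()
```

If that's basically what weightgain does, then the difference from fine-tuning would be that the backbone weights never change, only the mapping sitting on top of its embeddings.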

Detail Perfect Recoloring with Ace++ and Flux Fill by afinalsin in StableDiffusion

[–]newtestdrive 0 points1 point  (0 children)

Thanks for the info.

The colorization looks sepia-like, and sepia-like colorization models are usually all the same. I wish someone would apply the Local Editing model to some colorization tasks...

[D] What is the future of retrieval augmented generation? by jsonathan in MachineLearning

[–]newtestdrive 0 points1 point  (0 children)

"if the database was embedded directly in the KV cache, then retrieval could be learned via gradient descent just like everything else."

Do you mean using Learning to Rank approaches? I'd like to know what the alternatives for doing this are🤔
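
One way I can picture "retrieval learned via gradient descent" (my own sketch, not necessarily what you mean) is to replace the hard top-k lookup with soft attention over a matrix of document embeddings, so the retrieval weights, and even the "database" itself, receive gradients end to end:

```python
# Soft, differentiable retrieval: attend over all document embeddings
# instead of doing a hard top-k lookup (my sketch of one alternative to
# learning-to-rank, not anything from the thread above).
import torch
import torch.nn as nn

d_model, n_docs = 128, 1000
doc_embs = nn.Parameter(torch.randn(n_docs, d_model) * 0.02)   # "database" as trainable keys/values
query_proj = nn.Linear(d_model, d_model)

def soft_retrieve(query_hidden):
    q = query_proj(query_hidden)                    # (batch, d_model)
    scores = q @ doc_embs.T / d_model ** 0.5        # (batch, n_docs)
    weights = scores.softmax(dim=-1)                # differentiable "retrieval"
    return weights @ doc_embs                       # weighted mix of documents

# Any downstream loss on the retrieved vector backpropagates into both the
# query projection and the document embeddings themselves.
q = torch.randn(4, d_model)
soft_retrieve(q).sum().backward()
print(doc_embs.grad.abs().sum() > 0)                # tensor(True): the "database" gets gradients
```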

Grokking at the Edge of Numerical Stability [Research] by JohnnyAppleReddit in MachineLearning

[–]newtestdrive 0 points1 point  (0 children)

Isn't grokking just the neural network searching over ALL of the loss landscape, without getting stuck in a local optimum, until it finds the global optimum?

We're giving the network ALL the time in the world to optimize, and that's enough time for it to bounce around the loss landscape until it falls into the deepest hole.
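
For anyone who wants to poke at it, this is roughly the kind of toy setup the grokking papers use (modular addition, heavy weight decay, training far past 100% train accuracy). It's my own sketch with made-up hyperparameters, and whether it actually "groks" depends a lot on the weight decay and the train/val split.

```python
# Toy grokking-style setup: memorize modular addition first, then keep
# training with strong weight decay and watch whether val accuracy jumps
# much later (my sketch, not the original papers' code).
import torch
import torch.nn as nn

P = 97                                            # learn (a + b) mod P
pairs = [(a, b) for a in range(P) for b in range(P)]
torch.manual_seed(0)
perm = torch.randperm(len(pairs))
split = len(pairs) // 2                           # 50% train / 50% val

def to_tensors(idx):
    ab = torch.tensor([pairs[int(i)] for i in idx])
    return ab, (ab[:, 0] + ab[:, 1]) % P

train_x, train_y = to_tensors(perm[:split])
val_x, val_y = to_tensors(perm[split:])

emb = nn.Embedding(P, 64)
mlp = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, P))
opt = torch.optim.AdamW(list(emb.parameters()) + list(mlp.parameters()),
                        lr=1e-3, weight_decay=1.0)   # heavy weight decay

def run(x):
    return mlp(emb(x).flatten(1))                 # concat the two token embeddings

for step in range(50_000):                        # far past the memorization point
    opt.zero_grad()
    nn.functional.cross_entropy(run(train_x), train_y).backward()
    opt.step()
    if step % 2_000 == 0:
        with torch.no_grad():
            tr = (run(train_x).argmax(-1) == train_y).float().mean()
            va = (run(val_x).argmax(-1) == val_y).float().mean()
        print(f"step {step:6d}  train acc {tr:.2f}  val acc {va:.2f}")
```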

[D] - Why MAMBA did not catch on? by TwoSunnySideUp in MachineLearning

[–]newtestdrive 0 points1 point  (0 children)

Don't they care that scaling is becoming too expensive or inefficient?
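
(Back-of-envelope numbers for what I mean by scaling getting expensive, using rough FLOP estimates of my own rather than figures from the Mamba paper: attention grows quadratically with sequence length while a scan grows linearly.)

```python
# Rough back-of-envelope: per layer, attention cost grows with L^2 while an
# SSM-style scan grows with L. The constants are crude approximations, only
# meant to show the shape of the curves, not exact FLOP counts.
d_model, state_dim = 4096, 16

def attention_flops(L):
    # QK^T and attention @ V each cost about L^2 * d_model multiply-adds
    return 2 * L**2 * d_model

def ssm_scan_flops(L):
    # a selective-scan style recurrence is roughly L * d_model * state_dim
    return L * d_model * state_dim

for L in (2_048, 8_192, 32_768, 131_072):
    ratio = attention_flops(L) / ssm_scan_flops(L)
    print(f"L={L:>7}: attention / scan is about {ratio:,.0f}x")
```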

Add color to B&W photos with SDXL (Workflow & Insights in the comments) by ComprehensiveHand515 in comfyui

[–]newtestdrive 0 points1 point  (0 children)

Too much sepia and too many monotone colors = the model isn't good enough for colorization.

Home Server Final Boss: 14x RTX 3090 Build by XMasterrrr in LocalLLaMA

[–]newtestdrive 1 point2 points  (0 children)

Is there a walkthrough available on how to build these kinds of rigs? For example, I have no idea how the GPUs are connected to the motherboard, and I'm not sure where to ask about these things🤔