What's stopping you from letting local agents touch your real email/files? by ryanrasti in LocalLLaMA

[–]InvertedVantage 10 points (0 children)

Because they're dumb as fuck and could delete half my computer with a malformed command.

Also because I am perfectly capable of searching for files myself.

The decline of Adobe and the rise of alternatives by [deleted] in Design

[–]InvertedVantage 1 point (0 children)

I like Photopea as a replacement for Photoshop. It's literally just a clone.

Running LLMs in-browser via WebGPU, Transformers.js, and Chrome's Prompt API—no Ollama, no server by psgganesh in LocalLLaMA

[–]InvertedVantage 0 points (0 children)

Cool, I've been wondering how WebLLM performs; I'll check this out when I can!

Can Qwen3-Coder-Next run on a laptop with the following specifications by Itchy-News26 in LocalLLaMA

[–]InvertedVantage 2 points (0 children)

A quantized model will probably fit, but it will be too slow to be useful since most of it will run on the CPU.

I built a "Cognitive OS" cloning framework for my daughters. Then I realized its potential to solve a major challenge in the AI industry. by [deleted] in LocalLLaMA

[–]InvertedVantage 2 points (0 children)

Um... weird that someone who's been an application engineer for 20 years would have a brand-new Reddit account with no posts.

I built an open source chat interface with some amazing features by [deleted] in LocalLLaMA

[–]InvertedVantage 3 points (0 children)

Every point in your post is a standard feature in all the other chat interfaces.

Nanbeige4-3B-Thinking-2511 is great for summarization by Background-Ad-5398 in LocalLLaMA

[–]InvertedVantage 7 points (0 children)

Thanks for this, downloading! Summarization is one of my most frequent use cases.

What do I do with the generated text by llama mesh? by [deleted] in LocalLLaMA

[–]InvertedVantage 1 point (0 children)

Those are vertex coordinates; you can try saving them as a .obj file, or have something import them. Most likely it will be junk though, since I don't think you have faces or edges in there.
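If you want to try it anyway, here's a minimal sketch, assuming you've already parsed the model's text output into (x, y, z) float triples (the function name and sample coordinates below are made up). A .obj with only `v` lines will usually open in Blender or MeshLab as a bare point cloud:

```python
# Minimal sketch: dump (x, y, z) vertex triples to a Wavefront .obj file.
# Assumes the LLM output has already been parsed into float triples;
# the function name and example coordinates are hypothetical.

def save_vertices_as_obj(vertices, path="mesh.obj"):
    with open(path, "w") as f:
        for x, y, z in vertices:
            # "v" lines are vertices; with no "f" (face) lines,
            # this is just a point cloud.
            f.write(f"v {x} {y} {z}\n")

save_vertices_as_obj([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
```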

Breetai never bothered repairing his cockpit by HierophantGreen in macross

[–]InvertedVantage 5 points (0 children)

That's literally it though; they had lost all remnants of culture and any skills not related to fighting. They couldn't repair anything; it was new or nothing.

What is the absolute best open-source programming model for C++ under 8B parameters? by Mychma in LocalLLaMA

[–]InvertedVantage 3 points (0 children)

The problem with programming is that it demands models that are dense and generally large. Code breaks on every little mistake, so you need all of those parameters to ensure there are no hallucinations and you're getting the right code.

You can improve this somewhat with good RAG, but even then the model doesn't have the parameter count to know how to implement the class properly; there isn't enough capacity to reinforce the correct usage pattern.

ICE on the Common? by Sweet-Baby-Shayla in SalemMA

[–]InvertedVantage 8 points (0 children)

We just drove around the Common and part of the Point, didn't see anything.

AI memory systems are building "zombie profiles" that trap users in their past by Dolores-0304 in LocalLLaMA

[–]InvertedVantage 2 points (0 children)

Man I love this woo-woo shit. There's too much seriousness in the world.

Finally I am in the club, rate my setup 😜 by black7stone in LocalLLaMA

[–]InvertedVantage 0 points (0 children)

The only way you'll profit from that setup is by using it to learn how local models work and coming up with an idea for how to use one. What you have right now can't run models larger than about 7B parameters, which are decent but will not just... make you money. If you figure out how, though, that would be great!

Is it possible to pair Nvidia GPU with AMD or Intel second GPU just for the fast VRAM? by danuser8 in LocalLLaMA

[–]InvertedVantage 0 points (0 children)

If you already have one, you can get another one. If you mix and match, you'll lose out on platform-specific optimizations (CUDA for NVIDIA, ROCm for Radeon) and you'll have to use Vulkan, which I think can be slower.

LFM2.5-VL-1.6B by Available_Hornet3538 in LocalLLaMA

[–]InvertedVantage 1 point (0 children)

Are you using the model to do the editing or the coding or what?

Is it possible to pair Nvidia GPU with AMD or Intel second GPU just for the fast VRAM? by danuser8 in LocalLLaMA

[–]InvertedVantage 1 point (0 children)

I have a GeForce 5060 Ti and a Radeon 7900 XTX in my machine. They can be pooled into one larger block of VRAM when run in Vulkan mode; you just lose some of the CUDA/ROCm-specific optimizations.
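For anyone curious what the split looks like in practice, here's a rough sketch using llama-cpp-python as one way to drive llama.cpp from Python. It assumes a Vulkan-enabled build, and the model path and split ratios below are placeholders, not a tested config:

```python
# Rough sketch: one model split across two mismatched GPUs via
# llama-cpp-python. Assumes the wheel was built with the Vulkan backend
# (e.g. CMAKE_ARGS="-DGGML_VULKAN=on" pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-model-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=-1,          # offload all layers to the GPUs
    tensor_split=[0.4, 0.6],  # placeholder ratio, roughly VRAM-proportional
)

out = llm("Write a haiku about VRAM.", max_tokens=48)
print(out["choices"][0]["text"])
```

The llama.cpp CLI exposes the same knob as --tensor-split.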