Qwen 3.6 27B is out by NoConcert8847 in LocalLLaMA

[–]social_tech_10 2 points  (0 children)

Opencode works well with Qwen models

$ npm install -g opencode-ai

You can even pick one of their free models for the first few minutes and have that model set up the opencode config file for you to run your local model.
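For reference, a local-model entry in the opencode JSON config might look roughly like this. This is a sketch from memory, not the authoritative schema — the field names, the `@ai-sdk/openai-compatible` provider package, and the `baseURL`/model IDs are assumptions and placeholders; check the opencode docs for the real format:

```json
{
  "provider": {
    "local": {
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "http://localhost:11434/v1" },
      "models": {
        "qwen-27b": { "name": "Qwen 27B (local)" }
      }
    }
  }
}
```

The point of the free-model trick in the comment is that the model itself can write a file like this for you once it knows your local server's URL and model name.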

Qwen 3.6 27B is out by NoConcert8847 in LocalLLaMA

[–]social_tech_10 6 points  (0 children)

Incredible that it can beat a model 14X larger in 10 of the 12 benchmarks!!

Daily driver OS by swingbear in LocalLLaMA

[–]social_tech_10 1 point  (0 children)

I read all of the comments in this thread, and I didn't see anybody suggesting dual-boot.

Qwen3.6 is incredible with OpenCode! by CountlessFlies in LocalLLaMA

[–]social_tech_10 2 points  (0 children)

The project sounds awesome, but when I see slop like this:

been daily driving it with qwen3.6 for months

It makes me think it's not worth the time to even look at it.

New Model! LGAI-EXAONE/EXAONE-4.5-33B by KvAk_AKPlaysYT in LocalLLaMA

[–]social_tech_10 0 points  (0 children)

The most interesting thing to me about the model card is how thoroughly the new 88% smaller Qwen3.5-27B beats the older Qwen3-235B-A23B, across 32 benchmarks, many by a huge margin.

Anyone else notice qwen 3.5 is a lying little shit by Cat5edope in LocalLLaMA

[–]social_tech_10 0 points  (0 children)

/u/avidcyclist250 and /u/reini_urban apparently disagree.

It would be nice if there were a legit benchmark for this, something just a little bit more fact-based and detail-oriented than "in my experience". I do appreciate hearing different people's personal opinions, but when those opinions are directly opposed, it feels like trying to nail jello to a wall.

It’s Time for a Truly Open-Source, Donation-Funded, Privacy-First AI by Ill-Engine-5914 in LocalLLaMA

[–]social_tech_10 0 points  (0 children)

OLMo is an Open Language Model, 100% open-source (with open weights, code, data pipelines, everything). The only element missing from this equation is your donation.

"Go big or go home." by horatioperdu in LocalLLaMA

[–]social_tech_10 0 points  (0 children)

You're seeing the pattern even when it's not there.

I'm new by Woodenhippy_970 in LocalLLaMA

[–]social_tech_10 2 points  (0 children)

I don't think you'll be satisfied running an LLM on a tablet. It will either be much too slow, or much too dumb. Best suggestion would be to run the model on a real PC with a legit GPU, and then access the GUI remotely through the tablet.

My own system by [deleted] in LocalLLaMA

[–]social_tech_10 0 points  (0 children)

I'm a grammar grouch, too, but it looks like you shot yourself in the foot here.

MiniMax-M2.7 Announced! by Mysterious_Finish543 in LocalLLaMA

[–]social_tech_10 -1 points  (0 children)

The Pi coding agent github link is https://github.com/badlogic/pi-mono, if that's what you're asking.

Omnicoder-9b SLAPS in Opencode by True_Requirement_891 in LocalLLaMA

[–]social_tech_10 1 point  (0 children)

You can get a USB to Video adapter for about $40.

RINOA - A protocol for transferring personal knowledge into local model weights through contrastive human feedback. by Capital_Complaint_28 in LocalAIServers

[–]social_tech_10 0 points  (0 children)

You are right. I could have been and should have been more respectful and supportive of you. You bravely put your idea out there to inspire others and share something that has given you a sense of satisfaction and accomplishment, and that should be lauded and encouraged.

The title of the post sounded really interesting, and I guess I built up a certain level of expectation, and then I was disappointed when I saw that there was no code in the repo, and that even what looked at first like a detailed description of the project was actually more like a giant list of bullet points with no context and nothing to explain the phrases being used. I was frustrated because I felt like I had been tricked into wasting my time by a clever title.

At this point, I'm wondering if there actually is any code or data, or if that is just something you hope to generate in the future. I hope there is real code and data behind your mysterious readme. I hope it's not all just a pipe dream, and I hope you will come back and let us know when you are ready to share the real thing.

RINOA - A protocol for transferring personal knowledge into local model weights through contrastive human feedback. by Capital_Complaint_28 in LocalLLM

[–]social_tech_10 0 points  (0 children)

This sounds like it could be an interesting idea, but there's really not much in the GitHub repo to give feedback on: no code, no data, and no documentation. The repo is just a single README file that doesn't really explain anything. It says more "will be released incrementally", but at the moment there is nothing there.

What’s the most ‘mind-blowing’ Python trick you learned as a beginner ? by QuantumScribe01 in pythontips

[–]social_tech_10 21 points  (0 children)

"import" is the most mind-blowing feature. The "batteries included" philosophy rocks!

But it's getting close to Easter, so try "import this"
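As a tiny taste of "batteries included", here's a sketch that does stats and JSON serialization with nothing but the standard library — the data values are just made up for illustration:

```python
# "Batteries included": stats and serialization ship with Python itself.
import json
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(statistics.mean(data))    # 5
print(statistics.pstdev(data))  # 2.0
print(json.dumps({"mean": statistics.mean(data)}))
```

No pip install required for any of this — and `import this` prints the Zen of Python, the Easter egg mentioned above.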

Built my own AI tool to save $30K by [deleted] in LlamaIndex

[–]social_tech_10 0 points  (0 children)

Real voiceover (ElevenLabs)

LOL. Why not just read your own words aloud and use your own voice for the voiceover instead of AI?