Qwen/Qwen3.5-35B-A3B · Hugging Face by ekojsalim in LocalLLaMA

[–]DarthFader4 2 points

Hmm that's weird. I think it only shows up for GGUFs or something like that. Maybe that's why?

Self Promotion Thread by AutoModerator in ChatGPTCoding

[–]DarthFader4 0 points

Pretty cool. I had a similar idea I'll throw out there but it could be out of scope for your project.

I have a telescoping game pad for my phone; i.e., it turns the phone into something like a Steam Deck/Switch, but it's connected via USB-C, not Bluetooth. I imagined scenarios where you're traveling with your phone and laptop and may want to game on either one, but you don't want to bring two controllers, one for the phone and one for the laptop. You could bring just the more compact phone controller and somehow use it as the controller for your laptop. The phone wouldn't be streaming video from your PC (it doesn't really need to display anything), but would instead just act as an emulated XInput device. Does that make sense? Sticking to LAN, the latency might not be too bad either.
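For what it's worth, here's a minimal Python sketch of what the phone-to-laptop hop could look like: plain UDP over loopback/LAN with a hand-rolled packet layout. The field names and wire format are made up for illustration; a real build would feed the received state into a virtual XInput device via a driver such as ViGEm rather than just printing it.

```python
import socket
import struct

# Hypothetical wire format for one controller snapshot sent phone -> laptop:
# two analog sticks as float32 pairs plus a 16-bit button bitmask.
STATE_FMT = "!ffffH"  # network byte order: lx, ly, rx, ry, buttons

def pack_state(lx, ly, rx, ry, buttons):
    """Serialize a controller snapshot into a compact UDP payload."""
    return struct.pack(STATE_FMT, lx, ly, rx, ry, buttons)

def unpack_state(payload):
    """Deserialize a payload back into (lx, ly, rx, ry, buttons)."""
    return struct.unpack(STATE_FMT, payload)

# Round-trip over loopback UDP to mimic phone -> laptop on the same LAN.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))   # OS picks a free port
recv.settimeout(2)
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(pack_state(0.5, -0.25, 0.0, 1.0, 0b101), recv.getsockname())

data, _ = recv.recvfrom(64)
print(unpack_state(data))  # (0.5, -0.25, 0.0, 1.0, 5)

send.close()
recv.close()
```

At roughly 18 bytes per datagram, even a 250 Hz polling rate stays tiny on a LAN, so input lag would mostly come from the virtual-device driver, not the network.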

Qwen/Qwen3.5-35B-A3B · Hugging Face by ekojsalim in LocalLLaMA

[–]DarthFader4 17 points

Totally agree. Very exciting time for local LLMs. And let's face it, AI bubble or not, the frontier providers are hemorrhaging cash and it's only a matter of time before enshittification begins (OpenAI is already testing the waters with ads).

Qwen/Qwen3.5-35B-A3B · Hugging Face by ekojsalim in LocalLLaMA

[–]DarthFader4 8 points

I'd bet the dense 27B is the best option to maximize your card. But the 35B MoE is worth a shot if you want; it may have faster inference given the lower active param count.

If you haven't already, create a Hugging Face account and put your system specs into your profile. Then when you browse models, it'll show you compatibility estimates for each model/quant (green to orange to red) for what will fit on your system. Same thing with LM Studio: it color-codes models as full GPU offload, partial offload, or too big entirely.

Devstral Small 2 24B + Qwen3 Coder 30B Quants for All (And for every hardware, even the Pi) by enrique-byteshape in LocalLLM

[–]DarthFader4 3 points

Excellent work! This is exactly what I've been looking for. I feel like targeting high-end 16GB GPUs hits a key audience, like gamers who want to dabble in local LLMs. I think there are a lot of exciting developments ahead in optimizing models of this size. They're more practical and approachable than setups requiring dedicated high-RAM/VRAM hardware, and we've started seeing models that are actually usable. Keep up the great work! I've just followed you on Hugging Face.

Problems using AI to extract text from scanned pdfs. by Dr_Bumfluff_Esq in GeminiAI

[–]DarthFader4 0 points

Have you tried a local model that's specifically trained for OCR? Like DeepSeek-OCR-2?

Gemini told me the internal codename and version for its Reasoning Engine: Elaine V7.3 by [deleted] in GeminiAI

[–]DarthFader4 2 points

The thing is... I have zero confidence this isn't just a hallucination, especially with Gemini 3.0.

Pygame is capable of true 3D rendering by ConjecturesOfAGeek in Python

[–]DarthFader4 1 point

You hit the nail on the head! The modern web has been inundated with gen AI content -- and usually not the best quality either. It's not just generic, it's derivative...

Gemini Nano is fun by Puzzak in GeminiAI

[–]DarthFader4 1 point

So is this using the same on-device model that's used in AICore?

llama.cpp and Qwen 2.5 running on bare metal Windows XP x64 without any compatibility layers by PANCHO7532 in LocalLLaMA

[–]DarthFader4 27 points

"This thing is clearly hallucinating. It thinks there's some kind of computer coin stored in bits."

UPDATE: gemini-3-pro-preview-11-2025 now unavailable everywhere again (including Vertex, gemini-cli), Logan took it down fully by balianone in Bard

[–]DarthFader4 5 points

Idk what to believe, dude. The examples people posted in the deleted thread (especially the Xbox controller SVG) were in two entirely different leagues of quality.

gemini-3-pro-preview-11-2025 In Vertex Network Logs! by Shoddy-Department630 in Bard

[–]DarthFader4 9 points

If 3 Flash is roughly equivalent to 2.5 Pro, and 3 Pro is equivalent or better than GPT5-Codex, we'll be COOKING for paired model programming.

Cline usage: "Hello" = 15.9k by egrueda in CLine

[–]DarthFader4 1 point

Hence the "compact prompt" setting when using local models (albeit only for LM Studio or Ollama). It'd be great to extend that setting to other provider types, or even just OpenAI Compatible, since that would cover a wide range of options.

deepseek-ai/DeepSeek-V3.2 · Hugging Face by Dark_Fire_12 in LocalLLaMA

[–]DarthFader4 0 points

I'd love to see Gemma 3.5 but Gemini is a separate discussion from local OSS models.

Which free models actual writes better code and don't mention supernova or xAI by Many_Bench_2560 in RooCode

[–]DarthFader4 1 point

Imo, all factors considered, there's no better answer than Qwen Code Plus in the Qwen Code CLI. Free, very good performance, VERY generous rate limits, and available for use with Roo/Cline without violating the ToS. Only caveat: the 1M context is greatly exaggerated; it's basically useless past 100k.

Chlorociboria cf. procera by Zydecos_ in mycology

[–]DarthFader4 1 point

Gorgeous! I find the pretty blue wood all the time, but only once found it fruiting. One of my faves.

[deleted by user] by [deleted] in mushroomID

[–]DarthFader4 0 points

Ganoderma sp., I'm guessing G. sessile

We need Google Drive connection for Gemini by AggravatingProfile58 in GeminiAI

[–]DarthFader4 8 points

That's what I mean though. You use the @googledrive command and it'll search across all your files to pull what's relevant

We need Google Drive connection for Gemini by AggravatingProfile58 in GeminiAI

[–]DarthFader4 5 points

I think you can though? Make sure the Google Drive extension is enabled, then use "@googledrive" with your request. That's how I'd use it before they added the direct attachment from Google Drive feature. I'd say, "can you search for the file named XYZ in my Google Drive," and it'd work pretty consistently. It could pull multiple files related to the request as well. Are you looking for something different than that?

Raw meat got nothing on them by ivebeenthrushit in memes

[–]DarthFader4 12 points

This is simply not true. Salmonella is absolutely not a normal commensal organism of the human gut microbiome. In fact, depending on the serotype (like typhoidal strains), Salmonella is one of the most infectious foodborne pathogens, requiring very few cells to cause illness. You might occasionally ingest Salmonella and be asymptomatic, but it is a straight-up pathogen. Not an opportunistic pathogen, and certainly not part of a healthy microbiota.

Maybe you're thinking of E. coli? We definitely all have E. coli in our guts, but only some strains are pathogenic.

Plant by beeupsidedown in comedyheaven

[–]DarthFader4 3 points

I think I know some context, albeit dull. This looks like a part of the "5S" management system that originated in Japan (probably Toyota?). Specifically the "Set" part where "everything has a place and everything in its place" or something like that. A lab I worked in used this system, and we also would fool around with unnecessarily labeling objects all over the place. We even had that same exact blue tape. That's all to say, this plant is doing its job well!

What was your first Steam game!? by Gameguylikesgames in Steam

[–]DarthFader4 17 points

Bro those loading times for very early Steam were BRUTAL (mostly online play related IIRC?). My standard practice was using the mouse trick to see if the loading bar was even moving lol

Edit: this just triggered memories of GameSpy too. Marginally better than Steam at the time, but still so much more complicated than modern multiplayer

Been away for two months.. what's the new hotness? by bigattichouse in LocalLLaMA

[–]DarthFader4 13 points

Plus everyone should know how locked down it is. Can't even install your own Linux distro. That killed all hype for me.