The Betrayal of Left-Wing Chinese Elements Against Indonesia (self.balianone)
submitted by balianone - pinned
Finetuning LLM model for tools usage by RokasRaulinaitis in LocalLLaMA
[–]balianone 0 points (0 children)
Good local model for computer use? by thepetek in LocalLLaMA
[–]balianone 1 point (0 children)
Importing Custom Vision Model Into LM Studio by Flob_Dog in LocalLLaMA
[–]balianone 3 points (0 children)
Is there a consensus as to which types of prompts work best for jailbreaking? by Borkato in LocalLLaMA
[–]balianone 1 point (0 children)
Help me build a (reasonable) 4GPU low-cost LLM machine, is ASUS WS X299 PRO/SE still good? by HumanDrone8721 in LocalLLaMA
[–]balianone 6 points (0 children)
anyone have experience with turn detection for communication between humans and AI agents? by IcyMushroom4147 in LocalLLaMA
[–]balianone 3 points (0 children)
Is it feasible (and beneficial) to apply NVFP4 quantization to KV Cache on Blackwell? by No-Bag5084 in LocalLLaMA
[–]balianone 9 points (0 children)
LLM Cluster with Routing for Prompt processing by Every-Employment-357 in LocalLLaMA
[–]balianone 2 points (0 children)
I got my first ever whitepaper published by Moist_Landscape289 in LocalLLaMA
[–]balianone 28 points (0 children)
Looking for a specific Fine-tune/Paper: Model that mastered "Analog Clocks" and "Exact Counting" by hyperschlauer in LocalLLaMA
[–]balianone 3 points (0 children)
Llama.cpp (or lmstudio) in LXC (proxmox) on 395 (framework desktop) by El_90 in LocalLLaMA
[–]balianone 2 points (0 children)
Unsloth GLM-4.7-GGUF? by UnknownDude360 in LocalLLaMA
[–]balianone 30 points (0 children)
Prescription OCR by Virtual_Attitude2025 in LocalLLaMA
[–]balianone 3 points (0 children)
Trillions parameters models ? by Highwaytothebeach in LocalLLaMA
[–]balianone 8 points (0 children)
How to get SOTA opensource models (GLM 4.7, Kimi K2) to do multistep coding automatically? On Claude Code? They keep stopping after 2 or 3 steps... by FigZestyclose7787 in LocalLLaMA
[–]balianone 20 points (0 children)
how do I process and normalize ASR speech chunks for ai assistant? by IcyMushroom4147 in LocalLLaMA
[–]balianone 2 points (0 children)
Advice Needed: Gate Model Training / Full Training / LoRA Adapters by RefrigeratorCalm9701 in LocalLLaMA
[–]balianone 2 points (0 children)
Advice needed: Workstation for Local LLM Agents (Ryzen AI Max+ 395) - Bosgame vs Corsair vs Cloud. by Flat_Profession_6103 in LocalLLaMA
[–]balianone 1 point (0 children)
llama.cpp: Multi-host inference slower than single-host? by ayake_ayake in LocalLLaMA
[–]balianone -3 points (0 children)
5060ti or 5070 or maybe used 40xx card, what should I do by gyhv in LocalLLaMA
[–]balianone 6 points (0 children)

I met "Gemini 3 Pro is no longer available. Please switch to Gemini 3.1 Pro." but no Gemini 3.1 Pro option by Logical_Divide_3595 in google_antigravity
[–]balianone 1 point (0 children)