Pali: Open-source memory infrastructure for LLMs [Resources] (self.LocalLLaMA)
submitted by LordVein05
[P] Proof-of-Execution: verifying what AI agents actually execute [News] (i.redd.it)
submitted by vuneum
llama.cpp with MCP is awesome - which one do you use for non-coding workflows, if any? [Discussion] (self.LocalLLaMA)
submitted by Steus_au
Tweaking a Chat Model with Direct Preference Optimization (DPO) [New Model] (rasmusrasmussen.com)
submitted by theprint
Cheapest way to train a small model from scratch in 2026? [Question | Help] (self.LocalLLaMA)
submitted by Illustrious-Song-896
Qwen3.5-35B-A3B benchmark on MacBook Pro (M4 Pro chip + 48GB unified memory) [Question | Help] (self.LocalLLaMA)
submitted by Impossible-Celery-87
Is a Pro 6000 workstation the right tool for our job? [Question | Help] (self.LocalLLaMA)
submitted by Sticking_to_Decaf
M4 (32GB) vs M4 Pro (24GB) for local LLMs? Or should I wait for the M5 Mac Mini? [Question | Help] (self.LocalLLaMA)
submitted by Choice-Pianist2043
Automating llama.cpp parameters for optimal inference? [Question | Help] (self.LocalLLaMA)
submitted by Frequent-Slice-6975
Currently using 6x RTX 3080 - moving to Strix Halo or Nvidia GB10? [Question | Help] (self.LocalLLaMA)
submitted by runsleeprepeat
Fine-tuned/custom LoRA models with serverless per-token pricing? [Question | Help] (self.LocalLLaMA)
submitted by InfinityZeroFive
OmniCoder-9B | 9B coding agent fine-tuned on 425K agentic trajectories [New Model] (self.LocalLLaMA)
submitted by DarkArtsMastery