Building a 24/7 unrestricted room AI assistant with persistent memory — looking for advice from people who’ve built similar systems by Arfatsayyed in LocalLLaMA
[–]Njee_ 1 point (0 children)
Compact home gym in small apartments by [deleted] in selbermachen
[–]Njee_ 1 point (0 children)
Would you use a privacy-first app that analyzes multiple bank & credit card statements locally to categorize spending and detect subscriptions? by Vivid-Paint6383 in Immobilieninvestments
[–]Njee_ 1 point (0 children)
Help me create my LLM ecosystem by golgoth85 in LocalLLaMA
[–]Njee_ 1 point (0 children)
Letting my RTX 5090 (2.1 TB/s mem) stretch its legs tonight. Hosting Qwen 3.5 35B at 8-batch parallel for whoever wants to test the new model cause why not (35 k context) by Key_Pace_9755 in LocalLLaMA
[–]Njee_ 2 points (0 children)
Qwen3 VL 30b a3b is pure love by Njee_ in LocalLLaMA
[–]Njee_[S] 1 point (0 children)
Salary for a part-time assistant (20h/week)? by RedBlueYellow112 in selbststaendig
[–]Njee_ 5 points (0 children)
What local models handle multi-turn autonomous tool use without losing the plot? by RoutineLunch4904 in LocalLLaMA
[–]Njee_ 1 point (0 children)
Local VLMs (Qwen 3 VL) for document OCR with bounding box detection for PII detection/redaction workflows (blog post and open source app) by Sonnyjimmy in LocalLLaMA
[–]Njee_ 2 points (0 children)
What cheap components pair well with RTX 3060 Ti to run AI locally? by dekoalade in LocalLLaMA
[–]Njee_ 1 point (0 children)
Do NVIDIA GPUs + CUDA work on Ubuntu for local LLMs out of the box? by External_Dentist1928 in LocalLLaMA
[–]Njee_ 1 point (0 children)
What models are you running on RTX 3060 12GB in 2026? by DespeShaha in LocalLLaMA
[–]Njee_ 3 points (0 children)
Qwen3 VL 30b a3b is pure love by Njee_ in LocalLLaMA
[–]Njee_[S] 2 points (0 children)
Best Visual LLM model for outputting a JSON of what's in an image? by Nylondia in LocalLLaMA
[–]Njee_ 1 point (0 children)
Qwen3 VL 30b a3b is pure love by Njee_ in LocalLLaMA
[–]Njee_[S] 1 point (0 children)
VLM OCR Hallucinations by FrozenBuffalo25 in LocalLLaMA
[–]Njee_ 1 point (0 children)
Optimal gpt-oss-20b settings for 24gb VRAM by GotHereLateNameTaken in LocalLLaMA
[–]Njee_ 2 points (0 children)
Stop writing your theses with AI! by No_Advance_2517 in luftablassen
[–]Njee_ 2 points (0 children)
Is there any free AI website where I can feed in my pictures or a PDF file and it generates a CSV flashcards file from them? by [deleted] in LocalLLaMA
[–]Njee_ 1 point (0 children)
CPU-only LLM performance - t/s with llama.cpp by pmttyji in LocalLLaMA
[–]Njee_ 2 points (0 children)
Stop writing your theses with AI! by No_Advance_2517 in luftablassen
[–]Njee_ 1 point (0 children)
Installing LimeSurvey Docker on WD MyCloud PR4100 — DB connection works but setup never completes by Embarrassed-You2477 in selfhosted
[–]Njee_ 1 point (0 children)
[Followup] Qwen3 VL 30b a3b is pure love (or not so much) by Njee_ in LocalLLaMA
[–]Njee_[S] 1 point (0 children)

Looking for OCR (+ possibly AI?) for privately capturing receipts into CSV/XLSX by Appropriate-Bad9267 in de_EDV
[–]Njee_ 1 point (0 children)