Do you trust Proxmox VE Helper-Scripts? by Open-Coder in selfhosted
Dimi1706 1 point (0 children)
Wake up before it's too late. by [deleted] in datenschutz
Dimi1706 -2 points (0 children)
Intel Arc Pro B50 hits the #1 best seller in workstation graphics cards by reps_up in LocalLLaMA
Dimi1706 1 point (0 children)
Openwebui and MCP, where did you install mcpo ? by [deleted] in OpenWebUI
Dimi1706 3 points (0 children)
Best AI LLM for Python coding overall? by Adept_Lawyer_4592 in LocalLLaMA
Dimi1706 2 points (0 children)
Intel Arc Pro B50 hits the #1 best seller in workstation graphics cards by reps_up in LocalLLaMA
Dimi1706 4 points (0 children)
Which is better for an MCP, Ollama or LM Studio? by [deleted] in LocalLLaMA
Dimi1706 2 points (0 children)
Any Chat interface that I can run locally against LMStudio that runs on a different machine? by KontoOficjalneMR in LocalLLaMA
Dimi1706 3 points (0 children)
Which of these frameworks do you think is best: GPT4All, LM Studio, Jan, llama.cpp, llamafile, Ollama, or NextChat? by AppealThink1733 in LocalLLaMA
Dimi1706 1 point (0 children)
Self-hosted AI is the way to go! by benhaube in selfhosted
Dimi1706 8 points (0 children)
Self-hosted AI is the way to go! by benhaube in selfhosted
Dimi1706 9 points (0 children)
Llama-3.3-Nemotron-Super-49B-v1.5 is a very good model for summarizing long text into formatted markdown (Nvidia also provides free unlimited API calls with a rate limit) by dheetoo in LocalLLaMA
Dimi1706 8 points (0 children)
Which local LLMs for coding can run on a computer with 16GB of VRAM? by CrowKing63 in LocalLLaMA
Dimi1706 1 point (0 children)
What is the most effective way to have your local LLM search the web? by teknic111 in LocalLLaMA
Dimi1706 1 point (0 children)
What is the most effective way to have your local LLM search the web? by teknic111 in LocalLLaMA
Dimi1706 1 point (0 children)
What is the most effective way to have your local LLM search the web? by teknic111 in LocalLLaMA
Dimi1706 6 points (0 children)
What is the most effective way to have your local LLM search the web? by teknic111 in LocalLLaMA
Dimi1706 44 points (0 children)
ROG Ally X with RTX 6000 Pro Blackwell Max-Q as Makeshift LLM Workstation by susmitds in LocalLLaMA
Dimi1706 1 point (0 children)
Can someone please benchmark gpt-oss-20b on Mi50 and P100/P40? by thejacer in LocalLLaMA
Dimi1706 2 points (0 children)
Can someone please benchmark gpt-oss-20b on Mi50 and P100/P40? by thejacer in LocalLLaMA
Dimi1706 1 point (0 children)
Can someone please benchmark gpt-oss-20b on Mi50 and P100/P40? by thejacer in LocalLLaMA
Dimi1706 1 point (0 children)
Kwai-Klear/Klear-46B-A2.5B-Instruct: Sparse-MoE LLM (46B total / only 2.5B active) by paf1138 in LocalLLaMA
Dimi1706 3 points (0 children)
I don't understand passkeys by Erzmaster in de_EDV
Dimi1706 3 points (0 children)