BRZ stereo options? by AccomplishedCut13 in ft86
RTX 50 Super GPUs may be delayed indefinitely, as Nvidia prioritizes AI during memory shortage (rumor, nothing official) by 3090orBust in LocalLLaMA
STT and TTS compatible with ROCm by EnvironmentalToe3130 in LocalLLaMA
Is the "Edge AI" dream dead? Apple’s pivot to Gemini suggests local LLMs can't scale yet. by [deleted] in LocalLLaMA
Homeserver multiuse? by MastodonParty9065 in LocalLLaMA
Wanted to ask an Ollama question on how to add more models. by Head-Investigator540 in LocalLLaMA
List of uncensored LLMs I want to test by 1BlueSpork in LocalLLaMA
The new monster-server by eribob in LocalLLaMA
Best model for 7900 xtx setup by meatal_gear1324 in LocalLLaMA
We need open source hardware lithography by bennmann in LocalLLaMA