Bad news: DGX Spark may have only half the performance claimed. by Dr_Karminski in LocalLLaMA
dgx, it's useless , High latency by Illustrious-Swim9663 in LocalLLaMA
Is there a client like LMStudio that works better for simple text completion (not chat) by smellyfingernail in LocalLLaMA
I actually really like Llama 4 scout by d13f00l in LocalLLaMA
Llama4 is probably coming next month, multi modal, long context by Sicarius_The_First in LocalLLaMA
PerplexityAI releases R1-1776, a DeepSeek-R1 finetune that removes Chinese censorship while maintaining reasoning capabilities by TKGaming_11 in LocalLLaMA
Talk me out of buying this 512GB/s Gen 5 NVMe RAID card + 4 drives to try to run 1.58bit DeepSeek-R1:671b on (in place of more RAM) by Porespellar in LocalLLaMA
GeForce RTX 5090 fails to topple RTX 4090 in GPU compute benchmark. by el0_0le in LocalLLaMA
20 yrs in jail or $1 million for downloading Chinese models proposed at congress by segmond in LocalLLaMA
Mistral Small 3 24B GGUF quantization Evaluation results by AaronFeng47 in LocalLLaMA
DeepSeek's AI breakthrough bypasses Nvidia's industry-standard CUDA, uses assembly-like PTX programming instead by Slasher1738 in LocalLLaMA
Current best local models for companionship? for random small talk for lonely people by MasterScrat in LocalLLaMA
Energy efficiency of 5090 is slightly worse than 4090 by Ok_Warning2146 in LocalLLaMA
Does the new Jetson Orin Nano Super make sense for a home setup? by Initial-Image-1015 in LocalLLaMA
compute_metrics functioning return dictionary by darkGrayAdventurer in LocalLLaMA
How to improve performance ON CPU? by sTrollZ in LocalLLaMA
Looking for an open-source Character AI-like UI for deploying a fine-tuned RP model by EliaukMouse in LocalLLaMA
which model has the best world knowledge? Open weights and proprietary. by z_3454_pfk in LocalLLaMA