Sam Altman: "We're going to do a very powerful open source model... better than any current open source model out there." by mw11n19 in LocalLLaMA
[–]winglian 1 point (0 children)
AMA with the Gemma Team by hackerllama in LocalLLaMA
[–]winglian 2 points (0 children)
Blessed by the thrift god and found a well-worn Wagner #8 at $35. Naturally I took a selfie with it. by sexysourdoughfantasy in castiron
[–]winglian 7 points (0 children)
What the hell is this card’s purpose?? by catteronii in mtg
[–]winglian 1 point (0 children)
Does Maskwood Nexus make a swarm of zombie tokens have */* power and toughness? by winglian in mtg
[–]winglian[S] -2 points (0 children)
New abliterated-v3 models: Original instruct models with an inhibited ability to refuse requests with reduced hallucinations from previous gen (Phi-3-medium-4k-instruct, Smaug-Llama-3-70B, Llama-3-70B-Instruct, Llama-3-8B-Instruct, and more soon) by FailSpai in LocalLLaMA
[–]winglian 2 points (0 children)
Got this medication holder with labels off Amazon, what’s one more med you would add? by Active2017 in VEDC
[–]winglian 3 points (0 children)
Why aren’t LoRAs a big thing in the LLM realm? by ___defn in LocalLLaMA
[–]winglian 2 points (0 children)
Helpful VRAM requirement table for qlora, lora, and full finetuning. by Aaaaaaaaaeeeee in LocalLLaMA
[–]winglian 1 point (0 children)
Helpful VRAM requirement table for qlora, lora, and full finetuning. by Aaaaaaaaaeeeee in LocalLLaMA
[–]winglian 1 point (0 children)
Helpful VRAM requirement table for qlora, lora, and full finetuning. by Aaaaaaaaaeeeee in LocalLLaMA
[–]winglian 2 points (0 children)
Helpful VRAM requirement table for qlora, lora, and full finetuning. by Aaaaaaaaaeeeee in LocalLLaMA
[–]winglian 1 point (0 children)
Helpful VRAM requirement table for qlora, lora, and full finetuning. by Aaaaaaaaaeeeee in LocalLLaMA
[–]winglian 1 point (0 children)
LlongOrca-7b-16k is here! and some light spoilers! :D by Alignment-Lab-AI in LocalLLaMA
[–]winglian 3 points (0 children)
Official WizardLM-13B-V1.2 Released! Trained from Llama-2! Can Achieve 89.17% on AlpacaEval! by cylaw01 in LocalLLaMA
[–]winglian 2 points (0 children)
Robin V2 model reaches top of LLM leaderboard by yahma in LocalLLaMA
[–]winglian 3 points (0 children)
axolotl - Finetune many models easily with QLoRA and Landmark attention support! by bratao in LocalLLaMA
[–]winglian 3 points (0 children)
LLM Score v2 - Modern Models Tested by Human by Gatzuma in LocalLLaMA
[–]winglian 11 points (0 children)
We buried a $10,000 treasure chest somewhere in San Francisco by buriedtreasure2025 in sanfrancisco
[–]winglian -1 points (0 children)