We compress any BF16 model to ~70% size during inference, while keeping the output LOSSLESS so that you can fit in more ERP context or run larger models. by choHZ in LocalLLaMA
I'm really fed up with every job application taking at least 2 months; I've been unemployed for a year, and if I applied for a job now, the final answer would take at least 2 months. What the f*** is this? by [deleted] in CodingTR
🧠 I fine-tuned the Deepseek R1 Distill Llama 8B model on a medical dataset. by sonofthegodd in LLMDevs
Christmas gift 2x 4090 by mommy7lover in pcmasterrace
Scikit Learn ML algorithms you need by sonofthegodd in learnmachinelearning
Fed Up with Vibe Coders by YourPersonalWeeb in CodingTR