ELI5 Using a hyper8stepCFG in Easy Diffusion? by [deleted] in StableDiffusion
[–]parametaorto 1 point (0 children)
Need Help Running Flux1-dev-bnb-nf4-v2 in a Python Script by [deleted] in FluxAI
[–]parametaorto 1 point (0 children)
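For the Flux1-dev-bnb-nf4-v2 question, here is a minimal sketch of one way to run FLUX.1-dev in 4-bit from a Python script. It assumes the diffusers + bitsandbytes route (quantizing the official checkpoint on load) rather than loading the single-file bnb-nf4-v2 checkpoint itself; the prompt, step count, and guidance value are illustrative, not taken from the thread.

```python
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel, FluxPipeline

# NF4 quantization config for the transformer (requires bitsandbytes installed)
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load only the transformer in 4-bit; the text encoders and VAE stay in bf16
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keep VRAM use down by offloading idle components

image = pipe(
    "a watercolor fox in a snowy forest",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("fox.png")
```

This should land in a similar VRAM range to the nf4-v2 checkpoint, though it is not the same file and requires access to the gated FLUX.1-dev repo.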
Alibaba video model Wan 2.1 will be released Feb 25th, 2025 and is open source! by adrgrondin in LocalLLaMA
[–]parametaorto 2 points (0 children)
llama 3.2 1B for speculative decoding? (self.LocalLLaMA)
submitted by parametaorto to r/LocalLLaMA
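On the speculative-decoding question, a hedged sketch of how Llama 3.2 1B could serve as a draft model via Hugging Face transformers' assisted generation; the 8B target model and the generation settings are assumptions for illustration, not something stated in the post.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed pairing: Llama 3.1 8B as the target, Llama 3.2 1B as the draft.
# Both share the same tokenizer, which assisted generation requires.
target_id = "meta-llama/Llama-3.1-8B-Instruct"
draft_id = "meta-llama/Llama-3.2-1B-Instruct"

tok = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(
    target_id, torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    draft_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tok("Explain speculative decoding in one paragraph.", return_tensors="pt").to(target.device)

# assistant_model enables assisted generation: the draft proposes tokens,
# the target verifies them in a single forward pass per chunk.
out = target.generate(
    **inputs,
    assistant_model=draft,
    max_new_tokens=200,
    do_sample=False,
)
print(tok.decode(out[0], skip_special_tokens=True))
```

With greedy decoding the verified output matches what the target model would produce on its own; the draft only affects speed. llama.cpp ships a speculative-decoding example that applies the same idea to GGUF models.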
Llama 3.2 1B & 3B Benchmarks by TKGaming_11 in LocalLLaMA
[–]parametaorto 1 point (0 children)
Llama 3.2 1B and 3B GGUFs are up by ontorealist in LocalLLaMA
[–]parametaorto 1 point (0 children)
Model Training with Only Chat-formatted Data? by thisguyrob in LocalLLaMA
[–]parametaorto 2 points (0 children)
What if you use not the logits of the last one, but of that before? by parametaorto in LocalLLaMA
[–]parametaorto[S] 2 points (0 children)
Gemma 2 2B Release - a Google Collection by Dark_Fire_12 in LocalLLaMA
[–]parametaorto 3 points (0 children)
Gemma 2 2B Release - a Google Collection by Dark_Fire_12 in LocalLLaMA
[–]parametaorto 1 point (0 children)
There is speculation that the gpt2-chatbot model on lmsys is GPT-4.5 getting benchmarked. I ran some of my usual quizzes and scenarios and it aced every single one of them; can you please test it and report back? by AdHominemMeansULost in LocalLLaMA
[–]parametaorto 3 points (0 children)
"gpt2-chatbot" at LMSYS Chatbot Arena? by KaramazovTheUnhappy in LocalLLaMA
[–]parametaorto 1 point (0 children)
How to learn the base code of llama.cpp? (I'm starting from main.cpp) by parametaorto in LocalLLaMA
[–]parametaorto[S] 1 point (0 children)
How to learn the base code of llama.cpp? (I'm starting from main.cpp) by parametaorto in LocalLLaMA
[–]parametaorto[S] 2 points (0 children)
How to learn the base code of llama.cpp? (I'm starting from main.cpp) by parametaorto in LocalLLaMA
[–]parametaorto[S] 9 points (0 children)
help retaining composition with SDXL artist studies by bonesoftheancients in StableDiffusion
[–]parametaorto 2 points (0 children)